
Security Games over Lexicographic Orders

  • Conference paper, in: Decision and Game Theory for Security (GameSec 2020)

Abstract

Security is rarely single-dimensional and is, in most practical instances, a tradeoff between dependent, and occasionally conflicting, goals. The simplest method of multi-criteria optimization and of playing games with vector-valued payoffs is transforming such games into ones with scalar payoffs and looking for Pareto-optimal behavior. This usually requires an explicit weighting of security goals, whereas practice often only lets us rank security goals in terms of importance, but hardly admits a crisp numerical weight being assigned. Our work picks up the issue of optimizing security goals in descending order of importance, leading to the computation of an optimal solution w.r.t. lexicographic orders. This is interesting in two ways: it (i) is theoretically nontrivial, since lexicographic orders do not generally admit representations by continuous utility functions and hence render Nash’s classical result inapplicable, and (ii) is practically relevant, since it avoids ambiguities caused by subjective (and perhaps unsupported) importance weight assignments. We corroborate our results by numerical examples, showing a method to design zero-sum games with a set of a-priori chosen Nash equilibria. This simple instance of mechanism design may be of independent interest.


Notes

  1. An ordering \(\le \) is called continuous if every bounded sequence \((a_n)\) with \(a_n\le b\) for all \(n\in \mathbb {N}\) satisfies the same bound in the limit, \(\lim _{n\rightarrow \infty }a_n\le b\), if that limit exists. The lexicographic order is discontinuous w.r.t. this definition, since \((1/n,0)\ge _{lex} (0,1)\) for all \(n\in \mathbb {N}\), but \(\lim _{n\rightarrow \infty }(1/n,0)=(0,0)\le _{lex}(0,1)\).

  2. This is indeed the standard idea behind putting cryptographic hash fingerprints on download sites for open-source software, addressing the possibility of a forged installation bundle. The package’s fingerprint, put on the website next to the download, can be verified against independent mirrors that offer the same download.

  3. Here, \(n_A\) and \(m_A\) are new variables describing the shape of the payoff matrix; their values depend on how many equilibria we want to enforce and on whether these are linearly dependent. This determines the dimensions of the nullspaces, which in turn fixes the values of \(n_A,m_A\).
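The counterexample in Footnote 1 can be checked mechanically: Python's built-in tuple comparison happens to be lexicographic, so a tiny illustration (ours, not part of the paper) is:

```python
# Footnote 1's counterexample: every member of the sequence (1/n, 0) dominates
# (0, 1) in the lexicographic order, yet the limit (0, 0) does not.
b = (0.0, 1.0)

# every sequence element (1/n, 0) lies strictly above b lexicographically
assert all((1.0 / n, 0.0) > b for n in range(1, 10_000))

# ... but the limit of the sequence is (0, 0), which lies strictly below b
limit = (0.0, 0.0)
assert limit < b
print("the lexicographic order is not continuous under limits")
```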

References

  1. Avis, D., Rosenberg, G., Savani, R., von Stengel, B.: Enumeration of Nash equilibria for two-player games. Econ. Theory 42, 9–37 (2010)


  2. Avis, D.: lrs home page (2020). http://cgm.cs.mcgill.ca/~avis/C/lrs.html

  3. Boyd, S.P., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)


  4. Brzuska, C., et al.: Security of sanitizable signatures revisited. In: Jarecki, S., Tsudik, G. (eds.) PKC 2009. LNCS, vol. 5443, pp. 317–336. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00468-1_18


  5. Cococcioni, M., Pappalardo, M., Sergeyev, Y.D.: Lexicographic multi-objective linear programming using grossone methodology: theory and algorithm. Appl. Math. Comput. 318, 298–311 (2018). https://doi.org/10.1016/j.amc.2017.05.058, https://linkinghub.elsevier.com/retrieve/pii/S0096300317303703

  6. Davidson, C.C., Andel, T.R.: Feasibility of applying Moving Target Defensive Techniques in a SCADA System. ACPI, Boston University, Boston (2016). https://doi.org/10.13140/RG.2.1.5189.5441, http://rgdoi.net/10.13140/RG.2.1.5189.5441

  7. Eaton, J.W., Bateman, D., Hauberg, S., Wehbring, R.: GNU Octave version 5.2.0 manual: a high-level interactive language for numerical computations (2020). https://www.gnu.org/software/octave/doc/v5.2.0/

  8. Ehrgott, M.: Discrete decision problems, multiple criteria optimization classes and lexicographic max-ordering. In: Fandel, G., Trockel, W., Stewart, T.J., van den Honert, R.C. (eds.) Trends in Multicriteria Decision Making. Lecture Notes in Economics and Mathematical Systems, vol. 465, pp. 31–44. Springer, Heidelberg (1998). https://doi.org/10.1007/978-3-642-45772-2_3


  9. Ehrgott, M.: A Characterization of Lexicographic Max-Ordering Solutions (1999). https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/484

  10. Fishburn, P.C.: Exceptional paper-lexicographic orders, utilities and decision rules: a survey. Manag. Sci. 20(11), 1442–1471 (1974). https://doi.org/10.1287/mnsc.20.11.1442, http://pubsonline.informs.org/doi/abs/10.1287/mnsc.20.11.1442

  11. Glicksberg, I.L.: A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proc. Am. Math. Soc. 3, 170–174 (1952). http://dx.doi.org/10.2307/2032478

  12. Grabisch, M.: The application of fuzzy integrals in multicriteria decision making. Eur. J. Oper. Res. 89(3), 445–456 (1996). https://doi.org/10.1016/0377-2217(95)00176-X, https://linkinghub.elsevier.com/retrieve/pii/037722179500176X

  13. Harsanyi, J.C.: Oddness of the number of equilibrium points: a new proof. Int. J. Game Theory 2(1), 235–250 (1973). https://doi.org/10.1007/BF01737572, http://link.springer.com/10.1007/BF01737572

  14. Herrmann, A., Morali, A., Etalle, S., Wieringa, R.: Risk and business goal based security requirement and countermeasure prioritization. In: Niedrite, L., Strazdina, R., Wangler, B. (eds.) BIR 2011. LNBIP, vol. 106, pp. 64–76. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29231-6_6


  15. Isermann, H.: Linear lexicographic optimization. Oper. Res. Spektrum 4(4), 223–228 (1982). https://doi.org/10.1007/BF01782758, http://link.springer.com/10.1007/BF01782758

  16. Karnin, E.D., Greene, J.W., Hellman, M.E.: On secret sharing systems. IEEE Trans. Inf. Theory IT-29(1), 35–41 (1983). http://ieeexplore.ieee.org/document/1056621/

  17. Konnov, I.: On lexicographic vector equilibrium problems. J. Optim. Theory Appl. 118(3), 681–688 (2003). https://doi.org/10.1023/B:JOTA.0000004877.39408.80

  18. Kotler, R.: How to prioritize IT security projects (2020). https://www.helpnetsecurity.com/2020/01/30/prioritize-it-security-projects/

  19. Lozovanu, D., Solomon, D., Zelikovsky, A.: Multiobjective games and determining pareto-nash equilibria. Buletinul Academiei de Stiinte a Republicii Moldova Matematica 3(49), 115–122 (2005)


  20. Makhorin, A.: GLPK - GNU Project - Free Software Foundation (FSF) (2012) https://www.gnu.org/software/glpk/

  21. McEliece, R.J., Sarwate, D.V.: On sharing secrets and Reed-Solomon codes. Commun. ACM 24(9), 583–584 (1981)


  22. Ogryczak, W.: Lexicographic max-min optimization for efficient and fair bandwidth allocation. In: International Network Optimization Conference (INOC) (2007)


  23. Ogryczak, W., Śliwiński, T.: On direct methods for lexicographic min-max optimization. In: Gavrilova, M., et al. (eds.) ICCSA 2006. LNCS, vol. 3982, pp. 802–811. Springer, Heidelberg (2006). https://doi.org/10.1007/11751595_85


  24. Park, K.-Y., Yoo, S.-G., Kim, J.: Security requirements prioritization based on threat modeling and valuation graph. In: Lee, G., Howard, D., Ślȩzak, D. (eds.) ICHIT 2011. CCIS, vol. 206, pp. 142–152. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24106-2_19

  25. Perkowski, M.: Why You Need To Prioritize Cyber Risks (2018). https://www.securityroundtable.org/everything-cant-be-urgent-why-you-need-to-prioritize-cyber-risks/

  26. Rabin, T., Ben-Or, M.: Verifiable secret sharing and multiparty protocols with honest majority. In: STOC 1989, pp. 73–85. ACM (1989). http://dx.doi.org/10.1145/73007.73014

  27. Rass, S., Wiegele, A., König, S.: Source code to run the examples in Appendix B.1 (2020). https://www.syssec.at/de/publikationen/description/games-over-lex-orders

  28. Rass, S., Schauer, S.: Game Theory for Security and Risk Management: From Theory to Practice. Springer, Heidelberg (2018). https://doi.org/10.1007/978-3-319-75268-6


  29. Rios Insua, D., Couce-Vieira, A., Rubio, J.A., Pieters, W., Labunets, K., Rasines, D.G.: An Adversarial Risk Analysis Framework for Cybersecurity. Risk Analysis (2019). https://doi.org/10.1111/risa.13331, https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.13331

  30. Rios Insua, D., Rios, J., Banks, D.: Adversarial Risk Analysis. J. Am. Stat. Assoc. 104(486), 841–854 (2009). http://pubs.amstat.org/doi/abs/10.1198/jasa.2009.0155

  31. Ross, T., Booker, J.M., Parkinson, W.J.: Fuzzy logic and probability applications: bridging the gap. In: ASA SIAM (2002)


  32. Rothschild, C., McLay, L., Guikema, S.: Adversarial risk analysis with incomplete information: a level-k approach. Risk Anal. 32(7), 1219–1231 (2012). https://doi.org/10.1111/j.1539-6924.2011.01701.x, http://doi.wiley.com/10.1111/j.1539-6924.2011.01701.x

  33. Stanimirovic, I.P.: Compendious lexicographic method for multi-objective optimization. Facta Universitatis 27(1), 55–66 (2012). https://pdfs.semanticscholar.org/25c6/8b5d4d9adfef3684dddbf0096a38fcbd1923.pdf

  34. The Recorded Future Team: You Can’t Do Everything: The Importance of Prioritization in Security (2018). https://www.recordedfuture.com/vulnerability-threat-prioritization/

  35. Tompa, M., Woll, H.: How to share a secret with cheaters. J. Cryptol. 1(3), 133–138 (1989). https://doi.org/10.1007/BF02252871, http://link.springer.com/10.1007/BF02252871

  36. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. arXiv:1712.07107 [cs, stat] (2018). http://arxiv.org/abs/1712.07107

Download references

Acknowledgement

The authors would like to thank the anonymous reviewers for valuable and constructive feedback on this work.

Author information

Correspondence to Stefan Rass.

Appendices

A Proof of Proposition 1

Towards a contradiction, suppose there were such a function f. It obviously cannot be constant, for otherwise it could not represent the ordering. Thus, there must be some value x for which \(f(x,0)\ne f(x,1)\), and the interval \(I(x):=[f(x,0),f(x,1)]\) has nonzero width.

Furthermore, any two such intervals I(x), I(y) are disjoint: if, for some \(x\ne y\), the intervals overlapped, we would have (w.l.o.g.) \(f(x,0)<f(y,0)<f(x,1)<f(y,1)\), which, since f represents the ordering, entails \((x,0)<_{lex}(y,0)<_{lex}(x,1)<_{lex}(y,1)\), in which the first \(<_{lex}\) implies \(x<y\) and the second implies \(y\le x\), which is not possible at the same time.

Let us pick some particular (arbitrary) x for which \(f(x,0)\ne f(x,1)\) in the following; since \((x,0)<_{lex}(x,1)\) and f represents the ordering, we have \(f(x,0)<f(x,1)\). Since f is continuous, so is the function \(g(h):=f(x+h,1)-f(x+h,0)\). Our choice of x makes \(g(0)>0\), and by continuity this relation holds on an entire compact neighborhood H of 0. The compactness of H implies that g attains a minimum \(\varepsilon >0\) on H.

Each \(h\in H\) gives rise to an interval \(I(x+h)\), whose length is by construction \(\ge \varepsilon \). Furthermore, all these uncountably many intervals are pairwise disjoint, so that already countably many of them have total length exceeding every bound.

This is, however, impossible, given that all of this happens within the unit interval [0, 1], whose length is 1: at most \(\lfloor 1/\varepsilon \rfloor \) pairwise disjoint intervals of length \(\ge \varepsilon \) fit into it. This final contradiction refutes the initial assumption on the existence of a continuous function f representing the lexicographic order.

B Proof of Lemma 2

Suppose that we have picked a set of vectors \(0\le \mathbf {x}_1^*,\ldots ,\mathbf {x}_{k_1}^*\in \mathbb {R}^n\) for \(k_1<n\), to be equilibrium strategies for player 1, and likewise, let \(0\le \mathbf {y}_1^*,\ldots ,\mathbf {y}_{k_2}^*\in \mathbb {R}^m\) with \(k_2<m\) be a set of chosen equilibria for player 2 in our zero-sum game to be constructed.

Let the matrix \(\mathbf {X}\) be such that all \(\mathbf {x}_i^*\in N(\mathbf {X})\), where \(N(\mathbf {X})\) denotes the null-space of the matrix \(\mathbf {X}\). Such a matrix is directly constructible from the singular value decomposition of the matrix whose rows are exactly the desired \(\mathbf {x}_i^*\): the rows of \(\mathbf {X}\) are the right-singular vectors belonging to the zero singular values, i.e., a basis of the orthogonal complement of \(\text {span}\left\{ \mathbf {x}_1^*,\ldots ,\mathbf {x}_{k_1}^*\right\} \). In defining \(\mathbf {X}\) in this way, each (mixed) strategy \(\mathbf {x}_i^*\) makes the other player indifferent in its response, since \(\mathbf {X}\cdot \mathbf {x}_i^*=0\).
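The paper's implementation uses GNU Octave; as a sketch of the same construction in Python/NumPy (an illustration under our own choice of random strategies, not the authors' code), \(\mathbf {X}\) can be read off the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# two chosen mixed strategies x1*, x2* in R^5, normalized to unit sum
xs = rng.random((2, 5))
xs /= xs.sum(axis=1, keepdims=True)

# SVD of the matrix whose rows are the chosen strategies; the right-singular
# vectors beyond the row rank span the orthogonal complement of span{x_i*}
_, _, vh = np.linalg.svd(xs)
X = vh[2:]                      # rows of X are orthogonal to every x_i*

# by construction, every chosen strategy lies in the nullspace of X
assert np.allclose(X @ xs.T, 0.0)
print(X.shape)                  # (3, 5): n - k_1 = 5 - 2 rows
```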

Analogously, we can construct a matrix \(\mathbf {Y}\) whose null-space is spanned by \(\left\{ \mathbf {y}_1^*,\ldots ,\mathbf {y}_{k_2}^*\right\} \), thus achieving \((\mathbf {y}_i^*)^T\cdot \mathbf {Y}^T=0\) for all \(i=1,2,\ldots ,k_2\).

Finally, pick any random matrix \(\mathbf {Z}\) of conformable shape so that the matrix product \(\mathbf {A}=\mathbf {X}^T\cdot \mathbf {Z}\cdot \mathbf {Y}\in \mathbb {R}^{n_A\times m_A}\) is well-defined (see Footnote 3). By associativity, \(\mathbf {A}\) retains the properties of \(\mathbf {X}\) and \(\mathbf {Y}\), so that we still have \((\mathbf {x}_i^*)^T\cdot \mathbf {A}=0\) and \(\mathbf {A}\cdot \mathbf {y}_j^* =0\) for all i, j. Now, take \(\mathbf {A}\) as the \((n_A\times m_A)\)-payoff matrix of the game. It is well known that we can obtain an equilibrium for a maximizing player by solving the linear program

$$\begin{aligned} (P)\quad \min \overbrace{\left( \begin{array}{c} \mathbf{0} \\ 1 \\ \end{array} \right) ^T}^{=:\mathbf {c}^T} \cdot \overbrace{\left( \begin{array}{c} \varvec{\mu } \\ v \\ \end{array} \right) }^{=:\mathbf {x}}\quad \text {s.t.}\quad&\overbrace{\left( \begin{array}{c|c} -\mathbf {A}^T & \mathbf{1}\\ \hline \mathbf{1}^T & 0 \\ \end{array} \right) }^{=:\mathbf {B}}\cdot \left( \begin{array}{c} \varvec{\mu } \\ v \\ \end{array} \right) \begin{array}{c} \ge \\ = \\ \end{array} \overbrace{\left( \begin{array}{c} \mathbf{0} \\ 1\\ \end{array} \right) }^{=:\mathbf {b}}\\&\text {and}~\mu _i\ge 0\text {~for all~}i=1,\ldots ,n_A \end{aligned}$$

in which the conditions given here in matrix notation evaluate to the minimization of the saddle-point value v, upper-bounding the payoff obtained from the matrix \(\mathbf {A}\) when the i-th row is played with probability \(\mu _i\), i.e., \(\varvec{\mu }^T\cdot \mathbf {A}\cdot \mathbf {e}_i\le v\) for all i, where \(\mathbf {e}_i\) is the i-th unit vector. The lower block row of the product \(\mathbf {B}\cdot \mathbf {x}\) is then just the condition that the sum of all \(\mu _i\) should equal 1.
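To make the constraint structure concrete, here is a small NumPy check (an illustration on a toy instance of our own, not the paper's code) that a designed equilibrium \(\varvec{\mu }\) with value \(v=0\) satisfies \(\mathbf {B}\cdot \mathbf {x}\ge \mathbf {b}\) with equality in every row:

```python
import numpy as np

rng = np.random.default_rng(1)

# designed equilibrium strategy mu in R^4 (a probability vector)
mu = rng.random(4)
mu /= mu.sum()

# build a payoff matrix A with mu in its left nullspace: mu^T A = 0
P = np.eye(4) - np.outer(mu, mu) / (mu @ mu)   # projector annihilating mu
A = P @ rng.standard_normal((4, 3))            # hence mu^T A = 0

# assemble B and b of the primal LP (P); x = (mu, v) with v = 0
B = np.block([[-A.T, np.ones((3, 1))],
              [np.ones((1, 4)), np.zeros((1, 1))]])
b = np.concatenate([np.zeros(3), [1.0]])
x = np.concatenate([mu, [0.0]])

# the designed equilibrium is feasible, with equality in every row
assert np.allclose(B @ x, b)
```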

Now, look at the dual program for the other player being

$$\begin{aligned} (D)\quad \max \mathbf {b}^T\cdot \overbrace{\left( \begin{array}{c} \varvec{\nu } \\ v \\ \end{array} \right) }^{=:\mathbf {y}},\,\text {s.t.}\quad&\overbrace{\left( \begin{array}{c|c} -\mathbf {A} & \mathbf{1} \\ \hline \mathbf{1}^T & 0 \\ \end{array} \right) }^{=\mathbf {B}^T}\cdot \left( \begin{array}{c} \varvec{\nu } \\ v \\ \end{array} \right) \begin{array}{c} \le \\ = \\ \end{array} \overbrace{\left( \begin{array}{c} \mathbf{0} \\ 1 \\ \end{array} \right) }^{= \mathbf {c}}\\&\text {and}~\nu _i\ge 0\text { for all }i=1,\ldots ,m_A. \end{aligned}$$

The point of our construction is that in the matrix products \(\mathbf {B}\cdot \mathbf {x}\) and \(\mathbf {B}^T\cdot \mathbf {y}\), the following happens:

  • in (P), we get the expression \(-\mathbf {A}^T\cdot \varvec{\mu }=0\) for every \(\varvec{\mu }\in \left\{ \mathbf {x}_1^*,\ldots ,\mathbf {x}_{k_1}^*\right\} \) or linear combinations thereof. Thus, the constraint \(\ge 0\) on these rows is satisfied with equality if \(v=0\).

  • Likewise, evaluating the constraints in (D) creates the inner term \(-\mathbf {A}\cdot \varvec{\nu }=0\) for all \(\varvec{\nu }\in \left\{ \mathbf {y}_1^*,\ldots ,\mathbf{y}_{k_2}^*\right\} \) (and any linear combinations thereof). Thus, the dual constraint \(\le 0\) is also satisfied with equality if \(v=0\).

Now, an equilibrium \((\varvec{\mu },\varvec{\nu })\) in the zero-sum game \(\mathbf {A}\) is characterized by \(\varvec{\mu }\) being an optimum in (P) and \(\varvec{\nu }\) being an optimum in (D), and by strong duality, this happens if both are feasible for the respective constraints, and the respective optima are equal. Putting these conditions together, we find \((\varvec{\mu },\varvec{\nu })\) to be an equilibrium if and only if the following conditions are all satisfied:

  1. \(\mathbf {B}\cdot \mathbf {x}\ge \mathbf {b}\), i.e., feasibility for (P): this holds by construction, even with equality in all rows.

  2. \(\mathbf {B}^T\cdot \mathbf {y} \le \mathbf {c}\), i.e., feasibility for (D): this also holds by construction with equality.

  3. \(\mathbf {c}^T\cdot \mathbf {x}\le \mathbf {b}^T\cdot \mathbf {y}\), which, by weak duality, can only hold if the two values are equal. But we constructed all equilibria so that \(\varvec{\mu }^T\cdot \mathbf {A}=\mathbf {0}^T\) and \(\mathbf {A}\cdot \varvec{\nu }=\mathbf {0}\), hence \(v=\varvec{\mu }^T\cdot \mathbf {A}\cdot \varvec{\nu }=0\), so equality holds here too.

Thus, all pairs \((\mathbf {x}_i^*,\mathbf {y}_j^*)\) are equilibria of our matrix game \(\mathbf {A}\).

Remark 4

Switching the players’ directions between minimization and maximization, as well as changing the saddle-point value from \(v=0\) into some chosen \(v'\ne 0\), is easy by a proper affine transformation \(\mathbf {A}\mapsto \pm \mathbf {A}+v'\).

It is easy to see that the so-constructed game has the designed equilibria, but also many others, since not only convex combinations, but any linear combination of the chosen vectors lies in the nullspace. Let us take a short break here to numerically illustrate the intermediate construction.
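The affine shift of Remark 4 is easy to verify numerically: adding a constant \(v'\) to every entry of \(\mathbf {A}\) shifts the expected payoff of every strategy pair by exactly \(v'\), because mixed strategies sum to one. A quick check (our illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
mu = rng.random(4); mu /= mu.sum()    # any mixed strategy of player 1
nu = rng.random(3); nu /= nu.sum()    # any mixed strategy of player 2

v_prime = 5.0
A_shift = A + v_prime                  # add v' to every entry of A

# the expected payoff shifts by exactly v' for *every* strategy pair, so the
# set of equilibria is unchanged and the game value moves from v to v + v'
assert np.isclose(mu @ A_shift @ nu, mu @ A @ nu + v_prime)
```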

B.1 Example

We implemented the algorithm in GNU Octave [7]; the sources are available from [27]. For the example, let us fix the strategy spaces of players 1 and 2 to have five, resp. six, actions. Furthermore, let us pick two equilibria for player 1 and three for player 2 (all mixed for both players) at random, sampling uniformly random values from [0, 1] and normalizing each vector to unit sum. For a random instance, these equilibria were

$$\begin{aligned} \begin{array}{rrrrrr} \text {strategy} & 1 & 2 & 3 & 4 & 5 \\ \mathbf {x}_1^* =&(0.236624, & 0.259513, & 0.0116831, & 0.330247, & 0.161933) \\ \mathbf {x}_2^* =&(0.26241, & 0.117688, & 0.21289, & 0.284324, & 0.122688) \\ \end{array} \end{aligned}$$
(5)

and

$$\begin{aligned} \begin{array}{rrrrrrr} \text {strategy} & 1 & 2 & 3 & 4 & 5 & 6 \\ \mathbf {y}_1^* =&(0.110901, & 0.13516, & 0.220331, & 0.126352, & 0.238114, & 0.169142 ) \\ \mathbf {y}_2^* =&(0.1328, & 0.45488, & 0.0802542, & 0.040265, & 0.236992, & 0.0548095 ) \\ \mathbf {y}_3^* =&(0.148226, & 0.0651162, & 0.0286501, & 0.31977, & 0.375297, & 0.0629404 ) \\ \end{array} \end{aligned}$$
(6)

With these values, the matrices \(\mathbf {X}\) and \(\mathbf {Y}\) from the previous section are easily found using the null function in Octave (which internally computes a singular value decomposition), giving

$$\begin{aligned} \mathbf {X}=\left( \begin{array}{rrrrr} -0.490976 & 0.689481 & 0.497453 & -0.183545 & -0.0490909 \\ -0.617055 & -0.273987 & 0.0243345 & 0.724245 & -0.138025 \\ -0.263877 & -0.192006 & 0.0534469 & -0.120881 & 0.935967 \\ \end{array}\right) \end{aligned}$$

and

$$\begin{aligned} \mathbf {Y}=\left( \begin{array}{rrrrrr} -0.554165 & 0.271458 & 0.237081 & 0.640731 & -0.372757 & -0.116278 \\ -0.72375 & -0.0592711 & 0.0703592 & -0.314963 & 0.585821 & -0.159171 \\ -0.278293 & 0.0898957 & -0.51704 & 0.0385987 & -0.0337394 & 0.802816 \\ \end{array}\right) \end{aligned}$$

and with a randomly chosen matrix \(\mathbf {Z}\), we find the payoff structure

$$\begin{aligned} \mathbf {A} = \left( \begin{array}{cccccc} 0.955986 & -0.272557 & 0.316327 & -0.405844 & 0.102397 & -0.662056 \\ 0.0454297 & -0.0580642 & -0.178636 & -0.187195 & 0.130912 & 0.204854 \\ -0.298982 & 0.05127 & -0.17908 & 0.0170827 & 0.0593234 & 0.292065 \\ -0.436331 & 0.223209 & -0.113101 & 0.453724 & -0.301461 & 0.340507 \\ -0.558309 & 0.0324137 & 0.0676309 & -0.0335246 & 0.251095 & -0.076376 \\ \end{array}\right) \end{aligned}$$

which is exactly the matrix (4) used in Sect. 4.2.

Solving the linear programs (P) and (D), we find the following mixed equilibrium for the game \(\mathbf {A}\):

$$\begin{aligned} \begin{array}{rllllll} \mathbf {x}^*= & (0.28381, & 0, & 0.37985, & 0.24622, & 0.09012), & \\ \text {and~}\mathbf {y}^*= & (0.06237, & 0, & 0.48687, & 0, & 0.11092, & 0.33984)\\ \end{array} \end{aligned}$$

which is not among the equilibria listed in (5) or (6). However, it is a simple matter to express the vectors \(\mathbf {x}^*,\mathbf {y}^*\) within \(\text {span}\left\{ \mathbf {x}_1^*,\mathbf {x}_2^*\right\} \) and \(\text {span}\left\{ \mathbf {y}_1^*,\mathbf {y}_2^*,\mathbf {y}_3^*\right\} \) via

$$\begin{aligned} \mathbf {x}^*&= -0.8298 \cdot \mathbf {x}_1^* + 1.8298 \cdot \mathbf {x}_2^*\quad \text {and}\\ \mathbf {y}^*&= 2.55931\cdot \mathbf {y}_1^* -0.62699\cdot \mathbf {y}_2^* - 0.93232\cdot \mathbf {y}_3^*. \end{aligned}$$
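The coefficients of such a decomposition can be recovered by ordinary least squares; using the numerical values from (5) and the equilibrium \(\mathbf {x}^*\) above (our illustration, assuming NumPy):

```python
import numpy as np

# the designed strategies x1*, x2* from (5), and the LP solution x*
x1 = np.array([0.236624, 0.259513, 0.0116831, 0.330247, 0.161933])
x2 = np.array([0.26241, 0.117688, 0.21289, 0.284324, 0.122688])
x_star = np.array([0.28381, 0.0, 0.37985, 0.24622, 0.09012])

# solve [x1 | x2] c = x* in the least-squares sense
basis = np.column_stack([x1, x2])
c, *_ = np.linalg.lstsq(basis, x_star, rcond=None)

# x* is (numerically) an affine combination of x1* and x2*
assert np.allclose(c, [-0.8298, 1.8298], atol=2e-3)
assert np.isclose(c.sum(), 1.0, atol=2e-3)
assert np.allclose(basis @ c, x_star, atol=1e-3)
```

Since both coefficients sum to one, \(\mathbf {x}^*\) lies in the affine hull of the two designed strategies, matching the decomposition displayed above.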

B.2 Restricting the Equilibria to the Desired Set

Now, to complete the proof of Lemma 2, it remains to modify the game so that no solution outside the convex hull of our chosen equilibrium points is possible.

The simplest method to exclude equilibria outside the desired set is adding a penalty term to the goal function that vanishes on the desired set of optima. An obvious choice is letting \(\delta \) be a point-to-set distance, such as

$$\begin{aligned} \delta (\mathbf {y},M) := \inf \left\{ \left\| \mathbf {x}-\mathbf {y}\right\| : \mathbf {x}\in M\right\} , \end{aligned}$$

for a point \(\mathbf {y}\in \mathbb {R}^n\) and a set \(M\subset \mathbb {R}^n\), using any norm \(\left\| \cdot \right\| \) on \(\mathbb {R}^n\). Put \(E_1\) as the set of desired equilibria of player 1, and let \(E_2\) be the set of desired equilibria for player 2. Then, any action outside \(E_1\) shall decrease the revenue for player 1, while any deviation to the exterior of \(E_2\) shall increase the payoff for player 1, so that there is an incentive for player 1 to stay within the desired set of equilibria, and another incentive for player 2 to do the same (zero-sum game). Thus, we change the expected payoff function from \(u(\mathbf {x},\mathbf {y})=\mathbf {x}^T\cdot \mathbf {A}\cdot \mathbf {y}\) into

$$\begin{aligned} u(\mathbf {x},\mathbf {y}) = \mathbf {x}^T\cdot \mathbf {A}\cdot \mathbf {y} + \delta (\mathbf {y},\Delta (E_2)) - \delta (\mathbf {x},\Delta (E_1)). \end{aligned}$$
(7)

This function is no longer bilinear, and hence the optimization problems (P) and (D) no longer apply as such. But strong duality still holds, since Slater’s condition [3] is satisfied: note that the change in the payoff functional manifests itself in the primal problem (P) as the inequality \(u(\mathbf {x},\mathbf {e}_i)\le v\), where \(\mathbf {e}_i\) is the i-th unit vector running over all strategies of the second player (the converse inequality arises likewise in the dual problem (D)). This is due to the fact that we still do a min-max optimization \(\max _{\mathbf {x}}\min _{\mathbf {y}}u(\mathbf {x},\mathbf {y})\), where the inner optimization is easy because we have only a finite number of pure choices (or convex combinations of them), making \(\min _{\mathbf {y}}u(\mathbf {x},\mathbf {y})=\min _{i=1,\ldots ,m}u(\mathbf {x},\mathbf {e}_i)\).
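The reduction of the inner minimization to pure strategies can be checked numerically for the bilinear part (where the penalty vanishes); the following is our illustration, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
x = rng.random(4); x /= x.sum()        # any mixed strategy of player 1

# payoff against each pure strategy e_i of player 2: entries of x^T A
pure_payoffs = x @ A

# any mixed reply y is a convex combination of unit vectors, so x^T A y
# can never fall below the smallest pure-strategy payoff
for _ in range(1000):
    y = rng.random(6); y /= y.sum()
    assert x @ A @ y >= pure_payoffs.min() - 1e-12

# hence min over mixed y of x^T A y equals min_i x^T A e_i
```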

More formally, let \(B^o := \left\{ (\mathbf {x},\mathbf {y}): \left\| \mathbf {x}\right\| _1< 1,\left\| \mathbf {y}\right\| _1< 1\right\} \) be the interior of the unit balls defining the feasible set of probability distributions, i.e., mixed strategies for both players. Moreover, let E be the convex hull of all equilibria that are admissible by design. For Slater’s condition, we look for an inner point that satisfies the constraints with strict inequality. Note that the affine hull \(\text {aff}(E)\) is unbounded, and therefore extends beyond the bounded convex set E. Moreover, by construction of the penalized utility (7), we have nonzero contributions of the distance terms outside E. Now, distinguish two cases:

Case 1: If \(B^o\setminus E=\emptyset \), then all probability distributions are admissible equilibria by design, and there is nothing to restrict (the penalty terms never become active, and always add zero to the overall utility).

Case 2: Otherwise, the affine hull \(\text {aff}(E)\) must contain a point \((\mathbf {x}_0,\mathbf {y}_0)\in (B^o\cap \text {aff}(E))\setminus E\) outside the admissible set E but in the interior of the unit ball. Look at the terms that sum up to the penalized utility:

$$\begin{aligned} \mathbf {x}_0^T\cdot \mathbf {A}\cdot \mathbf {y}_0&=0,\quad \text {because }\mathbf {x}_0\text { and }\mathbf {y}_0\text { still lie in the respective nullspaces of }\mathbf {A}; \\ \delta (\mathbf {x}_0,\Delta (E_1))&> 0, \quad \text {because we are outside }\Delta (E_1)\subset E;\\ \delta (\mathbf {y}_0,\Delta (E_2))&> 0, \quad \text {because we are outside }\Delta (E_2)\subset E. \end{aligned}$$

So, whenever \(\delta (\mathbf {x}_0,\Delta (E_1))\ne \delta (\mathbf {y}_0,\Delta (E_2))\), we are done, since we have a nonzero utility for the respective player and hence a Slater point (for one of the players, i.e., either the primal or the dual problem). Otherwise, if \(\delta (\mathbf {x}_0,\Delta (E_1))=\delta (\mathbf {y}_0,\Delta (E_2))\), we can move \(\mathbf {x}_0\) slightly farther away from E, since \(B^o\) is an open set. This move from \(\mathbf {x}_0\) to \(\mathbf {x}_0'\) with \(\delta (\mathbf{x}_0,\Delta (E_1))\ne \delta (\mathbf {x}_0',\Delta (E_1))\) makes the overall penalty term negative, and we have \((\mathbf {x}_0',\mathbf {y}_0)\) as the sought Slater point. The existence of a Slater point certifies that strong duality holds for the optimization problems. The design of the respective utilities (having opposite signs, since we are playing a zero-sum regime) then assures that all feasible solutions must lie inside the set \(\Delta (E_1)\times \Delta (E_2)\). By strong duality, no solution outside this region is possible, and Lemma 2 is proven.
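The paper does not prescribe an algorithm for evaluating the penalty distance \(\delta \). As one illustrative possibility (our sketch, assuming NumPy; a QP solver would be the more robust production choice), the distance from a point to a convex hull given by finitely many vertices can be approximated with the Frank-Wolfe method:

```python
import numpy as np

def dist_to_hull(V, y, iters=5000):
    """Approximate distance from point y to conv{rows of V} via Frank-Wolfe.

    Illustrative only: minimizes 0.5*||V^T lam - y||^2 over the simplex.
    """
    lam = np.full(len(V), 1.0 / len(V))        # start at the barycenter
    for t in range(iters):
        g = V @ (V.T @ lam - y)                # gradient w.r.t. lam
        s = np.zeros_like(lam)
        s[np.argmin(g)] = 1.0                  # best vertex of the simplex
        lam += 2.0 / (t + 2) * (s - lam)       # standard step size, stays feasible
    return np.linalg.norm(V.T @ lam - y)

# hull of the segment from (0,0) to (1,0): point (0.5, 1) is at distance 1,
# and points on the segment itself have (approximately) distance 0
V = np.array([[0.0, 0.0], [1.0, 0.0]])
print(dist_to_hull(V, np.array([0.5, 1.0])))
print(dist_to_hull(V, np.array([0.3, 0.0])))
```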


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Rass, S., Wiegele, A., König, S. (2020). Security Games over Lexicographic Orders. In: Zhu, Q., Baras, J.S., Poovendran, R., Chen, J. (eds) Decision and Game Theory for Security. GameSec 2020. Lecture Notes in Computer Science(), vol 12513. Springer, Cham. https://doi.org/10.1007/978-3-030-64793-3_23

  • DOI: https://doi.org/10.1007/978-3-030-64793-3_23
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64792-6

  • Online ISBN: 978-3-030-64793-3

  • eBook Packages: Computer Science (R0)
