Governing Black-Box Agents in Competitive Multi-Agent Systems

  • Conference paper
  • First Online:
Multi-Agent Systems (EUMAS 2021)

Abstract

Competitive Multi-Agent Systems (MAS) are inherently hard to control due to agent autonomy and strategic behavior, which is particularly problematic when there are system-level objectives to be achieved or specific environmental states to be avoided.

Existing solutions for this task mostly assume specific knowledge about agent preferences, utilities and strategies, neglecting the fact that actions are not always directly linked to genuine agent preferences: they can also reflect anticipated competitor behavior, constitute a concession to a superior adversary, or simply be intended to mislead other agents. This assumption both reduces applicability to real-world systems and leaves room for manipulation.

We therefore propose a new governance approach for competitive MAS which relies exclusively on publicly observable actions and transitions, and uses the acquired knowledge to purposefully restrict action spaces, thereby achieving the system’s objectives while preserving a high level of autonomy for the agents.
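The governance idea sketched in the abstract — observe only public transitions, then restrict action spaces to steer the system away from undesired states — can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's algorithm: the `Governor` class, its method names, and the blocking rule (forbid an action whenever it has been observed to lead into a forbidden state) are all hypothetical simplifications.

```python
from collections import defaultdict

class Governor:
    """Illustrative governor: learns from publicly observable
    (state, action, next_state) transitions only, with no access to
    agent preferences, utilities, or strategies."""

    def __init__(self, forbidden_states):
        self.forbidden = set(forbidden_states)
        # Observed successor states for each (state, action) pair.
        self.outcomes = defaultdict(set)

    def observe(self, state, action, next_state):
        """Record a publicly observed transition."""
        self.outcomes[(state, action)].add(next_state)

    def allowed_actions(self, state, actions):
        """Restrict the action space: block any action that has been
        observed to reach a forbidden state; all others stay available,
        preserving as much agent autonomy as possible."""
        return [a for a in actions
                if not (self.outcomes[(state, a)] & self.forbidden)]
```

For example, after observing that action `a1` in state `s0` once led to the forbidden state `crash`, the governor would offer agents only the remaining actions in `s0`. A real implementation would of course need to handle stochastic transitions and non-stationary agent behavior, which this sketch ignores.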


Author information

Corresponding author

Correspondence to Michael Pernpeintner.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Pernpeintner, M., Bartelt, C., Stuckenschmidt, H. (2021). Governing Black-Box Agents in Competitive Multi-Agent Systems. In: Rosenfeld, A., Talmon, N. (eds) Multi-Agent Systems. EUMAS 2021. Lecture Notes in Computer Science, vol 12802. Springer, Cham. https://doi.org/10.1007/978-3-030-82254-5_2

  • DOI: https://doi.org/10.1007/978-3-030-82254-5_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82253-8

  • Online ISBN: 978-3-030-82254-5

  • eBook Packages: Computer Science (R0)
