Abstract
The multiagent learning literature has studied iterated two-player games to develop mechanisms that allow agents to learn to converge on Nash equilibrium strategy profiles. An equilibrium configuration implies that neither player has an incentive to change its strategy if the other does not. In general-sum games, however, both players can often obtain higher payoffs if one chooses not to respond optimally to the other. By developing mutual trust, agents can avoid the iterated best responses that lead to a Nash equilibrium with lower payoffs. In this paper we consider 1-level agents (modelers) that select actions by maximizing expected utility with respect to a probability distribution over the actions of the opponent(s). We show that in certain situations such stochastically greedy agents can, by developing mutually trusting behavior, perform better than agents that explicitly attempt to converge to a Nash equilibrium. We also experiment with an interesting action revelation strategy that can give the revealer a better payoff on convergence than a non-revealing approach: by revealing its action, the revealer enables the opponent to agree to a more trusted equilibrium.
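The 1-level modeler described in the abstract can be sketched as a fictitious-play-style agent: it maintains an empirical distribution over the opponent's observed actions and picks the action with the highest expected payoff against that distribution. This is a minimal illustration under stated assumptions, not the paper's actual implementation; the class name, the Laplace-smoothed counts, and the Prisoner's Dilemma payoff values for the row player are all choices made here for the example.

```python
class ModelerAgent:
    """Hypothetical 1-level agent: best-responds to the empirical
    distribution of the opponent's past actions (a sketch, not the
    paper's algorithm)."""

    def __init__(self, payoff_matrix):
        # payoff_matrix[my_action][opp_action] -> my payoff
        self.payoff = payoff_matrix
        n_opp_actions = len(payoff_matrix[0])
        # Start with a count of 1 per opponent action (Laplace prior),
        # so the initial model is uniform rather than undefined.
        self.opp_counts = [1] * n_opp_actions

    def opp_distribution(self):
        # Empirical probability of each opponent action so far.
        total = sum(self.opp_counts)
        return [c / total for c in self.opp_counts]

    def choose_action(self):
        # Expected utility of each of my actions against the model,
        # then greedy selection (the "stochastically greedy" baseline).
        dist = self.opp_distribution()
        expected = [sum(p * u for p, u in zip(dist, row))
                    for row in self.payoff]
        return max(range(len(expected)), key=expected.__getitem__)

    def observe(self, opp_action):
        # Update the opponent model after each iteration of the game.
        self.opp_counts[opp_action] += 1


# Example: Prisoner's Dilemma payoffs for the row player
# (action 0 = cooperate, action 1 = defect).
PD_ROW = [[3, 0],
          [5, 1]]

agent = ModelerAgent(PD_ROW)
action = agent.choose_action()
```

Against this one-shot payoff matrix the modeler defects, since defection dominates; the paper's point is that mechanisms such as trust-building or action revelation are needed on top of this baseline to reach the mutually preferred cooperative outcome in the iterated game.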
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Mukherjee, R., Banerjee, B., Sen, S. (2001). Learning Mutual Trust. In: Falcone, R., Singh, M., Tan, YH. (eds) Trust in Cyber-societies. Lecture Notes in Computer Science(), vol 2246. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45547-7_9
Print ISBN: 978-3-540-43069-8
Online ISBN: 978-3-540-45547-9