Relational Markov Games

  • Conference paper
Logics in Artificial Intelligence (JELIA 2004)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 3229))

Abstract

Towards a compact and elaboration-tolerant first-order representation of Markov games, we introduce relational Markov games, which combine standard Markov games with first-order action descriptions in a stochastic variant of the situation calculus. We focus on the zero-sum two-agent case, in which the agents' goals are diametrically opposed. We also present a symbolic value iteration algorithm for computing Nash policy pairs in this framework.
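The paper's contribution is a symbolic, first-order version of value iteration; the ground (propositional) loop it lifts is standard value iteration for zero-sum Markov games, where the one-step backup at each state is a matrix game solved for its minimax value. The sketch below illustrates that ground loop only, under assumed names (`matrix_game_value`, `value_iteration`) and an explicit enumeration of states and joint actions; it is not the paper's symbolic algorithm.

```python
# Ground value iteration for a zero-sum two-agent Markov game.
# At each state the Bellman backup is a matrix game: its minimax value
# is found by linear programming over the row player's mixed strategy.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M (row player maximizes)."""
    m, n = M.shape
    # Variables: x_1..x_m (row player's mixed strategy) and v (game value).
    # Maximize v, i.e. minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every opponent column j: v - sum_i x_i * M[i, j] <= 0.
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Mixed strategy sums to 1.
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def value_iteration(R, P, gamma=0.9, iters=300):
    """R[s]: reward matrix (rows: agent's actions, cols: opponent's actions);
    P[s][a][b]: distribution over successor states. Returns state values V."""
    S = len(R)
    V = np.zeros(S)
    for _ in range(iters):
        V = np.array([
            matrix_game_value(R[s] + gamma * np.einsum('abt,t->ab', P[s], V))
            for s in range(S)
        ])
    return V
```

For a single self-looping state with reward matrix [[2, -1], [-1, 1]] (matrix-game value 1/5, no pure saddle point), the fixed point is V = 0.2 / (1 - gamma); the resulting policy pair, read off the LP solutions, is a Nash equilibrium of the discounted game.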




Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Finzi, A., Lukasiewicz, T. (2004). Relational Markov Games. In: Alferes, J.J., Leite, J. (eds) Logics in Artificial Intelligence. JELIA 2004. Lecture Notes in Computer Science(), vol 3229. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30227-8_28

  • DOI: https://doi.org/10.1007/978-3-540-30227-8_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23242-1

  • Online ISBN: 978-3-540-30227-8

  • eBook Packages: Springer Book Archive