What to Communicate? Execution-Time Decision in Multi-agent POMDPs

  • Conference paper
Distributed Autonomous Robotic Systems 7

Summary

In recent years, multi-agent Partially Observable Markov Decision Processes (POMDPs) have emerged as a popular decision-theoretic framework for modeling and generating policies for the control of multi-agent teams. Teams controlled by multi-agent POMDPs can use communication to share observations and coordinate; policies are therefore needed that enable these teams to reason about communication. Previous work on generating communication policies for multi-agent POMDPs has focused on the question of when to communicate. In this paper, we address the question of what to communicate. We describe two paradigms for representing limitations on communication and present an algorithm that enables multi-agent teams to make execution-time decisions on how to effectively utilize available communication resources.
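To make the "what to communicate" question concrete, the following is a minimal sketch of one way an agent could pick, at execution time, which of its local observations to transmit under a message budget: greedily choose the observation whose incorporation into the shared joint belief most improves the team's expected value. This is an illustrative value-of-information heuristic under toy assumptions (discrete states, observation likelihoods, and Q-values as plain dicts), not the paper's actual algorithm or domain.

```python
# Illustrative sketch of an execution-time "what to communicate" decision.
# All models here (states, likelihoods, Q-values) are toy placeholders.

def bayes_update(belief, likelihood):
    """Update a discrete belief dict with an observation-likelihood dict."""
    post = {s: belief[s] * likelihood.get(s, 0.0) for s in belief}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()} if z > 0 else dict(belief)

def expected_value(belief, q_values):
    """Value of acting greedily on a belief: max_a sum_s b(s) * Q(s, a)."""
    actions = {a for q in q_values.values() for a in q}
    return max(sum(belief[s] * q_values[s].get(a, 0.0) for s in belief)
               for a in actions)

def choose_observations_to_send(joint_belief, local_observations, q_values,
                                budget=1):
    """Greedily pick up to `budget` local observations whose communication
    most improves the team's expected value under the joint belief."""
    chosen, belief = [], dict(joint_belief)
    remaining = list(local_observations.items())
    for _ in range(budget):
        if not remaining:
            break
        base = expected_value(belief, q_values)
        name, lik = max(
            remaining,
            key=lambda kv: expected_value(bayes_update(belief, kv[1]),
                                          q_values) - base)
        chosen.append(name)
        belief = bayes_update(belief, lik)
        remaining.remove((name, lik))
    return chosen

# Tiny two-state example: a sharp observation beats an uninformative one.
belief = {"s0": 0.5, "s1": 0.5}
obs = {"obs_sharp": {"s0": 0.9, "s1": 0.1},   # strongly indicates s0
       "obs_noisy": {"s0": 0.5, "s1": 0.5}}   # carries no information
q = {"s0": {"a0": 1.0, "a1": 0.0},
     "s1": {"a0": 0.0, "a1": 1.0}}
print(choose_observations_to_send(belief, obs, q, budget=1))  # ['obs_sharp']
```

Under these assumptions, communicating `obs_sharp` raises the team's expected value from 0.5 to 0.9, while `obs_noisy` leaves it unchanged, so the greedy rule spends the single-message budget on the informative observation.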





Copyright information

© 2006 Springer-Verlag Tokyo

About this paper

Cite this paper

Roth, M., Simmons, R., Veloso, M. (2006). What to Communicate? Execution-Time Decision in Multi-agent POMDPs. In: Gini, M., Voyles, R. (eds) Distributed Autonomous Robotic Systems 7. Springer, Tokyo. https://doi.org/10.1007/4-431-35881-1_18

  • DOI: https://doi.org/10.1007/4-431-35881-1_18

  • Publisher Name: Springer, Tokyo

  • Print ISBN: 978-4-431-35878-7

  • Online ISBN: 978-4-431-35881-7

  • eBook Packages: Engineering (R0)
