
Geometry of Policy Improvement

  • Conference paper
  • In: Geometric Science of Information (GSI 2017)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 10589)


Abstract

We investigate the geometry of optimal memoryless, time-independent decision making in relation to the amount of information that the acting agent has about the state of the system. We show that the expected long-term reward, discounted or per time step, is maximized by policies that randomize among at most k actions whenever at most k world states are consistent with the agent's observation. Moreover, we show that the expected reward per time step can be studied in terms of the expected discounted reward. Our main tool is a geometric version of the policy improvement lemma, which identifies a polyhedral cone of policy changes in which the state value function increases for all states.
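
To make the policy improvement lemma concrete, below is a minimal sketch (not code from the paper) for the fully observable special case. The MDP data, the discount factor gamma, and the step size eps are made-up assumptions; the sketch evaluates a stochastic policy, forms a change direction from the advantages, checks the linear inequalities that cut out the improving cone, and confirms that a small step in that direction does not decrease the value of any state.

    import numpy as np

    # Toy, fully observable MDP; transitions P, rewards R, and gamma are
    # made-up assumptions, not data from the paper.
    n_states, n_actions, gamma = 3, 2, 0.9
    rng = np.random.default_rng(0)
    P = rng.random((n_actions, n_states, n_states))   # P[a, s, s']
    P /= P.sum(axis=2, keepdims=True)
    R = rng.random((n_states, n_actions))             # R[s, a]

    def evaluate(pi):
        """State value function: solve (I - gamma * P_pi) V = r_pi."""
        P_pi = np.einsum('sa,ast->st', pi, P)
        r_pi = np.einsum('sa,sa->s', pi, R)
        return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

    def advantages(pi):
        """A[s, a] = Q^pi(s, a) - V^pi(s), together with V^pi."""
        V = evaluate(pi)
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        return Q - V[:, None], V

    pi = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform policy
    A, V_old = advantages(pi)

    # A change direction u (rows summing to zero, so pi + eps*u is again a
    # policy for small eps) does not decrease any state's value whenever
    # sum_a u[s, a] * A[s, a] >= 0 for all s.  These linear inequalities
    # carve out the polyhedral cone of improving directions at pi.
    u = np.full_like(pi, -1.0 / n_actions)
    u[np.arange(n_states), A.argmax(axis=1)] += 1.0   # mass toward greedy actions

    per_state = np.einsum('sa,sa->s', u, A)
    assert (per_state >= 0).all()          # u lies in the improving cone

    eps = 0.1                              # small enough to stay in the simplex
    V_new = evaluate(pi + eps * u)
    print(V_new - V_old)                   # componentwise nonnegative

In the partially observable setting studied in the paper, the admissible directions are further constrained so that the policy depends on the state only through the observation, but the per-state cone condition is the same idea.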



Acknowledgment

We thank Nihat Ay for support and insightful comments.

Author information

Corresponding author

Correspondence to Guido Montúfar.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Montúfar, G., Rauh, J. (2017). Geometry of Policy Improvement. In: Nielsen, F., Barbaresco, F. (eds.) Geometric Science of Information. GSI 2017. Lecture Notes in Computer Science, vol. 10589. Springer, Cham. https://doi.org/10.1007/978-3-319-68445-1_33


  • DOI: https://doi.org/10.1007/978-3-319-68445-1_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68444-4

  • Online ISBN: 978-3-319-68445-1


Publish with us

Policies and ethics