Abstract
Hierarchical reinforcement learning methods have so far been unable to simultaneously abstract and reuse subtasks under discounted value functions. The contribution of this paper is to introduce two completion functions that jointly decompose the value function hierarchically to solve this problem. The significance of this result is that the benefits of hierarchical reinforcement learning can be extended to discounted value functions and to continuing (infinite-horizon) reinforcement learning problems. The paper demonstrates the method with an algorithm that discovers subtasks automatically, and gives an example in which the optimal policy requires a subtask never to terminate.
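The decomposition itself is not reproduced on this page, so the following is a minimal sketch only, written in MAXQ-style notation (Dietterich, 2000). The symbols Q, V, C, D, the exit time N, and the discount factor gamma are illustrative assumptions here, not the paper's own definitions.

% A minimal sketch, assuming MAXQ-style notation; an illustration,
% not the paper's exact formulation.
% Undiscounted MAXQ splits the value of invoking child subtask a from
% parent task i into the child's own value plus a completion term:
\[
  Q(i, s, a) = V(a, s) + C(i, s, a)
\]
% With a discount factor \gamma < 1, the completion term must weight
% everything after the child exits by \gamma^{N}, where N is the random
% number of primitive steps the child runs; this entangles the child's
% duration with the parent's context and blocks reuse:
\[
  C(i, s, a) = \sum_{s',\, N} P(s', N \mid s, a)\,\gamma^{N}\,
               Q\bigl(i, s', \pi_i(s')\bigr)
\]
% One hedged reading of the abstract's "two completion functions" is to
% learn the expected exit discount as a second completion function,
% D(i, s, a) = E[\gamma^{N} \mid s, a], alongside the reward completion,
% so that the child's value V(a, s) stays context-free and the subtask
% remains reusable across parent tasks.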
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Hengst, B. (2007). Safe State Abstraction and Reusable Continuing Subtasks in Hierarchical Reinforcement Learning. In: Orgun, M.A., Thornton, J. (eds) AI 2007: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 4830. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76928-6_8
DOI: https://doi.org/10.1007/978-3-540-76928-6_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-76926-2
Online ISBN: 978-3-540-76928-6
eBook Packages: Computer Science (R0)