MGHRL: Meta Goal-Generation for Hierarchical Reinforcement Learning

  • Conference paper in Distributed Artificial Intelligence (DAI 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12547)

Abstract

Most meta reinforcement learning (meta-RL) methods learn to adapt to new tasks by directly optimizing the parameters of policies over the primitive action space. Such algorithms work well on tasks that differ only slightly. However, when the task distribution becomes wider, directly learning such a meta-policy is quite inefficient. In this paper, we propose a new meta-RL algorithm called Meta Goal-generation for Hierarchical RL (MGHRL). Instead of directly generating policies over the primitive action space for new tasks, MGHRL learns to generate high-level meta-strategies over subgoals from past experience and leaves how to achieve those subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and generalizable meta-learning from past experience and outperforms state-of-the-art meta-RL and hierarchical RL methods in sparse reward settings.
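
The two-level decomposition described above can be pictured with a short sketch. This is only an illustration under assumed interfaces (the class names, the fixed subgoal horizon, and the placeholder policies are not from the paper): a high-level goal generator, the component MGHRL would meta-train across tasks, proposes subgoals, while a low-level goal-conditioned policy trained with ordinary RL tries to reach them.

    import numpy as np

    class HighLevelGoalGenerator:
        # Proposes a subgoal every `horizon` environment steps. In MGHRL this
        # is the part that would be meta-trained across tasks (e.g. with an
        # off-policy meta-RL learner); here it is a random placeholder.
        def __init__(self, goal_dim, horizon=10):
            self.goal_dim = goal_dim
            self.horizon = horizon

        def propose(self, state, task_context=None):
            # A real implementation would condition on a learned task embedding.
            return state[:self.goal_dim] + 0.1 * np.random.randn(self.goal_dim)

    class LowLevelPolicy:
        # Goal-conditioned controller trained as an independent RL subtask
        # (e.g. with SAC plus hindsight experience replay); it is not meta-trained.
        def act(self, state, subgoal):
            # Placeholder action: move toward the subgoal.
            return np.clip(subgoal - state[:len(subgoal)], -1.0, 1.0)

    def rollout(env_step, init_state, high, low, steps=100):
        # Interleave high-level subgoal proposals with low-level actions.
        state = init_state
        subgoal = None
        for t in range(steps):
            if t % high.horizon == 0:
                subgoal = high.propose(state)
            state = env_step(state, low.act(state, subgoal))
        return state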

Notes

  1. To achieve parallel training for the two levels of our framework, we rewrite past experience transitions as hindsight action transitions and supplement both levels with additional sets of transitions, as was done in HAC (see the sketch following these notes).

  2. We also evaluated PEARL (without HER) with sparse rewards, and it was not able to solve any of the tasks.
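
The hindsight action relabeling mentioned in note 1 follows HAC [14]. Below is a minimal sketch of what such a relabeled high-level transition might look like; the function name, dictionary fields, and distance-based success test are illustrative assumptions rather than the authors' implementation. The idea is that the high-level "action" is replaced by the subgoal state the low-level policy actually reached, so the transition remains consistent even when the proposed subgoal was missed, and the sparse reward is scored against the end goal.

    import numpy as np

    def hindsight_action_transition(state, achieved_state, goal, tol=0.05):
        # Relabel the high-level action: store the subgoal state the
        # low-level policy actually reached instead of the subgoal that
        # was originally proposed (HAC-style hindsight action).
        reached_goal = np.linalg.norm(achieved_state - goal) < tol
        return {
            "state": state,
            "action": achieved_state,                 # hindsight action
            "reward": 0.0 if reached_goal else -1.0,  # sparse reward
            "next_state": achieved_state,
            "goal": goal,
            "done": bool(reached_goal),
        }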

References

  1. Andrychowicz, M., et al.: Hindsight experience replay. In: Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pp. 5048–5058 (2017). http://papers.nips.cc/paper/7090-hindsight-experience-replay

  2. Bacon, P., Harb, J., Precup, D.: The option-critic architecture. In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 1726–1734 (2017). http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14858

  3. Barto, A.G., Mahadevan, S.: Recent advances in hierarchical reinforcement learning. Discrete Event Dyn. Syst. 13(1–2), 41–77 (2003). https://doi.org/10.1023/A:1022140919877

  4. Bengio, Y., Bengio, S., Cloutier, J.: Learning a synaptic learning rule. In: IJCNN-91-Seattle International Joint Conference on Neural Networks II, vol. 2, p. 969 (1991)

  5. Bengio, Y., LeCun, Y. (eds.): 4th International Conference on Learning Representations, ICLR 2016 (2016). https://iclr.cc/archive/www/doku.php%3Fid=iclr2016:accepted-main.html

  6. Dayan, P., Hinton, G.E.: Feudal reinforcement learning. In: Advances in Neural Information Processing Systems 5, [NIPS Conference], pp. 271–278 (1992). http://papers.nips.cc/paper/714-feudal-reinforcement-learning

  7. Dietterich, T.G.: Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res. 13, 227–303 (2000). https://doi.org/10.1613/jair.639

  8. Duan, Y., Schulman, J., Chen, X., Bartlett, P.L., Sutskever, I., Abbeel, P.: RL\(^{2}\): fast reinforcement learning via slow reinforcement learning. CoRR abs/1611.02779 (2016). http://arxiv.org/abs/1611.02779

  9. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, pp. 1126–1135 (2017). http://proceedings.mlr.press/v70/finn17a.html

  10. Frans, K., Ho, J., Chen, X., Abbeel, P., Schulman, J.: Meta learning shared hierarchies. In: 6th International Conference on Learning Representations, ICLR 2018 (2018)

  11. Haarnoja, T., Zhou, A., Abbeel, P., Levine, S.: Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: Proceedings of the 35th International Conference on Machine Learning, ICML 2018, pp. 1856–1865 (2018). http://proceedings.mlr.press/v80/haarnoja18b.html

  12. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. In: 2nd International Conference on Learning Representations, ICLR 2014 (2014). http://arxiv.org/abs/1312.6114

  13. Levine, S., Finn, C., Darrell, T., Abbeel, P.: End-to-end training of deep visuomotor policies. J. Mach. Learn. Res. 17, 39:1–39:40 (2016). http://jmlr.org/papers/v17/15-522.html

  14. Levy, A., Konidaris, G., Platt Jr, R., Saenko, K.: Learning multi-level hierarchies with hindsight. In: 7th International Conference on Learning Representations, ICLR 2019 (2019). https://openreview.net/forum?id=ryzECoAcY7

  15. Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. In: 6th International Conference on Learning Representations, ICLR 2018 (2018). https://openreview.net/forum?id=B1DmUzWAW

  16. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015). https://doi.org/10.1038/nature14236

  17. Nachum, O., Gu, S., Lee, H., Levine, S.: Data-efficient hierarchical reinforcement learning. In: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, pp. 3307–3317 (2018). http://papers.nips.cc/paper/7591-data-efficient-hierarchical-reinforcement-learning

  18. Parr, R., Russell, S.J.: Reinforcement learning with hierarchies of machines. In: Advances in Neural Information Processing Systems 10, [NIPS Conference], pp. 1043–1049 (1997). http://papers.nips.cc/paper/1384-reinforcement-learning-with-hierarchies-of-machines

  19. Plappert, M., et al.: Multi-goal reinforcement learning: challenging robotics environments and request for research. CoRR abs/1802.09464 (2018). http://arxiv.org/abs/1802.09464

  20. Rakelly, K., Zhou, A., Finn, C., Levine, S., Quillen, D.: Efficient off-policy meta-reinforcement learning via probabilistic context variables. In: Proceedings of the 36th International Conference on Machine Learning, ICML 2019, pp. 5331–5340 (2019). http://proceedings.mlr.press/v97/rakelly19a.html

  21. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2017)

  22. Rothfuss, J., Lee, D., Clavera, I., Asfour, T., Abbeel, P.: ProMP: proximal meta-policy search. In: 7th International Conference on Learning Representations, ICLR 2019 (2019). https://openreview.net/forum?id=SkxXCi0qFX

  23. Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., Lillicrap, T.P.: Meta-learning with memory-augmented neural networks. In: Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, pp. 1842–1850 (2016). http://proceedings.mlr.press/v48/santoro16.html

  24. Schmidhuber, J.: Evolutionary principles in self-referential learning (1987)

  25. Stadie, B.C., et al.: Some considerations on learning to explore via meta-reinforcement learning. CoRR abs/1803.01118 (2018). http://arxiv.org/abs/1803.01118

  26. Sutton, R.S., Precup, D., Singh, S.P.: Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artif. Intell. 112(1–2), 181–211 (1999). https://doi.org/10.1016/S0004-3702(99)00052-1

  27. Thrun, S., Pratt, L.Y.: Learning to Learn. Springer, Boston (1998). https://doi.org/10.1007/978-1-4615-5529-2

  28. Todorov, E., Erez, T., Tassa, Y.: MuJoCo: a physics engine for model-based control. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, pp. 5026–5033 (2012). https://doi.org/10.1109/IROS.2012.6386109

  29. Vezhnevets, A.S., et al.: Feudal networks for hierarchical reinforcement learning. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, pp. 3540–3549 (2017). http://proceedings.mlr.press/v70/vezhnevets17a.html

  30. Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., Wierstra, D.: Matching networks for one shot learning. In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pp. 3630–3638 (2016). http://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning

  31. Wang, J.X., et al.: Learning to reinforcement learn. CoRR abs/1611.05763 (2016). http://arxiv.org/abs/1611.05763

  32. Xu, T., Liu, Q., Zhao, L., Peng, J.: Learning to explore via meta-policy gradient. In: Proceedings of the 35th International Conference on Machine Learning, ICML 2018, pp. 5459–5468 (2018). http://proceedings.mlr.press/v80/xu18d.html

Author information

Corresponding author: Haotian Fu.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Fu, H., Tang, H., Hao, J., Liu, W., Chen, C. (2020). MGHRL: Meta Goal-Generation for Hierarchical Reinforcement Learning. In: Taylor, M.E., Yu, Y., Elkind, E., Gao, Y. (eds) Distributed Artificial Intelligence. DAI 2020. Lecture Notes in Computer Science (LNAI), vol 12547. Springer, Cham. https://doi.org/10.1007/978-3-030-64096-5_3

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-64096-5_3

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64095-8

  • Online ISBN: 978-3-030-64096-5

  • eBook Packages: Computer Science, Computer Science (R0)
