Situated Learning of Visual Robot Behaviors

Conference paper
Intelligent Robotics and Applications (ICIRA 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7101)

Abstract

This paper proposes a new robot learning framework for acquiring scenario-specific autonomous behaviors by demonstration. We extract visual features from the demonstrated behavior examples in an indoor environment and transfer them onto an underlying set of scenario-aware robot behaviors. Demonstrations are performed using an omnidirectional camera, and training instances from different indoor scenarios are registered. The features that distinguish the environments are identified and used to classify the scenario currently being traversed. Once the scenario is identified, a behavior model pertaining to that scenario is learned by means of an artificial neural network. The generalization ability of the behavior model is evaluated on both seen and unseen data and compared against that of a monolithic, general-purpose model. The experimental results on the mobile robot indicate that the acquired behavior is robust and generalizes to meaningful actions beyond the specific situations presented during training.
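
The page does not reproduce the implementation, but the two-stage pipeline the abstract describes (classify the current indoor scenario from omnidirectional image features, then apply a behavior model trained with an artificial neural network for that scenario) can be sketched as follows. This is a minimal illustrative sketch only: the feature dimensionality, the scenario labels, the [v, omega] command format, and the use of scikit-learn MLPs are assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-stage pipeline from the abstract:
# (1) classify the traversed scenario from visual features, then
# (2) map features to motion commands with a scenario-specific neural network.
# Feature size, scenario labels, command format, and scikit-learn usage are
# illustrative assumptions; the paper does not specify these details here.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
N_FEATURES = 16                                   # assumed omnidirectional feature size
SCENARIOS = ["corridor", "open_room", "doorway"]  # hypothetical scenario labels

# Synthetic stand-ins for demonstration data: image features, scenario labels,
# and demonstrated motion commands [v, omega] per training instance.
X = rng.normal(size=(300, N_FEATURES))
scenario_labels = rng.integers(len(SCENARIOS), size=300)
commands = rng.normal(size=(300, 2))

# Stage 1: a scenario classifier trained on the demonstrated examples.
scenario_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
scenario_clf.fit(X, scenario_labels)

# Stage 2: one behavior model (ANN regressor) per scenario.
behavior_models = {}
for s in range(len(SCENARIOS)):
    mask = scenario_labels == s
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X[mask], commands[mask])
    behavior_models[s] = model

def act(features):
    """Select the behavior model for the recognized scenario and return [v, omega]."""
    s = scenario_clf.predict(features.reshape(1, -1))[0]
    return behavior_models[s].predict(features.reshape(1, -1))[0]

print(SCENARIOS[scenario_clf.predict(X[:1])[0]], act(X[0]))
```

In this setup, the monolithic baseline mentioned in the abstract would simply be a single regressor fitted on all demonstrations regardless of scenario, against which the per-scenario models are compared.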

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Narayanan, K.K., Posada, L.F., Hoffmann, F., Bertram, T. (2011). Situated Learning of Visual Robot Behaviors. In: Jeschke, S., Liu, H., Schilberg, D. (eds) Intelligent Robotics and Applications. ICIRA 2011. Lecture Notes in Computer Science, vol 7101. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25486-4_18

  • DOI: https://doi.org/10.1007/978-3-642-25486-4_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25485-7

  • Online ISBN: 978-3-642-25486-4

  • eBook Packages: Computer Science, Computer Science (R0)
