
Robot-Assisted Feeding: Generalizing Skewering Strategies Across Food Items on a Plate

Conference paper in Robotics Research (ISRR 2019)

Abstract

A robot-assisted feeding system must successfully acquire many different food items. A key challenge is the wide variation in the physical properties of food, demanding diverse acquisition strategies that are also capable of adapting to previously unseen items. Our key insight is that items with similar physical properties will exhibit similar success rates across an action space, allowing the robot to generalize its actions to previously unseen items. To better understand which skewering strategy works best for each food item, we collected a dataset of 2450 robot bite acquisition trials for 16 food items with varying properties. Analyzing the dataset provided insights into how the food items’ surrounding environment, fork pitch, and fork roll angles affect bite acquisition success. We then developed a bite acquisition framework that takes the image of a full plate as an input, segments it into food items, and then applies our Skewering-Position-Action network (SPANet) to choose a target food item and a corresponding action so that the bite acquisition success rate is maximized. SPANet also uses the surrounding environment features of food items to predict action success rates. We used this framework to perform multiple experiments on uncluttered and cluttered plates. Results indicate that our integrated system can successfully generalize skewering strategies to many previously unseen food items.
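The selection step described above — scoring each detected food item under each candidate skewering action and picking the pair with the highest predicted success rate — can be sketched as follows. This is a minimal illustration only: the action set (three fork pitches crossed with two fork rolls) and all item names and probabilities are made-up placeholders standing in for SPANet's actual outputs, not values from the paper.

```python
# Illustrative sketch of the (item, action) selection step: given
# per-item, per-action predicted success probabilities (as a network
# like SPANet would output), choose the pair that maximizes expected
# bite-acquisition success. Action set and numbers are hypothetical.

ACTIONS = [  # 3 fork pitches x 2 fork rolls (illustrative labels)
    ("vertical", 0), ("vertical", 90),
    ("tilted-vertical", 0), ("tilted-vertical", 90),
    ("tilted-angled", 0), ("tilted-angled", 90),
]

def select_bite(success_rates):
    """success_rates: dict mapping item name -> list of predicted
    success probabilities, one per entry in ACTIONS.
    Returns the (item, action, probability) with the highest score."""
    best = None
    for item, probs in success_rates.items():
        for action, p in zip(ACTIONS, probs):
            if best is None or p > best[2]:
                best = (item, action, p)
    return best

# Made-up predictions for a three-item plate:
plate = {
    "carrot":     [0.85, 0.80, 0.40, 0.35, 0.30, 0.25],
    "banana":     [0.20, 0.15, 0.55, 0.50, 0.75, 0.70],
    "strawberry": [0.60, 0.65, 0.50, 0.45, 0.40, 0.35],
}
item, action, p = select_bite(plate)
# -> ("carrot", ("vertical", 0), 0.85)
```

In the full system this greedy argmax runs on network predictions rather than a hand-written table, and the per-item scores also depend on the item's surrounding environment (e.g. isolated vs. cluttered), which is why the framework feeds environment features into the success-rate prediction.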

R. Feng, Y. Kim, G. Lee and S. S. Srinivasa—These authors contributed equally to the work.




Acknowledgment

This work was funded by the National Institute of Health R01 (#R01EB019335), National Science Foundation CPS (#1544797), National Science Foundation NRI (#1637748), the Office of Naval Research, the RCTA, Amazon, and Honda.

Author information

Correspondence to Gilwoo Lee.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 10030 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Feng, R. et al. (2022). Robot-Assisted Feeding: Generalizing Skewering Strategies Across Food Items on a Plate. In: Asfour, T., Yoshida, E., Park, J., Christensen, H., Khatib, O. (eds) Robotics Research. ISRR 2019. Springer Proceedings in Advanced Robotics, vol 20. Springer, Cham. https://doi.org/10.1007/978-3-030-95459-8_26
