Abstract

The upper limbs are expected to provide precise functionality for everyday activities and occupational tasks. Loss of this functionality impairs task performance, and amputation of a limb can greatly reduce quality of life and the ability to carry out daily activities. Each activity follows a particular intent pattern, and grasping is one of the primary activities for interacting with real or virtual worlds. Incorporating grasp intent into the development of an advanced prosthetic hand requires grasp intent detection from multimodal sensory data, which can capture precise movements while reducing motion artifacts. We develop a classification algorithm that predicts the specific grasp intent for a given object type through continuous feedback during the approach towards the object, using multimodal data from inertial measurement unit (IMU) sensors, electromyography (EMG), and cameras. A deep learning (DL) approach improves grasp intent accuracy by continuously predicting the intent class while the hand moves towards the object. Using a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network on the visual feed, IMU, and EMG data, the network achieves an accuracy of 92.3%, compared with 89% reported in the literature.
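To make the architecture concrete, the sketch below shows one plausible way to combine a per-frame CNN feature extractor with an LSTM that fuses the visual, EMG, and IMU streams and emits a grasp-class prediction at every timestep. It is a minimal illustration in PyTorch, not the authors' implementation: the channel counts, image size, sequence length, and five grasp classes are assumptions.

```python
# Minimal CNN-LSTM grasp-intent sketch. All shapes (8 EMG channels,
# 6 IMU channels, 64x64 frames, 5 grasp classes) are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class GraspIntentNet(nn.Module):
    def __init__(self, n_classes=5, emg_ch=8, imu_ch=6, hidden=128):
        super().__init__()
        # Per-frame CNN feature extractor for the visual stream.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM fuses per-timestep visual features with EMG/IMU samples.
        self.lstm = nn.LSTM(32 + emg_ch + imu_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames, emg, imu):
        # frames: (B, T, 3, H, W); emg: (B, T, emg_ch); imu: (B, T, imu_ch)
        b, t = frames.shape[:2]
        vis = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(torch.cat([vis, emg, imu], dim=-1))
        # Emit class logits at every timestep, supporting continuous
        # intent prediction during the reach toward the object.
        return self.head(seq)


model = GraspIntentNet()
logits = model(torch.randn(2, 10, 3, 64, 64),
               torch.randn(2, 10, 8), torch.randn(2, 10, 6))
print(logits.shape)  # torch.Size([2, 10, 5])
```

Because the network produces logits at every timestep, the intent estimate can be refined continuously as the hand approaches the object, for example by taking the most recent prediction or a running majority vote over the sequence.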

Supported by IITM.



Author information

Correspondence to P. Balaji.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Balaji, P., Subudhi, D., Muniyandi, M. (2022). Grasp Intent Detection Using Multi Sensorial Data. In: Duffy, V.G. (eds) Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Anthropometry, Human Behavior, and Communication. HCII 2022. Lecture Notes in Computer Science, vol 13319. Springer, Cham. https://doi.org/10.1007/978-3-031-05890-5_9


  • DOI: https://doi.org/10.1007/978-3-031-05890-5_9


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05889-9

  • Online ISBN: 978-3-031-05890-5

  • eBook Packages: Computer Science, Computer Science (R0)
