Rule-based multi-view human activity recognition system in real time using skeleton data from RGB-D sensor


Abstract

Identifying human activity with good precision is a challenging task in computer vision, especially for surveillance applications. This paper proposes a rule-based classifier capable of view-invariant recognition of multiple human activities in real time. A single Kinect sensor supplies the RGB-D data. A skeleton-tracking algorithm is applied first; activities are then recognized independently from each tracked skeleton. Rules are defined to recognize discrete skeleton postures and to classify a particular ordered sequence of postures as an activity. During experimentation we examined about 14 activities and found the proposed method robust and efficient with respect to multiple views, scale variation, and phase variation across different realistic actions. The experiments use a self-generated dataset recorded in a controlled environment: about 2 min of data covering multiple activities, collected from two male subjects. Experimental results show that the method handles multi-view activities as well as scale- and phase-variant activities, achieving a detection accuracy of 98%.
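
The pipeline sketched in the abstract (threshold rules that label each skeleton frame with a discrete posture, then a match of the ordered posture sequence against an activity table) can be illustrated with a minimal Python sketch. Everything below is an illustrative assumption: the joint names, the 0.45 m sitting threshold, and the posture-to-activity table are hypothetical, not the paper's published rules. Rules on relative joint geometry (hand above head, head-to-hip distance) rather than absolute coordinates are what give this style of classifier its tolerance to view and scale changes.

```python
# Minimal sketch of a rule-based posture/activity classifier over skeleton
# joints. Joint names, thresholds, and the posture-to-activity table are
# illustrative assumptions, not the authors' published rules.

from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres, camera coordinates from the skeleton tracker
    y: float
    z: float

def classify_posture(joints: dict[str, Joint]) -> str:
    """Map one skeleton frame to a discrete posture via threshold rules."""
    head, hip = joints["head"], joints["hip_center"]
    r_hand = joints["hand_right"]
    # Rule 1: a raised hand puts the hand joint above the head joint.
    if r_hand.y > head.y:
        return "hand_raised"
    # Rule 2: sitting compresses the vertical head-to-hip distance
    # (0.45 m is an assumed threshold that would be tuned empirically).
    if head.y - hip.y < 0.45:
        return "sitting"
    return "standing"

# An activity is a particular ordered sequence of postures (illustrative).
ACTIVITY_RULES = {
    ("standing", "sitting"): "sit_down",
    ("sitting", "standing"): "stand_up",
    ("standing", "hand_raised"): "waving",
}

def classify_activity(posture_sequence: list[str]) -> str:
    """Collapse repeated postures, then match ordered pairs to an activity."""
    collapsed = [p for i, p in enumerate(posture_sequence)
                 if i == 0 or p != posture_sequence[i - 1]]
    for i in range(len(collapsed) - 1):
        key = (collapsed[i], collapsed[i + 1])
        if key in ACTIVITY_RULES:
            return ACTIVITY_RULES[key]
    return "unknown"

if __name__ == "__main__":
    frames = [
        {"head": Joint(0, 1.6, 2), "hip_center": Joint(0, 0.9, 2),
         "hand_right": Joint(0.3, 1.0, 2)},   # standing
        {"head": Joint(0, 1.2, 2), "hip_center": Joint(0, 0.8, 2),
         "hand_right": Joint(0.3, 0.9, 2)},   # sitting
    ]
    postures = [classify_posture(f) for f in frames]
    print(postures, "->", classify_activity(postures))  # ... -> sit_down
```

In a live system the frame dictionaries would come from the RGB-D skeleton tracker, one per tracked person, so each skeleton is classified independently, as the abstract describes.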



Author information

Corresponding author

Correspondence to Neeraj Varshney.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Communicated by Suresh Chandra Satapathy.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Varshney, N., Bakariya, B., Kushwaha, A.K.S. et al. Rule-based multi-view human activity recognition system in real time using skeleton data from RGB-D sensor. Soft Comput 27, 405–421 (2023). https://doi.org/10.1007/s00500-021-05649-w
