
Fuzzy qualitative human model for viewpoint identification

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

The integration of advanced human motion analysis techniques into low-cost video cameras has emerged for consumer applications, particularly in video surveillance systems. These smart, inexpensive devices provide practical solutions for improving public safety and homeland security through the capability to understand human behaviour automatically. In this sense, an intelligent video surveillance system should not be constrained to a single viewpoint of a person, since a person is naturally not restricted to performing an action from a fixed camera viewpoint. To achieve this objective, many state-of-the-art approaches require information from multiple cameras in their processing. This is an impractical solution in terms of both feasibility and computational complexity. First, it is very difficult to find an open space in a real environment with perfect overlap for multi-camera calibration. Secondly, processing information from multiple cameras is a computational burden. As a result, interest has surged in single-camera approaches, with notable work on the concept of view-specific action recognition. However, in that work the viewpoints are assumed a priori. In this paper, we extend it by proposing a viewpoint estimation framework in which a novel human contour descriptor, namely the fuzzy qualitative human contour, is extracted from the fuzzy qualitative Poisson human model for viewpoint analysis. Clustering algorithms are used to learn and classify the viewpoints. In addition, our system is integrated with the capability to classify front and rear views. Experimental results show the reliability and effectiveness of the proposed viewpoint estimation framework on the challenging IXMAS human action dataset.
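The pipeline described above — solve a Poisson equation over the human silhouette, derive a contour descriptor from the resulting field, and cluster descriptors into viewpoint classes — can be illustrated with a minimal sketch. This is not the authors' implementation: the uniform-grid Jacobi solver, the histogram descriptor, and the plain k-means below are simplified stand-ins for the fuzzy qualitative Poisson human model, the fuzzy qualitative human contour, and the paper's clustering stage, and all function names are hypothetical.

```python
import numpy as np

def poisson_shape(mask, iters=500):
    """Solve lap(U) = -1 inside a binary silhouette with U = 0 on the
    background (the Poisson shape representation of Gorelick et al.),
    using simple Jacobi iteration on a regular grid."""
    U = np.zeros(mask.shape, dtype=float)
    for _ in range(iters):
        # Average of the 4-neighbours plus the constant source term.
        avg = 0.25 * (np.roll(U, 1, 0) + np.roll(U, -1, 0)
                      + np.roll(U, 1, 1) + np.roll(U, -1, 1))
        U = np.where(mask, avg + 0.25, 0.0)  # clamp background to zero
    return U

def contour_descriptor(U, bins=8):
    """Normalised histogram of the Poisson field over the silhouette:
    a crude, illustrative stand-in for the fuzzy qualitative contour."""
    vals = U[U > 0]
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, vals.max()))
    return hist / hist.sum()

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means for grouping descriptors into viewpoint clusters."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each descriptor to its nearest centre, then re-estimate.
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels
```

In this sketch a silhouette mask from each frame would be passed through `poisson_shape` and `contour_descriptor`, and the resulting feature vectors clustered with `kmeans`, with each cluster interpreted as one camera viewpoint.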



Acknowledgment

This research is supported by the Fundamental Research Grant Scheme (FRGS) MoE Grant FP027-2013A, H-00000-60010-E13110 from the Ministry of Education Malaysia.

Author information


Corresponding author

Correspondence to Chern Hong Lim.


About this article


Cite this article

Lim, C.H., Chan, C.S. Fuzzy qualitative human model for viewpoint identification. Neural Comput & Applic 27, 845–856 (2016). https://doi.org/10.1007/s00521-015-1900-5

