
A new pipeline for the recognition of universal expressions of multiple faces in a video sequence

  • Original Research Paper
Journal of Real-Time Image Processing

Abstract

Facial expression recognition (FER) is a crucial issue in human–machine interaction: it allows machines to act on changes in facial expression. Acting in real time, however, requires recognizing expressions at video speed. Video speed differs from one device to another, but a standard setting for shooting video is 24 fps, which is considered the low end of what the human brain perceives as fluid motion. From this perspective, achieving real-time FER requires that the analysis of each image be completed in strictly less than 0.042 s, regardless of background complexity or the number of faces in the scene. In this paper, a new pipeline is proposed to recognize the fundamental facial expressions of more than one person in real-world video sequences. The pipeline takes a video as input and performs face detection and tracking. Regions of interest (ROIs) are extracted from each detected face, and shape information is captured by applying the histogram of oriented gradients (HOG) descriptor. The number of features yielded by the HOG descriptor is then reduced by means of linear discriminant analysis (LDA). Next, a deep data analysis was carried out, exploiting the pipeline, in order to set up the LDA classifier; this analysis aimed at verifying the suitability of the decision rule selected to separate the facial expression clusters in the LDA training phase. The analysis used the extended Cohn–Kanade (CK+) database, with the F-measure as the evaluation metric for computing average recognition rates. An automatic evaluation over time is also proposed, in which labelled videos are used to investigate the suitability of the pipeline under real-world conditions. The results show that combining the HOG descriptor with LDA yields a high recognition rate of 94.66%. Notably, the proposed pipeline achieves an average processing time of 0.018 s per image without requiring any hardware to speed up the processing.
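As a rough illustration of the timing budget and the evaluation metric described above, the sketch below checks the 24 fps real-time constraint against the reported average processing time and computes a macro-averaged F-measure. The expression labels and predictions are illustrative examples, not the paper's data:

```python
# Real-time budget: at 24 fps, each frame must be processed in under 1/24 s.
FPS = 24
budget = 1.0 / FPS      # ~0.0417 s, the 0.042 s bound cited above
reported = 0.018        # average processing time reported for the pipeline
assert reported < budget

def f_measure(y_true, y_pred, labels):
    """Macro-averaged F-measure over the given expression classes."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Hypothetical predictions over three of the universal expressions.
truth = ["happy", "sad", "happy", "angry"]
pred  = ["happy", "sad", "sad",   "angry"]
print(round(f_measure(truth, pred, ["happy", "sad", "angry"]), 4))  # 0.7778
```

Averaging the per-class F-measures in this way weights each expression equally, which is the usual reading of "average recognition rate" when the F-measure is the chosen metric.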



Author information


Corresponding author

Correspondence to Latifa Greche.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Greche, L., Akil, M., Kachouri, R. et al. A new pipeline for the recognition of universal expressions of multiple faces in a video sequence. J Real-Time Image Proc 17, 1389–1402 (2020). https://doi.org/10.1007/s11554-019-00896-5

