Abstract
Lip reading is a technique for recognizing spoken words from lip movement. In this process, it is important to extract the correct features from facial images. However, feature detection is difficult in real-world situations because facial images may be captured from various angles. To address this problem, several research groups have studied lip reading from multi-view facial images. In this paper, we propose a lip reading approach that uses 3D Active Appearance Model (AAM) features and a Hidden Markov Model (HMM)-based recognition model. The AAM is a parametric model constructed from both shape and appearance parameters; these parameters are compressed into combined parameters and used in lip reading and other facial image processing applications. The 3D-AAM extends the traditional 2D shape model to a 3D shape model built from three view angles (frontal, left profile, and right profile), and it provides an effective algorithm for aligning the model with the RGB and 3D range images obtained by an RGB-D camera. The benefit of using the 3D-AAM in lip reading is that it enables spoken words to be recognized from facial images taken at any angle. In our experiments, we compared the lip reading accuracy of the 3D-AAM with that of the traditional 2D-AAM on facial images taken at various angles. The results confirm that the 3D-AAM is effective for cross-view lip reading even though only frontal images are used in the HMM training phase.
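The compression of shape and appearance parameters into combined parameters mentioned above follows the standard AAM construction (Cootes et al.): per-image shape and appearance parameter vectors are concatenated, with the shape part weighted so the two spaces have commensurate variance, and a further PCA yields the combined parameters. The following is a minimal numerical sketch of that idea; all dimensions, variable names, and the random data are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_shape, n_app = 100, 10, 20

# Hypothetical per-image AAM parameters (in a real system these come from
# fitting the shape and appearance PCA models to training images).
b_s = rng.normal(size=(n_samples, n_shape))   # shape parameters
b_a = rng.normal(size=(n_samples, n_app))     # appearance parameters

# Weight the shape parameters so their total variance matches that of the
# appearance parameters before concatenation.
w = np.sqrt(b_a.var(axis=0).sum() / b_s.var(axis=0).sum())
b = np.hstack([w * b_s, b_a])                 # concatenated parameter vectors

# A further PCA on the concatenated vectors gives the combined parameters c,
# which couple shape and texture variation in a single low-dimensional vector.
b_centered = b - b.mean(axis=0)
U, S, Vt = np.linalg.svd(b_centered, full_matrices=False)
n_modes = 8                                   # illustrative truncation
c = b_centered @ Vt[:n_modes].T               # combined AAM parameters

print(c.shape)
```

In a lip reading pipeline such as the one described here, a sequence of these combined parameter vectors (one per video frame) would then serve as the observation sequence for the HMM-based recognizer.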
Acknowledgement
This work has been supported by a Grant-in-Aid for Scientific Research (C) 16K00234, Scientific Research (B) 16H03211, and Scientific Research (C) 16K00251 by MEXT, Japan, and the Futaba Electronics Memorial Foundation.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Watanabe, T., Katsurada, K., Kanazawa, Y. (2017). Lip Reading from Multi View Facial Images Using 3D-AAM. In: Chen, CS., Lu, J., Ma, KK. (eds) Computer Vision – ACCV 2016 Workshops. ACCV 2016. Lecture Notes in Computer Science(), vol 10117. Springer, Cham. https://doi.org/10.1007/978-3-319-54427-4_23
Print ISBN: 978-3-319-54426-7
Online ISBN: 978-3-319-54427-4