Abstract
The problem observed was the difficulty that people who are Deaf or Hard of Hearing (D/HH) have in knowing what is being said or announced in an environment, especially at schools when a sign language interpreter is absent. Thus, the main goal was to investigate which variables most influence the acceptance of a Speech-To-Text system across the different profiles of people who are D/HH. To that end, we conducted a pilot study comprising two distinct field studies, in which 11 D/HH volunteers participated. During this study, we drew on two models, TAM and UTAUT, to inform data collection, which covered: written communication, educational barriers, technology use, the habit of using captions and subtitles, emotions, technology acceptance, social influence, empowerment, and privacy. For emotions, we used Emotion-LIBRAS, an instrument that enables people who are D/HH to identify positive, negative, or mixed emotions towards technology.
References
Bain, K., Basson, S., Faisman, A., Kanevsky, D.: Accessibility, transcription, and access everywhere. IBM Systems Journal 44(3) (2005)
Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science 35(8) (1989)
Nuance Dragon Mobile Developer (2013), http://dragonmobile.nuancemobiledeveloper.com/ (accessed: September 18, 2013)
Pan, Y.-X., Jiang, D.-N., Yao, L., Picheny, M., Qin, Y.: Effects of Automated Transcription Quality on Non-native Speakers’ Comprehension in Real-time Computer-mediated Communication. In: CHI 2010, Atlanta, Georgia, USA, April 10-15 (2010)
Papadopoulos, M., Pearson, E.: Improving the Accessibility of the Traditional Lecture: An Automated Tool for Supporting Transcription. In: BCS HCI 2012, Birmingham, UK (2012)
Prietch, S.S., Filgueiras, L.V.L.: Assistive Technology in the Classroom Taking into Account the Deaf Student-Centered Design: the TApES project. In: EIST/CHI, Austin (2012)
Prietch, S.S., Filgueiras, L.V.L.: Developing Emotion-Libras 2.0: An Instrument to Measure the Emotional Quality of Deaf Persons while Using Technology. In: Emerging Research and Trends in Interactivity and the HCI, pp. 74–94. IGI Global, Portugal (2013b)
Prietch, S.S., Filgueiras, L.V.L.: Double Testing: Potential Website Resources for Deaf People and the Evaluation Instrument Emotion-LIBRAS. In: ChileCHI 2013, Temuco (2013c)
Primiani, R., Tibaldi, D., Garlaschelli, L.: Net4Voice – New Technologies for voice-converting in barrier-free learning environment. In: FECS 2008, Las Vegas, NV, USA (2008)
Ranchal, R., et al.: Using Speech Recognition for Real-Time Captioning and Lecture Transcription in the Classroom. IEEE Transactions on Learning Technologies (2013)
Rodríguez, M.C., Caminero, J., Van Kampen, A.: SignSpeak: Scientific understanding and vision-based technological development for continuous sign language recognition and translation. Research report, reviewer: Ruíz, G. M., Release version: V1.0 (2011)
Venkatesh, V., Morris, M., Davis, G., Davis, F.: User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly 27(3), 425–478 (2003)
Wald, M.: Using Automatic Speech Recognition to Enhance Education for All Students: Turning a Vision into Reality. In: 34th ASEE/IEEE, Savannah (2004)
Wald, M.: Captioning for Deaf and Hard of Hearing People by Editing Automatic Speech Recognition in Real Time. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (eds.) ICCHP 2006. LNCS, vol. 4061, pp. 683–690. Springer, Heidelberg (2006)
Wald, M., Bain, K.: Enhancing the Usability of Real-Time Speech Recognition Captioning Through Personalised Displays and Real-Time Multiple Speaker Editing and Annotation. In: Stephanidis, C. (ed.) Universal Access in HCI, Part III, HCII 2007. LNCS, vol. 4556, pp. 446–452. Springer, Heidelberg (2007)
Wald, M.: Captioning Multiple Speakers Using Speech Recognition to Assist Disabled People. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (eds.) ICCHP 2008. LNCS, vol. 5105, pp. 617–623. Springer, Heidelberg (2008)
Wald, M., Bain, K.: Universal access to communication and learning: the role of automatic speech recognition. Univ. Access Inf. Soc. 6, 435–447 (2008)
Wald, M.: Developing Assistive Technology to Enhance Learning for all Students. In: Assistive Technology from Adapted Equipment to Inclusive Environments. IOS Press (2009)
Wald, M.: Important New Enhancements to Inclusive Learning Using Recorded Lectures. In: Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W. (eds.) ICCHP 2012, Part I. LNCS, vol. 7382, pp. 108–115. Springer, Heidelberg (2012)
Wald, M., Li, Y.: Synote: Important Enhancements to Learning with Recorded Lectures. In: 12th IEEE International Conference on Advanced Learning Technologies (2012)
Zhili, L., Wanjie, T., Cheng, X.J.: A Study and Application of Speech Recognition Technology in Primary and Secondary School for Deaf/Hard of Hearing Students. In: 4th International Convention on Rehabilitation Engineering & Assistive Technology (2010)
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Prietch, S.S., de Souza, N.S., Filgueiras, L.V.L. (2014). A Speech-To-Text System’s Acceptance Evaluation: Would Deaf Individuals Adopt This Technology in Their Lives?. In: Stephanidis, C., Antona, M. (eds) Universal Access in Human-Computer Interaction. Design and Development Methods for Universal Access. UAHCI 2014. Lecture Notes in Computer Science, vol 8513. Springer, Cham. https://doi.org/10.1007/978-3-319-07437-5_42
Print ISBN: 978-3-319-07436-8
Online ISBN: 978-3-319-07437-5