
Using the Transferable Belief Model for Multimodal Input Fusion in Companion Systems

  • Conference paper
Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction (MPRSS 2012)

Abstract

Systems with multimodal interaction capabilities have gained a lot of attention in recent years. Especially so-called companion systems, which offer an adaptive, multimodal user interface, show great promise for natural human-computer interaction. While more and more sophisticated sensors become available, current systems capable of accepting multimodal inputs (e.g., speech and gesture) still lack the robustness of input interpretation needed for companion systems. We demonstrate how evidential reasoning can be applied in the domain of graphical user interfaces to provide the reliability and robustness expected by users. For this purpose, an existing approach from the robotics domain that uses the Transferable Belief Model is adapted and extended.
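The core operation of evidential reasoning in the Transferable Belief Model is the unnormalized conjunctive combination of mass functions from independent sources; unlike Dempster's rule, the mass assigned to the empty set is retained as an explicit measure of conflict. The following is a minimal sketch of that rule, with an illustrative speech/gesture fusion example; the frame of GUI targets and all names are assumptions for illustration, not taken from the paper.

```python
def conjunctive_combination(m1, m2):
    """Combine two mass functions, given as {frozenset: mass} dicts,
    with the TBM's unnormalized conjunctive rule.

    The mass landing on the empty set is kept rather than normalized
    away: it quantifies the conflict between the two sources.
    """
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b  # may be empty -> conflict mass
            combined[inter] = combined.get(inter, 0.0) + ma * mb
    return combined

# Illustrative frame: possible GUI targets of a multimodal command.
A, B = frozenset({"buttonA"}), frozenset({"buttonB"})
AB = A | B  # ignorance: "either button"

speech = {A: 0.6, AB: 0.4}    # speech recognizer leans toward buttonA
gesture = {B: 0.7, AB: 0.3}   # pointing gesture leans toward buttonB

m = conjunctive_combination(speech, gesture)
# m[frozenset()] = 0.6 * 0.7 = 0.42 is the conflict between modalities
```

A fusion engine can use the conflict mass as a robustness signal, e.g. triggering a clarification dialog when it exceeds a threshold instead of committing to a contradictory interpretation.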




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schüssel, F., Honold, F., Weber, M. (2013). Using the Transferable Belief Model for Multimodal Input Fusion in Companion Systems. In: Schwenker, F., Scherer, S., Morency, L.-P. (eds) Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction. MPRSS 2012. Lecture Notes in Computer Science, vol 7742. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37081-6_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-37081-6_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37080-9

  • Online ISBN: 978-3-642-37081-6

  • eBook Packages: Computer Science (R0)
