View Synthesis with Kinect-Based Tracking for Motion Parallax Depth Cue on a 2D Display

  • Conference paper
Proceedings of the 9th International Conference on Computer Recognition Systems CORES 2015

Abstract

Recent advancements in 3D video generation, processing, compression, and rendering have increased the accessibility of 3D video content. However, the majority of 3D displays available on the market belong to the stereoscopic display class and require users to wear special glasses in order to perceive depth. As an alternative, autostereoscopic displays can render multiple views without any additional equipment. Depth perception on stereoscopic and autostereoscopic displays is realized via a binocular depth cue called stereopsis. Another important depth cue that is not exploited by autostereoscopic displays is motion parallax, which is a monocular depth cue. To enable the motion parallax effect on a 2D display, we propose to use the Kinect sensor to estimate the pose of the viewer. Based on the viewer's pose, the real-time view synthesis software adjusts the rendered view and creates the motion parallax effect on a 2D display. We believe that the proposed solution can enhance the content displayed on digital signage displays, kiosks, and other advertisement media where many users observe the content while moving and where the use of glasses-based 3D displays is not possible or too expensive.

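The abstract outlines the pipeline only at a high level, so the following is a minimal sketch (in Python with NumPy, not necessarily the authors' implementation language) of the general idea: a tracked head position is mapped to a virtual-camera baseline, and a depth-image-based-rendering-style horizontal warp of a reference view plus its depth map approximates the view from the new position. The function names (head_to_baseline, synthesize_view), the clamping range, the focal length, the hole-filling strategy, and the synthetic test scene are all illustrative assumptions rather than the method described in the paper.

    import numpy as np

    def head_to_baseline(head_x_m, max_baseline_m=0.06):
        """Map the tracked horizontal head offset (metres) to a virtual-camera
        baseline, clamped to an assumed plausible range."""
        return float(np.clip(head_x_m, -max_baseline_m, max_baseline_m))

    def synthesize_view(color, depth, baseline_m, focal_px):
        """Warp the reference view horizontally by a per-pixel disparity
        d = f * b / Z (pinhole model). Disocclusion holes are filled with the
        nearest pixel to the left; occlusion ordering is not handled."""
        h, w = depth.shape
        disparity = np.round(focal_px * baseline_m / np.maximum(depth, 1e-3)).astype(int)
        out = np.zeros_like(color)
        filled = np.zeros((h, w), dtype=bool)
        xs = np.arange(w)
        for y in range(h):
            tx = xs + disparity[y]                  # target column for each source pixel
            valid = (tx >= 0) & (tx < w)
            out[y, tx[valid]] = color[y, xs[valid]]
            filled[y, tx[valid]] = True
            last = out[y, 0].copy()                 # crude left-to-right hole filling
            for x in range(w):
                if filled[y, x]:
                    last = out[y, x].copy()
                else:
                    out[y, x] = last
        return out

    if __name__ == "__main__":
        # Synthetic 120x160 scene: a near square in front of a far background.
        color = np.full((120, 160, 3), 50, dtype=np.uint8)
        color[40:80, 60:100] = (200, 30, 30)        # foreground patch
        depth = np.full((120, 160), 3.0)            # background at 3 m
        depth[40:80, 60:100] = 1.0                  # foreground at 1 m
        baseline = head_to_baseline(head_x_m=0.04)  # 4 cm head offset from the tracker
        view = synthesize_view(color, depth, baseline, focal_px=500.0)
        print("synthesized view:", view.shape, view.dtype)

Because disparity is inversely proportional to depth, the nearby square shifts farther than the background when the viewer moves, which is exactly the relative motion that produces the parallax depth cue on a 2D display.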


Acknowledgments

This work was partially funded by the Poznan University of Technology grant 04/45/DSPB/0104. The authors express gratitude to the ENGINE project under the EU 7th Framework Programme for research, grant agreement no. 316097, for support during the preparation of this publication.

Author information

Corresponding author

Correspondence to Michał Joachimiak.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Joachimiak, M., Wasielica, M., Skrzypczyński, P., Sobecki, J., Gabbouj, M. (2016). View Synthesis with Kinect-Based Tracking for Motion Parallax Depth Cue on a 2D Display. In: Burduk, R., Jackowski, K., Kurzyński, M., Woźniak, M., Żołnierek, A. (eds) Proceedings of the 9th International Conference on Computer Recognition Systems CORES 2015. Advances in Intelligent Systems and Computing, vol 403. Springer, Cham. https://doi.org/10.1007/978-3-319-26227-7_79

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-26227-7_79

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26225-3

  • Online ISBN: 978-3-319-26227-7

  • eBook Packages: Engineering, Engineering (R0)
