
Virtual View Synthesis and Artifact Reduction Techniques

Chapter in: 3D-TV System with Depth-Image-Based Rendering

Abstract

With texture and depth data, virtual views are synthesized to produce a disparity-adjustable stereo pair for stereoscopic displays, or to generate the multiple views required by autostereoscopic displays. View synthesis typically consists of three steps: 3D warping, view merging, and hole filling. However, simple synthesis algorithms may introduce visual artifacts, e.g., texture flickering, boundary artifacts, and smearing effects, and many efforts have been made to suppress these synthesis artifacts. Some employ spatial/temporal filters to smooth depth maps, which mitigates depth errors and enhances temporal consistency; some use a cross-check technique to detect and prevent likely synthesis distortions; some focus on removing boundary artifacts; and others attempt to create natural texture patches for the disoccluded regions. Beyond rendering quality, real-time performance is also necessary for view synthesis. So far, the basic three-step rendering process has been realized in real time through GPU programming and dedicated hardware design.
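The three-step pipeline named above can be made concrete with a minimal sketch. This is an illustrative simplification, not the chapter's algorithm: it assumes rectified views with purely horizontal disparity, a hypothetical `shift_scale` parameter that maps normalized depth to pixel disparity, and a naive left-neighbor hole fill standing in for the depth-aided inpainting methods the chapter surveys.

```python
import numpy as np

def warp_view(texture, depth, shift_scale=8.0):
    """Step 1, 3D warping: forward-warp a reference view to a virtual
    viewpoint. Assumes rectified views (1D horizontal disparity) and an
    8-bit depth map where larger values mean closer objects; shift_scale
    is a hypothetical depth-to-disparity conversion factor."""
    h, w = depth.shape
    virtual = np.full((h, w), -1.0)   # -1 marks holes (disocclusions)
    zbuf = np.full((h, w), -np.inf)   # z-buffer resolves overlapping warps
    for y in range(h):
        for x in range(w):
            d = depth[y, x] / 255.0
            xv = x + int(round(d * shift_scale))
            if 0 <= xv < w and depth[y, x] > zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]       # keep the closer pixel
                virtual[y, xv] = texture[y, x]
    return virtual

def merge_views(left_warp, right_warp):
    """Step 2, view merging: prefer the left warped view, and fall back
    to the right warped view wherever the left has a hole."""
    merged = left_warp.copy()
    holes = merged < 0
    merged[holes] = right_warp[holes]
    return merged

def fill_holes(view):
    """Step 3, hole filling: propagate the nearest valid pixel from the
    left (a naive stand-in for depth-aided inpainting)."""
    out = view.copy()
    for y in range(out.shape[0]):
        last = 0.0
        for x in range(out.shape[1]):
            if out[y, x] < 0:
                out[y, x] = last
            else:
                last = out[y, x]
    return out
```

Real systems replace each step with more careful machinery (sub-pixel warping, reliability-weighted merging, exemplar-based inpainting), but the data flow is the same: warp each reference view, merge, then fill remaining disocclusions.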



Acknowledgement

The authors would like to thank Middlebury College, the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI), and Philips for kindly providing the multi-view images and the "Book_arrival" and "Mobile" sequences. This work is partially supported by the National Basic Research Program of China (973) under Grant No. 2009CB320903 and the Singapore Ministry of Education Academic Research Fund Tier 1 (AcRF Tier 1 RG7/09).

Correspondence to Yin Zhao.

Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

Zhao, Y., Zhu, C., Yu, L. (2013). Virtual View Synthesis and Artifact Reduction Techniques. In: Zhu, C., Zhao, Y., Yu, L., Tanimoto, M. (eds) 3D-TV System with Depth-Image-Based Rendering. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9964-1_5

  • DOI: https://doi.org/10.1007/978-1-4419-9964-1_5

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4419-9963-4

  • Online ISBN: 978-1-4419-9964-1

  • eBook Packages: Engineering (R0)
