3D Sensing Techniques for Multimodal Data Analysis and Integration in Smart and Autonomous Systems

  • Conference paper
Communications, Signal Processing, and Systems (CSPS 2017)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 463)

Abstract

For smart and autonomous systems, 3D positioning and measurement are essential, as their precision can severely affect the applicability of these techniques in many applications. In this paper, we summarize and compare different techniques and sensors that can potentially be used in multimodal data analysis and integration, providing useful guidance for the design and implementation of relevant systems.
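To make the abstract's point about precision concrete, the following is a minimal illustrative sketch (not from the paper; all camera parameters are assumed) of stereo triangulation, one of the 3D sensing techniques surveyed. Depth is recovered as Z = f·B/d, and differentiating shows that a disparity error dd produces a depth error dZ ≈ Z²/(f·B)·dd, i.e. measurement precision degrades quadratically with range.

```python
# Illustrative sketch with assumed parameters: stereo triangulation
# recovers depth as Z = f * B / d, where f is the focal length in
# pixels, B the camera baseline in metres, and d the disparity in
# pixels. A disparity error dd propagates to a depth error of
# roughly Z^2 / (f * B) * dd.

def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point from its stereo disparity."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px: float, baseline_m: float,
                disparity_px: float, disp_err_px: float) -> float:
    """First-order depth error caused by a disparity error."""
    z = stereo_depth(f_px, baseline_m, disparity_px)
    return z * z / (f_px * baseline_m) * disp_err_px

# Assumed rig: 700 px focal length, 12 cm baseline.
z = stereo_depth(700.0, 0.12, 42.0)          # point 2.0 m away
err = depth_error(700.0, 0.12, 42.0, 0.25)   # ~1.2 cm error at 2 m
```

The quadratic growth of `err` with distance is one reason the paper compares passive stereo against active modalities (structured light, ToF, LiDAR), whose error characteristics differ with range.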



Acknowledgement

This work was supported by the National Natural Science Foundation of China (61672008), the Guangdong Provincial Application-Oriented Technical Research and Development Special Fund Project (2016B010127006, 2015B010131017), the Natural Science Foundation of Guangdong Province (2016A030311013, 2015A030313672), and the International Scientific and Technological Cooperation Projects of the Education Department of Guangdong Province (2015KGJHZ021).

Author information


Corresponding author

Correspondence to Jinchang Ren.



Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Fang, Z. et al. (2019). 3D Sensing Techniques for Multimodal Data Analysis and Integration in Smart and Autonomous Systems. In: Liang, Q., Mu, J., Jia, M., Wang, W., Feng, X., Zhang, B. (eds) Communications, Signal Processing, and Systems. CSPS 2017. Lecture Notes in Electrical Engineering, vol 463. Springer, Singapore. https://doi.org/10.1007/978-981-10-6571-2_71

  • DOI: https://doi.org/10.1007/978-981-10-6571-2_71

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-6570-5

  • Online ISBN: 978-981-10-6571-2

  • eBook Packages: Engineering (R0)
