Abstract
In recent years, positioning in simple static scenes has no longer been able to meet the demands of production and daily life; accurate positioning is increasingly required in practical settings such as airports, exhibition halls, and stations. Research on visual SLAM localization in complex dynamic scenes has therefore grown rapidly. This article reviews recent research on SLAM localization methods, with an emphasis on visual SLAM for complex scenes. First, it traces the development of laser SLAM, visual SLAM, semantic SLAM, and multi-sensor fusion, focusing on visual SLAM. Second, it summarizes methods for moving object detection and for visual SLAM localization in complex dynamic scenes. It then describes how deep learning and multi-sensor fusion have been applied to visual SLAM localization in such scenes. Finally, it summarizes the shortcomings of existing visual SLAM localization methods for complex scenes and outlines prospects for future research.
Acknowledgement
This work is supported by the National Natural Science Foundation of China (Grant No. 61640305). This research was also financially supported by the project for training thousands of outstanding young teachers in higher education institutions of Guangxi; the Young and Middle-aged Teachers Research Fundamental Ability Enhancement project of Guangxi University (ID: 2019KY0621); the Natural Science Foundation of Guangxi Province (No. 2018GXNSFAA281164); the Guangxi Colleges and Universities Key Laboratory Breeding Base of System Control and Information Processing; Hechi University research project start-up funds (XJ2015KQ004); the Colleges and Universities Key Laboratory of Intelligent Integrated Automation (GXZDSY2016-04); the Hechi City Science and Technology Project (1694-3-2); and the project "Research on multi-robot cooperative systems based on the artificial fish swarm algorithm" (2017CFC811).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Zhang, H., Peng, J. (2020). Visual SLAM Location Methods Based on Complex Scenes: A Review. In: Sun, X., Wang, J., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2020. Lecture Notes in Computer Science(), vol 12240. Springer, Cham. https://doi.org/10.1007/978-3-030-57881-7_43
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-57880-0
Online ISBN: 978-3-030-57881-7
eBook Packages: Computer Science (R0)