Vision-Based SLAM for UAVs in Dynamic Environments

Authors

  • Pingrui Huang

DOI:

https://doi.org/10.54097/hset.v70i.12200

Keywords:

Visual simultaneous localization and mapping, Semi-direct, Visual odometry.

Abstract

Visual SLAM (Simultaneous Localization and Mapping) is a technique mainly used for robot navigation and positioning: it uses vision sensors to map the environment and estimate the robot's own location. In dynamic environments, such systems are inevitably affected by moving objects, which degrades their accuracy and stability. This paper studies the visual odometry component of the SLAM system and improves the semi-direct visual odometry (SVO) algorithm on the basis of the ORB-SLAM framework. The basic principle of the algorithm is to treat pixel matching between adjacent image frames as an optimization problem: for sparse feature patches, a direct registration method is used to obtain the camera pose, avoiding extensive feature extraction and matching. When a keyframe appears, ORB (Oriented FAST and Rotated BRIEF) features are extracted from the image, and feature matching is used to track the local map and obtain more correspondences. The camera pose and the three-dimensional structure of the scene are then estimated by minimizing the reprojection error, which improves positioning accuracy and map construction quality. Finally, experiments on several public datasets show that the improved semi-direct visual odometry scheme remains robust to moving objects in dynamic scenes. While maintaining high accuracy, it achieves better real-time performance by tracking visual feature points.
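The final step the abstract describes, estimating the camera pose by minimizing reprojection error, can be sketched as a small Gauss-Newton refinement. This is an illustrative toy, not the paper's implementation: the intrinsics (fx = fy = 525, cx = 320, cy = 240) are arbitrary assumed values, and the rotation is held fixed so that only the camera translation is optimized, which keeps the Jacobian to three columns.

```python
import numpy as np

FX = FY = 525.0
CX, CY = 320.0, 240.0

def project(points_w, t):
    """Pinhole projection of world points into a camera at translation t
    (rotation fixed to the identity for this illustration)."""
    p = points_w - t                      # world -> camera frame
    u = FX * p[:, 0] / p[:, 2] + CX
    v = FY * p[:, 1] / p[:, 2] + CY
    return np.stack([u, v], axis=1)

def refine_translation(points_w, obs_uv, t0, iters=20):
    """Gauss-Newton minimization of the reprojection error over the
    camera translation. A full visual-odometry back end would optimize
    all 6 DoF on SE(3) plus the 3D points; this sketch shows only the
    core normal-equations step."""
    t = np.asarray(t0, dtype=float).copy()
    for _ in range(iters):
        p = points_w - t
        x, y, z = p[:, 0], p[:, 1], p[:, 2]
        r = (project(points_w, t) - obs_uv).ravel()   # stacked residuals
        # Jacobian of [u, v] w.r.t. t (since p = X - t, dp/dt = -I)
        J = np.zeros((2 * len(p), 3))
        J[0::2, 0] = -FX / z
        J[0::2, 2] = FX * x / z**2
        J[1::2, 1] = -FY / z
        J[1::2, 2] = FY * y / z**2
        dt = np.linalg.solve(J.T @ J, -J.T @ r)       # normal equations
        t += dt
        if np.linalg.norm(dt) < 1e-10:
            break
    return t
```

Given a handful of 3D map points and their observed pixel locations, the solver recovers the translation that best explains the observations in the least-squares sense; minimizing the same residual jointly over poses and points is what the abstract refers to as improving localization accuracy and map quality.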



Published

15-11-2023

How to Cite

Huang, P. (2023). Vision-Based SLAM for UAVs in Dynamic Environments. Highlights in Science, Engineering and Technology, 70, 255-265. https://doi.org/10.54097/hset.v70i.12200