The Application of Multi-sensor Fusion Navigation in Unmanned Driving
DOI: https://doi.org/10.54097/p5mss687
Keywords: Multi-sensor Fusion Navigation; Unmanned Driving; Application
Abstract
With the popularization of autonomous driving technology, multi-sensor fusion is becoming increasingly important in unmanned driving. This paper studies two types of sensors, visual cameras and LiDAR, together with two multi-sensor fusion algorithms: Kalman filtering and Simultaneous Localization and Mapping (SLAM). Compared with relying on a single sensor, fusing the two offers a wider measurement range, higher precision, and reduced sensitivity to weather. Kalman filtering effectively reduces errors in navigation and positioning. Within SLAM, comparing visual SLAM with laser SLAM and analyzing SLAM variants that fuse additional sensors shows that multi-sensor fusion enhances robustness in complex environments, improves positioning accuracy, and balances real-time performance against global consistency. Deeper study of sensor combinations and their fusion algorithms can therefore further improve real-time performance, accuracy, and robustness, bringing truly driverless technology closer.
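To illustrate the fusion principle the abstract describes, the following is a minimal sketch (not the paper's implementation) of a one-dimensional Kalman update that fuses two hypothetical range sensors of different noise levels, loosely standing in for a camera (noisier) and a LiDAR (more precise). The sensor variances and the simulation setup are assumptions for illustration only; they show how each fused measurement shrinks the estimate's uncertainty below what either sensor achieves alone.

```python
import numpy as np

def kalman_update(x, P, z, R):
    """Fuse one measurement z (noise variance R) into estimate x (variance P)."""
    K = P / (P + R)          # Kalman gain: weight given to the new measurement
    x_new = x + K * (z - x)  # corrected state estimate
    P_new = (1.0 - K) * P    # posterior variance shrinks after every update
    return x_new, P_new

rng = np.random.default_rng(0)
true_pos = 10.0              # ground-truth position of an obstacle (metres)
x, P = 0.0, 100.0            # deliberately vague prior

# Alternate a noisy "camera" reading (variance 4.0) with a more precise
# "LiDAR" reading (variance 0.25); both variances are illustrative.
for _ in range(20):
    x, P = kalman_update(x, P, true_pos + rng.normal(0.0, 2.0), 4.0)
    x, P = kalman_update(x, P, true_pos + rng.normal(0.0, 0.5), 0.25)

print(f"estimate = {x:.2f} m, variance = {P:.4f}")
```

After the loop the posterior variance is far below the 0.25 of the better sensor alone, which is the error-reduction effect the abstract attributes to Kalman-filter-based fusion.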
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.