Research on the Fast-GwcNet 3D Reconstruction Method for Crime Scenes
DOI: https://doi.org/10.54097/1mazg225

Keywords: Crime Scenes, Fast-GwcNet 3D Reconstruction Method, S2A Attention Mechanism, MPM Module

Abstract
3D reconstruction of crime scenes is a key technology for judicial evidence collection and physical-evidence analysis: high-precision stereo matching restores the three-dimensional detail of the scene and significantly surpasses the clue-mining capability of 2D images. However, traditional algorithms are prone to matching ambiguity in complex scene environments, such as weakly textured physical evidence, dynamic occlusion, and non-Lambertian surfaces. To meet the accuracy and efficiency requirements of crime-scene 3D reconstruction, this paper designs a lightweight stereo matching algorithm based on the GwcNet baseline model. The S2A attention mechanism is introduced to improve feature extraction; the MPM module, together with a new 3D convolution, fuses multi-level cost features with cost filtering; and the SAFM module fuses the multi-level disparity maps during disparity prediction. Experimental results on the KITTI 2012 and KITTI 2015 datasets show improved matching accuracy and reduced running time, demonstrating significant judicial application value.
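The GwcNet baseline on which this work builds forms its matching cost by group-wise correlation: the left and right feature maps are split into channel groups, and for each candidate disparity the per-group inner product of the (shifted) features is averaged. As a minimal NumPy sketch of that standard cost-volume construction (function and parameter names are illustrative; the paper's S2A, MPM, and SAFM additions are not reproduced here):

```python
import numpy as np

def gwc_cost_volume(feat_l, feat_r, max_disp, num_groups):
    """Group-wise correlation cost volume in the style of GwcNet.

    feat_l, feat_r: (C, H, W) feature maps from a shared extractor.
    Returns a (num_groups, max_disp, H, W) cost volume.
    """
    C, H, W = feat_l.shape
    assert C % num_groups == 0, "channels must split evenly into groups"
    ch = C // num_groups
    gl = feat_l.reshape(num_groups, ch, H, W)
    gr = feat_r.reshape(num_groups, ch, H, W)
    cost = np.zeros((num_groups, max_disp, H, W), dtype=feat_l.dtype)
    for d in range(max_disp):
        if d == 0:
            # no shift: mean of element-wise product within each group
            cost[:, d] = (gl * gr).mean(axis=1)
        else:
            # right feature shifted by disparity d along the width axis;
            # the leftmost d columns have no match and stay zero
            cost[:, d, :, d:] = (gl[..., d:] * gr[..., :-d]).mean(axis=1)
    return cost
```

In the full network this volume is then aggregated by 3D convolutions (the stage the MPM module and cost filtering refine) before disparity regression.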
References
[1] Moritz Menze, Andreas Geiger. Object Scene Flow for Autonomous Vehicles[J]. Computer Vision and Pattern Recognition, 2015: 3061-3070.
[2] Tianyuan Y, Mao Y, Jiawei Y, Yicheng L, Yue W, Hang Z, et al. PreSight: Enhancing Autonomous Vehicle Perception with City-Scale NeRF Priors[C]. ECCV, 2024.
[3] Jonas K, Songyou P, Zuzana K, Marc P, Torsten S, et al. WildGaussians: 3D Gaussian Splatting in the Wild[C]. NeurIPS, 2024.
[4] Thomas M, Alex E, Christoph S, Alexander K. Instant Neural Graphics Primitives with a Multiresolution Hash Encoding[J]. ACM Transactions on Graphics, 2022, 41(4): 1-15.
[5] Hao W, Jing H, Huili C, Haozhe L, Yu-Kun L, Lu F, Kun L, et al. Crowd3D: Towards Hundreds of People Reconstruction from a Single Image[J]. Computing Research Repository, 2023: 8937-8946.
[6] Nikolaus M, Eddy I, Philip H, Philipp F, Daniel C, Alexey D, Thomas B, et al. A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation[J]. Computing Research Repository, 2016, abs/1512.02134: 4040-4048.
[7] Alex K, Hayk M, Saumitro D, Peter H, Ryan K, Abraham B, Adam B, et al. End-to-End Learning of Geometry and Context for Deep Stereo Regression[C]. IEEE International Conference on Computer Vision, 2017.
[8] Jia-Ren Chang, Yong-Sheng Chen. Pyramid Stereo Matching Network[J]. CoRR, 2018, abs/1803.08669: 5410-5418.
[9] Xiaoyang G, Kai Y, Wukui Y, Xiaogang W, Hongsheng L, et al. Group-Wise Correlation Stereo Network[J]. Computer Vision and Pattern Recognition, 2019: 3268-3277.
[10] Sameh K, Sean F, Christoph R, Adarsh K, Julien V, Shahram I, et al. StereoNet: Guided Hierarchical Refinement for Real-Time Edge-Aware Depth Prediction[J]. Computer Vision – ECCV 2018, Lecture Notes in Computer Science, 2018: 596-613.
[11] Feihu Z, Victor A P, Ruigang Y, Philip H S T, et al. GA-Net: Guided Aggregation Net for End-to-End Stereo Matching[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 185-194.
[12] Jure Zbontar, Yann LeCun. Computing the Stereo Matching Cost with a Convolutional Neural Network[C]. Computer Vision and Pattern Recognition, 2015: 1592-1599.
[13] Zhengfa L, Yiliu F, Yulan G, Hengzhu L, Wei C, Linbo Q, Li Z, Jianfeng Z, et al. Learning for Disparity Estimation Through Feature Constancy[C]. Computer Vision and Pattern Recognition, 2017.
[14] Guorun Y, Hengshuang Z, Jianping S, Zhidong D, Jiaya J, et al. SegStereo: Exploiting Semantic Information for Disparity Estimation[J]. arXiv (Cornell University), 2018, abs/1807.11699: 660-676.
License
Copyright (c) 2025 Frontiers in Computing and Intelligent Systems

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.