A Non-contact Real-time Diameter Measurement Method for Steel Wire Ropes via W-EDSR and W-DeblurNet

Authors

  • Hang Shen
  • Jiafeng Ren
  • Kuosheng Jiang
  • Yuanyuan Zhou

DOI:

https://doi.org/10.54097/hewqnm23

Keywords:

Steel Wire Rope, Super-resolution, Machine Vision, Deep Learning

Abstract

Steel wire ropes are widely used in hoisting, transportation, mining, and marine engineering, and their complex braided structure provides excellent tensile strength and fatigue resistance. To overcome the limitations of conventional methods for measuring steel wire rope diameter, particularly in accuracy, robustness, and adaptability to complex working environments, this study proposes a non-contact, real-time measurement approach based on super-resolution (SR) technology. The method employs an RGB-Depth camera to acquire visual and depth data of the steel wire rope simultaneously, and combines an enhanced super-resolution network (W-EDSR) with a self-supervised deblurring network (SRN-WDeblur) to improve image clarity and mitigate motion blur. A semantic segmentation and edge extraction algorithm, augmented with depth information, is then applied to derive continuous, smooth rope boundaries. Experimental results show that the method achieves 99.35% accuracy for static steel wire rope measurements and maintains 99.24% accuracy when the rope moves at 10 m/s, significantly outperforming traditional approaches and demonstrating strong robustness against motion blur. This research provides a reliable and efficient solution for online monitoring of mine hoisting steel wire ropes, contributing to enhanced intelligence and safety in mining operations.
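The final measurement step described in the abstract, deriving the diameter from segmented rope boundaries plus depth data, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a binary segmentation mask, a single depth reading in metres, and a known focal length `fx` in pixels, and converts the median per-row edge-to-edge pixel width to millimetres with the pinhole camera model:

```python
import numpy as np

def diameter_from_mask(mask, depth_m, fx):
    """Hypothetical post-processing step: estimate rope diameter in mm.

    For each image row, measure the distance in pixels between the
    left and right rope edges of the segmentation mask, then convert
    the median width to millimetres via the pinhole model.
    """
    widths = []
    for row in mask:
        cols = np.flatnonzero(row)  # column indices inside the rope
        if cols.size >= 2:
            widths.append(cols[-1] - cols[0] + 1)
    if not widths:
        return None
    # Median pixel width is robust to segmentation noise at rope ends.
    w_px = float(np.median(widths))
    # Pinhole projection: size_m = w_px * depth_m / fx
    return w_px * depth_m / fx * 1000.0

# Synthetic example: a vertical "rope" 40 px wide, 0.5 m away, fx = 1000 px
mask = np.zeros((100, 200), dtype=bool)
mask[:, 80:120] = True
print(diameter_from_mask(mask, depth_m=0.5, fx=1000.0))  # 20.0 mm
```

Taking the median over rows reflects the paper's emphasis on continuous, smooth boundaries: isolated segmentation errors on a few rows do not shift the reported diameter.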


References

[1] T. Wang and V. J. L. Gan, “Automated joint 3D reconstruction and visual inspection for buildings using computer vision and transfer learning,” Automation in Construction, vol. 149, p. 104810, May 2023, doi: 10.1016/j.autcon.2023.104810.

[2] Y. D. V. Yasuda, F. A. M. Cappabianco, L. E. G. Martins, and J. A. B. Gripp, “Aircraft visual inspection: A systematic literature review,” Computers in Industry, vol. 141, p. 103695, Oct. 2022, doi: 10.1016/j.compind.2022.103695.

[3] L. Ren, Z. Liu, and J. Zhou, “Shaking Noise Elimination for Detecting Local Flaw in Steel Wire Ropes Based on Magnetic Flux Leakage Detection,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–9, 2021, doi: 10.1109/TIM.2021.3112792.

[4] H. Wang, Q. Li, S. Han, P. Li, J. Tian, and S. Zhang, “Wire Rope Damage Detection Signal Processing Using K-Singular Value Decomposition and Optimized Double-Tree Complex Wavelet Transform,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–12, 2022, doi: 10.1109/TIM.2022.3216670.

[5] J. Tian, C. Zhao, W. Wang, and G. Sun, “Detection Technology of Mine Wire Rope Based on Radial Magnetic Vector With Flexible Printed Circuit,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–10, 2021, doi: 10.1109/TIM.2021.3096288.

[6] Y. L. Guo, G. X. Wu, X. L. Liu, and X. L. Xu, “Review of Fault Diagnosis Methods for Rotating Machinery Based on Deep Learning,” in The 8th International Symposium on Test Automation & Instrumentation (ISTAI 2020), Nov. 2020, pp. 175–180. doi: 10.1049/icp.2021.1316.

[7] K. Qiu, L. Tian, and P. Wang, “An Effective Framework of Automated Visual Surface Defect Detection for Metal Parts,” IEEE Sensors Journal, vol. 21, no. 18, pp. 20412–20420, Sep. 2021, doi: 10.1109/JSEN.2021.3095410.

[8] X. Li, J. Li, Y. Qu, and D. He, “Semi-supervised gear fault diagnosis using raw vibration signal based on deep learning,” Chinese Journal of Aeronautics, vol. 33, no. 2, pp. 418–426, Feb. 2020, doi: 10.1016/j.cja.2019.04.018.

[9] Y. Zhang, G. Cao, and J. Cao, “Target-less approach of wire rope rotation measurement,” Measurement, vol. 221, p. 113489, Nov. 2023, doi: 10.1016/j.measurement.2023.113489.

[10] A. Kazerouni, S. Karimijafarbigloo, R. Azad, Y. Velichko, U. Bagci, and D. Merhof, “Fusenet: Self-Supervised Dual-Path Network For Medical Image Segmentation,” in 2024 IEEE International Symposium on Biomedical Imaging (ISBI), May 2024, pp. 1–5. doi: 10.1109/ISBI56570.2024.10635112.

[11] A. Albanese, M. Nardello, G. Fiacco, and D. Brunelli, “Tiny Machine Learning for High Accuracy Product Quality Inspection,” IEEE Sensors Journal, vol. 23, no. 2, pp. 1575–1583, Jan. 2023, doi: 10.1109/JSEN.2022.3225227.

[12] P. Zhou, G. Zhou, S. Wang, H. Wang, Z. He, and X. Yan, “Visual Sensing Inspection for the Surface Damage of Steel Wire Ropes With Object Detection Method,” IEEE Sensors Journal, vol. 22, no. 23, pp. 22985–22993, Dec. 2022, doi: 10.1109/JSEN.2022.3214109.

[13] X. Wang and Z. Kan, “Defect Detection of Steel Wire Rope in Coal Mine Based on Improved YOLOv5 Deep Learning,” Journal of Information Processing Systems, vol. 19, no. 6, pp. 745–755, doi: 10.3745/JIPS.04.0293.

[14] Y. Dong, Y. Pan, D. Wang, and T. Cheng, “Corrosion detection and evaluation for steel wires based on a multi-vision scanning system,” Construction and Building Materials, vol. 322, p. 125877, Mar. 2022, doi: 10.1016/j.conbuildmat.2021.125877.

[15] P. Zhou, G. Zhou, H. Wang, D. Wang, and Z. He, “Automatic Detection of Industrial Wire Rope Surface Damage Using Deep Learning-Based Visual Perception Technology,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–11, 2021, doi: 10.1109/TIM.2020.3011762.

[16] J. Yu, X. Cheng, and Q. Li, “Surface Defect Detection of Steel Strips Based on Anchor-Free Network With Channel Attention and Bidirectional Feature Fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–10, 2022, doi: 10.1109/TIM.2021.3136183.

[17] A. Assadzadeh, M. Arashpour, I. Brilakis, T. Ngo, and E. Konstantinou, “Vision-based excavator pose estimation using synthetically generated datasets with domain randomization,” Automation in Construction, vol. 134, p. 104089, Feb. 2022, doi: 10.1016/j.autcon.2021.104089.

[18] Y. Wang, J. Luo, C. Liu, X. Yuan, K. Wang, and C. Yang, “Layer-Wise Residual-Guided Feature Learning With Deep Learning Networks for Industrial Quality Prediction,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–11, 2022, doi: 10.1109/TIM.2022.3214611.

[19] L. Wu et al., “Deep learning-based super-resolution with feature coordinators preservation for vision-based measurement,” Structural Control and Health Monitoring, vol. 29, no. 12, p. e3107, 2022, doi: 10.1002/stc.3107.

[20] X. Ge, H. Cui, Z. Xu, M. He, and X. Han, “Super-Resolution Image Reconstruction Method for Micro Defects of Metal Engine Blades,” Acta Optica Sinica, vol. 43, no. 2, p. 0210001, Feb. 2023, doi: 10.3788/AOS221263.

[21] L. Schermelleh et al., “Super-resolution microscopy demystified,” Nat Cell Biol, vol. 21, no. 1, pp. 72–84, Jan. 2019, doi: 10.1038/s41556-018-0251-8.

[22] Y. Wang et al., “Remote sensing image super-resolution and object detection: Benchmark and state of the art,” Expert Systems with Applications, vol. 197, p. 116793, Jul. 2022, doi: 10.1016/j.eswa.2022.116793.

[23] C.-Q. Feng, B.-L. Li, Y.-F. Liu, F. Zhang, Y. Yue, and J.-S. Fan, “Crack assessment using multi-sensor fusion simultaneous localization and mapping (SLAM) and image super-resolution for bridge inspection,” Automation in Construction, vol. 155, p. 105047, Nov. 2023, doi: 10.1016/j.autcon.2023.105047.

[24] L. Chen, K. Meng, H. Zhang, J. Zhou, and P. Lou, “SR-FABNet: Super-Resolution branch guided Fourier attention detection network for efficient optical inspection of nanoscale wafer defects,” Advanced Engineering Informatics, vol. 65, p. 103200, May 2025, doi: 10.1016/j.aei.2025.103200.

[25] G. Wang et al., “Efficient multi-branch dynamic fusion network for super-resolution of industrial component image,” Displays, vol. 82, p. 102633, Apr. 2024, doi: 10.1016/j.displa.2023.102633.

[26] X. Sun et al., “A Multiscale Attention Mechanism Super-Resolution Confocal Microscopy for Wafer Defect Detection,” IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 1016–1027, 2025, doi: 10.1109/TASE.2024.3358693.

[27] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced Deep Residual Networks for Single Image Super-Resolution,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jul. 2017, pp. 1132–1140. doi: 10.1109/CVPRW.2017.151.

[28] X. Tao, H. Gao, X. Shen, J. Wang, and J. Jia, “Scale-Recurrent Network for Deep Image Deblurring,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 8174–8182. doi: 10.1109/CVPR.2018.00853.

[29] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, Eds., Cham: Springer International Publishing, 2015, pp. 234–241. doi: 10.1007/978-3-319-24574-4_28.

Published

29-12-2025

Issue

Section

Articles

How to Cite

Shen, H., Ren, J., Jiang, K., & Zhou, Y. (2025). A Non-contact Real-time Diameter Measurement Method for Steel Wire Ropes via W-EDSR and W-DeblurNet. Frontiers in Computing and Intelligent Systems, 14(3), 7-18. https://doi.org/10.54097/hewqnm23