Attention-Enhanced Deep U-Net for Chip Defect Segmentation: Performance and Limitations
DOI: https://doi.org/10.54097/zysa7g15
Keywords: Deep U-Net, semiconductor defect segmentation, attention mechanism.
Abstract
In semiconductor manufacturing, reliable quality control requires finding defects in very large, high-resolution micrographs in which a defect may occupy only a few pixels. This study reevaluates U-Net for chip-defect segmentation and systematically investigates whether attention mechanisms improve performance. A deep U-Net with a ResNet-34 encoder is trained on 525 annotated 1024 × 1024 images of metal interconnects and vias (80/20 train/validation split). Training employs data augmentation and a hybrid loss (Focal + Generalized Dice) to mitigate extreme class imbalance. Three attention modules are integrated into the decoder and compared against a non-attention baseline: Squeeze-and-Excitation (SE), Efficient Channel Attention (ECA), and CBAM. Performance is evaluated using Intersection-over-Union (IoU), Dice coefficient, background accuracy, and visual inspection of boundary continuity. All variants achieve strong results: the baseline reaches IoU 0.976, Dice 0.983, and background accuracy 0.995; CBAM slightly improves aggregate scores (IoU 0.978, Dice 0.985), while SE produces smoother edges. However, qualitative analysis indicates that ECA and CBAM often suppress low-contrast traces and break thin, elongated interconnects; metric differences across models remain within 0.003 IoU. These results suggest that, for structured, low-texture chip imagery with limited data, a carefully tuned U-Net already captures the critical cues, while generic attention modules yield minimal gains or even harm line continuity. Future research should focus on multiscale fusion and boundary-aware objectives rather than additional attention mechanisms, and should investigate transfer learning and domain adaptation to enhance applicability.
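As a concrete illustration of the training objective, the following is a minimal PyTorch sketch of a hybrid Focal + Generalized Dice loss of the kind the abstract describes. The focal gamma, the inverse-area Dice weighting, and the mixing weight alpha are assumptions chosen for illustration, not the authors' exact formulation.

```python
import torch

def focal_loss(logits, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights easy (mostly background) pixels."""
    prob = torch.sigmoid(logits)
    pt = torch.where(target > 0.5, prob, 1.0 - prob)   # probability of the true class
    return (-(1.0 - pt).pow(gamma) * torch.log(pt.clamp(min=eps))).mean()

def generalized_dice_loss(logits, target, eps=1e-6):
    """Generalized Dice loss with inverse squared-area class weights."""
    prob = torch.sigmoid(logits)                        # (N, H, W)
    probs = torch.stack([1.0 - prob, prob], dim=1)      # (N, 2, H, W): background, defect
    onehot = torch.stack([1.0 - target, target], dim=1)
    w = 1.0 / (onehot.sum(dim=(0, 2, 3)).pow(2) + eps)  # rare classes get large weights
    inter = (w * (probs * onehot).sum(dim=(0, 2, 3))).sum()
    union = (w * (probs + onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * inter / (union + eps)

def hybrid_loss(logits, target, alpha=0.5):
    """Weighted sum of the two terms; alpha is an assumed mixing knob."""
    return alpha * focal_loss(logits, target) + (1 - alpha) * generalized_dice_loss(logits, target)
```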
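The attention comparison attaches a channel-attention gate to each decoder stage. The sketch below shows one such stage with a Squeeze-and-Excitation gate; the block layout (transposed-convolution upsampling, skip concatenation, reduction ratio 16) is an assumption about the architecture, and an ECA or CBAM module would replace SEBlock at the marked line.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pool, two FC layers, sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze to (N, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excite to (N, C, 1, 1)
        return x * w                                 # reweight channels

class DecoderBlock(nn.Module):
    """One U-Net decoder stage: upsample, concatenate skip, convolve, attend."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.attn = SEBlock(out_ch)                  # swap in ECA or CBAM here

    def forward(self, x, skip):
        x = torch.cat([self.up(x), skip], dim=1)
        return self.attn(self.conv(x))
```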
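The reported metrics can be computed directly from binary masks. The helper below is a minimal sketch under the assumption that foreground IoU, foreground Dice, and background accuracy are meant; the function name and epsilon smoothing are illustrative.

```python
import torch

def iou_dice_bg(pred_mask, true_mask, eps=1e-6):
    """Foreground IoU and Dice plus background accuracy from 0/1 masks of equal shape."""
    pred, true = pred_mask.bool(), true_mask.bool()
    inter = (pred & true).sum().float()
    union = (pred | true).sum().float()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum().float() + true.sum().float() + eps)
    # Background accuracy: fraction of true-background pixels predicted as background.
    bg_acc = ((~pred) & (~true)).sum().float() / ((~true).sum().float() + eps)
    return iou.item(), dice.item(), bg_acc.item()
```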
License
Copyright (c) 2026 Academic Journal of Science and Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.