Deep Learning-Based Low-Light Image Enhancement: A Review
DOI: https://doi.org/10.54097/3y2azk75

Keywords: Low-light image; Image enhancement; Deep learning; Supervised learning; Unsupervised learning

Abstract
With the rapid development of deep learning in computer vision, low-light image enhancement (LLIE) has shifted from traditional handcrafted prior-based methods to data-driven learning paradigms. Deep models can automatically learn the complex non-linear mapping between low-light and normal-exposure images through end-to-end training, showing remarkable advantages in brightness improvement, detail restoration, and noise suppression. According to their dependence on labeled data and the form of supervision signal used during training, existing deep learning-based LLIE methods fall into five categories: supervised, reinforcement, unsupervised, zero-shot, and semi-supervised learning. Among them, supervised learning is the most mature and mainstream technical route; it has evolved from convolutional neural networks and Retinex decomposition to Transformer and state-space models (e.g., Mamba), achieving accurate recovery of illumination, texture, and color with stable performance. Reinforcement learning formulates LLIE as a sequential decision-making problem but remains at the exploratory stage due to low learning efficiency and high sensitivity to reward function design. Unsupervised and zero-shot learning methods address the difficulty of data annotation in real scenarios through unpaired, no-reference, or self-supervised mechanisms. Semi-supervised learning balances data acquisition cost against enhancement performance by combining a small amount of labeled data with a large amount of unlabeled data. This paper summarizes the core principles, representative works, advantages, and open challenges of each category, pointing out that supervised learning dominates LLIE due to its stability and precision, while the other paradigms provide effective supplements in data-scarce or complex scenarios.
Future research should focus on improving the adaptability of models to different signal-to-noise ratio regions and balancing global illumination consistency and high-frequency detail preservation.
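The zero-reference route surveyed above (e.g., Zero-DCE [22]) can be illustrated with a minimal sketch: the image is enhanced by iteratively applying a quadratic light-enhancement curve LE(x) = x + αx(1 − x) to each pixel, which brightens dark values while keeping the output in [0, 1]. In the actual method a CNN predicts per-pixel α maps without any reference image; the fixed scalar α and the iteration count below are simplifying assumptions for illustration only.

```python
import numpy as np

def enhance_curve(x, alphas):
    """Apply the Zero-DCE-style curve LE(x) = x + a*x*(1 - x) iteratively.

    x      : image array with values in [0, 1]
    alphas : one curve parameter per iteration, each in (0, 1]
    """
    for a in alphas:
        # For a in (0, 1] and x in [0, 1], the update is monotone
        # non-decreasing and never exceeds 1 (fixed point at x = 1).
        x = x + a * x * (1.0 - x)
    return x

# A synthetic "dark" image: values confined to [0, 0.2].
rng = np.random.default_rng(0)
low = rng.random((4, 4, 3)) * 0.2

# Eight iterations with a fixed alpha (the real method learns per-pixel maps).
out = enhance_curve(low, alphas=[0.8] * 8)

assert out.shape == low.shape
assert np.all(out >= low)          # every pixel is brightened
assert np.all(out <= 1.0)          # output stays in valid range
```

Supervised methods instead learn this mapping implicitly from paired data; the appeal of the curve formulation is that its constraints (monotonicity, bounded range) can be enforced by no-reference losses alone.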
References
[1] Lore K G, Akintayo A, Sarkar S. LLNet: A deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650-662.
[2] Wei C, Wang W, Yang W, et al. Deep retinex decomposition for low-light enhancement[J]. arXiv preprint arXiv:1808.04560, 2018.
[3] Zhang Y, Zhang J, Guo X. Kindling the darkness: A practical low-light image enhancer[C]//Proceedings of the 27th ACM International Conference on Multimedia. 2019: 1632-1640.
[4] Zhang Y, Guo X, Ma J, et al. Beyond brightening low-light images[J]. International Journal of Computer Vision, 2021, 129: 1013-1037.
[5] Bai X, Wang Y, Hu B, et al. DRWKV: Focusing on Object Edges for Low-Light Image Enhancement[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2026: 1554-1564.
[6] Xu X, Wang R, Fu C W, et al. Snr-aware low-light image enhancement[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 17714-17724.
[7] Zamir S W, Arora A, Khan S, et al. Restormer: Efficient transformer for high-resolution image restoration[C]// Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 5728-5739.
[8] Wang T, Zhang K, Shen T, et al. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method[C]//Proceedings of the AAAI conference on artificial intelligence. 2023, 37(3): 2654-2662.
[9] Cai Y, Bian H, Lin J, et al. Retinexformer: One-stage retinex-based transformer for low-light image enhancement[C]// Proceedings of the IEEE/CVF international conference on computer vision. 2023: 12504-12513.
[10] Weng J, Yan Z, Tai Y, et al. Mamballie: Implicit retinex-aware low light enhancement with global-then-local state space[J]. arXiv preprint arXiv:2405.16105, 2024.
[11] Bai J, Yin Y, He Q, et al. Retinexmamba: Retinex-based mamba for low-light image enhancement[C]//International conference on neural information processing. Singapore: Springer Nature Singapore, 2024: 427-442.
[12] Wang S, Tao Q, Tang Z. RESVMUNetX: A Low-Light Enhancement Network Based on VMamba[J]. arXiv preprint arXiv:2407.09553, 2024.
[13] Zou W, Gao H, Yang W, et al. Wave-mamba: Wavelet state space model for ultra-high-definition low-light image enhancement[C]//Proceedings of the 32nd ACM International Conference on Multimedia. 2024: 1534-1543.
[14] Deng R, Jiang A, Peng L, et al. Codebook Knowledge with Mamba-Transformer For Low-Light Image Enhancement[C]// Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2026: 3720-3729.
[15] Park J, Lee J Y, Yoo D, et al. Distort-and-recover: Color enhancement using deep reinforcement learning[C]// Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 5928-5936.
[16] Zhang R, Guo L, Huang S, et al. ReLLIE: Deep reinforcement learning for customized low-light image enhancement[C]//Proceedings of the 29th ACM International Conference on Multimedia. 2021.
[17] Jiang Y, Gong X, Liu D, et al. Enlightengan: Deep light enhancement without paired supervision[J]. IEEE transactions on image processing, 2021, 30: 2340-2349.
[18] Ni Z, Yang W, Wang H, et al. Cycle-interactive generative adversarial network for robust unsupervised low-light enhancement[C]//Proceedings of the 30th ACM International Conference on Multimedia. 2022: 1484-1492.
[19] Jiang H, Luo A, Liu X, et al. Lightendiffusion: Unsupervised low-light image enhancement with latent-retinex diffusion models[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 161-179.
[20] Lin Y, Ye T, Chen S, et al. Aglldiff: Guiding diffusion models towards unsupervised training-free Real-world low-light image enhancement[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2025, 39(5): 5307-5315.
[21] Wei W, Feng X, Song W, et al. Nonuniform low-light image enhancement based on game-retinex variational and adaptive vector-valued gamma correction[J]. Information Sciences, 2026: 123063.
[22] Guo C, Li C, Guo J, et al. Zero-reference deep curve estimation for low-light image enhancement[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 1777-1786.
[23] Li C, Guo C, Loy C C. Learning to enhance low-light image via zero-reference deep curve estimation[J]. IEEE transactions on pattern analysis and machine intelligence, 2021, 44(8): 4225-4238.
[24] Pan Q, Zhang Z, Tian N. Zero-Reference Generative Exposure Correction and Adaptive Fusion for Low-Light Image Enhancement[J]. Neurocomputing, 2025, 636: 129992.
[25] Peng Y, Guo X, Xu M, et al. Wavelet-Guided Zero-Reference Diffusion for Unsupervised Low-Light Image Enhancement[J]. Electronics, 2025, 14(22): 4460.
[26] Yang W, Wang S, Fang Y, et al. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 3063-3072.
[27] Yang W, Wang S, Fang Y, et al. Band representation-based semi-supervised low-light image enhancement: Bridging the gap between signal fidelity and perceptual quality[J]. IEEE Transactions on Image Processing, 2021, 30: 3461-3473.
[28] Jiang N, Cao Y, Zhang X Y, et al. Low-light image enhancement with quality-oriented pseudo labels via semi-supervised contrastive learning[J]. Expert Systems with Applications, 2025, 276: 127106.
License
Copyright (c) 2026 Pengyun Shi

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.