A Lightweight Dual-Branch Image Dehazing Network Based on Associative Learning

Abstract: Haze degrades the clarity, contrast, and details of images, reducing image quality. Image dehazing provides a means to recover clearer and more accurate image information. Traditional haze-removal methods typically rely on manually designed features and models, which limits their performance in complex scenes. In recent years, the rapid advancement of deep learning has offered new insights into the image dehazing problem. This paper proposes a lightweight dual-branch image dehazing network based on associative learning (LDANet). The network consists of a lightweight dehazing sub-network (LDSN) and a lightweight image enhancement sub-network (LESN). To reduce computational and parameter complexity, Tied Block Convolution (TBC) is employed, allowing parameter sharing among components. Finally, associative learning maps the distinctive features of the two sub-networks. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our approach over other state-of-the-art methods in both qualitative comparisons and quantitative evaluations. Our method holds significant practical value for real-world image dehazing scenarios.


Introduction
In domains such as transportation, aerospace, and road monitoring, haze significantly reduces visibility, affecting various aspects of real-life scenarios. Therefore, dehazing is a crucial task aimed at mitigating the impact of haze on different systems and enhancing their usability. The Atmospheric Scattering Model (ASM) [1] is a mathematical model that describes the scattering and absorption of light during its propagation through the atmosphere. It is employed to explain the process of light propagation in the atmosphere, particularly under conditions of haze, atmospheric pollution, or long-distance observation. The ASM can be formally expressed as follows:

I(x) = J(x)t(x) + A(1 − t(x)),  t(x) = e^{−βd(x)},

where I and J represent the observed hazy image and the haze-free image, respectively, A denotes the global atmospheric light, t(x) denotes the medium transmission map, β and d(x) represent the atmospheric scattering coefficient and scene depth, respectively, and x denotes the pixel position. Based on different principles, current image dehazing algorithms can be categorized into two main types: image enhancement-based and image restoration-based dehazing algorithms.
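The forward direction of the ASM can be illustrated with a small NumPy sketch that synthesizes a hazy image from a clear image and a depth map (this is an illustrative example, not the paper's data pipeline; the scalar atmospheric light A = 0.8 and β = 1.0 are assumed values):

```python
import numpy as np

def synthesize_haze(J, d, beta=1.0, A=0.8):
    """Apply the Atmospheric Scattering Model I = J*t + A*(1 - t).

    J    : clear image, float array in [0, 1], shape (H, W) or (H, W, 3)
    d    : per-pixel scene depth, shape (H, W)
    beta : atmospheric scattering coefficient
    A    : global atmospheric light (a scalar here for simplicity)
    """
    t = np.exp(-beta * d)          # transmission map t(x) = e^{-beta * d(x)}
    if J.ndim == 3:
        t = t[..., None]           # broadcast over colour channels
    return J * t + A * (1.0 - t)   # hazy observation I(x)

# Toy example: a uniform grey scene whose depth grows from left to right.
J = np.full((4, 4), 0.5)
d = np.tile(np.linspace(0.0, 3.0, 4), (4, 1))
I = synthesize_haze(J, d)
```

Note how pixels at zero depth are unchanged (t = 1), while distant pixels are pulled toward the atmospheric light A, which is exactly the degradation that dehazing must invert.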
The remaining sections of this paper are structured as follows. In Section 2, we review the research status of image dehazing. In Section 3, the proposed network is presented in detail. In Section 4, we objectively analyze the experimental results, including evaluation metrics and comparative images. In Section 5, we summarize this paper.

Image Enhancement-based Dehazing Algorithms
By adjusting the grayscale levels, the contrast of an image can be enhanced, thus improving its visual effect. Histogram equalization algorithms are widely used in digital image processing, including both global and local histogram equalization methods [2]. Stark [2] and Kim et al. [4] proposed adaptive histogram equalization algorithms and partially overlapping sub-block histogram equalization algorithms, respectively. Russo [5] performed equalization on degraded images at multiple scales. Dippel et al. [6] compared two multi-resolution analysis methods, Laplacian pyramid and wavelet transform, which exhibit good local characteristics. The Retinex model [7], based on the theory of color constancy, includes both single-scale and multi-scale Retinex algorithms. However, these methods overlook the fundamental cause of image blur, leading to subpar results.
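As a minimal sketch of the global histogram equalization idea discussed above (not any of the cited variants), the grey-level mapping can be built from the cumulative histogram:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]      # first non-zero CDF value
    # Map each grey level so the output histogram is approximately flat.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image confined to levels 100..120 spreads to the full range.
img = np.random.default_rng(0).integers(100, 121, size=(32, 32), dtype=np.uint8)
out = equalize_hist(img)
```

The adaptive and sub-block variants cited above apply the same mapping locally rather than over the whole image, trading global consistency for stronger local contrast.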

Image Restoration-based Dehazing Algorithms
These methods mainly include those based on prior knowledge and those based on learning. He et al. [8] proposed the Dark Channel Prior (DCP) dehazing algorithm, which utilizes the ASM for uniform dehazing. Tan [9] constructed a Markov random field model to estimate a cost function of edge intensity, yielding significant improvement in image details. Cai et al. [10] applied deep convolutional neural networks to dehazing, where their DehazeNet model, built on a deep CNN structure, provided a novel way to estimate the medium transmission. Li et al. [11] designed the end-to-end AOD-Net, which directly produces clear images without estimating the medium transmission or atmospheric light. Ren et al. [12] proposed the MSCNN algorithm, which employs a multi-scale network structure to obtain clear images. Chen et al. [13] introduced the GCANet model to address grid artifacts in the image restoration process. Qin et al. [14] proposed the FFA-Net model, which incorporates attention mechanisms to effectively remove non-uniform haze.

Network Architecture
In this section, we provide a brief overview of the proposed network, LDANet. As shown in Figure 1, the network takes a hazy image as input and outputs a clean image. It consists of two main components: the dehazing sub-network (LDSN) and the image enhancement sub-network (LESN). The LDSN is an encoder-decoder structure that roughly removes haze from the input image. It incorporates Tied Block Convolution (TBC), a lightweight convolution with shared parameters, which significantly reduces computational and parameter costs. The LESN complements the features of the LDSN. It employs the SimAM attention mechanism to adaptively focus on different features and contextual information, highlighting salient features and improving the model's generalization ability; importantly, SimAM introduces no additional parameters. Finally, through associative learning, the two sub-networks are trained to capture their correlation and jointly generate a clear image as the final output.
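The parameter saving from TBC can be made concrete with a back-of-the-envelope count. In TBC, the input channels are split into B equal blocks and every block reuses the same filter bank, so the filters have shape (C_out/B, C_in/B, k, k) and the parameter count drops by a factor of B². The sketch below (an illustration under these assumptions, not the paper's exact layer configuration) compares a standard convolution with a TBC layer:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard 2-D convolution (bias ignored)."""
    return c_out * c_in * k * k

def tbc_params(c_in, c_out, k, b):
    """Tied Block Convolution: the input is split into b equal channel
    blocks, and all blocks share one filter bank of shape
    (c_out // b, c_in // b, k, k), a b**2 parameter reduction."""
    assert c_in % b == 0 and c_out % b == 0, "channels must divide evenly"
    return (c_out // b) * (c_in // b) * k * k

std = conv_params(64, 64, 3)       # standard 3x3 conv, 64 -> 64 channels
tbc = tbc_params(64, 64, 3, b=2)   # same shape with 2 tied blocks
```

With B = 2 the layer uses a quarter of the parameters of the standard convolution while producing the same output shape, which is the source of the "lightweight" claim.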

Loss Function
We use the Charbonnier penalty loss [15] and the Structural Similarity (SSIM) loss [16] to jointly compute the total loss function for network optimization, aiming to evaluate image quality more accurately. The mathematical expressions are as follows:

L_char = sqrt(||Ĵ − J||² + ε²),
L_SSIM = 1 − SSIM(Ĵ, J),
L_total = L_char + λ · L_SSIM,

where Ĵ represents the generated image and J represents the target image. L_total represents the total loss function, ε is a small penalty constant, and λ is a balancing weight.
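A minimal NumPy sketch of these loss terms follows. It is an illustration, not the training code: the SSIM term is computed globally for brevity (the standard definition uses a sliding Gaussian window), and the weight lam = 0.5 is a hypothetical value.

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier penalty: a smooth, robust variant of the L1 loss."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def ssim_loss(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM, computed globally over the image for brevity."""
    mx, my = pred.mean(), target.mean()
    vx, vy = pred.var(), target.var()
    cov = ((pred - mx) * (target - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return 1.0 - ssim

def total_loss(pred, target, lam=0.5):
    """Weighted sum of the two terms; lam is a hypothetical weight."""
    return charbonnier_loss(pred, target) + lam * ssim_loss(pred, target)

x = np.random.default_rng(1).random((16, 16))
loss_same = total_loss(x, x)       # identical images -> near-zero loss
loss_diff = total_loss(x, 1.0 - x)
```

Identical images give a loss near zero (only the ε term of the Charbonnier penalty survives), while dissimilar images are penalized by both terms.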

Datasets
To thoroughly demonstrate the performance of the proposed model, we conducted experiments on several dehazing datasets, including synthetic and heterogeneous (real-world) datasets. For the synthetic experiments we used the RESIDE dataset [17], which is widely used in image dehazing and consists of haze images synthesized using prior information; it comprises the Indoor Training Set (ITS), the Outdoor Training Set (OTS), and the Synthetic Objective Testing Set (SOTS). The proposed model not only achieved superior results on the synthetic dataset but, more importantly, performed satisfactorily on the heterogeneous dehazing datasets. We therefore conducted experiments and comparisons on the I-HAZE [18], O-HAZE [19], and NH-HAZE [20] datasets.

Quality Evaluation Metrics
In the field of image dehazing, it is crucial to objectively evaluate the performance of an algorithm, which typically involves several evaluation metrics. In this paper, we used Peak Signal-to-Noise Ratio (PSNR) [21] and the Structural Similarity Index (SSIM) [22] to measure the reconstruction quality of the images. The PSNR value reflects the difference between the original image and the reconstructed image, where a higher value indicates better image restoration. SSIM measures the similarity between the original and reconstructed images by comparing their luminance, contrast, and structural information. The SSIM value ranges from 0 to 1, with a value closer to 1 indicating higher image similarity and better image restoration. Table 1 summarizes the image quality evaluation metrics of our proposed method compared to the DCP, FFA-Net, and MSBDN methods on the SOTS, I-HAZE, O-HAZE, and NH-HAZE datasets. Each value in the table is the average result over the corresponding test set. It is evident that our proposed method outperforms the other methods by a clear margin on both metrics.
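For reference, PSNR can be computed from the mean squared error as 10·log10(MAX²/MSE). The following sketch assumes images scaled to [0, 1]; the noise level 0.05 is an arbitrary illustrative value:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')        # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
score = psnr(ref, noisy)           # roughly 26 dB for this noise level
```

In practice, dehazing papers report PSNR and SSIM via library implementations (e.g., scikit-image) over the full test set and average the per-image scores, which is how the values in Table 1 are obtained.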

Conclusion
Inspired by deep learning and influenced by popular neural networks, this paper proposes a lightweight dual-branch image dehazing network based on associative learning to eliminate the effects of haze in images and improve the usability of systems in domains such as transportation, surveillance, and aviation. Specifically, the proposed network consists of a haze removal sub-network and an image enhancement sub-network. Tied Block Convolution (TBC) is employed with parameter sharing to reduce computational and parameter complexity. Finally, through associative learning, the distinctive features of these sub-networks are mapped. The proposed model demonstrates qualitative and quantitative advantages, yielding superior dehazing results. Moreover, it has low computational and parameter requirements, which saves computational resources, and it thus holds significant practical value for real-world image dehazing. Future applications and improvements can be explored in the domain of video dehazing. Additionally, this paper does not cover the scenario of nighttime dehazing, which is another area worthy of investigation.