Research on the Application of Convolutional Neural Network Model in Night Surveillance Video Image Enhancement

Abstract: With the popularization of professional digital imaging equipment, digital image processing is widely used in fields such as industrial production, video surveillance, intelligent transportation, and remote sensing, where it plays a significant role. In low-illumination environments such as cloudy days, nighttime, indoor scenes, and scenes with object occlusion, imaging devices often capture images with low brightness and contrast, severe loss of detail, and heavy noise. Enhancing low-illumination images improves their clarity, highlights the texture details of the scene, and greatly raises image quality, providing a data-quality guarantee for tasks such as target recognition and tracking and image segmentation. This paper proposes a low-light image enhancement algorithm based on a CNN model to address the low brightness and poor clarity of surveillance video images captured at night under poor lighting conditions. The algorithm effectively improves the quality of low-light images and performs well on multiple public datasets: it not only raises image brightness but also sharpens image detail to a certain degree, while largely avoiding color distortion and halo artifacts.


Introduction
With the rapid growth of China's economy, sensing devices based on visible-light imaging have been used ever more widely in many fields, such as vision-based unmanned driving systems and various visible-light monitoring systems [1]. With the massive growth of image data, image quality has become an increasingly prominent issue, and a significant factor affecting it is the intensity of ambient light [2]. High-quality video images are crucial for automation, task decision-making, and behavior recognition and classification. The high-resolution, clear photos captured by cameras provide important evidentiary support for subsequent image recognition and decision-making tasks.
For video surveillance and photography, taking community surveillance as an example, operators want to monitor the scene in front of the camera in real time through video images. For instance, they may need to determine whether a person is a suspect from identity and behavior cues in the video, or monitor vehicles from the vehicle information in the video [3]. However, under weak lighting or insufficient exposure, the captured images suffer from low brightness, poor color saturation, and blurry details, which reduces image quality, loses edge detail, and seriously limits the usefulness of the image. To improve the visual effect of such images, enhancement processing is necessary. Basic enhancement methods include: sharpening image edges and details by adjusting contrast; improving clarity by adjusting the dynamic range and suppressing noise; brightening darker areas so that image brightness is distributed evenly; and adjusting color saturation to achieve a good visual effect [4]. In recent years, more and more researchers in computer vision have turned to low-illumination image enhancement. Existing algorithms can enhance low-illumination degraded images of natural scenes, but the recovery of image detail still needs improvement. With continuing advances in hardware and the rapid growth of deep learning, many deep-learning-based low-light enhancement algorithms have emerged and achieved better results than traditional methods. Nevertheless, shortcomings remain, and low-light image enhancement is still a challenging research problem [5].
The goal of low-light image enhancement is to convert noisy, poorly lit images into clear images comparable to those captured with low noise and sufficient lighting [6]. Image denoising and enhancement aim to raise image quality and lay the foundation for image recognition: whether the task is to strengthen useful information in the image or to restore the original scene, the objective is better image quality and visual effect, for example by removing interfering data so that the image appears clearer. A low-light enhancement algorithm can bring out details buried in darkness, adjust the brightness of the image, and suppress noise, thereby improving the overall effect. This article proposes a low-light image enhancement algorithm based on a CNN model, which substantially improves the quality of low-light images and performs well on multiple common datasets.

CNN Structure
CNNs combine artificial neural networks with deep learning algorithms and are highly effective in image processing [7]. They have powerful feature extraction and learning abilities, extracting salient feature information from the input [8], and are therefore widely used in visual tasks such as scene labeling, object detection, image segmentation, and image classification. The structure of a CNN is shown in Figure 1. In addition to input and output layers, it contains hidden layers in the middle. The input layer obtains the input data, such as images, audio, or text [9]. Each hidden layer is a nonlinear stage consisting of three parts: a linear activation response, a detection stage, and a pooling function. The hidden layers perform feature transformation; the later layers are usually fully connected, taking the outputs of all nodes in one layer as the inputs of the nodes in the next. In practice, the number of hidden layers depends on the function and application of the network, and a network with many layers is called a deep neural network. The output layer produces the result of feature extraction, and its data format must be set according to the specific task. Because CNNs perform feature extraction and classification in the hidden layers, optimizing the convolutional layers and the perceptron layers improves the precision of feature extraction and the effectiveness of classification [10].
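The convolution, detection, and pooling stages of a single hidden layer described above can be sketched in plain NumPy (a toy illustration only, not the network used in this paper; the 8 × 8 input and 3 × 3 kernel sizes are arbitrary assumptions):

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

def relu(x):
    """Detection stage: element-wise nonlinearity."""
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Pooling stage: keep the maximum in each size x size window."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# One hidden layer: convolution (linear response) -> ReLU (detection) -> pooling
image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3) - 0.5
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (3, 3): 8x8 -> 6x6 after conv -> 3x3 after pooling
```

Stacking several such layers, followed by fully connected layers, yields the deep structure discussed above.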

Low Light Image Enhancement Algorithm
In overly strong or weak lighting environments, digital images are often captured with low contrast or with brightness that is too high or too low, blurring image details. The RGB color model is the one most often used in computing. It is based on the structure of the human eye: all colors in nature can be represented or synthesized from the three primary colors red, green, and blue. Images collected under harsh conditions such as rain or insufficient lighting are of very low quality, appearing dark, low in contrast, and missing details. To address the shortcomings of current low-light enhancement algorithms, such as over-enhancement, unnatural results, artifacts, and excessive saturation, this article combines the advantages of a color model transformation with a CNN to enhance low-light images. In the HSI color space, the components take values H ∈ [0°, 360°), S ∈ [0, 1], and I ∈ [0, 1]. Because the conversion formulas depend on the hue H, the transformation from HSI back to RGB must be applied piecewise according to the sector in which H lies.
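The RGB → HSI transformation referred to above can be sketched with the standard geometric formulas (a minimal NumPy illustration, not this paper's exact implementation):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to HSI.

    H is returned in degrees [0, 360); S and I lie in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # guards against division by zero for black pixels

    # Intensity: average of the three channels
    intensity = (r + g + b) / 3.0
    # Saturation: distance from the gray axis
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)

    # Hue: angle in the color triangle, measured from red
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b > g, 360.0 - theta, theta)

    return np.stack([hue, saturation, intensity], axis=-1)

# Pure red pixel: H ~ 0 degrees, S = 1, I = 1/3
hsi = rgb_to_hsi(np.array([[[1.0, 0.0, 0.0]]]))
```

The inverse HSI → RGB conversion is applied per hue sector (0°–120°, 120°–240°, 240°–360°), which is the piecewise adjustment by H mentioned in the text.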
During imaging, if the ambient light is weak or the light falling on the object is uneven (for example, backlighting), the captured image will be dark locally or overall, seriously degrading its optical quality. To learn the relationship between a low-light image and its illumination map, we extract overlapping pixel blocks from the low-light image and map each block to a high-dimensional vector using n₁ filters:

F₁(p) = max(0, W₁ * p + B₁)

where p is an input image block of size n × n, and W₁ and B₁ are the weights and biases of the convolutional kernels. W₁ has size f₁ × f₁ × n₁, where f₁ is the spatial size of a single convolutional kernel and n₁ is the number of convolutional kernels. The final image is determined by two factors: the external light and the reflection of that light by the object, which can be described by equation (6):

S(x, y) = R(x, y) · L(x, y)    (6)

where S(x, y) is the observed low-illumination image, R(x, y) is the intrinsic reflectance of the object (the intrinsic image), and L(x, y) is the ambient light falling on the object (the illumination image).

To demonstrate the practicality of the proposed method, experiments were conducted. The experimental platform for image enhancement was built in the MATLAB environment, and the experimental data come from a color image library. A low-light image is generated from each input normal-light image, and each batch contains half low-light and half normal-light images: the batch size is 32, comprising 16 normal-light images and 16 low-light images, all uniformly scaled to 96 × 96. To validate the low-light image detection algorithm proposed in this article, we randomly selected 500 images from the synthesized dataset described above, including 250 normal-light images and 250 low-light images. Because the data are artificially synthesized, the parameter settings are consistent with the training set; the test results are shown in Table 1.
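The Retinex model S(x, y) = R(x, y) · L(x, y) and the synthesis of low-light training pairs can be sketched as follows (a toy illustration: the gamma value and the box-filter illumination estimate are assumptions for demonstration, not the settings used in this paper):

```python
import numpy as np

def synthesize_low_light(image, gamma=2.5):
    """Darken a normal-light image with a gamma curve, a common way to
    synthesize low-light training pairs. For floats in [0, 1], gamma > 1
    darkens the image."""
    return np.power(image, gamma)

def retinex_decompose(s, kernel_size=5):
    """Rough Retinex split S = R * L: estimate the illumination map L with
    a local mean (box) filter, then recover reflectance as R = S / L."""
    pad = kernel_size // 2
    padded = np.pad(s, pad, mode="edge")
    l = np.zeros_like(s)
    h, w = s.shape
    for i in range(h):
        for j in range(w):
            l[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    r = s / (l + 1e-8)
    return r, l

normal = np.random.rand(96, 96)          # stand-in for a 96 x 96 training image
low = synthesize_low_light(normal)       # paired synthetic low-light sample
reflectance, illumination = retinex_decompose(low)
```

In practice the illumination map L is what the CNN learns to predict; dividing it out (or adjusting it) recovers a well-lit estimate of the scene.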

Composite Image Testing
According to Table 1, when testing the 250 normal-light images in the synthesized dataset, 9 were mistakenly classified as low-light images, giving an accuracy of 96.4%; when testing the other 250 low-light images, 13 were mistakenly identified as normal-light images, giving an accuracy of 94.8%. Across all 500 synthesized test images, 22 were misjudged in total, an error rate of 4.4% and an overall accuracy of 95.6%.
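The accuracy figures above follow directly from the stated error counts:

```python
# Recompute the Table 1 accuracy figures from the reported error counts.
normal_total, normal_errors = 250, 9
low_total, low_errors = 250, 13

normal_acc = (normal_total - normal_errors) / normal_total        # 241/250 = 96.4%
low_acc = (low_total - low_errors) / low_total                    # 237/250 = 94.8%
overall_acc = ((normal_total - normal_errors) +
               (low_total - low_errors)) / (normal_total + low_total)  # 478/500 = 95.6%

print(f"{normal_acc:.1%}, {low_acc:.1%}, {overall_acc:.1%}")
```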

Processing Time of Different Algorithms for Images with Different Resolutions
In addition to evaluating the quality of the restored image, the time cost of each algorithm also deserves attention. We therefore tested the processing time of each algorithm on images of different resolutions, as shown in Figure 2. The HE and SRIE algorithms were run in MATLAB 2017, while the proposed algorithm was implemented in Python with a deep learning framework and GPU acceleration. From Figure 2, it can be seen that the SRIE algorithm produces good restoration results but at a high time cost, which is unfavorable for real-time processing. The HE algorithm is the fastest, finishing within 0.1 s, but its results are poor, often showing color distortion, detail loss, and overexposure. The time consumption of the proposed algorithm is proportional to the image size, and it effectively preserves the original colors of the image while meeting real-time requirements. Therefore, the proposed algorithm also has an advantage in processing speed.

Conclusion
With the growth of AI technology, more and more large-scale networked surveillance cameras are being deployed in various applications. Online detection and recognition based on video images is widely used in fields such as scientific research, medicine, industry, and security. Within image processing, low-light image enhancement is an active research direction and a foundation for other computer vision tasks. High-quality video images are crucial for automation, task decision-making, and behavior recognition and classification, and the high-resolution, clear photos captured by cameras provide important evidentiary support for subsequent image recognition and decision-making tasks. Enhancing low-light images requires not only raising image brightness but also efficiently handling degradations such as noise, color deviation, and missing detail in dark areas. This paper proposed a low-light image enhancement algorithm based on a CNN model to address the low brightness and poor clarity of surveillance video images captured at night under poor lighting conditions. The algorithm effectively improves the quality of low-light images and performs well on multiple public datasets: it raises image brightness, sharpens image detail to a certain degree, and largely avoids color distortion and halo artifacts. Nevertheless, there is still much room for improvement, for example insufficient brightness gain on images captured under extremely weak lighting and limited robustness to noise; these remain long-term research directions in this field.

Figure 2. Processing time of different algorithms for images with different resolutions

Table 1. Test results of artificial synthetic data

Image type      Test images    Misjudged    Accuracy
Normal light    250            9            96.4%
Low light       250            13           94.8%
Total           500            22           95.6%