Research on Sensor Optimization Technology of Driverless Vehicle

Abstract: A driverless car in operation places rich demands on perception of the surrounding environment: it must automatically extract road information, detect obstacles, and compute the position, speed and other attributes of those obstacles. Owing to the limitations of current technology and detection means, the perception data of self-driving cars are not accurate enough, and safety accidents are prone to occur. Optimizing the various sensors can therefore greatly improve the safety performance of unmanned vehicles and thereby greatly promote the development of unmanned driving technology. Environmental perception is one of the core technologies of driverless cars, and the comprehensiveness and accuracy of perception information are the guarantee of their safe driving. This paper elaborates on image recognition, sensor layout, sensor perception range and accuracy, sensor anti-interference capability, and the rapid processing of massive sensor data within environmental perception technology.


Introduction
The Autonomous Vehicle is a major branch of the Intelligent Vehicle (IV) field; it realizes autonomous driving through the integrated application of positioning and navigation, environment perception, dynamic planning and decision making, automatic control and other technologies. As one of the key technologies of driverless vehicles, environment perception has become a research hotspot in this field [3]. Environment perception technology mainly detects the state of the vehicle itself and environmental information such as roads, traffic signs, traffic lights, vehicles, pedestrians and obstacles around the vehicle through sensors such as cameras, radar and ultrasound [4]. Comprehensive and accurate environmental perception information provides a sufficient data guarantee for driving decisions, thereby ensuring the safety and stability of unmanned driving.

Current International Development
As early as the 1980s, the United States judged that the world had entered the era of sensors and established the National Technical Group (BTG) to help the government organize and lead sensor technology development at major companies and national enterprises and institutions; six of the 22 technologies identified as vital to the long-term security and economic prosperity of the United States are directly related to sensor and information processing technology [5]. Japan regards the development and utilization of sensor technology as one of its six core technologies for national development: of the 70 key research projects formulated by the Japanese Science and Technology Agency for the 1990s, 18 are closely related to sensor technology [6]. Sensors, communications and computers are known as the three pillars of modern information systems. Because of its high technical content, strong penetration and broad market prospects, sensor technology has attracted wide attention from countries all over the world.
Sensors are widely used in resource exploration, marine and environmental monitoring, security, medical diagnosis, household appliances, agricultural modernization and other fields. On the military side, the United States has equipped its F-22 fighters with a new multi-spectral sensor that achieves fully passive search and tracking; it works in fog, smoke, rain and other bad weather, enabling all-weather combat while improving stealth capability. The UK has more than 4,000 sensors in use on the space shuttle to monitor information from the spacecraft, verify the correctness of the design and diagnose problems when they are encountered [7]. Japan has installed sensors on its Radar-4 satellite that can photograph ground targets around the clock.
The fastest-growing demand for sensors worldwide comes from the automotive market, followed by the communications market. The sophistication of an automotive electronic control system is largely determined by its number of sensors [8]: at present an ordinary family car carries dozens to over a hundred sensors, while a luxury car carries more than 200. Our country is a major car producer, with an annual output of more than 10 million vehicles, yet the sensors used in those cars are supplied almost entirely by foreign monopolies.

Development Situation of Our Country
China entered the sensor manufacturing industry as early as the 1960s [9]. During the "Eighth Five-Year Plan" period, China listed sensor technology as a national key scientific and technological research project and built research and development bases such as the State Key Laboratory of Sensor Technology and the National Sensor Engineering Center. Moreover, MEMS and other research projects have been included among the national high-tech development priorities.
At present, the sensor industry is regarded in China as a high-tech industry with a promising future; its high technical content, good economic benefits, strong penetration and wide market prospects have attracted worldwide attention [10]. China's industrial modernization and the electronic information industry's rapid growth of more than 20% per year are driving the rapid rise of the sensor market.
China's mobile phone production has exceeded 750 million units [11], and the growth of the mobile phone market has brought new opportunities to sensors, which account for a quarter of the sensor market. China is also a major producer of white goods, with total output in 2009 exceeding 300 million units [12] and their sensors accounting for one fifth of the market; sensor applications in medical and environmental-protection equipment are growing rapidly, accounting for about 15% of the market.
At the same time, the problems in our country's sensor development have become increasingly prominent. Although there are many sensor enterprises in China, most serve the low-end field; their technical foundations are weak and their research level is not high. Many companies rely on foreign chip processing, develop few products independently, have weak capability for independent innovation, and hold almost no market share in the high-end field.
In addition, although research institutes have kept pace with international standards in sensor technology research, industrialization remains a bottleneck. At present, the institutions engaged in sensor research and development in China are mainly universities, the Chinese Academy of Sciences and relevant ministries; the technical strength of enterprises is weak, and many either cooperate with foreign partners or perform only secondary packaging. In developed countries, sensor research, development and industrialization are led mainly by enterprises. So how should China's sensor industry break through its current development bottleneck?
In recent years, China has paid ever greater attention to the sensor industry and has introduced a series of policies to promote its development. In July 2011, the "Twelfth Five-Year Plan" for China's electronic components stated that 500 billion yuan would be invested during the plan period [13], mainly in the research, development and industrialization of new electronic components. In February this year, the "Action Plan for Accelerating the Development of the Sensor and Intelligent Instrumentation Industry", jointly issued by four ministries including the Ministry of Industry and Information Technology [14], also formulated specific industrial development goals and gave a development roadmap from 2013 to 2025.
According to the national plan, the sensor field will see the establishment of innovative industrial clusters worth more than 10 billion yuan, as well as industry leaders with output value above 1 billion yuan and small, specialized enterprises with output value above 50 million yuan. To realize these goals, work should proceed along two lines: taking the road of industrialization, and adopting the overall-solution model.
For the industrialization of sensor technology, beyond mature markets and products and sufficient capital and talent, a long-term business philosophy is also the basis of success. The cycle of sensor development and promotion is relatively long, and results are often hard to see in the short term; Hanwei, for example, took a full 10 years from founding to listing. The overall-solution model is an effective path proven by Hanwei's practice. Although the sensor is a key device with high technical content, it depends on other systems and specific applications, and it is difficult for sensors alone to form large output value and scale. Hanwei's experience therefore suggests starting from core components, extending down the industry chain, and providing customers with an overall solution. With this model, an enterprise obtains first-hand user experience information and can refine and improve the sensor on that basis. At the same time, because the profit of the end application is relatively high, the enterprise can reinvest the money earned there into front-end core technology research and development, giving that research staying power.

Driverless Environment Awareness Technology
Unmanned vehicles use a variety of sensors for environment perception, and these sensors must be calibrated after installation in fixed positions on the vehicle [15]. During driving, the requirements on environment perception are extremely diverse and complex. As a ground-based autonomous driving robot, the vehicle must be able to extract road information, detect obstacles, and calculate the position and speed of obstacles relative to itself [16]. That is, perception of the road environment by an unmanned vehicle usually includes at least structured and unstructured road detection, pedestrian and vehicle detection in the driving environment, and detection of traffic lights and traffic signs.

Structured Road Inspection
Structured road detection accurately obtains the position and direction of the vehicle relative to the lane by interpreting standardized roads with clear lane marking lines and road boundaries [17].

Common Assumptions for Structured Roads
Since road conditions differ widely from one another, only a simplified road scene can be modeled. Therefore, hypotheses about road shape, road width and flatness, road feature consistency, and a region of interest are established to help identify structured roads. Under official industry standards, structured roads are designed and built with regular geometry and clear lane-line distinctions between road and non-road. In a visual navigation system, a lane line is approximated by a straight line based on the assumption that its direction changes little not far from the camera, that is, its curvature changes little [18]. Straight-lane fitting is then realized through lane-line edge point search and edge curve fitting. The algorithm flow is shown in Figure 1.
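As an illustration of the fitting step, the following is a minimal least-squares sketch of fitting a straight line to detected lane edge points; it is not the paper's specific algorithm, and the function name and point format are assumptions for illustration:

```python
def fit_lane_line(points):
    """Least-squares fit of x = m*y + b to lane-line edge points (x, y).

    Fitting x as a function of y keeps near-vertical lane lines
    (the common case in a forward-facing camera image) numerically stable.
    """
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    # Standard normal-equation solution for slope and intercept
    m = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - m * sy) / n
    return m, b
```

In a real pipeline the points would come from the lane-line edge point search described above, and outliers would typically be rejected (for example with RANSAC) before fitting.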

Curve Detection
The curve is an indispensable road form on highways, so it is necessary to detect the boundary of a curved lane line from the road image [19], judge the direction of the bend, and determine the curvature radius of the curve to provide effective information for the unmanned vehicle. The horizontal alignment of a general highway consists mainly of straight lines, circular curves and transition curves, so the top view is selected for fitting [20]. Curve detection methods at home and abroad are mainly based on road models and generally comprise three steps: establish the curve model, completing the road-shape hypothesis; extract the lane-line pixels, separating the pixels of each lane line from the foreground; and fit the lane model, using the detected pixels to determine the optimal parameters of the curve's mathematical model [21].
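The curvature radius mentioned above can be estimated, for instance, from three points sampled along a fitted lane boundary using the circumscribed-circle formula; this is a generic geometric sketch, not the specific road model of [20] or [21]:

```python
import math

def curve_radius(p1, p2, p3):
    """Radius of the circle through three (x, y) points on a lane boundary.

    Uses R = abc / (4 * area): side lengths over twice the cross product.
    The sign of the cross product indicates the direction of the bend.
    """
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # Cross product = twice the signed area of the triangle p1-p2-p3
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if abs(cross) < 1e-12:
        return float('inf')  # collinear points: straight segment
    return a * b * c / (2 * abs(cross))
```

Sampling several point triples along the fitted curve and averaging the radii gives a more robust estimate in the presence of pixel noise.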

Detection in Complex Environments
In practice, image preprocessing often confronts complicated situations. Uneven changes in ambient light produce multiple pure-white and pure-black areas in the image captured by the camera, causing the recognition algorithm to lose its target. Image preprocessing is commonly used to solve this problem, including gamma adjustment, gray-level mapping and histogram adjustment. Since the navigation images used in vehicle vision place high demands on gray-level information, image authenticity and real-time performance, preprocessing methods must be fast and simple and must yield a smooth, natural synthetic image with few synthesis traces. Multiple exposures can be achieved by varying the shutter length, by alternating exposures between the two cameras of a binocular rig, and so on [22][23][24].
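Of the preprocessing methods listed, gamma adjustment is the simplest; a minimal lookup-table sketch for 8-bit grayscale pixels follows (the function name and flat pixel-list layout are illustrative assumptions):

```python
def gamma_correct(pixels, gamma):
    """Apply gamma correction to 8-bit grayscale pixel values.

    gamma < 1 brightens dark regions (recovering detail in pure-black
    areas); gamma > 1 darkens overexposed, washed-out regions.
    """
    # Precompute a 256-entry lookup table, as real-time pipelines do,
    # so each pixel costs one table access instead of a pow() call.
    lut = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [lut[v] for v in pixels]
```

Histogram equalization works similarly but derives the lookup table from the image's own cumulative gray-level distribution rather than a fixed power curve.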

Unstructured Road Detection
For unstructured roads such as rural roads and dirt tracks, road detection based on machine learning processes images and data by combining the detected environmental information with models in a prior knowledge base. At the same time, the prediction model is revised for each new environment, so that the model is continuously updated [25].

Target Detection in Driving Environment
According to the detection target, different sensor data and different processing algorithms are selected to realize target detection in the driving environment [26].

Pedestrian Detection
Pedestrian detection can be based on the HOG feature, a dense descriptor computed over local overlapping regions of an image that characterizes human shape through histograms of local gradient orientations [27]. In this method, the HOG features of the image are extracted and a decision is made by an SVM [28]. Pedestrian detection based on the Stixel model can detect targets accurately by fusing LiDAR and video data: LiDAR data are used to extract the region of interest, and the video image is then used to recognize the properties of the target [29]. This effectively realizes complementarity between sensors of different modes and improves overall sensing performance. It is divided into three steps: first, the LiDAR data are processed to obtain the region of interest; then the image data are prepared to train an image-based pedestrian detection algorithm; finally, the trained classifier detects pedestrians within the region of interest.
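The gradient-orientation histogram at the core of the HOG descriptor can be sketched for a single cell as follows; this is a simplified illustration, since the full HOG pipeline additionally applies block normalization and bilinear interpolation between bins:

```python
import math

def hog_cell(cell, n_bins=9):
    """Unsigned gradient-orientation histogram for one HOG cell.

    `cell` is a small grayscale patch as a list of pixel rows. Each
    interior pixel votes into an orientation bin (0-180 degrees split
    into n_bins), weighted by its gradient magnitude.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central-difference gradients
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang // (180 / n_bins)) % n_bins] += mag
    return hist
```

Concatenating such histograms over a dense grid of cells yields the feature vector that the SVM classifies as pedestrian or background.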

Vehicle Detection
The V-disparity method is an obstacle detection method based on stereo vision [30]. The algorithm flow is as follows: first, stereo image pairs are obtained; then a dense disparity map is calculated and the V-disparity chart is built. By analyzing the V-disparity chart, the road surface in the driving environment can be extracted and the positions of obstacles on the road surface calculated. Combining vision with LiDAR information avoids both machine vision's sensitivity to light and the sparseness of LiDAR data, realizing complementarity between sensor information. By establishing the coordinate conversion model between the LiDAR, the camera and the car body, the LiDAR data and image pixel data are unified into the same coordinate system for recognition and processing. A suitable clustering method is selected according to the characteristics of the LiDAR data, and shape matching and template matching on the clustered LiDAR data determine the region of interest. Vehicle detection within the region of interest is carried out with Haar-like features combined with the AdaBoost algorithm, after which Kalman predictive tracking is realized using the vehicle's data features in the LiDAR [31].
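The V-disparity chart itself is simply a per-row histogram of disparity values; a minimal sketch, with the list-of-lists disparity map format assumed for illustration:

```python
def v_disparity(disp_map, max_disp):
    """Build the V-disparity chart from an integer disparity map.

    Row i of the result counts, for each disparity d in 0..max_disp,
    how many pixels in image row i have disparity d. The road surface
    (disparity falling roughly linearly with row) projects to a slanted
    line; an obstacle at fixed depth projects to a vertical segment.
    """
    return [[row.count(d) for d in range(max_disp + 1)] for row in disp_map]
```

Fitting a line to the dominant slanted structure (e.g. with a Hough transform) recovers the road-surface profile, and pixels deviating from it are candidate obstacles.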

Traffic Light Detection
The system for traffic light recognition can be divided into an image acquisition module, an image preprocessing module, a recognition module and a tracking module. A traffic light in a single frame can be detected using a color-vision-based recognition method. To prevent false detection or tracking loss, a target tracking algorithm based on the color histogram can be used, such as CAMSHIFT (Continuously Adaptive Mean Shift), which effectively handles target deformation and occlusion and runs efficiently [32][33].
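The core update on which CAMSHIFT builds is the mean-shift step: the search window moves to the centroid of the color back-projection probabilities inside it. A minimal sketch follows (CAMSHIFT additionally adapts the window size and orientation each frame; the grid and window representations here are illustrative):

```python
def mean_shift_step(prob, window):
    """One mean-shift iteration over a back-projection probability grid.

    `prob` is a 2-D grid (list of rows) where each entry is the
    likelihood that the pixel belongs to the target color histogram;
    `window` is (x0, y0, w, h). Returns the re-centered window.
    """
    x0, y0, w, h = window
    m00 = m10 = m01 = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            p = prob[y][x]
            m00 += p          # zeroth moment: total mass
            m10 += x * p      # first moments: weighted coordinates
            m01 += y * p
    if m00 == 0:
        return window         # no target mass in window: stay put
    cx, cy = m10 / m00, m01 / m00
    return (round(cx - w / 2), round(cy - h / 2), w, h)
```

Iterating this step until the window stops moving locks the tracker onto the traffic light's color blob in each new frame.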

Traffic Sign Detection
Traffic sign detection includes three aspects: color segmentation, shape detection and pictogram recognition. When lighting conditions are good, the thresholds for color segmentation should be selected from image samples of the outdoor environment; the hue and saturation information of the HSV color space can then separate the traffic sign from the background. Usually the sign plane is not perpendicular to the driving direction, so circular signs appear as ellipses in the image, and ellipse detection based on random continuous sampling is often used to judge them; the edge lines after color segmentation can be obtained by the Hough line transform. With suitable templates, the processed image is roughly divided into red prohibition signs, blue mandatory signs and yellow warning signs, and a classifier is designed for each kind. First, the OTSU threshold segmentation algorithm preprocesses the detected signs, which effectively avoids errors caused by shadows and occlusion; then radial features are extracted from the resulting image by moment operations, and a multi-layer perceptron is selected for classification.
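The HSV color segmentation step can be sketched per pixel using Python's standard colorsys module; the hue, saturation and value thresholds below are illustrative assumptions and would in practice be chosen from outdoor image samples as described above:

```python
import colorsys

def is_red_sign_pixel(r, g, b):
    """Flag a pixel as a likely red-traffic-sign pixel in HSV space.

    Red hue sits near 0 degrees (wrapping past ~345); the saturation
    and value thresholds reject gray shadows and washed-out background.
    All thresholds are illustrative, not calibrated values.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    return (hue_deg < 15 or hue_deg > 345) and s > 0.5 and v > 0.3
```

Applying such a predicate over the image yields a binary mask whose connected regions are candidate signs for the subsequent shape-detection stage.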

Optimization of Sensors
According to the field distribution of patents related to environmental perception technology, the technology can be divided into two branches: sensing modes and target detection [37]. In practical applications, sensing is the means and target detection the result, as shown in Table 1. At present, six kinds of sensors are commonly used in driverless cars: GPS, the IMU (inertial measurement unit), cameras, LiDAR, millimeter-wave radar and ultrasonic radar [38].
Millimeter-wave radar has a maximum detection range of about 250 meters, strong immunity to temperature and weather, a detection angle of 10-70 degrees, and strong measurement of distance and depth-of-field information, but weak recognition of road signs; it is mainly used to detect vehicles, pedestrians and obstacles [39]. LiDAR (two-dimensional or three-dimensional) has a maximum range of about 200 meters and a detection angle of 15-360 degrees; it is insensitive to lighting changes, senses well at night and provides rich information, but recognizes speed and road signs poorly and is costly; it is mainly used to detect lanes, curbs, vehicles, pedestrians and obstacles. The camera's maximum detection range is about 50 meters at short focus, 100 meters at middle focus and 200 meters at long focus; it is affected by weather and light and provides no direct distance information, but has strong temperature stability; it is mainly used to detect signal lights, traffic signs, lanes and road edges. Ultrasonic sensors have a maximum range of about 10 meters and a detection angle of 120 degrees; given the limited range they are mainly used to detect nearby obstacles to avoid collisions and scrapes, but they are small and cheap. GPS uses triangulation, with time-difference GPS employed in practical applications. GPS positioning runs at 10 Hz, that is, one fix every 100 ms; this frequency is too low to follow the ideal trajectory, so other sensor signals must be introduced to raise the positioning frequency of the unmanned vehicle. For the IMU, the longitude and latitude obtained by GPS serve as input signals, and the IMU is connected to the controller through a serial line to obtain higher-frequency positioning results.
Based on Newton's laws of mechanics, the acceleration of the carrier is measured in the inertial reference frame, integrated over time and transformed into the navigation coordinate system, yielding velocity, yaw angle, position and other information in that system. Each kind of sensor has its own characteristics, advantages and disadvantages, so multi-sensor fusion is the inevitable trend for automatic driving: equipping a vehicle with enough cameras, LiDAR, millimeter-wave radar, ultrasonic sensors, inertial measurement units and global navigation satellite system receivers improves the robustness of automatic driving functions. Environmental perception is a comprehensive research subject whose ultimate goal is complete and accurate perception at the lowest possible cost and the fastest possible speed. Therefore, how to arrange multiple sensors rationally, how to extend the range and accuracy of sensor perception, how to avoid interference from other objects, and how to quickly process the data collected by sensors are inevitably the focus and the difficulty of research on environmental perception technology. Below, the method of patent analysis is used to examine these issues comprehensively.
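The integration described above can be sketched in one dimension: between 10 Hz GPS fixes, IMU accelerations are integrated with first-order Euler steps to produce higher-frequency position estimates. This is a simplified illustration that ignores sensor bias, gravity compensation and the inertial-to-navigation coordinate transform:

```python
def dead_reckon(pos, vel, accel_samples, dt):
    """Propagate 1-D position between GPS fixes by integrating IMU data.

    Starting from the last GPS-derived position `pos` and velocity
    `vel`, each acceleration sample advances the state by one Euler
    step of length dt, yielding a pose estimate per IMU sample.
    """
    track = []
    for a in accel_samples:
        vel = vel + a * dt    # v = v0 + a*dt
        pos = pos + vel * dt  # x = x0 + v*dt (first-order Euler)
        track.append(pos)
    return track, pos, vel
```

At the next GPS fix, the accumulated drift would be corrected, typically by the kind of Kalman-filter fusion discussed later in this paper.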

Sensor Layout Optimization
The role of the sensors is to perceive the environment around the vehicle in real time and ensure that environmental information is obtained promptly while driving, so sensor placement must achieve 360-degree monitoring without blind spots. This requires comprehensively weighing the characteristics and advantages of different sensors in specific application scenarios, so as to select suitable sensors and lay them out reasonably [39]. The current mainstream approach achieves the required perception capability by fusing multiple sensors and increasing their number, but sensor prices are high, so using too many directly inflates the overall cost of the vehicle, and larger unshielded sensors also spoil the vehicle's appearance and its stability at high speed. How to place sensors matched to scene characteristics in every direction around the car body, how to minimize the number of sensors while guaranteeing sensing range and accuracy, and how to integrate the sensors properly with the body are key research directions in optimizing sensor layout. Many domestic universities, automotive research institutes and enterprises have put forward layout optimization methods that adapt the sensor layout to a vehicle's individual characteristics, achieving a better perception effect while reducing overall configuration cost.

Optimization of Sensor Sensing Range and Accuracy
The range and accuracy of sensor perception directly affect the subsequent decision-making level of a driverless car. Sensor perception range is limited (the current maximum is about 250 meters), and the effective range shrinks further at high vehicle speed [41]. Obstacles impair the sensors' overall view of the surroundings, and natural factors such as light and weather also affect perception accuracy. In existing technology, sensing range and accuracy can be improved effectively by optimizing a sensor's deployment position and angle, and LiDAR and camera sensors working together can cover a greater perception range. Performance can also be improved by optimizing the internal parameters and algorithms of the sensor, for example through error compensation to raise accuracy, fast radar line scanning for fine configuration and registration of the point cloud, and a tracking-center transformation algorithm to improve vehicle detection accuracy during vehicle identification. Invention patent applications have proposed many different solutions to the limited sensing range and low precision of sensors; the relevant technical schemes are shown in Figure 6. In one optimization scheme for LiDAR beam distribution in the automatic driving scenario, a LiDAR beam model is established, optimization parameters are set according to the requirements of the environmental perception task, coarse optimization is performed, the coarse result is then finely optimized, and an optimized laser beam distribution is obtained. The optimized beam distribution can be tested and verified in virtual environments such as driving simulators to improve the effectiveness of perception tasks.
Compared with the uniform beam distribution of existing multi-line LiDAR, this method is more targeted at specific detection tasks, and the optimized sensor offers high detection accuracy and wide coverage in target detection.

Optimization of Anti-jamming Performance of Sensor
Anti-interference is an important part of sensor detection; commonly used techniques against noise, electromagnetic fields and other interference sources include shielding technology, electrostatic shielding and electromagnetic shielding. When sensors are applied to an autonomous vehicle, environmental factors such as strong light, temperature, rain, snow, fog, and ice and snow on the road also affect the normal work of different sensors, easily corrupting sensor results and thus creating driving risk. LiDAR can sense effectively in rain and snow, enabling all-weather operation; vehicle-mounted radar is unaffected by light, rain and snow; millimeter-wave radar can detect distant targets at night or in fog; and in the night-vision scenarios of driverless cars, infrared sensors are usually used, since they adapt well to the environment and are unaffected by rain, snow, wind and fog. Sensor temperature is controlled mainly by installing a heat shield or adding a metal temperature-control board, but both approaches serve a single sensor and are costly. To overcome environmental interference, fault-tolerant processing can be carried out using the complementary characteristics of the sensors [42]. At present there are few patent applications in China for schemes protecting sensors against environmental interference; the relevant technical schemes are shown in Figure 7.
One automotive sensor temperature control system uses a heat-exchange system containing one or more conduction tubes [43]. The conduction tubes connect multiple sensors to a heating and cooling unit. A central controller, coupled with the sensors, receives the current temperature data of each sensor and determines each sensor's temperature-adjustment mode according to at least the current vehicle condition and environment. The heating and cooling devices, coupled to the central controller, receive its temperature-regulation control signals; the heating and cooling unit also connects the sensors through the conduction tubes, heating or cooling the tubes to transfer heat through the exchange system. This realizes temperature control of the system as a whole, greatly improves efficiency and reduces cost.

Sensor Data Processing Speed Optimization
Information collected by the vehicle camera is usually processed by vision algorithms, the distance data collected by millimeter-wave radar by distance-correlation algorithms, and the data collected by LiDAR by filtering and clustering techniques. Because driverless cars use many different sensors, the collected data are voluminous, wide in coverage and varied in security level, so driverless cars must have strong computing and processing power. At present, multi-sensor information fusion technology is widely used, including the Bayesian fusion method, the Kalman filter fusion method and the neural network fusion method. The Bayesian method is a reasoning method based on probability and statistics; the Kalman filter can predict and correct object location and other information from limited and noisy observation sequences; and the neural network method can eliminate the cross-influence effects of multi-sensor cooperation through extensive learning and training [44].
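As a concrete illustration of the Kalman filter fusion method, the following is a minimal scalar filter that fuses noisy measurements of a (nearly) constant state; real vehicle trackers use a multi-dimensional state with a motion model, and the parameter names here are illustrative:

```python
def kalman_1d(z_seq, r, q, x0=0.0, p0=1.0):
    """Scalar Kalman filter over a sequence of noisy measurements.

    z_seq: measurements (e.g. range readings) with noise variance r.
    q:     process noise variance (how much the true state may drift
           between steps). Returns the filtered estimate per step.
    """
    x, p = x0, p0
    estimates = []
    for z in z_seq:
        p += q                 # predict: state assumed constant, uncertainty grows
        k = p / (p + r)        # Kalman gain: trust measurement vs. prediction
        x += k * (z - x)       # update with the measurement residual
        p *= (1 - k)           # posterior variance shrinks after the update
        estimates.append(x)
    return estimates
```

The gain k automatically balances the two information sources: a large measurement variance r pulls estimates toward the model prediction, while a large process noise q keeps the filter responsive to genuine changes.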

Summary
This paper has expounded the research content, difficulties and key points in the field of environmental perception, and sorted out the related technologies of image recognition, sensor layout, sensing range and accuracy, anti-interference and data processing under different road conditions. Driverless cars arose from strong market demand to reduce traffic accidents, free up people's time and save energy, and their performance continues to improve. The unmanned vehicle's perception of the spatial environment depends heavily on single-line or multi-line LiDAR; the camera acquires image information about traffic lights and traffic signs; and other sensors such as millimeter-wave radar jointly collect environmental information. The analysis and recognition of these data employ a variety of data analysis and solution methods, together with artificial intelligence and machine learning methods for identifying targets, so as to finally complete the environmental perception task of the unmanned vehicle.