Vehicle Seat Adjustment System based on Body Image Recognition

Abstract: A vehicle seat adjustment system based on body image recognition is designed. The system uses body image recognition algorithms to calculate passengers' physique parameters and automatically adjusts the vehicle seat position accordingly. In on-site testing, the matching rate between the automatically adjusted seat position and the passenger's body shape reached 98.1%, which effectively improves the convenience and comfort of the vehicle.


Introduction
The comfort and convenience of vehicle seats directly affect the user experience [1][2]. The improvement and upgrading of vehicle seat adjustment systems is therefore increasingly important in vehicle development. In existing technologies at home and abroad, vehicle seats are mainly adjusted by passengers manually triggering mechanical or electronic switches on the seats [3][4], which is relatively cumbersome. Although some electric seats have a seat memory function [5][6], the number of seat positions they can store is very limited, meeting the comfortable seating needs of only a few specific passengers. Such inefficient seat adjustment methods have a negative impact on the user experience [7]. Therefore, developing a convenient and widely applicable vehicle seat adjustment system is of great significance.
The vehicle seat adjustment system designed in this article uses a passenger's full-body photo to calculate the passenger's body parameters, and based on these parameters controls the vehicle seat to move to a position that matches the passenger's body shape. This effectively improves the convenience and comfort of vehicle rides. The following sections detail the composition, design, and testing of the vehicle seat adjustment system based on human body image recognition.

System Composition
The workflow of the vehicle seat adjustment system based on body image recognition is shown in Figure 1. Before passengers board the vehicle, the system sequentially takes full-body photos of the passengers and measures the distance between the passengers and the vehicle. After the passengers sit down in the vehicle, a half-body photo of the passenger in each seat is taken separately. Then, using facial recognition algorithms, each passenger's full-body and half-body photos are paired, binding together the seat, the passenger's identity, and the full-body photo. The successfully paired full-body photos are sent to the distortion removal model and the body key point recognition model. The passenger's body parameters are calculated from the coordinates of the body key points in the image. Finally, referring to the passenger's body parameters, the system uses the lookup table method to obtain the corresponding seat adjustment amount and automatically adjusts the seat position. To save costs, the system is developed from existing vehicle parts. The seat adjustment system mainly consists of the following five modules: external wide-angle camera, Lidar, internal wide-angle camera, electronic control unit, and vehicle seats. The system schematic is shown in Figure 2.
Before passengers board the vehicle, the external wide-angle camera captures full-body photos of the passengers. The wide-angle lens expands the field of view and avoids missing important image information.
When the Lidar detects that the distance between a passenger and the vehicle is between 0.5 m and 2 m, it records this distance and prompts the external wide-angle camera to take a snapshot.
After the passengers sit down, the internal wide-angle camera takes half-body photos of the passengers to identify them.
The electronic control unit (ECU) receives the passenger image information and distance data, calculates the seat position calibration, and controls the vehicle seat to move to the appropriate position.
The vehicle seats are equipped with headrest lifting and leg rest extension functions. On receiving control signals from the electronic control unit, the motors rotate according to the instructions and adjust the seat to the target position.

Facial Recognition Pairing
The system uses the pre-trained facial comparison model FaceNet to compare full-body and half-body photos. As shown in Figure 3, the facial detection model MTCNN extracts facial images from the full-body and half-body photos. FaceNet uses deep convolutional neural networks to learn and extract Euclidean-space features of the facial images. The smaller the Euclidean distance between the feature vectors of two images, the greater the likelihood that they belong to the same person. The system sets the Euclidean distance threshold to 1.0: when the distance is less than 1.0, the full-body and half-body photos are judged to be successfully paired; otherwise, the pairing fails.
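The pairing decision reduces to a Euclidean-distance threshold on facial embedding vectors. A minimal sketch of that decision rule, assuming FaceNet has already produced the embeddings (the short 3-D vectors below are hypothetical placeholders; FaceNet itself emits 128-D embeddings):

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_same_person(emb_full, emb_half, threshold=1.0):
    """Pair a full-body photo with a half-body photo when the distance
    between their facial embeddings is below the threshold (here 1.0)."""
    return euclidean_distance(emb_full, emb_half) < threshold

# Hypothetical embeddings for illustration only.
close_pair = is_same_person([0.1, 0.2, 0.3], [0.2, 0.1, 0.3])  # distance ~0.14
far_pair = is_same_person([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])    # distance ~1.73
```

The threshold of 1.0 is the one stated above; in a deployed system it would be tuned on a validation set of known matching and non-matching photo pairs.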

Distortion Removal
The full-body photos captured by the wide-angle camera are prone to distortion, which interferes with the calculation of body parameters. Therefore, distortion removal must be performed on the full-body photos. Image distortion mainly includes radial distortion and tangential distortion. The distortion removal process is divided into the following five steps.
The first step is to map the pixel coordinates of the original image to camera coordinates using formula (1):

x = (u − c_x) / f_x, y = (v − c_y) / f_y (1)

where x, y are the camera coordinates of the original image, u, v are the pixel coordinates of the original image, and c_x, c_y, f_x, f_y are camera-related constants.
The second step is to use formula (2) to calculate the radial distortion of the image in the camera coordinate system:

Δx_1 = x(k_1·r² + k_2·r⁴ + k_3·r⁶), Δy_1 = y(k_1·r² + k_2·r⁴ + k_3·r⁶), r² = x² + y² (2)

where Δx_1 and Δy_1 are the radial distortion variables and k_1, k_2, and k_3 are camera-related constants.
The third step is to use formula (3) to calculate the tangential distortion of the image in the camera coordinate system:

Δx_2 = 2p_1·x·y + p_2(r² + 2x²), Δy_2 = p_1(r² + 2y²) + 2p_2·x·y (3)

where Δx_2 and Δy_2 are the tangential distortion variables and p_1 and p_2 are camera-related constants.
The fourth step is to use formula (4) to calculate the camera coordinates of the corrected image:

x' = x + Δx_1 + Δx_2, y' = y + Δy_1 + Δy_2 (4)

where x' and y' are the camera coordinates of the corrected image. The fifth step is to use formula (5) to map the camera coordinates of the corrected image back to pixel coordinates:

u' = f_x·x' + c_x, v' = f_y·y' + c_y (5)

where u' and v' are the pixel coordinates of the corrected image.
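The five steps above can be sketched as a single per-pixel mapping. The sketch follows formulas (1)–(5) directly; the intrinsic and distortion constants would come from camera calibration, and the numeric values in the example are hypothetical:

```python
def map_pixel(u, v, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Apply formulas (1)-(5): pixel -> camera coordinates, add the radial
    and tangential distortion terms, then map back to pixel coordinates."""
    # Step 1, formula (1): pixel coordinates to camera coordinates.
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Step 2, formula (2): radial distortion.
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    dx1, dy1 = x * radial, y * radial
    # Step 3, formula (3): tangential distortion.
    dx2 = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy2 = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Step 4, formula (4): corrected camera coordinates.
    xp = x + dx1 + dx2
    yp = y + dy1 + dy2
    # Step 5, formula (5): camera coordinates back to pixel coordinates.
    return fx * xp + cx, fy * yp + cy

# With all distortion coefficients zero, the mapping is the identity.
u2, v2 = map_pixel(400, 300, 500.0, 500.0, 320.0, 240.0, 0, 0, 0, 0, 0)
```

Note that formulas (2)–(4) describe the forward distortion model; a production pipeline inverts this mapping (for example iteratively, as OpenCV's undistortion routines do) to remove distortion, while the sketch keeps the paper's stated direction for clarity.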

Calculation of Passenger Body Parameters
The system uses the pre-trained body recognition model OpenPose to label the positions of the passenger's head, neck, and feet. These positions provide the basis for calculating the passenger's body parameters. The body key points marked by OpenPose are shown in Figure 4. The calculated body parameters mainly include the passenger's head-and-neck length and height. The length of the passenger's head and neck in the full-body photo is defined by formula (6):

l = y_n − y_h (6)

where l is the head-and-neck length, y_h is the vertical coordinate of the head position, and y_n is the vertical coordinate of the neck position. Height is defined by formula (7):

h = y_f − y_h (7)

where h is the height and y_f is the vertical coordinate of the foot position.
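Formulas (6) and (7) reduce to differences of the key-point vertical coordinates. A minimal sketch, assuming OpenPose has already returned the head, neck, and foot pixel coordinates (the coordinate values below are hypothetical):

```python
def body_parameters(y_head, y_neck, y_foot):
    """Formula (6): head-and-neck length l; formula (7): height h in the image.
    Image y-coordinates grow downward, so the head has the smallest value."""
    l = y_neck - y_head   # head-and-neck length, formula (6)
    h = y_foot - y_head   # height in the image, formula (7)
    return l, h

# Hypothetical key-point vertical coordinates (pixels).
l, h = body_parameters(y_head=120, y_neck=210, y_foot=1050)
```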
The principle of camera imaging is shown in Figure 5. From Figure 5, it can be seen that when the focal length and object distance are constant, the actual body parameter of the passenger is directly proportional to the corresponding body parameter in the full-body photo, as expressed in formula (8):

L / l = D / f (8)

where L is the actual body parameter, l is the corresponding body parameter in the image, D is the object distance measured by the Lidar, and f is the focal length. This article uses the lookup table method to calculate the seat position calibration. The corresponding relationship between passenger body parameters and seat position calibration is shown in Table 1.

Accuracy Testing of Facial Pairing
The test set consists of full-body and half-body photos of 10 passengers, 20 images in total. Each experiment selects four passengers' photos for face pairing testing, simulating the scene where four passengers board the vehicle simultaneously. Since there are C(10, 4) = 210 ways to choose four passengers out of ten, the number of experiments is 210. The test results are shown in Table 2: the pairing accuracy reaches 99%, indicating that the facial comparison model meets the system's needs.
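The lookup table method maps a passenger's body parameter range to a stored seat position calibration. Since the actual values of Table 1 are not reproduced here, the height breakpoints and calibration amounts in this sketch are entirely hypothetical:

```python
import bisect

# Hypothetical height breakpoints (cm) and seat position calibrations (mm);
# the real values would come from Table 1.
HEIGHT_BREAKS = [155, 165, 175, 185]
SEAT_CALIBRATION = [-20, -10, 0, 10, 20]

def seat_calibration(height_cm):
    """Look up the seat position calibration for a passenger's height:
    bisect_right finds which height range the passenger falls into."""
    return SEAT_CALIBRATION[bisect.bisect_right(HEIGHT_BREAKS, height_cm)]

short_adjust = seat_calibration(150)  # below the first breakpoint
tall_adjust = seat_calibration(190)   # above the last breakpoint
```

A sorted-breakpoint lookup like this keeps the ECU-side logic to a single table search, which is why the lookup table method suits an embedded controller.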

Accuracy Testing of Body Parameter Calculation
This article calculates the body parameters of passengers in 7 height ranges, each containing 30 people. When the calculated head-and-neck length and height fall into the correct range, the calculation is judged correct; otherwise, it is judged incorrect. The test results are shown in Table 3. Across the 7 height ranges, the calculation accuracy reaches 98.1%, indicating that the body parameter calculation model meets the system requirements.

Summary
Starting from user needs, this article designs a vehicle seat adjustment system based on body image recognition. The system uses facial recognition algorithms to distinguish passenger identities and body image recognition algorithms to calculate passenger body parameters; these parameters then guide the vehicle seat to adjust automatically to a suitable position. The experimental results show that the facial pairing accuracy and body parameter calculation accuracy of the system reach 99% and 98.1% respectively, which meets the needs of practical scenarios. Compared with traditional manual adjustment and seat memory solutions, this system greatly simplifies the seat adjustment process, which is of great significance for improving the user's riding experience.

Fig 4. Body key points marked by OpenPose

Table 1. Physique parameters and seat position comparison

Table 2. Face pairing accuracy