
Simulation Methods for In-Vehicle Cameras in ADAS HiL

Introduction

 

Autonomous driving includes perception, decision-making, and execution, where perception is the source of the entire process and an essential module of the autonomous driving system. During vehicle operation, the perception system continuously collects surrounding environmental information through sensors, serving as the "eyes" of an autonomous vehicle, helping the vehicle achieve observation capabilities similar to those of a human driver.

 

The perception system mainly consists of sensors such as cameras, ultrasonic radar, millimeter-wave radar, and optional LiDAR. Among these, the camera plays a crucial role as the primary environmental perception sensor: it can provide 360° visual perception, compensates for radar's shortcomings in object recognition, and is the sensor closest to human vision. As autonomous driving technology evolves, the number of in-vehicle cameras required continues to increase, along with the demands on their clarity, stability, and robustness.



Currently, in L2+ and L3 vehicles, cameras are categorized by installation location into five types: front-view, surround-view, rear-view, side-view, and interior cameras. While driving, the front, side, and rear cameras work together with millimeter-wave radar and LiDAR for fusion sensing, providing drivable-area and target-obstacle information to the algorithm module and enabling functions such as ACC/ICA/NCA, AEB, LKA, and TSR, while the interior camera monitors the driver's state for fatigue monitoring. During parking, the surround-view cameras and ultrasonic radar sense the parking environment to enable the APA function.

 

In-vehicle cameras play a critical role in advanced driver assistance systems (ADAS), providing strong support for driving safety. So, how do we simulate cameras during ADAS HiL testing?

 

Polelink offers the following two implementation solutions:

 

Video Dark Box

 

The video dark box routes the video signal of the virtual simulation scenario to a display inside the dark box, where a real camera captures the video from the display. The captured video signal is then transmitted to the autonomous driving controller via coaxial cable, so the controller behaves as if it were operating in a real-world environment, achieving the goal of testing the ADAS controller.


Diagram of the Video Dark Box Solution


The dark box equipment mainly consists of the box body, sliding rails, a display, a lens, a camera, related brackets, and a base. Because it uses a real camera, the video dark box does not require OEMs or Tier 1 suppliers to provide the communication protocol between the image acquisition module and the image processing module. This method is easy to implement and low-cost, but the camera's placement and angle must be set precisely according to the size of the display, the setup is easily affected by ambient lighting and by the display itself, and the display's refresh rate may introduce image recognition delays. The solution is suitable for monocular cameras with a field of view below 120° (it is not applicable to surround-view cameras); the dark box is relatively large, supports only one camera at a time, and offers comparatively low precision. The required camera-to-display distance follows directly from the display size and the camera's field of view, as sketched below.
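For illustration, here is a minimal sketch of that geometric constraint, assuming a simple pinhole view of the display and ignoring lens focus distance, bezel, and vertical FOV. The function name and the 598 mm display width are illustrative assumptions, not values from this article:

```python
import math

def camera_distance_mm(display_width_mm: float, camera_hfov_deg: float) -> float:
    """Distance at which the display exactly fills the camera's horizontal FOV.

    Hypothetical helper: a real dark box setup also accounts for lens focus
    distance, display bezel, and vertical FOV, which are ignored here.
    """
    half_fov_rad = math.radians(camera_hfov_deg / 2.0)
    return (display_width_mm / 2.0) / math.tan(half_fov_rad)

# Example: a 27-inch 16:9 display is roughly 598 mm wide; 100° HFOV camera
print(f"{camera_distance_mm(598, 100):.1f} mm")  # ~250.9 mm
```

In practice, this computed distance is only a starting point; the sliding rails mentioned above presumably allow the position to be fine-tuned during hardware calibration.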



Structure of the Video Dark Box


Camera calibration consists of two parts: first, calibration of the hardware positions, ensuring that the camera, lens, and display are aligned on the same horizontal axis; second, calibration of the lane lines captured from the simulation scenario.


Video Injection

 

The video injection system injects raw data streams in place of the ADAS system's in-vehicle camera sensors. The camera simulation equipment receives the video signals of the different camera views of the virtual simulation scene through an HDMI/DVI interface; after internal image processing, the video signal is injected into the ADAS controller using the camera-specific protocol.


Diagram of the Video Injection Solution


Video injection technology is unaffected by ambient lighting and offers high simulation precision. It supports online adjustment of the camera signal's color space (RGB, YUV, RAW, etc.), as sketched below. A single video injection box can support up to two cameras simultaneously, and for multi-camera simulation the video signals of the individual channels can be synchronized via the serializer's trigger signals, making the approach suitable for multi-camera, multi-channel injection scenarios. However, video injection requires detailed video protocol information, so OEMs or Tier 1 suppliers must provide the communication protocol between the image acquisition and image processing modules. Development also involves distortion calibration, color correction, and other technical difficulties, leading to higher cost.
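As an illustration of the kind of color-space handling involved, here is a minimal NumPy sketch of a full-range BT.601 RGB-to-YUV conversion. The actual conversion inside the injection box is performed in hardware/firmware, and its exact coefficients and ranges depend on the target camera protocol:

```python
import numpy as np

def rgb_to_yuv_bt601(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB frame to YUV using full-range BT.601 coefficients."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y
                  [-0.169, -0.331,  0.500],   # U (Cb)
                  [ 0.500, -0.419, -0.081]])  # V (Cr)
    yuv = rgb.astype(np.float32) @ m.T
    yuv[..., 1:] += 128.0                     # center the chroma channels
    return np.clip(yuv, 0, 255).astype(np.uint8)
```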

 

The video injection system supports configuration of various camera installation positions and characteristics (including resolution, frame rate, optics, and sensor features). In addition, video injection combined with camera models can reproduce lens characteristics in the simulated environment, such as screen flicker, lens distortion, fisheye projection, and motion blur. It can also simulate short-term overexposure or underexposure caused by sudden changes in ambient lighting, incorrect gain adjustment in some or all channels, image noise, image distortion, and imaging failure caused by lens obstructions such as rain, fog, or dirt; a few of these effects are sketched below.
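The following simplified NumPy sketch shows how a few of these effects (overexposure, sensor noise, dead pixels) could be approximated on scene frames before injection. The real camera model in the simulation toolchain applies such effects using calibrated optical and sensor parameters rather than these crude approximations:

```python
import numpy as np

def simulate_overexposure(frame: np.ndarray, gain: float = 4.0) -> np.ndarray:
    """Crude overexposure: apply a large gain and clip, washing out highlights."""
    return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def simulate_sensor_noise(frame: np.ndarray, sigma: float = 12.0) -> np.ndarray:
    """Additive Gaussian noise approximating a poorly lit or high-gain sensor."""
    noise = np.random.normal(0.0, sigma, frame.shape)
    return np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def simulate_dead_pixels(frame: np.ndarray, fraction: float = 1e-4) -> np.ndarray:
    """Force a random fraction of pixels to black to mimic defective photosites."""
    out = frame.copy()
    mask = np.random.rand(*frame.shape[:2]) < fraction
    out[mask] = 0
    return out
```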



Overexposure Example



Dead Pixel Example 


For video injection solutions, the camera simulation model must be generated from real distortion data, FOV, pixel size, resolution, and other parameters. However, the simulation model still exhibits slight distortion differences compared with the real in-vehicle camera and therefore requires calibration. There are two calibration methods:

1. Capture an image with the camera model, calculate the distortion parameters of that image, and then modify the distortion parameters configured for the camera in the ADAS controller.

2. Compare the black-and-white checkerboard image generated by the model with the real camera image, and adjust the simulation model parameters until the distortion parameters match (a checkerboard-based estimation of the distortion coefficients is sketched below).
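As one way to realize the second method, the distortion coefficients of the rendered checkerboard images can be estimated with OpenCV's standard calibration routine and then compared with those of the real camera. The file names and the 9x6 inner-corner pattern below are assumptions made for illustration:

```python
import cv2
import numpy as np

# Inner-corner count of the rendered checkerboard (assumed 9x6 here)
PATTERN = (9, 6)

# Reference 3D corner positions on the z = 0 board plane (unit square size)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in ["model_checkerboard_01.png", "model_checkerboard_02.png"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]  # (width, height)

# Estimate intrinsics and distortion coefficients (k1, k2, p1, p2, k3)
_, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("Camera matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```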


Checkerboard Calibration Example

 

Conclusion

 

In ADAS HiL testing, the simulated camera video stream is transmitted to the controller together with data from the vehicle dynamics model and the other simulated sensors, forming a closed-loop system managed as an experiment in CANoe software. Simulated cameras can mimic a wide range of real-world scenarios and conditions, including different road conditions, weather, and traffic situations; by simulating these scenarios, the controller's performance and robustness under various conditions can be evaluated.

 

The ADAS controller receives raw video stream data, LiDAR point cloud data, and millimeter-wave and ultrasonic radar target lists, so its ability to fuse and process data from different sensors can be assessed. Camera simulation can also be used to test and verify control algorithms and functions: by simulating various scenarios and conditions, the accuracy and reliability of the controller's target detection, tracking, lane keeping, automatic emergency braking, and other functions can be validated.

 

The video stream can also carry simulated faults, such as completely black or white images, noise overlay, motion blur, frame loss, or frame delays, which are injected into the controller to verify the safety mechanisms of its functions; a few such faults are sketched below.
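Here is a minimal sketch of how a few of these faults (signal loss, saturated imager, frame delay) could be generated on the simulation PC before the frames reach the injection hardware. The frame shape and class name are illustrative, and the actual fault-injection hooks depend on the toolchain used:

```python
import numpy as np
from collections import deque

def black_frame(shape=(1080, 1920, 3)) -> np.ndarray:
    """Fully black image, simulating a complete loss of video signal."""
    return np.zeros(shape, dtype=np.uint8)

def white_frame(shape=(1080, 1920, 3)) -> np.ndarray:
    """Fully white image, simulating a saturated or failed imager."""
    return np.full(shape, 255, dtype=np.uint8)

class FrameDelayFault:
    """Re-emit frames n cycles late to mimic added latency in the injection path."""
    def __init__(self, delay_frames: int, shape=(1080, 1920, 3)):
        self.buffer = deque([black_frame(shape)] * delay_frames,
                            maxlen=delay_frames + 1)
    def __call__(self, frame: np.ndarray) -> np.ndarray:
        self.buffer.append(frame)
        return self.buffer[0]  # oldest frame: current input delayed by n cycles
```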


Summary

 

This article first introduced the role of in-vehicle cameras in ADAS systems, then described the differences between the video dark box and video injection methods for camera simulation in ADAS HiL systems, and finally gave a brief overview of how in-vehicle camera simulation is applied in ADAS HiL testing.

 

As Vector's technical partner, Polelink covers MiL/HiL/ViL testing, vehicle networking testing, and sensor perception testing for intelligent driving systems. We provide high-quality intelligent driving test solutions, integrated test systems, and services, helping to accelerate the verification and testing of intelligent driving simulation systems.