For autonomous vehicle (AV) applications, both radar and LiDAR (light detection and ranging) are often used in addition to high-definition cameras and other sensors. The proper fusion of the output from these devices enables the detection of obstacles, identification of lane markings, and the accurate recognition of other vehicles and pedestrians.
The radar portion of the system uses radio waves (typically in the 76–81 GHz band) to measure object velocity and distance (out to about 250 meters), and it is particularly useful in poor-visibility conditions such as fog or heavy rain.
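Radar measures radial velocity through the Doppler shift of the reflected wave. The sketch below shows the basic relationship, assuming a 77 GHz carrier (within the 76–81 GHz automotive band); the factor of 2 accounts for the round-trip propagation. The function names and the 30 m/s example are illustrative, not from a specific product.

```python
C = 299_792_458.0  # speed of light, m/s


def doppler_shift(velocity_mps: float, carrier_hz: float = 77e9) -> float:
    """Doppler shift (Hz) produced by a target closing at velocity_mps."""
    return 2.0 * velocity_mps * carrier_hz / C


def radial_velocity(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity (m/s) of a target from its measured Doppler shift."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)


# A car closing at 30 m/s (~108 km/h) produces a shift of roughly 15.4 kHz
# at 77 GHz, and the inverse relation recovers the velocity.
shift = doppler_shift(30.0)
v = radial_velocity(shift)
```

Because the shift scales with carrier frequency, the high 76–81 GHz band makes even small velocity differences measurable.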
LiDAR sensing uses up to 1 million laser pulses per second to create accurate 3D maps of the vehicle’s surroundings. It can measure distances up to about 200 meters and detect objects even in challenging weather conditions.
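LiDAR ranging works by timing each pulse’s round trip: range = c·t/2. A minimal sketch, assuming simple time-of-flight operation (the 200 m figure is the range quoted above; the per-channel rate bound is a generic property of pulsed ranging, not a claim about any specific sensor):

```python
C = 299_792_458.0  # speed of light, m/s


def range_from_tof(round_trip_s: float) -> float:
    """Target range (m) from the measured round-trip time of one pulse."""
    return C * round_trip_s / 2.0


def max_unambiguous_rate(max_range_m: float) -> float:
    """Highest pulse rate (Hz) per channel that avoids range ambiguity:
    each pulse must return before the next one is fired."""
    return C / (2.0 * max_range_m)


# A 200 m target returns in about 1.33 microseconds, which caps a single
# channel at roughly 750,000 pulses per second; multi-channel sensors
# reach higher aggregate rates by firing channels in parallel.
r = range_from_tof(1.334e-6)
rate = max_unambiguous_rate(200.0)
```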
Processing this sensor data, including high-resolution camera imagery at up to 120 frames per second, within milliseconds using artificial intelligence (AI) algorithms allows the system to make real-time decisions, predict movements, and adapt to dynamic driving environments.
Sensor calibration
The use of lasers gives LiDAR sensing centimeter-level precision (millimeter-level in some 2D LiDAR systems). In contrast, the much longer radio wavelengths used in radar mean its resolution is significantly lower. In a multiple-sensor tracking system, two types of architecture are typically used to align data from the radar and LiDAR sensors. In central-level tracking, detections from all the sensors are sent directly to a single tracking system that maintains tracks based on every detection. The alternative is a hierarchical structure that combines sensor-level tracking with track-level fusion.
For several reasons, including sensors that directly output tracks instead of detections, limited communication bandwidth, and the need to share data across subsystems, a track-to-track (track-level) fusion architecture may be preferable to central-level tracking in some applications. One company has explained how a track-level fusion scheme processes the radar measurements using an extended object tracker with a Gaussian mixture probability hypothesis density (GM-PHD) filter and the LiDAR measurements using a joint probabilistic data association (JPDA) tracker based on an interacting multiple model unscented Kalman filter (IMM-UKF).
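The cited scheme forms radar tracks with a GM-PHD extended-object tracker and LiDAR tracks with a JPDA/IMM-UKF tracker; those pieces are beyond a short sketch. What can be illustrated compactly is the track-to-track fusion step itself. The sketch below uses covariance intersection (CI), a standard fusion rule that remains consistent even when the two tracks’ errors are cross-correlated. Diagonal covariances and the numeric values are assumptions chosen purely to keep the arithmetic readable, not figures from the cited example.

```python
def covariance_intersection(x_a, p_a, x_b, p_b, omega):
    """Fuse two track estimates (state vector + diagonal covariance) via CI.

    omega in [0, 1] weights track A; a real track fuser would choose omega
    to minimize, e.g., the trace of the fused covariance.
    """
    fused_x, fused_p = [], []
    for xa, pa, xb, pb in zip(x_a, p_a, x_b, p_b):
        info = omega / pa + (1.0 - omega) / pb   # fused information (1/variance)
        p = 1.0 / info                           # fused variance
        x = p * (omega * xa / pa + (1.0 - omega) * xb / pb)
        fused_x.append(x)
        fused_p.append(p)
    return fused_x, fused_p


# Illustrative tracks for a [position, velocity] state: the radar track has
# good velocity but coarse position; the LiDAR track is the reverse.
radar_x, radar_p = [10.2, 5.1], [4.0, 0.1]
lidar_x, lidar_p = [10.0, 4.6], [0.04, 1.0]
x, p = covariance_intersection(radar_x, radar_p, lidar_x, lidar_p, 0.5)
```

The fused position leans toward the LiDAR estimate and the fused velocity toward the radar estimate, which is exactly the complementarity that motivates fusing the two sensors.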

The track-level fusion example walks through the radar and LiDAR tracking algorithms; how to set up the fuser, metrics, and visualization; how to run the scenario and trackers; and how to evaluate performance and maintain tracks.
The ego vehicle (the one being controlled by the autonomous driving system) has four 2-D radar sensors, where the front and rear radar sensors have a field of view (FOV) of 45 degrees, and the left and right radar sensors have an FOV of 150 degrees. Each radar sensor has a resolution of 6 degrees in azimuth and 2.5 meters in range. The vehicle also has one 3-D LiDAR sensor with a field of view of 360 degrees in azimuth and 40 degrees in elevation with a resolution of 0.2 degrees in azimuth and 1.25 degrees in elevation (using 32 elevation channels).
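One way to see why the radar and LiDAR tracks differ in quality is to convert the angular resolutions above into cross-range cell widths. A minimal sketch, using the small-angle approximation (cell width ≈ range × angle in radians); the 50 m evaluation distance is an arbitrary illustrative choice:

```python
import math


def cross_range_resolution(range_m: float, az_res_deg: float) -> float:
    """Approximate width (m) of one azimuth resolution cell at range_m."""
    return range_m * math.radians(az_res_deg)


# At 50 m, the radar's 6-degree azimuth cell is about 5.2 m wide (wider
# than a car), while the LiDAR's 0.2-degree cell is about 0.17 m wide,
# a 30x difference in cross-range resolution.
radar_cell = cross_range_resolution(50.0, 6.0)
lidar_cell = cross_range_resolution(50.0, 0.2)
```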
Other companies and researchers are addressing AV simulation as well. The References section has links to some examples of their efforts.
Part 2 will address practical techniques for improving sensor fusion accuracy and provide real-world examples of simulation success stories for LiDAR and radar fusion.
References
Sensor Fusion in Autonomous Transport: Integrating LiDAR, Cameras, and AI for Enhanced Safety
An in-depth comparison of LiDAR, Cameras, and Radars’ technology
Introduction to Track-To-Track Fusion
Track-Level Fusion of Radar and Lidar Data
AdvFuzz: Finding More Violations Caused by the EGO Vehicle in Simulation Testing by Adversarial NPC Vehicles
First steps – CARLA Simulator
EGO-Centric, Multi-Scale Co-Simulation to Tackle Large Urban Traffic Scenarios
V2X Testing with Simulation of Multiple Ego Vehicles
RobustStateNet: Robust ego vehicle state estimation for Autonomous Driving
Ego Vehicle – AWSIM Labs Documentation
Related EE World content
How to implement multi-sensor fusion algorithms for autonomous vehicles
The power of sensor fusion
Sensor fusion: What is it?
Sensor fusion levels and architectures
How does fusion timing impact sensors?
Sensors in the driving seat




