Practical techniques for improving sensor fusion accuracy
In addition to the usual design techniques for improving sensor performance, such as (1) calibration to correct errors, including multi-point calibration for higher accuracy and regular recalibration to compensate for drift over time; (2) signal conditioning to correct offset and linearity errors and to amplify signals for an improved signal-to-noise ratio (SNR); and (3) optimized data acquisition that minimizes acquisition and transmission delays to support time synchronization, there are design considerations specific to sensor fusion.
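As a concrete illustration of multi-point calibration, the minimal Python sketch below corrects raw readings by piecewise-linear interpolation between known reference points. The reference values, array names, and calibrate() function are hypothetical placeholders, not values from any particular sensor.

import numpy as np

# Hypothetical multi-point calibration table: raw readings captured at
# known reference inputs (e.g., on a calibration rig) define a
# piecewise-linear correction curve.
RAW_POINTS = np.array([102.0, 498.0, 901.0, 1305.0])   # raw ADC readings
TRUE_POINTS = np.array([100.0, 500.0, 900.0, 1300.0])  # known reference values

def calibrate(raw):
    # Correct a raw reading by interpolating between calibration points.
    return float(np.interp(raw, RAW_POINTS, TRUE_POINTS))

print(calibrate(700.0))  # corrected estimate between the second and third points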
Designers can choose among Kalman filters, including the extended Kalman filter (EKF) and the unscented Kalman filter (UKF, shown in Figure 1 in part 1), as well as particle filters. All of these combine predictions from a system dynamics model with new observations to update the estimated state of the system.
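To make the predict/update cycle concrete, here is a minimal one-dimensional Kalman filter sketch in Python. The noise variances Q and R are illustrative placeholders; an EKF or UKF extends this same pattern to nonlinear state-transition and measurement models.

def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.5):
    # x, P: prior state estimate and its variance
    # z: new sensor observation
    # F, H: state-transition and measurement models (scalars here)
    # Q, R: process and measurement noise variances (illustrative values)
    # Predict: propagate the state through the system dynamics model.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the new observation via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Example: fuse a stream of noisy range measurements into one running estimate.
x, P = 0.0, 1.0
for z in [10.2, 9.8, 10.4, 10.1]:
    x, P = kalman_step(x, P, z)
print(x, P)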
AI can learn complex patterns, identify objects, and adapt to dynamic environments, while machine learning (ML), and especially deep learning, can analyze, interpret, and combine sensor data in more sophisticated ways than classical filtering alone. In addition, designers should develop algorithms that predict and compensate for sensor latencies to improve real-time performance, and that continuously learn and improve sensor fusion performance over time.
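As one simple approach to latency compensation, a delayed measurement can be extrapolated forward to the common fusion timestamp using its estimated rate of change. The sketch below assumes a roughly constant rate over the latency window and uses hypothetical numbers; it illustrates the idea rather than a production algorithm.

def compensate_latency(value, velocity, latency_s):
    # Extrapolate a delayed measurement to the fusion timestamp, assuming
    # the measured quantity changed at a roughly constant rate while the
    # sample was in flight.
    return value + velocity * latency_s

# Hypothetical example: a radar range reading arrives 80 ms late while the
# target closes at 3.0 m/s; shift the reading to the fusion timestamp.
range_now = compensate_latency(value=52.4, velocity=-3.0, latency_s=0.080)
print(range_now)  # 52.16 m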
Real-world examples of LiDAR and radar fusion simulation efforts
Waymo is possibly the best example of the application of sensor fusion in vehicles to achieve advanced autonomous capabilities in complex urban environments. Long-range radar detects the speed and trajectory of moving objects in all weather conditions, while LiDAR provides precise 3D mapping and depth information for highly accurate object detection. In addition to LiDAR, radar, cameras, and external audio receivers, the company uses advanced AI and ML technologies and cutting-edge research across its software stack. This combination allows a Waymo vehicle to perceive the dynamic environment and road users, predict their behavior, and plan and navigate a journey from A to B in real time. The company’s Foundation Model architecture combines AV-specific ML advances with the general world knowledge of Vision-Language Models (VLMs). With the Foundation Model, Waymo is significantly enhancing the capabilities of its closed-loop simulation systems, simulating realistic future world states and the behavior of other road users.

A quite different approach to AV sensor fusion was taken by Mobileye. Known for its camera technology and computer vision algorithms for autonomous driving, Mobileye, now a subsidiary of Intel, partnered with its parent company to develop new radar and LiDAR sensors and computing capabilities specifically for self-driving applications. Their combined efforts are pioneering what the companies call True Redundancy: the use of multiple independent sensor subsystems for AV environmental modeling.

Initially, researchers observed that existing multi-sensor systems are often complementary rather than redundant: cameras and radar or LiDAR each sense certain aspects of the environment, which are then combined into a single world model. Mobileye’s design goal is to have both channels, camera and radar/LiDAR, sense all elements of the environment and each build a full model separately. To do this, they developed one AV that can drive on cameras alone and another that can drive on radar/LiDAR alone.
When the separate parts are combined into a production-ready AV, the camera subsystem provides the backbone of the vehicle, and the added radar/LiDAR subsystem provides enhanced safety and a significantly higher mean time between failures (MTBF). The combination delivers 360° camera coverage and 360° radar protection with improved forward-facing LiDAR technology. Using frequency-modulated continuous wave (FMCW) LiDAR, researchers have observed several system-level improvements, including lower cost than traditional LiDAR approaches.
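A simplified back-of-the-envelope calculation shows why two independent, full-coverage subsystems raise MTBF so dramatically. The figures below are hypothetical placeholders, not Mobileye’s published numbers, and the calculation assumes statistically independent failures.

# Hypothetical MTBF illustration for two independent subsystems. If a
# perception failure requires BOTH channels to fail within the same hour,
# the combined per-hour failure probability is roughly the product of the
# individual per-hour failure probabilities.
mtbf_camera_h = 1.0e4       # hypothetical camera-only MTBF, in hours
mtbf_radar_lidar_h = 1.0e4  # hypothetical radar/LiDAR-only MTBF, in hours

p_cam = 1.0 / mtbf_camera_h         # per-hour failure probability
p_rl = 1.0 / mtbf_radar_lidar_h

p_both = p_cam * p_rl               # both fail in the same hour
print(f"combined MTBF = {1.0 / p_both:.1e} hours")  # 1.0e8 hours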
References
Behind the Innovation: AI & ML at Waymo
True Redundancy: The Realistic Path to Deploying AVs at Scale
Radar & LiDAR Autonomous Driving Sensors by Mobileye & Intel: Next Generation Active Sensor Development
Related EE World content
What are the Key Considerations for Integrating LiDAR and Radar Data for Robust Perception: part 1
How to implement multi-sensor fusion algorithms for autonomous vehicles
The power of sensor fusion
Sensor fusion: What is it?
Sensor fusion levels and architectures
How does fusion timing impact sensors?
Sensors in the driving seat
What sensors make the latest Waymo Driver smarter?