What are the key considerations for integrating lidar and radar data for robust perception: part 2

December 3, 2025 By Randy Frank

Practical techniques for improving sensor fusion accuracy

In addition to the usual design techniques for improving sensor performance, such as (1) calibration to correct errors, including multi-point calibration for higher accuracy and regular recalibration to compensate for drift over time; (2) signal conditioning to correct offset and linearity errors and to amplify signals for an improved signal-to-noise ratio (SNR); and (3) optimized data acquisition that minimizes acquisition and transmission delays for time synchronization, there are design considerations specific to sensor fusion.
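
As a hypothetical illustration of multi-point calibration, the sketch below corrects raw readings against a small table of reference points by piecewise-linear interpolation. The sensor, the table values, and the NumPy implementation are illustrative assumptions, not from the article.

```python
# A minimal sketch of multi-point calibration, assuming a hypothetical
# range sensor whose raw readings are corrected against known reference
# distances captured during a calibration run.
import numpy as np

# Hypothetical calibration table: raw readings vs. reference truth (meters).
raw_points = np.array([0.95, 5.10, 10.25, 20.60, 50.90])
ref_points = np.array([1.00, 5.00, 10.00, 20.00, 50.00])

def calibrate(raw):
    """Piecewise-linear correction between calibration points.

    np.interp clamps outside the table, so readings beyond the calibrated
    range get the nearest correction rather than an extrapolated one.
    """
    return np.interp(raw, raw_points, ref_points)

print(calibrate(10.25))  # ~10.00
print(calibrate(15.0))   # interpolated between the 10 m and 20 m points
```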

Designers can choose among Kalman filters, including the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) (shown in Figure 1 in part 1), as well as particle filters. All of these combine predictions from a system dynamics model with new observations to update the estimated state of a system.
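
The following is a minimal sketch of that predict-update cycle: a linear Kalman filter fusing 1-D lidar position and radar Doppler-velocity measurements under a constant-velocity motion model. All matrices and noise values are illustrative assumptions; a production tracker would typically use an EKF or UKF to handle the nonlinear radar range/bearing geometry.

```python
# A minimal linear Kalman filter fusing 1-D lidar and radar measurements
# with a constant-velocity model. Noise values are illustrative assumptions,
# not vendor specifications.
import numpy as np

dt = 0.1                                   # fusion cycle, seconds
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [pos, vel]
Q = np.diag([0.01, 0.1])                   # process noise (assumed)

x = np.array([0.0, 0.0])                   # initial state estimate
P = np.eye(2) * 10.0                       # initial covariance

def update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

H_lidar = np.array([[1.0, 0.0]])           # lidar: precise position
R_lidar = np.array([[0.02]])
H_radar = np.array([[0.0, 1.0]])           # radar: Doppler velocity
R_radar = np.array([[0.05]])

for z_lidar, z_radar in [(1.0, 9.8), (2.0, 10.1), (3.1, 10.0)]:
    x, P = F @ x, F @ P @ F.T + Q          # predict with the dynamics model
    x, P = update(x, P, np.array([z_lidar]), H_lidar, R_lidar)  # lidar update
    x, P = update(x, P, np.array([z_radar]), H_radar, R_radar)  # radar update
    print(f"fused position {x[0]:.2f} m, velocity {x[1]:.2f} m/s")
```

Each sensor contributes where its noise is lowest: the lidar measurement tightens the position estimate, while the radar Doppler measurement tightens velocity.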

AI can learn complex patterns, identify objects, and adapt to dynamic environments, and machine learning (ML), especially deep learning, can analyze, interpret, and combine sensor data in more sophisticated ways. In addition, designers should develop algorithms that predict and compensate for sensor latencies to improve real-time performance, and that continuously learn and improve sensor fusion performance over time.
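
As one simple form of latency compensation, a stale measurement can be extrapolated forward to the fusion timestamp using the tracker's current velocity estimate. The sketch below assumes a fixed, known radar pipeline delay; a learned or adaptive delay model would replace the constant.

```python
# A minimal sketch of latency compensation, assuming a fixed, known radar
# pipeline delay (an illustrative value, not a datasheet figure).
RADAR_LATENCY_S = 0.08     # assumed processing + transport delay

def compensate(z_pos, est_vel, latency=RADAR_LATENCY_S):
    """Shift a stale position measurement to the current fusion time.

    Uses a constant-velocity assumption over the latency interval, which
    holds while the latency is small relative to the target's dynamics.
    """
    return z_pos + est_vel * latency

# Radar reported 42.0 m, but the target moves at ~15 m/s; by the time the
# sample reaches the fusion node it is ~1.2 m stale.
print(compensate(42.0, 15.0))   # 43.2
```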

Real-world examples of LiDAR and radar fusion simulation efforts

Waymo is possibly the best example of the application of sensor fusion in vehicles to achieve advanced autonomous capabilities in complex urban environments. Long-range radar detects the speed and trajectory of moving objects in all weather conditions, while LiDAR provides precise 3D mapping and depth information for highly accurate object detection. In addition to LiDAR, radar, cameras, and external audio receivers, the company uses advanced AI and ML technologies and cutting-edge research across its software stack. This combination allows a Waymo vehicle to perceive the dynamic environment and road users, predict their behavior, and plan and navigate a journey from A to B in real time. The company’s Foundation Model architecture combines AV-specific ML advances with general world knowledge of Vision-Language Models (VLMs). With its Foundation Model, Waymo is significantly enhancing the capabilities of closed-loop simulation systems, simulating realistic future world states, and other road users’ behavior.

Figure 1. Waymo Foundation Model architecture. (Image: Waymo)

A quite different approach to AV sensor fusion was taken by Mobileye. Known for its camera technology and computer vision algorithms for autonomous driving, Mobileye, now a subsidiary of Intel, partnered with its parent company to revolutionize radar and LiDAR technologies with new sensors and computing capability developed specifically for self-driving applications. Their combined efforts are pioneering what they call True Redundancy, achieved through multiple independent sensor subsystems, each performing environmental modeling for AVs.

Figure 2. System improvements from FMCW LiDAR compared to typical LiDAR. (Image: Mobileye)

Initially, researchers observed that existing multi-sensor systems are often complementary and not redundant, since cameras and radar or LiDAR each sense certain aspects of the environment, which are then combined to build a single world model. Their design goal is to have both channels – camera and radar-LiDAR – sense all elements of the environment, and each build a full model separately. To do this, they developed one AV that can drive on cameras alone and another that can drive on radar/LiDAR alone.

When they combine the separate parts into a production-ready AV, the camera subsystem provides the backbone of the AV, and the added radar-LiDAR subsystem provides enhanced safety and a significantly higher mean time between failures (MTBF). The combination delivers 360° camera coverage and 360° radar protection with improved forward-facing LiDAR technology. Using frequency-modulated continuous wave (FMCW) LiDAR, researchers have observed several system-level improvements, including lower cost than traditional LiDAR approaches.
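
To see why truly redundant channels raise MTBF so sharply, here is a rough back-of-the-envelope sketch. It assumes the two subsystems fail independently and that a failure matters only if the other channel fails within the same one-hour exposure window; the numbers are purely illustrative, not Mobileye figures.

```python
# A rough sketch of 2-channel redundancy under an independence assumption;
# all values are illustrative, not Mobileye data.
def combined_mtbf_hours(mtbf_a_hours, mtbf_b_hours, window_hours=1.0):
    """MTBF of a two-channel redundant system under independent failures.

    Per-window failure probabilities multiply, so the combined failure
    rate is roughly the product of the individual per-window rates.
    """
    p_a = window_hours / mtbf_a_hours   # chance channel A fails in a window
    p_b = window_hours / mtbf_b_hours   # chance channel B fails in a window
    return window_hours / (p_a * p_b)

# Two modest 10,000-hour subsystems combine to roughly 1e8 hours.
print(f"{combined_mtbf_hours(1e4, 1e4):.2e} hours")
```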

References

Behind the Innovation: AI & ML at Waymo
True Redundancy: The Realistic Path to Deploying AVs at Scale
Radar & LiDAR Autonomous Driving Sensors by Mobileye & Intel: Next Generation Active Sensor Development

Related EE World content

What are the Key Considerations for Integrating LiDAR and Radar Data for Robust Perception: part 1
How to implement multi-sensor fusion algorithms for autonomous vehicles
The power of sensor fusion
Sensor fusion: What is it?
Sensor fusion levels and architectures
How does fusion timing impact sensors?
Sensors in the driving seat
What sensors make the latest Waymo Driver smarter?
