How does fusion timing impact sensors?

February 12, 2025 By Randy Frank

Systems that employ several sensors increasingly use multi-sensor fusion to process the data before making decisions. Frequently discussed applications include object recognition (in automotive and robotics), medical diagnosis, speech recognition, and security and surveillance.

Two common design approaches for multi-modal fusion are early fusion and late fusion. In early (or low-level) fusion, also called raw-data fusion, the raw data from different sensors is combined before any high-level processing or decision-making. The fused data is then used as input to a machine-learning model.
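As a rough illustration (not from the article), the Python sketch below fuses invented camera-like and radar-like readings at the raw-data level: the feature vectors are concatenated before a single stand-in scikit-learn classifier is trained.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented "raw" readings: 100 samples from a camera-like sensor
    # (16 features each) and a radar-like sensor (4 features each).
    camera_raw = rng.normal(size=(100, 16))
    radar_raw = rng.normal(size=(100, 4))
    labels = rng.integers(0, 2, size=100)

    # Early fusion: combine the raw data into one representation before
    # any high-level processing or decision-making sees it.
    fused = np.concatenate([camera_raw, radar_raw], axis=1)  # (100, 20)

    # The fused vectors feed a single machine-learning model.
    model = LogisticRegression().fit(fused, labels)
    print(model.predict(fused[:5]))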

Alternatively, in late (or high-level) fusion, also known as object-level fusion, each sensor's data is processed independently to make a local prediction. These individual results are then combined at a higher level to produce the final fused prediction. Both early and late fusion have advantages and disadvantages, and both are widely used across various applications. Table 1 summarizes the differences between early and late fusion.

Table 1. A comparison of early fusion to late fusion (Image: GeeksforGeeks).

Early fusion is used for object recognition tasks in autonomous vehicles, where data from multiple sensors (e.g., cameras and LiDAR) is fused to improve detection accuracy. Medical diagnosis in healthcare and multi-modal sentiment analysis, which combines text, audio, and visual data, are additional applications of early fusion.

Late fusion is commonly used in recommendation systems, where separate models predict user preferences from different sources of information (e.g., user behavior, item features) before the results are aggregated. Speech recognition and security and surveillance systems are additional applications of late fusion.
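For contrast, here is a minimal late-fusion sketch using the same invented data as above: each sensor gets its own model, and only the local predictions are combined at the decision level.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    camera_raw = rng.normal(size=(100, 16))
    radar_raw = rng.normal(size=(100, 4))
    labels = rng.integers(0, 2, size=100)

    # Late fusion: each sensor's data is processed independently...
    camera_model = LogisticRegression().fit(camera_raw, labels)
    radar_model = LogisticRegression().fit(radar_raw, labels)

    # ...and only the local predictions are combined, here by averaging
    # class probabilities (weighted voting is another common choice).
    p_camera = camera_model.predict_proba(camera_raw)
    p_radar = radar_model.predict_proba(radar_raw)
    final = ((p_camera + p_radar) / 2).argmax(axis=1)
    print(final[:5])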

Implementing sensor fusion

Over 20 years ago, to improve the safety of vulnerable road users (VRUs), including pedestrians and cyclists, the European Commission funded a research project called Sensors and System Architecture for Vulnerable Road Users (SAVE-U). Researchers investigated multiple sensor technologies in a multilevel sensing architecture that included both low-level and high-level data fusion. The sensing technologies included infrared (IR) vision, color visible-light vision, and four or five 24-GHz radar sensors. At that time, the researchers concluded that high-level data fusion alone was insufficient to provide the required quality and reliability of the target data.

Figure 1. The SAVE-U project’s sensing architecture for radar and vision fusion (Image: Understanding Smart Sensors, Third Edition).

Today, with LiDAR included as one of the potential sensing technologies, low-level fusion remains a popular design approach. Low-level (early) fusion lets advanced driver-assistance systems (ADAS) use lower-cost sensors without requiring high-performance computing, keeping the sensor power budget down.

Figure 2. A comparison of early vs. late sensor fusion (Image: Segments.ai).
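To make the low-level approach concrete, the sketch below shows one common raw-data fusion step: projecting LiDAR points into a camera image with a pinhole model so each pixel can be paired with a range measurement. The intrinsic matrix and the points are invented for illustration.

    import numpy as np

    # Hypothetical camera intrinsics: focal lengths fx, fy and
    # principal point (cx, cy), in pixels.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Invented LiDAR points already expressed in the camera frame
    # (x right, y down, z forward), in meters. A real pipeline would
    # first apply the LiDAR-to-camera extrinsic calibration.
    points_cam = np.array([[ 1.0,  0.2, 10.0],
                           [-2.0,  0.0, 15.0],
                           [ 0.5, -0.3,  8.0]])

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uvw = (K @ points_cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    for (u, v), z in zip(pixels, points_cam[:, 2]):
        print(f"pixel ({u:.0f}, {v:.0f}) carries range {z:.1f} m")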

Luminar, Tesla, and others have implemented early fusion. Luminar’s LiDAR technology is standard on the Volvo EX90, and its Halo design uses early fusion. In 2021, Tesla presented an end-to-end early-fusion approach. Carmaker Rivian and ADAS software developer LeddarTech are also interested in early fusion.

Some industry experts are considering a new approach called very early fusion. In these designs, the sensors are tuned to work together to reduce the data volume close to the sensor, so each sensor captures the environment based on the others’ capabilities. Camera outputs account for a large share of the total sensor data, but much of that data is irrelevant; by taking the LiDAR data into account, unnecessary camera data can be eliminated before processing.
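As a purely hypothetical sketch of that idea (the tile size, occupancy threshold, and image shape are all invented), coarse LiDAR occupancy could gate which camera tiles survive to downstream processing:

    import numpy as np

    rng = np.random.default_rng(2)
    image = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

    # Coarse LiDAR occupancy per 80x80-pixel tile: True means at least
    # one LiDAR return fell within that tile's field of view.
    occupancy = rng.random((6, 8)) > 0.7

    # Keep only LiDAR-flagged tiles; zero the rest so downstream stages
    # (compression, neural networks) can skip them entirely.
    mask = np.kron(occupancy.astype(np.uint8),
                   np.ones((80, 80), dtype=np.uint8)).astype(bool)
    reduced = np.where(mask, image, 0)

    print(f"camera tiles kept for processing: {occupancy.mean():.0%}")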

References

  • ADAS sensor fusion
  • Understanding Smart Sensors, Third Edition, Chapter 4: Sensor Fusion
  • Late vs. early sensor fusion: a comparison
  • Early Fusion vs. Late Fusion in Multimodal Data Processing, GeeksforGeeks

