Sensors are becoming ubiquitous as their price and availability continue to improve. However, sensor data is rarely clean; it is prone to noise and other interference. The complexities of sensor data have led to sensor fusion, which aims to perform better than a single sensor by improving the signal-to-noise ratio, decreasing uncertainty and ambiguity, and increasing reliability, robustness, resolution, accuracy, and other properties. It uses selected sensors to compensate for other sensors’ weaknesses or to improve the overall accuracy or reliability of a decision-making process. In most applications, computing resources are finite, and artificial intelligence and machine learning (AI/ML) can determine the best sensor data merging (fusion) strategy based on real-time operating conditions.
This FAQ reviews various fusion levels and modeling methodologies and presents some platforms for developing and implementing sensor fusion applications in Industry 4.0, Internet of Things (IoT), and machine vision and image processing applications. Sensor fusion implementations can be split into three categories based on the abstraction level:
Fusion at the data level simply fuses or aggregates multiple sensor data streams, producing a larger quantity of data, assuming that merging similar data sources results in increased precision and better information. Data level fusion is used to reduce noise and improve robustness.
Fusion at the feature level uses features derived from several independent sensor nodes or a single node with several sensors. It combines those features into a multi-dimensional vector usable in pattern-recognition algorithms. Machine vision and localization functions are common applications of fusion at the feature level.
Fusion at the decision level combines local results from multiple decision classifiers into a single global decision.
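The three levels can be pictured with a short sketch. The following Python example uses hypothetical sensor readings and illustrative feature choices, not values from any particular system:

```python
import numpy as np

# --- Data-level fusion: average redundant streams to reduce noise ---
# Hypothetical readings of the same temperature from three sensors.
temp_a = np.array([20.1, 20.3, 19.9])
temp_b = np.array([20.4, 20.2, 20.0])
temp_c = np.array([19.8, 20.1, 20.2])
fused_temperature = np.mean([temp_a, temp_b, temp_c], axis=0)

# --- Feature-level fusion: concatenate features from independent sensors ---
# e.g., statistical features from an accelerometer and a microphone.
accel_features = np.array([0.12, 0.98])   # mean, variance (illustrative)
audio_features = np.array([0.45, 0.07])   # RMS level, spectral flatness
feature_vector = np.concatenate([accel_features, audio_features])
# feature_vector would feed a pattern-recognition model.

# --- Decision-level fusion: majority vote over local classifier outputs ---
local_decisions = ["normal", "fault", "fault"]  # one decision per sensor node
global_decision = max(set(local_decisions), key=local_decisions.count)

print(fused_temperature, feature_vector, global_decision)
```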
Various machine learning-based methods have been proposed for developing an optimal sensor fusion algorithm. One approach compares the results of several sensor fusion approaches using the Friedman test to analyze variance by ranks and the Holm method to iteratively accept or reject hypotheses regarding the best fusion method. This approach can work well when a limited number of sensor modalities are used in relatively simple domains, such as recognizing simple human activities (SHAs). When more complex domains, such as recognition of grammatical facial expressions, require additional sensors, improved results can be obtained by adding a ‘generalization step’ to the statistical signature data set stage (Figure 1). The generalization step integrates the statistical signatures of the data sets of different domains, producing a larger, generalized meta-data set that can support more complex and powerful sensor fusion activities.

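The statistical comparison described above can be sketched with SciPy’s Friedman test and a Holm correction from statsmodels. The accuracy scores, method names, and pairwise Wilcoxon post-hoc tests below are illustrative assumptions, not results or procedures taken from the cited study:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

# Placeholder accuracy scores of three fusion methods over ten data sets
# (synthetic values, not results from the cited study).
rng = np.random.default_rng(0)
method_a = rng.uniform(0.80, 0.95, 10)
method_b = rng.uniform(0.75, 0.92, 10)
method_c = rng.uniform(0.70, 0.90, 10)

# Friedman test: do the methods' rank orderings differ significantly overall?
stat, p_value = friedmanchisquare(method_a, method_b, method_c)
print(f"Friedman statistic = {stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons, with the Holm method deciding which
# hypotheses about the best fusion method to accept or reject.
pairs = [("A vs B", method_a, method_b),
         ("A vs C", method_a, method_c),
         ("B vs C", method_b, method_c)]
raw_p = [wilcoxon(x, y).pvalue for _, x, y in pairs]
reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for (name, _, _), rej, p_adj in zip(pairs, reject, adjusted_p):
    print(f"{name}: Holm-adjusted p = {p_adj:.4f}, significant = {bool(rej)}")
```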
Computing algorithms are used in sensor fusion to take the various sensor inputs and produce a combined result that is more accurate and useful than the data from the individual sensors. Algorithms can be chained together to provide successively refined results. Sensor fusion algorithms share common functions, which may include the following (a short sketch follows the list):
Smoothing uses multiple measurements to estimate the value of a variable, such as a global positioning system (GPS) position, either offline or in real time.
Filtering uses current and past measurements to determine the state of a variable, such as speed, in real time.
Prediction, or state estimation, analyzes previous measurements of variables such as direction and speed in real time to predict a current or future state, such as a GPS position.
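As a rough illustration, not tied to any particular product, the three operations might be sketched as follows, assuming a synthetic stream of noisy one-dimensional position samples and simple window/filter parameters chosen for demonstration:

```python
import numpy as np

# Noisy 1-D position samples, e.g., from a GPS receiver (synthetic data).
rng = np.random.default_rng(1)
true_position = np.linspace(0.0, 10.0, 50)          # moving at constant speed
measurements = true_position + rng.normal(0, 0.5, 50)

# Smoothing: estimate past values using measurements on both sides (offline).
window = 5
smoothed = np.convolve(measurements, np.ones(window) / window, mode="same")

# Filtering: estimate the *current* state from current and past measurements,
# here with a simple exponential filter that could run in real time.
alpha = 0.3
filtered = np.zeros_like(measurements)
filtered[0] = measurements[0]
for k in range(1, len(measurements)):
    filtered[k] = alpha * measurements[k] + (1 - alpha) * filtered[k - 1]

# Prediction: extrapolate the next state from the recent estimated speed.
dt = 1.0
speed_estimate = (filtered[-1] - filtered[-2]) / dt
predicted_next_position = filtered[-1] + speed_estimate * dt
print(predicted_next_position)
```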
Kalman filters
The Kalman filter, a form of linear quadratic estimation, is a common sensor fusion algorithm. It runs recursively, only needing the current sensor measurements, the last estimated state, and known uncertainties. In addition to sensor fusion, Kalman filters are also the basis for some ML algorithms. A Kalman filter operates in two steps:
- Prediction estimates the current state variables and their uncertainties, accounting for environmental and other factors affecting the sensor measurements.
- Updating incorporates the next set of sensor measurements, correcting the estimated states and weighting the estimates by the calculated uncertainties (a minimal sketch follows this list).
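A minimal one-dimensional Kalman filter shows the two steps in code. The process and measurement noise values below are assumptions chosen for illustration, not parameters of any specific sensor:

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-3, meas_var=0.25):
    """Minimal 1-D Kalman filter: estimate a slowly varying value
    (e.g., a position) from noisy measurements."""
    x_est = measurements[0]   # initial state estimate
    p_est = 1.0               # initial estimate uncertainty
    estimates = []
    for z in measurements:
        # Prediction step: project the state and its uncertainty forward.
        x_pred = x_est                    # constant-state model
        p_pred = p_est + process_var      # uncertainty grows with process noise

        # Update step: blend prediction and measurement, weighted by uncertainty.
        k_gain = p_pred / (p_pred + meas_var)     # Kalman gain
        x_est = x_pred + k_gain * (z - x_pred)
        p_est = (1 - k_gain) * p_pred
        estimates.append(x_est)
    return np.array(estimates)

# Example: noisy readings around a true value of 5.0.
rng = np.random.default_rng(2)
noisy = 5.0 + rng.normal(0, 0.5, 100)
print(kalman_1d(noisy)[-1])   # converges toward 5.0
```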
Sensor fusion developers can use a Kalman filter to obtain relatively accurate information from situations with inherent uncertainty and to reduce bias, noise, and accumulation errors. Kalman filters are used in motion control applications to estimate position over time using historical data and secondary sensors such as accelerometers and gyroscopes when data from a primary source such as a GPS signal is unavailable. Kalman filters are commonly found in mobile robots, drones, and other Industry 4.0 systems.
Sensor fusion platforms for Industry 4.0 and IoT
With the increasing number of sensors in Industry 4.0 systems comes a growing demand for sensor fusion to make sense of the mountains of data those sensors produce. Suppliers are responding with integrated sensor fusion devices. For example, an intelligent condition monitoring box is available that is designed for machine condition monitoring based on fusing data from vibration, sound, temperature, and magnetic field sensors. Additional sensor modalities for monitoring acceleration, rotational speed, and shock and vibration can optionally be included.
The system implements sensor fusion through AI algorithms to classify abnormal operating conditions with better granularity, resulting in high-probability decision-making (Figure 2). This edge AI architecture can simplify handling the big data produced by sensor fusion, ensuring that only the most relevant data is sent to the edge AI processor or to the cloud for further analysis and possible use in training ML algorithms.

The use of AI/ML has several benefits:
- The AI algorithm can employ sensor fusion to use the data from one sensor to compensate for weaknesses in the data from other sensors.
- The AI algorithm can classify the relevance of each sensor to specific tasks and minimize or ignore data from sensors determined to be less important.
- Through continuous training at the edge or in the cloud, AI/ML algorithms can learn to identify changes in system behavior that were previously unrecognized.
- The AI algorithm can predict possible sources of failures, enabling preventative maintenance and improving overall productivity.
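One way to picture the “only the most relevant data” idea is a simple gate at the edge: features are computed locally, a lightweight model scores them against a learned normal signature, and only anomalous windows are forwarded upstream. The feature choices, baseline statistics, and threshold below are illustrative assumptions:

```python
import numpy as np

def extract_features(vibration, temperature):
    """Condense a window of raw samples into a small feature vector."""
    return np.array([
        vibration.std(),          # vibration energy
        np.abs(vibration).max(),  # peak shock
        temperature.mean(),       # average temperature
    ])

def anomaly_score(features, baseline_mean, baseline_std):
    """Distance from the learned 'normal' operating signature (max z-score)."""
    return float(np.max(np.abs((features - baseline_mean) / baseline_std)))

# Baseline statistics would come from training on normal operation.
baseline_mean = np.array([0.05, 0.2, 42.0])
baseline_std = np.array([0.01, 0.05, 1.5])

def process_window(vibration, temperature, threshold=3.0):
    """Return the payload to upload, or None if the window looks normal."""
    features = extract_features(vibration, temperature)
    score = anomaly_score(features, baseline_mean, baseline_std)
    if score > threshold:
        return {"features": features.tolist(), "score": score}
    return None   # nothing relevant to send upstream
```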
Sensor fusion kits are also available for IoT applications. Some follow the Adafruit “Feather” specification, a board format that is part of the Adafruit Feather ecosystem. One such kit includes two small circuit boards: a controller “Feather” and a sensor fusion “FeatherWing” that stacks on top of the Feather (Figure 3). The Wing contains a high-accuracy barometric pressure sensor, a high-SNR MEMS microphone, an inertial measurement unit (IMU), and a microcontroller. The microcontroller is edge AI-capable and can process microphone and other sensor data through local sensor fusion algorithms to trigger a notification or alarm.

The Feather controller, with FreeRTOS firmware installed, serves as an IoT controller providing Wi-Fi/Bluetooth connectivity for the Wing, so pre-processed or raw sensor data from the Wing can be uploaded to the AWS cloud for further processing.
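As a rough sketch of that upload path, sensor data could be published to AWS IoT Core over MQTT with TLS. The endpoint, topic, certificate paths, and payload fields below are placeholders, and the actual kit firmware may use a different client entirely:

```python
import json
import ssl
import paho.mqtt.client as mqtt

# Placeholder connection details; real values come from the AWS IoT console.
ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"
TOPIC = "sensors/featherwing/telemetry"

# Note: written against the paho-mqtt 1.x Client API; 2.x may require a
# callback_api_version argument when constructing the client.
client = mqtt.Client()
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)
client.loop_start()

# Publish one pre-processed reading as JSON.
payload = {"pressure_hpa": 1012.6, "sound_rms": 0.031, "alarm": False}
client.publish(TOPIC, json.dumps(payload), qos=1)
client.loop_stop()
client.disconnect()
```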
Sensor fusion kit for radar + camera data
Developers of advanced driver assistance systems (ADAS), autonomous vehicles, smart retail, Industry 4.0, robotics, smart building, and smart city applications can turn to a system-on-module (SoM) AI-enabled sensor fusion kit (AI-SFK) that fuses data from a camera and mmWave radar for deep learning and video analytics (Figure 4). The camera and mmWave radar data are complementary and support object detection, classification, range, velocity, and other parameters in real time. The radar operates at 77 GHz, and the 8 MP, 4K color camera can deliver up to 21 frames per second.

This AI-SFK can significantly reduce development times. It provides side-by-side panels: one shows the objects detected by the radar sensor, and the other shows the video captured by the camera of the same scene. It supports a variety of standard hardware interfaces, such as CAN and USB, simplifying the integration of the SFK into the overall system development environment.
The available AI libraries support computer vision, graphics, and multimedia applications. The kit can incorporate additional sensor modalities, such as thermal imaging and LiDAR, and can be extended with additional machine learning and deep learning algorithms.
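A common way to exploit the complementary data is to project radar detections into the image and associate them with camera bounding boxes, attaching range and velocity to each visual detection. The projection matrix, detections, and coordinate conventions below are illustrative assumptions, not values from the kit:

```python
import numpy as np

# Hypothetical calibration: maps radar (x, y, z, 1) points into image pixels.
PROJECTION = np.array([[800.0,   0.0, 640.0, 0.0],
                       [  0.0, 800.0, 360.0, 0.0],
                       [  0.0,   0.0,   1.0, 0.0]])

def project_radar_point(xyz):
    """Project a 3-D radar detection into pixel coordinates."""
    p = PROJECTION @ np.append(xyz, 1.0)
    return p[:2] / p[2]

def fuse(radar_detections, camera_boxes):
    """Attach range/velocity from radar to camera detections whose
    bounding box contains the projected radar point."""
    fused = []
    for cls, (x1, y1, x2, y2) in camera_boxes:
        for xyz, rng_m, vel_mps in radar_detections:
            u, v = project_radar_point(np.asarray(xyz))
            if x1 <= u <= x2 and y1 <= v <= y2:
                fused.append({"class": cls, "range_m": rng_m,
                              "velocity_mps": vel_mps})
                break
    return fused

# Example: one camera detection and one radar return on the same object.
camera_boxes = [("car", (600, 300, 700, 420))]
radar_detections = [((0.1, 0.2, 12.0), 12.0, -3.5)]
print(fuse(radar_detections, camera_boxes))
```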
Summary
Sensor fusion combined with AI/ML produces a powerful tool for maximizing the benefits of using a variety of sensor modalities. AI/ML-enhanced sensor fusion can be employed at several levels in a system, including the data level, the feature level, and the decision level. Basic functions in sensor fusion implementations include smoothing and filtering sensor data and predicting sensor and system states. Designers have a variety of sensor fusion kits and platforms available to speed the development of sensor fusion systems across a range of applications, including Industry 4.0, IoT, automotive, image processing, and more.
References
AI-enabled Sensor Fusion Kit, Mistral Solutions
Choosing the Best Sensor Fusion Method: A Machine-Learning Approach, MDPI Sensors
Embedded Sensor Platform with AI Algorithms—Locally from Big Data to Smart Data, Analog Devices
Sensor Fusion and Artificial Intelligence Kit, Ainstein
Sensor Fusion Development Kit: Getting started with FreeRTOS, Flex