LiDAR Sensor Fusion for Advanced Detection
155 patents in this list
Modern autonomous systems rely on multiple sensor streams to build comprehensive environmental models, with LiDAR data providing crucial depth and geometry information. Raw point cloud data, however, contains inherent uncertainties—ranging from beam divergence effects at long ranges to partial occlusions and varying reflectivity responses—that can impact detection reliability at distances beyond 50 meters.
The fundamental challenge lies in combining temporally and spatially diverse sensor data streams while managing the computational overhead required for real-time perception.
This page brings together solutions from recent research—including probabilistic sensor fusion architectures, neural network approaches for multi-modal detection, synchronized multi-LiDAR configurations, and adaptive background filtering techniques. These and other approaches focus on improving detection accuracy and range while maintaining real-time processing capabilities required for autonomous navigation.
1. LiDAR and Camera Data Fusion System for Semantic Segmentation with 3D-to-2D Coordinate Conversion and Feature Integration
HYUNDAI MOTOR COMPANY, Kia Corporation, Konkuk University Industrial Cooperation Corp, 2024
Fusing data from a LiDAR and a camera for semantic segmentation of objects around a vehicle. The fusion involves converting 3D LiDAR point cloud coordinates to 2D coordinates using calibration parameters, matching points and pixels, extracting features from both, fusing the features, and updating the LiDAR feature map with the fused results. This allows accurate matching and fusion of features from the 3D LiDAR and 2D camera inputs for better object recognition.
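For illustration, here is a minimal sketch of the 3D-to-2D conversion step described above, assuming a standard pinhole camera model with known extrinsic and intrinsic calibration; the function and matrix names are illustrative, not taken from the patent:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project Nx3 LiDAR points into 2D pixel coordinates.

    points_lidar     : (N, 3) XYZ points in the LiDAR frame
    T_cam_from_lidar : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    K                : (3, 3) camera intrinsic matrix
    Returns (M, 2) pixel coordinates and the indices of points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])      # homogeneous coordinates (N, 4)
    pts_cam = (T_cam_from_lidar @ homo.T).T[:, :3]          # points in the camera frame
    in_front = pts_cam[:, 2] > 0                            # keep points with positive depth
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T                                   # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]                              # normalize by depth
    return uv, np.where(in_front)[0]
```

Each projected point can then be matched to the pixel it lands on, so image features and LiDAR features can be fused point by point.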
2. Fusion-Based Occlusion Classification System for LiDAR Sensors Using Camera and LiDAR Data
KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION, 2024
Classifying occlusion of LiDAR sensors using fusion of camera and LiDAR data to improve reliability of LiDAR perception in autonomous vehicles. The method involves fusing camera images and LiDAR point clouds to extract reflection intensity and distance information. This fused data is fed into a pre-trained neural network to extract features and classify if the LiDAR is occluded.
3. Sensor Fusion Method for Object Detection Using Single-Line Lidar and Monocular Camera Data
HUBEI SANJIANG AEROSPACE HONGFENG CONTROL CO LTD, 2024
Fusing single-line lidar and monocular camera data for object detection and 3D perception at lower cost than multi-line lidar. The method clusters the single-line lidar point cloud and runs deep-learning object detection on the monocular camera image. Joint calibration aligns the sensors' coordinate systems, and bounding boxes from both sources are projected into each other's spaces. The intersection-over-union (IoU) of the projected boxes determines whether they represent the same object; if so, fused 3D information is output. This reduces cost and complexity compared with multi-line lidar while still providing 3D perception.
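The IoU-based association step can be illustrated with a simple axis-aligned box overlap test; the 0.5 threshold below is an illustrative value, not one stated in the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def same_object(projected_lidar_box, camera_box, threshold=0.5):
    """Treat a projected lidar cluster box and a camera detection as the same object."""
    return iou(projected_lidar_box, camera_box) >= threshold
```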
4. 3D Road Target Detection Method Utilizing Direct Lidar Point Cloud Processing with Outlier Filtering and RANSAC Estimation
SHANGHAI INSTITUTE OF TECHNOLOGY, 2024
3D road target detection using lidar point clouds without converting to 2D images. The method directly processes the raw 3D point cloud data to improve accuracy compared to 2D projections. It involves filtering outliers, estimating target parameters like size and center, and refining the estimates using RANSAC. This allows robust and accurate 3D detection of road targets using lidar point clouds, rather than losing depth and coordinate information by mapping to 2D.
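RANSAC refinement of a geometric estimate can be sketched as repeated random sampling and inlier counting. The example below fits a ground plane only to show the mechanism; the iteration count and tolerance are illustrative assumptions, not values from the patent:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    """Fit a plane (n, d) with n.p + d = 0 to an (N, 3) point cloud, ignoring outliers."""
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

The same sample-score-keep loop applies when the estimated model is a target's size and center rather than a plane.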
5. Multi-Modal Sensor Fusion System for Vehicle-Road Collaboration with Attention Mechanism and Differential Transmission
FUZHOU UNIVERSITY, 2024
Low-cost, high-performance vehicle-road collaboration using multi-modal sensors to overcome the limitations of single-sensor autonomous driving. Vehicles carry cheaper, lower-precision lidar while roadside sensors provide wider, complementary coverage. The method fuses lidar and camera features from both sources using an attention mechanism, corrects the fused features, and selectively transmits only the regions flagged by a difference map to reduce communication bandwidth.
6. Multi-level Feature Fusion of Heterogeneous Sensing Data for Target Recognition in Autonomous Driving
TONGJI UNIVERSITY, 2024
Multi-source heterogeneous sensing data fusion and target recognition for autonomous driving applications that improves robustness and accuracy under challenging conditions like occlusion and small targets. The method involves fusing features from cameras, millimeter wave radars, and lidar at multiple granularities to generate a robust and complete representation of the scene. The fusion is done in a multi-level manner where features from the same level (e.g., mid-level) of cameras and radars are combined. This allows leveraging the strengths of each modality at each level while mitigating their weaknesses. The fusion is followed by a recognition step using the fused representation.
7. Multi-Sensor Data Fusion System with Bird's-Eye View Feature Tensor Using Camera, Lidar, Millimeter Wave Radar, and Map Data
Suzhou Qingzhao Technology Co., Ltd., 2024
Multi-sensor data fusion system for autonomous driving that improves obstacle detection accuracy over long distances by combining camera, lidar, millimeter wave radar, and high-precision map data. Each source contributes a feature tensor representing object appearance, distance, speed, or the environment map, and the tensors are fused into a comprehensive bird's-eye-view feature tensor that remains information-rich regardless of distance. This improves obstacle detection and identification compared with lidar alone, especially for distant objects.
8. Multi-Modal Data Fusion Method for 3D Object Detection with Dual-Stage Lidar and Camera Enhancement
WUXI INTERNET OF THINGS INNOVATION CENTER CO LTD, ZHONGWEI WUCHUANG INTELLIGENT TECHNOLOGY (SHANGHAI) CO LTD, 2024
Multi-modal data fusion method for 3D object detection using cameras and lidar that improves detection accuracy and robustness in challenging scenarios. The method involves enhancing lidar point cloud data using camera images to obtain more complete and accurate object shapes. It uses two stages of enhancement: 1) enhancing point clouds with category information and instance centers from images, and 2) predicting complete object shapes based on enhanced point clouds. This second enhancement leverages the initial enhancement's semantic and other information to better characterize targets with occluded or incomplete data.
9. Neural Network-Based Fusion of Low-Beam Lidar and Camera Data for Obstacle Detection
GUILIN UNIVERSITY OF ELECTRONIC TECHNOLOGY, 2024
Fusing low-beam lidar and camera data for improved obstacle detection in low-light conditions. The method uses neural networks to process the sparse lidar point clouds and the camera images separately. The point clouds are converted into dense bird's-eye-view representations and fed through a backbone network for feature learning, and a separate neural network is trained for obstacle detection on the lidar data. Both branches detect obstacles in their respective inputs, and the lidar and camera detections are then fused to provide accurate obstacle locations, distances, and types.
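Turning a sparse point cloud into a dense bird's-eye-view grid before feeding it to a convolutional backbone can be sketched as a simple height rasterization; the grid extent and cell size below are illustrative assumptions:

```python
import numpy as np

def rasterize_bev(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0), cell=0.1):
    """Rasterize (N, 3) lidar points into a bird's-eye-view grid of maximum heights.

    Cells with no returns stay at 0, so the sparse cloud becomes a dense 2D image
    that a convolutional backbone can consume.
    """
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((h, w), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    for x, y, z in zip(xi[keep], yi[keep], points[keep, 2]):
        bev[y, x] = max(bev[y, x], z)        # keep the highest return per cell
    return bev
```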
10. Lidar and Camera Fusion System with Attention-Based Feature Synchronization for 3D Small Object Detection
DONGFENG MOTOR GROUP CO LTD, DONGFENG YUEXIANG TECHNOLOGY CO LTD, 2024
Small-target detection based on lidar and camera fusion for autonomous vehicles that improves accuracy, especially for small objects such as pedestrians. The method synchronizes lidar and camera data, fuses features with attention mechanisms, and performs 3D detection on the combined frames. This enhances feature extraction and reduces false positives and negatives compared with simply projecting 3D data onto 2D images.
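One plausible form of attention-based fusion is cross-attention in which camera features query lidar features. The numpy sketch below shows only the mechanism under that assumption, with random matrices standing in for learned weights; it is not the patent's actual network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(cam_feats, lidar_feats, d_k=64, rng=np.random.default_rng(0)):
    """Fuse (N, D) camera features with (M, D) lidar features via cross-attention.

    Camera features act as queries; lidar features supply keys and values.
    The projection matrices are random stand-ins for learned weights.
    """
    D = cam_feats.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((D, d_k)) / np.sqrt(D) for _ in range(3))
    q, k, v = cam_feats @ Wq, lidar_feats @ Wk, lidar_feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))                # (N, M) attention weights
    attended = attn @ v                                    # lidar context per camera feature
    return np.concatenate([cam_feats, attended], axis=1)  # fused representation
```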
11. Multi-Sensor Feature Fusion Method with Lidar-Camera Data and Dense Lidar Map Generation
DONGFENG YUEXIANG TECHNOLOGY CO LTD, 2024
Multi-sensor feature fusion method for vehicles using lidar and cameras that addresses the sparsity issue of lidar data and enriches fusion information. The method involves estimating candidate regions from lidar points and fusing them with image features. Steps include: obtaining lidar and camera data, rasterizing lidar points to create a dense lidar map, estimating candidate regions from lidar points, extracting image features, fusing candidate regions with image features, and passing the fused features to downstream tasks.
12. LIDAR System with Camera-Assisted Point Cloud Inconsistency Detection and Adjustment
INNOVIZ TECHNOLOGIES LTD, 2024
A LIDAR system that improves object detection by comparing point cloud data from LIDAR with images from a conventional camera to detect inconsistencies. If inconsistencies are found, the LIDAR point cloud is adjusted to provide a more accurate representation of objects in the scene. This helps mitigate errors in LIDAR point clouds caused by factors like reflections or occlusions. The comparison and adjustment are performed by the LIDAR system itself.
13. Point Cloud Filtering Method with Camera-Lidar Fusion for Dynamic Object Segmentation
HARBIN INSTITUTE OF TECHNOLOGY SHENZHEN (SHENZHEN INSTITUTE OF SCIENCE AND TECHNOLOGY INNOVATION), 2024
A point cloud filtering method for autonomous mobile robots operating in dynamic environments. The method uses fusion of camera and lidar sensors to improve robot positioning accuracy in dynamic scenes and reduce the impact of moving objects on mapping. It involves detecting moving objects in images, then filtering out corresponding points in the point cloud using clustering and segmentation to retain stationary environment points. This reduces false filtering of static points compared to directly filtering point clouds. By leveraging cameras for target detection and lidar for dense point clouds, it achieves efficient real-time filtering without deep learning.
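The cluster-level filtering idea, removing a lidar cluster only when most of its points project into a detected moving-object box, might look like the following sketch. The DBSCAN parameters and the 0.6 ratio are illustrative assumptions, and the pixel projections are expected to be computed beforehand:

```python
import numpy as np
from sklearn.cluster import DBSCAN   # assumed available for spatial clustering

def filter_dynamic_points(points, pixels, dynamic_boxes, ratio=0.6):
    """Drop lidar clusters that mostly project into moving-object detection boxes.

    points        : (N, 3) lidar points
    pixels        : (N, 2) their projections into the camera image
    dynamic_boxes : list of (x1, y1, x2, y2) boxes around detected moving objects
    """
    def in_any_box(uv):
        return any(b[0] <= uv[0] <= b[2] and b[1] <= uv[1] <= b[3] for b in dynamic_boxes)

    dynamic_flag = np.array([in_any_box(uv) for uv in pixels])
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)   # cluster the cloud
    keep = np.ones(len(points), dtype=bool)
    for lab in set(labels) - {-1}:
        idx = labels == lab
        if dynamic_flag[idx].mean() > ratio:   # cluster lies mostly inside a moving object
            keep[idx] = False
    return points[keep]
```

Filtering whole clusters rather than individual points is what reduces false filtering of static points near moving objects.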
14. Sensor Fusion Method for Multi-Target Detection Using Temporally and Spatially Aligned Camera and Lidar Data
CATARC TIANJIN AUTOMOTIVE ENGINEERING RESEARCH INSTITUTE CO LTD, 2024
Multi-target detection method using cameras and lidar that overcomes the limitations of either sensor alone by fusing data from both. The method aligns camera and lidar data in time, uses a camera-based object detection network to obtain a 2D frame, spatially aligns and preprocesses the lidar data, and clusters lidar points within the camera frame region. The lidar cluster and camera frame are then fused with Kalman filtering to produce an accurate 3D object contour.
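Fusing a lidar cluster centroid with a camera-derived position can be sketched as a standard Kalman measurement update applied sequentially to both measurements; the identity measurement model and the noise covariances passed in are illustrative assumptions:

```python
import numpy as np

def kalman_fuse(x_pred, P_pred, z_lidar, R_lidar, z_cam, R_cam):
    """Sequentially update a position state with lidar and camera measurements.

    x_pred, P_pred : predicted state (3,) and covariance (3, 3)
    z_lidar, z_cam : position measurements from each sensor
    R_lidar, R_cam : their measurement noise covariances
    """
    x, P = x_pred, P_pred
    for z, R in ((z_lidar, R_lidar), (z_cam, R_cam)):
        K = P @ np.linalg.inv(P + R)          # Kalman gain with identity measurement model
        x = x + K @ (z - x)                   # correct the state toward the measurement
        P = (np.eye(len(x)) - K) @ P          # shrink the covariance accordingly
    return x, P
```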
15. Multi-Sensor Image Fusion System with Depthwise-Separable Convolution and Spatial-Channel Attention
HARBIN INSTITUTE OF TECHNOLOGY WEIHAI, 2024
A method and system for accurate, real-time target perception that fuses images from multiple sensors such as cameras and lidars. A multi-sensor fusion strategy combines two-dimensional (2D) and three-dimensional (3D) detection models to improve accuracy and efficiency, using depthwise-separable convolution, feature fusion, and spatial-channel attention to process the combined feature maps. This leverages the strengths of both sensors to overcome the limitations of single sensors in harsh environments.
16. Multi-Sensor Fusion Method for Accurate Lidar Mapping in Glass Environments
SHANGHAI NORMAL UNIVERSITY, 2024
Robot mapping method that enables accurate mapping of robots in glass environments using multi-sensor fusion. The method involves using a visual camera to detect if there is glass in the environment, then checking if the difference between lidar and ultrasonic sensor readings exceeds a threshold. If so, it triggers a data fusion algorithm to compensate the lidar readings for the glass. This ensures the lidar data is accurate when detecting transparent glass. The compensated lidar data is then used with SLAM algorithms to construct a map of the glass environment.
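The trigger condition, compensating lidar only when the camera reports glass and the lidar disagrees with the ultrasonic range, can be sketched as simple threshold logic. The 0.3 m threshold and the choice to substitute the ultrasonic reading are illustrative assumptions, not the patent's exact fusion rule:

```python
def compensate_for_glass(lidar_range, ultrasonic_range, glass_detected, threshold=0.3):
    """Return the range value to feed into SLAM.

    Ultrasonic waves reflect off glass while lidar beams often pass through it,
    so a large disagreement in a glass region suggests the lidar reading is wrong.
    """
    if glass_detected and abs(lidar_range - ultrasonic_range) > threshold:
        return ultrasonic_range   # one plausible compensation: trust the ultrasonic reading
    return lidar_range
```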
17. Method for Identifying Erroneous LIDAR Data via Fusion with Optical Flow and Hash Table Spatial Correspondence Analysis
Toyota Motor Engineering & Manufacturing North America, Inc., Toyota Jidosha Kabushiki Kaisha, 2023
Detecting erroneous LIDAR data in robots to prevent false perception and avoid accidents. The method involves fusing LIDAR point clouds with optical flow images, generating a hash table from the fused data, and querying the hash table to measure spatial correspondence between LIDAR points and optical flow pixels. If the correspondence falls below thresholds, the LIDAR data is identified as erroneous. This allows detecting spoofed LIDAR or sensor issues that don't match the actual scene captured by the camera. The robot can then withhold or flag the erroneous LIDAR data to prevent incorrect decision-making based on it.
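A hash table keyed on quantized image coordinates is one way to measure spatial correspondence between projected LIDAR points and optical-flow pixels. The cell size and the 0.7 threshold below are illustrative assumptions:

```python
def correspondence_ratio(lidar_pixels, flow_pixels, cell=8):
    """Fraction of projected LIDAR points that land in a cell containing optical flow.

    Both inputs are (N, 2) pixel coordinates. Coordinates are bucketed into
    cell x cell bins, and the flow bins are stored in a hash table (a Python set).
    """
    flow_cells = {(int(u) // cell, int(v) // cell) for u, v in flow_pixels}
    hits = sum(((int(u) // cell, int(v) // cell) in flow_cells) for u, v in lidar_pixels)
    return hits / max(len(lidar_pixels), 1)

def lidar_looks_erroneous(lidar_pixels, flow_pixels, threshold=0.7):
    """Flag the LIDAR frame when too few points correspond to the camera's optical flow."""
    return correspondence_ratio(lidar_pixels, flow_pixels) < threshold
```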
18. Multi-LiDAR System with Fused Point Cloud Generation and Sensor Status Detection
SUM SMART YOUR MOBILITY, 2023
A multi-LiDAR system for more accurate and robust object recognition in autonomous vehicles and robots that mitigates issues like occlusion and sensor failures. The system uses multiple LiDAR sensors on the vehicle or robot to capture point cloud data of the surrounding environment. The data from all the sensors is merged to create a fused point cloud. This fused point cloud is then used to generate accurate and complete images of surrounding objects, even if some sensors are occluded or malfunctioning. The system also determines if any sensors are failing and displays that information alongside the images. The fusion and calibration steps involve converting the coordinate systems of the non-reference LiDARs to match the reference sensor's frame.
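Merging several LiDARs into a single reference frame is a rigid-transform-and-concatenate step. A minimal numpy sketch, assuming each extrinsic matrix comes from a prior calibration of the non-reference sensors against the reference sensor:

```python
import numpy as np

def fuse_point_clouds(reference_cloud, other_clouds, extrinsics):
    """Merge point clouds from multiple LiDARs into the reference sensor's frame.

    reference_cloud : (N, 3) points already in the reference frame
    other_clouds    : list of (Ni, 3) points from the non-reference LiDARs
    extrinsics      : list of (4, 4) transforms mapping each sensor frame
                      into the reference frame (from calibration)
    """
    merged = [reference_cloud]
    for cloud, T in zip(other_clouds, extrinsics):
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])   # homogeneous coordinates
        merged.append((T @ homo.T).T[:, :3])                   # transform into reference frame
    return np.vstack(merged)
```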
19. Unmanned Target Detection System with Camera and Solid-State Lidar Fusion Using Joint Calibration and Data Synchronization
CHINA UNIVERSITY OF MINING & TECHNOLOGY, JIANGSU HONGSHENG INTELLIGENT TECHNOLOGY RESEARCH INSTITUTE CO LTD, 2023
Accurate unmanned target detection in harsh environments like mines and factories using fusion of camera and solid-state lidar. The method involves joint calibration of the cameras and lidar to align their fields of view, synchronizing their time stamps, and fusing the synchronized data to provide high-precision 3D target detection in challenging environments where single cameras or lidar struggle.
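Time-stamp synchronization is commonly implemented as nearest-timestamp pairing within a tolerance; a small sketch, where the 20 ms tolerance is an illustrative assumption:

```python
import numpy as np

def pair_by_timestamp(lidar_stamps, camera_stamps, max_gap=0.02):
    """Pair each lidar frame with the closest camera frame in time.

    Returns (lidar_index, camera_index) pairs whose time difference is within
    max_gap seconds; frames with no close partner are dropped.
    """
    pairs = []
    cam = np.asarray(camera_stamps)
    for i, t in enumerate(lidar_stamps):
        j = int(np.argmin(np.abs(cam - t)))
        if abs(cam[j] - t) <= max_gap:
            pairs.append((i, j))
    return pairs
```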
20. Multi-Focal Camera and Lidar Fusion System with Horizontal Offset for Enhanced Obstacle Detection
SANY ZHIKUANG TECHNOLOGY CO LTD, 2023
Environment perception system for autonomous vehicles that improves obstacle detection by using cameras with different focal lengths and lidar. The system acquires images from a short-range camera and a long-range camera, and fuses the results with lidar points. This provides wider field of view from the short-range camera and longer range from the long-range camera. The fused data is used for obstacle detection, tracking, and 3D information. The cameras are installed at the same height with a horizontal offset. This allows the system to cover a larger area compared to using just the long-range camera.
Request the full report with complete details of these +135 patents for offline reading.
The patents examined here demonstrate developments in the field of LiDAR sensor fusion. Some inventions concentrate on enhancing fundamental capabilities, such as combining LiDAR data with inertial measurements to improve vehicle localization. Others analyze and aggregate sensor data with neural networks for 3D object detection and tracking.