LiDAR Sensor Fusion for Advanced Detection
Modern autonomous systems rely on multiple sensor streams to build comprehensive environmental models, with LiDAR data providing crucial depth and geometry information. Raw point cloud data, however, carries inherent uncertainties, from beam divergence at long range to partial occlusions and varying surface reflectivity, that degrade detection reliability at distances beyond 50 meters.
The fundamental challenge lies in combining temporally and spatially diverse sensor data streams while managing the computational overhead required for real-time perception.
This page brings together solutions from recent research—including probabilistic sensor fusion architectures, neural network approaches for multi-modal detection, synchronized multi-LiDAR configurations, and adaptive background filtering techniques. These and other approaches focus on improving detection accuracy and range while maintaining real-time processing capabilities required for autonomous navigation.
1. LiDAR and Camera Data Fusion System for Semantic Segmentation with 3D-to-2D Coordinate Conversion and Feature Integration
HYUNDAI MOTOR COMPANY, Kia Corporation, Konkuk University Industrial Cooperation Corp, 2024
Fusing data from a LiDAR and a camera for semantic segmentation of objects around a vehicle. The fusion involves converting 3D LiDAR point cloud coordinates to 2D coordinates using calibration parameters, matching points and pixels, extracting features from both, fusing the features, and updating the LiDAR feature map with the fused results. This allows accurate matching and fusion of features from the 3D LiDAR and 2D camera inputs for better object recognition.
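The 3D-to-2D coordinate conversion at the heart of this entry is standard pinhole-camera geometry. A minimal numpy sketch, with illustrative names and an assumed pre-calibrated extrinsic and intrinsic matrix (the patent's actual calibration pipeline is not specified):

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project (N, 3) lidar points into pixel coordinates.

    points      : (N, 3) points in the lidar frame
    T_cam_lidar : (4, 4) extrinsic transform, lidar frame -> camera frame
    K           : (3, 3) camera intrinsic matrix
    Returns (M, 2) pixel coordinates plus the indices of the M points
    in front of the camera, ready for point-pixel matching.
    """
    # Homogeneous transform into the camera frame
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the image plane
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]

    # Pinhole projection: apply intrinsics, then divide by depth
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, np.flatnonzero(in_front)
```

Once each point has a pixel coordinate, image features at that pixel can be gathered, fused with the point's features, and written back into the LiDAR feature map.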
2. Fusion-Based Occlusion Classification System for LiDAR Sensors Using Camera and LiDAR Data
KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION, 2024
Classifying occlusion of LiDAR sensors using fusion of camera and LiDAR data to improve reliability of LiDAR perception in autonomous vehicles. The method involves fusing camera images and LiDAR point clouds to extract reflection intensity and distance information. This fused data is fed into a pre-trained neural network to extract features and classify if the LiDAR is occluded.
3. Sensor Fusion Method for Object Detection Using Single-Line Lidar and Monocular Camera Data
HUBEI SANJIANG AEROSPACE HONGFENG CONTROL CO LTD, 2024
Fusing single-line lidar and monocular camera data for object detection and 3D perception at lower cost than multi-line lidar. The method applies clustering to the single-line lidar point cloud and deep-learning object detection to the monocular camera image. Joint calibration aligns the sensors' coordinate systems, and bounding boxes from both sources are projected into each other's spaces. Intersection-over-union (IoU) then determines whether the boxes represent the same object; if so, fused 3D information is output. This reduces sensor cost and system complexity while still providing 3D perception.
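The box-association step is a conventional IoU gate. A small self-contained sketch; greedy matching is one plausible assignment strategy, since the patent does not specify one:

```python
def iou_2d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_detections(lidar_boxes, camera_boxes, iou_thresh=0.5):
    """Greedy association: pair each projected lidar-cluster box with
    the unused camera detection of highest IoU above the threshold."""
    matches, used = [], set()
    for i, lb in enumerate(lidar_boxes):
        best_j, best_iou = -1, iou_thresh
        for j, cb in enumerate(camera_boxes):
            if j in used:
                continue
            iou = iou_2d(lb, cb)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j, best_iou))
    return matches
```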
4. 3D Road Target Detection Method Utilizing Direct Lidar Point Cloud Processing with Outlier Filtering and RANSAC Estimation
Shanghai Institute of Technology, 2024
3D road target detection using lidar point clouds without converting to 2D images. The method directly processes the raw 3D point cloud data to improve accuracy compared to 2D projections. It involves filtering outliers, estimating target parameters like size and center, and refining the estimates using RANSAC. This allows robust and accurate 3D detection of road targets using lidar point clouds, rather than losing depth and coordinate information by mapping to 2D.
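RANSAC refinement of this kind is well established. A compact numpy sketch of a RANSAC plane fit; parameter values are illustrative, and the patent's size/center estimation would operate on the surviving inliers:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Fit a plane a*x + b*y + c*z + d = 0 to 3D points with RANSAC.

    Returns (plane, inlier_mask), where plane = (a, b, c, d) with a
    unit normal (a, b, c)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count inliers within the distance threshold
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (*normal, d)
    return best_plane, best_inliers
```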
5. Multi-Modal Sensor Fusion System for Vehicle-Road Collaboration with Attention Mechanism and Differential Transmission
FUZHOU UNIVERSITY, 2024
Low-cost, high-performance vehicle-road collaboration using multi-modal sensors to overcome the limitations of single-sensor autonomous driving. Vehicles carry cheaper, lower-precision lidar while roadside sensors provide wider, complementary coverage. The method fuses lidar and camera features from both sources using an attention mechanism, corrects the fused features, and selectively transmits regions based on a difference map to reduce communication bandwidth.
6. Multi-level Feature Fusion of Heterogeneous Sensing Data for Target Recognition in Autonomous Driving
Tongji University, 2024
Multi-source heterogeneous sensing data fusion and target recognition for autonomous driving applications that improves robustness and accuracy under challenging conditions like occlusion and small targets. The method involves fusing features from cameras, millimeter wave radars, and lidar at multiple granularities to generate a robust and complete representation of the scene. The fusion is done in a multi-level manner where features from the same level (e.g., mid-level) of cameras and radars are combined. This allows leveraging the strengths of each modality at each level while mitigating their weaknesses. The fusion is followed by a recognition step using the fused representation.
7. Multi-Sensor Data Fusion System with Bird's-Eye View Feature Tensor Using Camera, Lidar, Millimeter Wave Radar, and Map Data
Suzhou Qingzhou Technology Co., Ltd., 2024
Multi-sensor data fusion system for autonomous driving that improves obstacle detection accuracy over long distances by combining camera, lidar, millimeter wave radar, and high-precision map data. Each source yields a feature tensor representing object appearance, distance, speed, or the surrounding environment map. These tensors are fused to create a comprehensive bird's-eye view feature tensor with rich data regardless of distance, improving obstacle detection and identification compared to lidar alone, especially for distant objects.
8. Multi-Modal Data Fusion Method for 3D Object Detection with Dual-Stage Lidar and Camera Enhancement
WUXI INTERNET OF THINGS INNOVATION CENTER CO LTD, ZHONGWEI WUCHUANG INTELLIGENT TECHNOLOGY (SHANGHAI) CO LTD, 2024
Multi-modal data fusion method for 3D object detection using cameras and lidar that improves detection accuracy and robustness in challenging scenarios. The method involves enhancing lidar point cloud data using camera images to obtain more complete and accurate object shapes. It uses two stages of enhancement: 1) enhancing point clouds with category information and instance centers from images, and 2) predicting complete object shapes based on enhanced point clouds. This second enhancement leverages the initial enhancement's semantic and other information to better characterize targets with occluded or incomplete data.
9. Neural Network-Based Fusion of Low-Beam Lidar and Camera Data for Obstacle Detection
Guilin University of Electronic Technology, 2024
Fusing low-beam lidar and camera data for improved obstacle detection in low-light conditions. The method processes the sparse lidar point clouds and the camera images with separate neural networks: the point clouds are rasterized into dense bird's-eye views and fed through a backbone network for feature learning, while a second network is trained to detect obstacles in the camera images. Each network detects obstacles in its respective input, and the lidar and camera detections are then fused to provide accurate obstacle locations, distances, and types.
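The bird's-eye-view rasterization that densifies a sparse low-beam sweep can be sketched directly; the channel definitions here (max height, max intensity, normalized density) are common choices rather than necessarily the patent's:

```python
import numpy as np

def points_to_bev(points, x_range=(0, 70), y_range=(-35, 35), cell=0.1):
    """Rasterize an (N, 4) point cloud (x, y, z, intensity) into a
    dense bird's-eye-view grid with height, intensity, and density
    channels, one common way to densify sparse low-beam lidar."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((3, h, w), dtype=np.float32)

    # Keep points inside the grid and compute their cell indices
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    col = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    row = ((pts[:, 1] - y_range[0]) / cell).astype(int)

    # Channel 0: max height (grid starts at zero, so shift z upstream
    # if heights below the sensor matter), channel 1: max intensity,
    # channel 2: point count normalized to [0, 1]
    np.maximum.at(bev[0], (row, col), pts[:, 2])
    np.maximum.at(bev[1], (row, col), pts[:, 3])
    np.add.at(bev[2], (row, col), 1.0)
    bev[2] = np.minimum(bev[2] / 16.0, 1.0)
    return bev
```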
10. Lidar and Camera Fusion System with Attention-Based Feature Synchronization for 3D Small Object Detection
DONGFENG MOTOR GROUP CO LTD, DONGFENG YUEXIANG TECHNOLOGY CO LTD, 2024
A small-target detection method, system, and medium based on lidar and camera fusion for autonomous vehicles that improves accuracy, especially for small objects like pedestrians. The method synchronizes lidar and camera data, fuses features using attention mechanisms, and performs 3D detection on the combined frames. This enhances internal feature extraction and reduces false positives and negatives compared to simply projecting 3D data onto 2D images.
11. Multi-Sensor Feature Fusion Method with Lidar-Camera Data and Dense Lidar Map Generation
DONGFENG YUEXIANG TECHNOLOGY CO LTD, 2024
Multi-sensor feature fusion method for vehicles using lidar and cameras that addresses the sparsity issue of lidar data and enriches fusion information. The method involves estimating candidate regions from lidar points and fusing them with image features. Steps include: obtaining lidar and camera data, rasterizing lidar points to create a dense lidar map, estimating candidate regions from lidar points, extracting image features, fusing candidate regions with image features, and passing the fused features to downstream tasks.
12. LIDAR System with Camera-Assisted Point Cloud Inconsistency Detection and Adjustment
INNOVIZ TECH LTD, INNOVIZ TECHNOLOGIES LTD, 2024
A LIDAR system that improves object detection by comparing point cloud data from LIDAR with images from a conventional camera to detect inconsistencies. If inconsistencies are found, the LIDAR point cloud is adjusted to provide a more accurate representation of objects in the scene. This helps mitigate errors in LIDAR point clouds caused by factors like reflections or occlusions. The comparison and adjustment are performed by the LIDAR system itself.
13. Point Cloud Filtering Method with Camera-Lidar Fusion for Dynamic Object Segmentation
HARBIN INSTITUTE OF TECHNOLOGY SHENZHEN, 2024
A point cloud filtering method for autonomous mobile robots operating in dynamic environments. The method uses fusion of camera and lidar sensors to improve robot positioning accuracy in dynamic scenes and reduce the impact of moving objects on mapping. It involves detecting moving objects in images, then filtering out corresponding points in the point cloud using clustering and segmentation to retain stationary environment points. This reduces false filtering of static points compared to directly filtering point clouds. By leveraging cameras for target detection and lidar for dense point clouds, it achieves efficient real-time filtering without deep learning.
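A minimal sketch of the filtering step, assuming moving objects arrive as 2D image boxes and each lidar point already has an image projection; the depth gate below is a stand-in for the patent's clustering/segmentation refinement that protects static background points seen through a detection box:

```python
import numpy as np

def filter_dynamic_points(points, uv, depths, dyn_boxes, depth_gate=1.5):
    """Split a point cloud into static and dynamic parts.

    points   : (N, 3) lidar points
    uv       : (N, 2) their pixel projections
    depths   : (N,) point ranges in the camera frame
    dyn_boxes: list of (x1, y1, x2, y2) boxes around moving objects
    """
    dynamic = np.zeros(len(points), dtype=bool)
    for x1, y1, x2, y2 in dyn_boxes:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        if not inside.any():
            continue
        # Depth gating: only points near the object's dominant depth
        # are removed, so background surfaces visible inside the box
        # survive (a crude proxy for the patent's cluster/segment step).
        med = np.median(depths[inside])
        dynamic |= inside & (np.abs(depths - med) < depth_gate)
    return points[~dynamic], points[dynamic]
```

The static points feed mapping and localization; the dynamic points are discarded or tracked separately.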
14. Sensor Fusion Method for Multi-Target Detection Using Temporally and Spatially Aligned Camera and Lidar Data
CATARC TIANJIN AUTOMOTIVE ENGINEERING RESEARCH INSTITUTE CO LTD, 2024
Multi-target detection method using cameras and lidar that overcomes the limitations of using just cameras or lidar alone by fusing the data from both sensors. The method involves aligning camera and lidar data in time, using a camera-based object detection network to get a 2D frame, spatially aligning lidar data, preprocessing it, and clustering lidar points in the camera frame region. Then, fusing the lidar cluster and camera frame using Kalman filtering to get an accurate 3D object contour.
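The Kalman-filter fusion can be illustrated with a plain measurement update that blends the two sensors according to their noise; the state, measurement model, and covariance values below are invented for the example:

```python
import numpy as np

def kalman_fuse(x_pred, P_pred, z, R):
    """One Kalman measurement update: fuse predicted state x_pred
    (covariance P_pred) with measurement z (covariance R). An identity
    measurement model is assumed for brevity."""
    S = P_pred + R                       # innovation covariance
    K = P_pred @ np.linalg.inv(S)        # Kalman gain
    x = x_pred + K @ (z - x_pred)        # updated state
    P = (np.eye(len(x_pred)) - K) @ P_pred
    return x, P

# Fuse a lidar cluster centroid (precise in both axes) with a
# camera-derived position (precise bearing, coarse range):
x0 = np.array([20.0, 1.0])                 # prior [x, y] in meters
P0 = np.diag([4.0, 4.0])
x1, P1 = kalman_fuse(x0, P0, np.array([20.4, 1.3]), np.diag([0.05, 0.1]))
x2, P2 = kalman_fuse(x1, P1, np.array([19.5, 1.1]), np.diag([2.0, 0.05]))
```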
15. Multi-Sensor Image Fusion System with Depth-Separable Convolution and Spatial-Channel Attention
HARBIN INSTITUTE OF TECHNOLOGY WEIHAI, 2024
A method and system for accurate and real-time target perception using fusion of images from multiple sensors like cameras and lidars. The method involves a multi-sensor fusion strategy that combines two-dimensional (2D) and three-dimensional (3D) detection models to improve accuracy and efficiency. It uses techniques like depth-separable convolution, feature fusion, and spatial-channel attention to process the combined feature maps. This allows leveraging the benefits of both sensors to overcome limitations of single sensors in harsh environments.
16. Multi-Sensor Fusion Method for Accurate Lidar Mapping in Glass Environments
SHANGHAI NORMAL UNIVERSITY, 2024
Robot mapping method that enables accurate mapping of robots in glass environments using multi-sensor fusion. The method involves using a visual camera to detect if there is glass in the environment, then checking if the difference between lidar and ultrasonic sensor readings exceeds a threshold. If so, it triggers a data fusion algorithm to compensate the lidar readings for the glass. This ensures the lidar data is accurate when detecting transparent glass. The compensated lidar data is then used with SLAM algorithms to construct a map of the glass environment.
17. Method for Identifying Erroneous LIDAR Data via Fusion with Optical Flow and Hash Table Spatial Correspondence Analysis
Toyota Motor Engineering & Manufacturing North America, Inc., Toyota Jidosha Kabushiki Kaisha, 2023
Detecting erroneous LIDAR data in robots to prevent false perception and avoid accidents. The method involves fusing LIDAR point clouds with optical flow images, generating a hash table from the fused data, and querying the hash table to measure spatial correspondence between LIDAR points and optical flow pixels. If the correspondence falls below thresholds, the LIDAR data is identified as erroneous. This allows detecting spoofed LIDAR or sensor issues that don't match the actual scene captured by the camera. The robot can then withhold or flag the erroneous LIDAR data to prevent incorrect decision-making based on it.
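A rough sketch of the hash-table correspondence test, using coarse image cells as hash keys; the cell size and decision threshold are illustrative, not from the patent:

```python
import numpy as np

def spatial_correspondence(lidar_uv, flow_uv, cell=8):
    """Measure how well projected lidar points line up with optical-flow
    pixels by hashing both into coarse image cells.

    lidar_uv: (N, 2) pixel projections of lidar points
    flow_uv : (M, 2) pixel locations with significant optical flow
    Returns the fraction of lidar points whose cell also contains flow.
    """
    # Hash set of image cells occupied by optical flow
    flow_cells = {(int(u // cell), int(v // cell)) for u, v in flow_uv}
    hits = sum((int(u // cell), int(v // cell)) in flow_cells
               for u, v in lidar_uv)
    return hits / max(len(lidar_uv), 1)

# If correspondence falls below a threshold, the sweep is flagged as
# suspect (spoofed or faulty) and withheld from downstream planning:
# if spatial_correspondence(lidar_uv, flow_uv) < 0.3: flag_sweep()
```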
18. Multi-LiDAR System with Fused Point Cloud Generation and Sensor Status Detection
SUM SMART YOUR MOBILITY, 2023
A multi-LiDAR system for more accurate and robust object recognition in autonomous vehicles and robots that mitigates issues like occlusion and sensor failures. The system uses multiple LiDAR sensors on the vehicle or robot to capture point cloud data of the surrounding environment. The data from all the sensors is merged to create a fused point cloud. This fused point cloud is then used to generate accurate and complete images of surrounding objects, even if some sensors are occluded or malfunctioning. The system also determines if any sensors are failing and displays that information alongside the images. The fusion and calibration steps involve converting the coordinate systems of the non-reference LiDARs to match the reference sensor's frame.
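The fusion and calibration step reduces to rigid transforms into the reference sensor's frame. A short numpy sketch, with an equally simple stand-in for the sensor-status check (the point-count heuristic is an assumption, not the patent's method):

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Merge point clouds from several LiDARs into the reference
    sensor's frame.

    clouds    : list of (N_i, 3) arrays, one per sensor
    extrinsics: list of (4, 4) transforms, sensor-i frame -> reference
                frame (identity for the reference LiDAR itself)
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((T @ pts_h.T).T[:, :3])
    return np.vstack(merged)

def sensor_status(clouds, min_points=1000):
    """Crude health check: a sensor returning too few points in a
    sweep is flagged as occluded or failing."""
    return [len(pts) >= min_points for pts in clouds]
```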
19. Unmanned Target Detection System with Camera and Solid-State Lidar Fusion Using Joint Calibration and Data Synchronization
CHINA UNIVERSITY OF MINING & TECHNOLOGY, JIANGSU HONGSHENG INTELLIGENT TECHNOLOGY RESEARCH INSTITUTE CO LTD, 2023
Accurate unmanned target detection in harsh environments like mines and factories using fusion of camera and solid-state lidar. The method involves joint calibration of the cameras and lidar to align their fields of view, synchronizing their time stamps, and fusing the synchronized data to provide high-precision 3D target detection in challenging environments where single cameras or lidar struggle.
20. Multi-Focal Camera and Lidar Fusion System with Horizontal Offset for Enhanced Obstacle Detection
SANY ZHIKUANG TECHNOLOGY CO LTD, 2023
Environment perception system for autonomous vehicles that improves obstacle detection by using cameras with different focal lengths and lidar. The system acquires images from a short-range camera and a long-range camera, and fuses the results with lidar points. This provides wider field of view from the short-range camera and longer range from the long-range camera. The fused data is used for obstacle detection, tracking, and 3D information. The cameras are installed at the same height with a horizontal offset. This allows the system to cover a larger area compared to using just the long-range camera.
21. Method for Vehicle Detection via Lidar-Camera Data Fusion with Synchronization and Algorithmic Fusion Techniques
HEFEI UNIVERSITY OF TECHNOLOGY, 2023
Vehicle detection method using fusion of lidar and camera data to improve accuracy and reliability of automated driving perception systems. The method involves synchronizing lidar and camera sensors, preprocessing point cloud and image data, fusing lidar and camera detections using intersection-union and longest common subsequence algorithms, and trajectory tracking to generate multi-dimensional perception data.
22. Multi-Sensor Fusion Obstacle Detection Method with Grid Coordinate Feature Projection
ZHEJIANG UNIVERSITY, 2023
Efficient obstacle detection method for autonomous vehicles using multi-sensor fusion under grid coordinate system. The method involves extracting features from lidar point cloud and multi-view images, projecting them to grid map coordinates, fusing the features, and passing the mixed features through a semantic segmentation network to obtain obstacle detection results in the grid map. This allows accurate grid-based obstacle perception using lidar and cameras without relying on object identification.
23. Multi-Modal Fusion 3D Object Detection Using Voxel Grid and Image Feature Stacking
ANHUI HAIBO INTELLIGENT TECHNOLOGY CO LTD, 2023
Multi-modal fusion 3D object detection for autonomous vehicles that improves accuracy and real-time performance compared to using just lidar or just cameras. The method involves fusing lidar point clouds and camera images to detect objects in a scene. It leverages the complementary features of lidar and camera data to improve detection quality and reduce redundancy. The fusion involves converting the lidar points to a voxel grid representation and stacking it with the image features. This fused representation is then fed into a neural network for object detection. The voxel grid provides geometric structure and sparsity, while the images provide color and texture. The fused representation has better quality and reduces redundant calculations compared to stacking raw lidar and camera data.
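Voxel conversion is the standard preprocessing here. A minimal sketch; the voxel size and per-voxel point cap are illustrative:

```python
import numpy as np

def voxelize(points, voxel=0.2, max_pts=32):
    """Group (N, 3) points into voxels, keeping at most max_pts per
    voxel. Returns a dict mapping voxel index -> (K, 3) point array;
    the sparse dict mirrors the sparsity a voxel backbone exploits."""
    keys = np.floor(points / voxel).astype(np.int64)
    voxels = {}
    for key, pt in zip(map(tuple, keys), points):
        bucket = voxels.setdefault(key, [])
        if len(bucket) < max_pts:
            bucket.append(pt)
    return {k: np.asarray(v) for k, v in voxels.items()}
```

Per-voxel features (e.g., point means) would then be stacked with image features before entering the detection network.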
24. Scene Depth Completion via Conditional Variational Autoencoders with Geometric Guidance and Dynamic Graph Message Dissemination
Nanjing University of Aeronautics and Astronautics, 2023
Scene depth completion method for autonomous driving using conditional variational autoencoders and geometric guidance to improve depth perception from sparse lidar data. The method involves using a conditional variational autoencoder to learn feature distribution from dense depth maps and guide color images and sparse depth maps to generate more valuable depth features. It also uses point cloud features to capture spatial structure and provide auxiliary information. A dynamic graph message dissemination module integrates color and point features for accurate depth completion prediction.
25. Multi-Frame Lidar Point Cloud Preprocessing with Residual Depth Image Generation for Dynamic Object Detection
MOER XIANCHENG INTELLIGENT TECHNOLOGY (BEIJING) CO LTD, 2023
Improving dynamic object detection using lidar for autonomous vehicles by leveraging multi-frame lidar data. The method involves preprocessing lidar point cloud data to create depth images where each channel represents attribute info like distance, reflection, position. Residual processing is done on multiple frames to generate new depth images with combined attribute data. These residual depth sets are then fed into a trained dynamic object detection model. This allows using the full lidar environmental info instead of just semantic categories for more accurate and robust detection.
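A sketch of the residual construction, assuming the range images have already been ego-motion aligned into a common viewpoint upstream (the patent's multi-attribute channels are reduced here to distance alone):

```python
import numpy as np

def residual_depth_images(range_frames, current=-1):
    """Build residual channels from a window of aligned lidar range
    images: each channel is the per-pixel depth change between the
    current frame and one earlier frame. Large residuals suggest
    dynamic objects; static structure cancels out.

    range_frames: (T, H, W) stack of ego-motion-aligned range images,
                  with 0 marking pixels that received no return.
    """
    cur = range_frames[current]
    residuals = []
    for prev in range_frames[:current]:
        valid = (cur > 0) & (prev > 0)     # both frames saw a return
        res = np.zeros_like(cur)
        res[valid] = cur[valid] - prev[valid]
        residuals.append(res)
    return np.stack(residuals)             # (T-1, H, W) residual set
```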
26. Three-Dimensional Vehicle Detection via Point Cloud and Image Data Fusion Using Shared Encoder-Decoder Network
GUILIN UNIVERSITY OF ELECTRONIC TECHNOLOGY, 2023
Three-dimensional vehicle detection method for autonomous vehicles that combines point cloud and image data to improve accuracy in complex environments with occlusion and lighting variations. The method involves fusing point cloud and image features using a shared encoder-decoder network. It uses a feature matching step to align and project point cloud points onto the image. This allows joint processing of 3D point cloud and 2D image features for more accurate vehicle detection. The method also improves non-maximum suppression by adaptively adjusting the threshold based on vehicle density to reduce false positives.
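The density-adaptive non-maximum suppression can be sketched as a greedy NMS whose threshold grows with local overlap; the density proxy and gain below are illustrative choices, not the patent's exact rule:

```python
import numpy as np

def adaptive_nms(boxes, scores, base_thresh=0.5, density_gain=0.3):
    """Greedy NMS whose suppression threshold rises with local density,
    so tightly packed vehicles are not over-suppressed.

    boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,).
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i, rest = order[0], order[1:]
        keep.append(int(i))
        ix1 = np.maximum(x1[i], x1[rest]); iy1 = np.maximum(y1[i], y1[rest])
        ix2 = np.minimum(x2[i], x2[rest]); iy2 = np.minimum(y2[i], y2[rest])
        inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
        ious = inter / (areas[i] + areas[rest] - inter + 1e-9)
        # Density proxy: fraction of remaining boxes overlapping this one
        density = float(np.mean(ious > 0.1)) if ious.size else 0.0
        thresh = min(base_thresh + density_gain * density, 0.9)
        order = rest[ious <= thresh]
    return keep
```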
27. Camera-LiDAR Fusion System with Point Pruning and Segmentation for Enhanced Object Detection
FORD GLOBAL TECHNOLOGIES LLC, 2023
Camera-LiDAR fusion object detection system and method for autonomous vehicles that improves accuracy and reduces false positives compared to using just LiDAR or just cameras for object detection. The fusion involves matching LiDAR points to pixels in images and using the combined data to detect objects. Techniques like point pruning, local segmentation, merging, and filtering are used to handle challenges like alignment, projection errors, and ambiguity when objects are close together. This allows better separation and merging of overlapping objects in the fused point cloud.
28. Point Cloud Data Refinement Method Using Laser Irradiation Range and Angle Analysis
SONY SEMICONDUCTOR SOLUTIONS CORP, 2023
A method to accurately use point cloud data obtained from sensors like LiDAR in image processing applications like object detection for autonomous vehicles. The method identifies and removes false points in the point cloud, points that do not correspond to surfaces actually illuminated by the laser, by analyzing the laser irradiation range and angle information from the LiDAR. Removing these false points ensures that only genuine object returns feed further processing such as object recognition, improving reliability and accuracy when the sensor and camera views do not perfectly align.
29. Lidar Data Augmentation via Video Segmentation with 2D Point Sampling and Depth Integration
KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION, 2023
Lidar data augmentation method using video segmentation to improve 3D object detection and classification performance from lidar sensors. The method involves acquiring an image frame from a camera and lidar data from the same area. It then generates segmentation masks for objects in the image, samples 2D points from the masks, and uses them to augment the lidar data. This involves projecting lidar points onto the masks, adding depth to the 2D points, and merging them. It fills gaps, removes errors, and projects lidar onto masks instead of using 3D convolutions. This improves 3D object detection from sparse lidar by leveraging dense camera data.
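A condensed sketch of the sampling-and-depth step, using nearest-projected-point depth assignment as a stand-in for the patent's depth-map generation (brute-force nearest neighbor, adequate for illustration):

```python
import numpy as np

def densify_with_mask(mask, lidar_uv, lidar_depth, K, n_samples=500,
                      rng=None):
    """Sample 2D points inside a segmentation mask, give each the depth
    of its nearest projected lidar point, and back-project to 3D.

    mask       : (H, W) boolean object mask from video segmentation
    lidar_uv   : (N, 2) projections of lidar points into the image
    lidar_depth: (N,) their depths in the camera frame
    K          : (3, 3) camera intrinsics
    Returns (S, 3) synthetic points to merge with the sparse sweep.
    """
    rng = rng or np.random.default_rng(0)
    vs, us = np.nonzero(mask)
    idx = rng.choice(len(us), min(n_samples, len(us)), replace=False)
    samples = np.stack([us[idx], vs[idx]], axis=1).astype(float)

    # Nearest projected lidar point supplies the depth (O(S*N) search)
    d2 = ((samples[:, None, :] - lidar_uv[None, :, :]) ** 2).sum(-1)
    depth = lidar_depth[np.argmin(d2, axis=1)]

    # Back-project: X = depth * K^-1 [u, v, 1]^T
    ones = np.ones((len(samples), 1))
    rays = (np.linalg.inv(K) @ np.hstack([samples, ones]).T).T
    return rays * depth[:, None]
```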
30. Method for Enhanced Object Detection in Autonomous Vehicles via LiDAR-Camera Data Integration with Probabilistic Point Transfer and Segment Merging
FORD GLOBAL TECHNOLOGIES LLC, 2023
Improving object detection for autonomous vehicles using LiDAR and cameras. The method involves transferring LiDAR point cloud detections to camera images, segmenting the points into merged segments, filtering out false positives, and merging segments. This allows better object detection using the complementary information from LiDAR and cameras. The transfer involves estimating probabilities for matching points to images based on uncertainty and confidence. Segmentation groups similar points together and filters out false matches. Merging segments improves object detection accuracy compared to point clustering.
31. Lidar Data Augmentation via Video-Segmented 2D Point Sampling and Projection
KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION, 2023
A lidar data augmentation technique using video segmentation to improve 3D object detection and classification from lidar sensors. The method involves obtaining an image from a camera and lidar data from a lidar sensor in sync. It then generates segmentation masks for objects in the image, samples 2D points from the masks, and augments lidar data by projecting the lidar points onto the masked image areas. This generates a depth map, adds depth to the sampled 2D points, and combines them with the lidar data. Post-processing fills gaps and removes errors. This provides augmented lidar data with higher density for objects of any size or distance.
32. Sensor Fusion System with Lidar and Camera for 3D Grid-Based Road Surface Prediction
NANJING UNIVERSITY OF SCIENCE & TECHNOLOGY, 2023
Road surface prediction for autonomous driving that uses a combination of lidar and camera sensors to improve accuracy and robustness compared to using just one sensor type. The method converts lidar point clouds and camera images into bird's-eye-view (BEV) grid representations, then fuses the BEV features from both sensors using Kalman filtering to predict road surface conditions.
33. LiDAR-Based Motion Detection via Depth Comparison Using Machine Learning with Image Projection
NVIDIA Corporation, 2023
Detecting static and dynamic features from LiDAR in autonomous machine applications using machine learning techniques to identify motion without requiring prior knowledge of object types. The method involves generating input channels for a machine learning model by projecting a current LiDAR range image to the coordinate system of a prior image, comparing depth values, and generating a comparison image encoding changes in depth between frames. This allows detecting moving points with different depth values in the same location between frames.
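A sketch of the comparison-image construction, assuming a known relative ego pose and an equiangular range-image layout (the vertical field-of-view bounds and thresholds are illustrative):

```python
import numpy as np

def depth_comparison_image(prev_points, cur_range, T_cur_prev,
                           fov_up=np.deg2rad(3.0),
                           fov_down=np.deg2rad(-25.0)):
    """Project the previous sweep's points into the current sweep's
    spherical range image and encode per-pixel depth change.

    prev_points: (N, 3) previous sweep in its own frame
    cur_range  : (H, W) current range image (0 = no return)
    T_cur_prev : (4, 4) relative ego pose, previous frame -> current
    """
    H, W = cur_range.shape
    pts_h = np.hstack([prev_points, np.ones((len(prev_points), 1))])
    pts = (T_cur_prev @ pts_h.T).T[:, :3]

    # Spherical projection into the current range-image grid
    r = np.linalg.norm(pts, axis=1)
    yaw = np.arctan2(pts[:, 1], pts[:, 0])
    pitch = np.arcsin(pts[:, 2] / np.maximum(r, 1e-6))
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int)
    ok = (v >= 0) & (v < H)

    cmp_img = np.zeros((H, W), dtype=np.float32)
    seen = cur_range[v[ok], u[ok]]
    hit = seen > 0
    # Positive values mean the surface moved closer between sweeps
    cmp_img[v[ok][hit], u[ok][hit]] = r[ok][hit] - seen[hit]
    return cmp_img
```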
34. Autonomous Vehicle Positioning System Integrating Inertial Measurements with LiDAR Point Cloud Data
APOLLO INTELLIGENT DRIVING TECHNOLOGY (BEIJING) CO., LTD., 2023
Accurate, robust positioning for autonomous vehicles, using data from onboard sensors without relying on external maps. The method integrates inertial measurements with LiDAR point clouds and local maps to determine a vehicle's position. It compensates vehicle motion using inertial data, matches LiDAR points to maps, and probabilistically combines the data sources to optimize positioning.
35. Lidar-Based Object Classification System Utilizing 2D Projection and Convolutional Neural Networks
HYUNDAI MOTOR CO, KIA CORP, 2023
Vehicle lidar system and object classification method that accurately classifies objects detected using lidar. The method involves projecting the 3D lidar point cloud onto 2D images to extract shape-based features. Grids are placed over the 2D images and physical quantities like vertical distances are stored. CNNs process this 2D feature data to classify objects. This reduces computation compared to 3D CNNs.
36. Multimodal Sensor Data Integration for 3D Object Localization Using Neural Network-Generated Probability Maps
Zoox, Inc., 2023
Associating object detection in a 2D image with 3D point cloud data from multiple sensors to accurately locate and track objects in 3D space. The technique uses neural networks to analyze subsets of sensor data from different modalities associated with object detection. It combines the network outputs to generate probability maps indicating which points in the point clouds are likely to belong to the detected object. This allows associating the object detection with 3D data and generating a more accurate 3D region of interest.
37. Multi-Vehicle Sensor Data Fusion System for Enhanced Depth Map Generation via End-to-End Neural Network
TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC., 2023
Improving autonomous driving systems by leveraging sensor data from multiple vehicles to generate more accurate and detailed depth maps. The method involves fusing sensor data from a local vehicle's sensor, like a camera, with sensor data from a nearby remote vehicle's sensor, like a lidar. A machine learning algorithm estimates the relative pose of the remote sensor relative to the local sensor. This estimated pose is then used to combine the local sensor's depth map with the remote sensor's sparse depth map to generate a fused, more detailed depth map. This fused depth map is then used for navigation. The fusion process is implemented as an end-to-end trainable neural network.
38. Multi-Sensor LiDAR Data Fusion for Enhanced Point Cloud Object Labeling
TuSimple, Inc., 2023
Combining LiDAR data from multiple sensors on an autonomous vehicle to improve perception and object identification. The technique involves scanning and combining the point clouds from multiple LiDAR sensors to create a combined point cloud. This combined point cloud is then processed to assign labels to points based on camera images. Labels indicate objects in the environment. By combining LiDAR data from multiple angles, the technique improves object detection and identification compared to using just one sensor.
39. System for Synchronizing Rotating LiDAR Sensors with Adjustable Scanning Directions and Rotations
Waymo LLC, 2023
Syncing multiple rotating LiDAR sensors on a vehicle to capture overlapping environments by adjusting their scanning directions and rotations. The system accounts for differences in mounting positions and phases and aligns their scans to combine the sensor data into a coherent representation. This involves adjusting the individual sensor's rotation to scan the same targets simultaneously.
40. Integrated Imaging Device with Co-Located LiDAR and Thermal Photodetectors on Single Focal Plane Array
OWL AUTONOMOUS IMAGING, INC., 2023
An integrated imaging device combining LiDAR and thermal imaging to overcome limitations of conventional camera and LiDAR systems for applications like autonomous vehicles and military reconnaissance. The key features are: 1) co-locating LiDAR and thermal photodetectors on a single focal plane array (FPA) to correlate object detection between the two sensing modes; 2) using separate wavebands for LiDAR (e.g., NIR) and thermal (e.g., LWIR) to avoid interference; 3) configurable readout circuitry that switches the FPA between 2D thermal and 3D LiDAR imaging.
41. LiDAR and Video Measurement Integration System with 3D Image Error Correction and Motion Trajectory Resolution
Aeva, Inc., 2023
System for combining LiDAR and video measurements to generate 3D images of targets and refining the 3D images to account for errors. The system uses LiDAR measurements and video images to resolve the motion trajectory of a target. It then refines the 3D images by reducing the errors in the transformation parameters between video frames.
42. Roadside Perception System with Lidar-Camera Data Fusion for Enhanced Detection of Small Targets
ZHEJIANG UNIVERSITY, 2023
Roadside perception system for improved pedestrian and non-motorized vehicle detection using fusion of lidar and camera data. The system combines data from roadside lidar and cameras to improve accuracy of detecting small targets like pedestrians and bicycles. It uses lidar to capture high-resolution 3D data of small objects like pedestrians, leveraging micro-Doppler effects. The camera provides RGB data. Fusing these modalities provides richer feature maps for pedestrian/bicycle detection.
43. Background Filtering Method and System for Solid-State Roadside LiDAR Data Extraction
Guangdong University of Technology, 2023
Method and system to accurately filter background from solid-state roadside LiDAR data to extract road user information for autonomous vehicles. It uses a roadside solid-state LiDAR to extract background frames by aggregating individual channel point clouds. Then in real-time data, channel point clouds are extracted and compared against their corresponding background channel to identify road users. The resulting road user point clouds from each channel are combined into a complete road user point cloud. This filters out the static background and provides accurate road user information for self-driving vehicles.
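The two phases map naturally onto an occupancy-style sketch; this version aggregates whole frames into grid cells rather than per-channel point clouds, a simplification of the patent's channel-wise scheme:

```python
import numpy as np

def build_background(frames, cell=0.2, min_hits=0.8):
    """Aggregate many sweeps from a stationary roadside LiDAR into a
    set of background cells: a cell occupied in most frames is static
    background. frames: list of (N_i, 3) point clouds."""
    counts = {}
    for pts in frames:
        cells = {tuple(c) for c in np.floor(pts / cell).astype(int)}
        for c in cells:
            counts[c] = counts.get(c, 0) + 1
    thresh = min_hits * len(frames)
    return {c for c, n in counts.items() if n >= thresh}

def extract_road_users(frame, background, cell=0.2):
    """Real-time step: keep only points whose cell is not background."""
    cells = np.floor(frame / cell).astype(int)
    keep = np.array([tuple(c) not in background for c in cells])
    return frame[keep]
```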
44. Sequential Fusion Architecture for LiDAR and Image Data in 3D Object Detection
Motional AD LLC, 2023
Perception processing pipeline for object detection in self-driving cars that fuses image semantic data (e.g., semantic segmentation scores) with LiDAR points to improve detection accuracy. The pipeline uses a sequential fusion architecture that accepts LiDAR point clouds and camera images as input and estimates oriented 3D bounding boxes for all relevant object classes. It consists of three stages: 1) semantic segmentation to compute semantic data, 2) fusion to combine the data with LiDAR points, and 3) 3D object detection using a network that takes the fused point cloud as input.
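Stage 2, decorating each lidar point with the segmentation scores of the pixel it projects to, is the heart of the sequential design. A minimal sketch, with array shapes assumed and the projection computed elsewhere:

```python
import numpy as np

def paint_points(points, uv, valid, seg_scores):
    """Append per-pixel semantic segmentation scores to lidar points,
    the fusion step of a sequential (segment-then-fuse-then-detect)
    pipeline.

    points    : (N, 3) lidar points
    uv        : (N, 2) their pixel projections
    valid     : (N,) mask of points that project inside the image
    seg_scores: (H, W, C) per-class segmentation scores
    Returns (N, 3 + C) painted points for the 3D detection stage.
    """
    C = seg_scores.shape[2]
    painted = np.zeros((len(points), C), dtype=np.float32)
    u = uv[valid, 0].astype(int)
    v = uv[valid, 1].astype(int)
    painted[valid] = seg_scores[v, u]
    # Points outside the image keep zero scores; the 3D detector then
    # consumes the concatenated point features.
    return np.hstack([points, painted])
```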
45. Cross-Modal Object Detection via Bird's Eye View Feature Fusion from Lidar, Visible Light, and Thermal Infrared Sensors
CHERY AUTOMOBILE CO LTD, LION AUTOMOTIVE TECHNOLOGY (NANJING) CO LTD, 2023
Object detection method using fusion of features extracted from different sensor modalities like lidar, visible light, and thermal infrared to improve object detection in complex environments. The method involves extracting features from lidar point clouds, visible light images, and thermal infrared images. Cross-modal fusion is performed by fusing the features in a bird's eye view space. This provides depth information for object detection that is lacking in single-modal methods.
46. Method for Outlier Removal in Lidar-Camera Image Fusion Using Depth Map Window Deviation Analysis
TESTWORKS CO LTD, 2023
A method to improve lidar-camera image fusion by removing outliers caused by the different positions of the lidar and camera sensors. The method involves extracting outliers from the depth map generated by projecting the lidar point cloud onto the camera image. The outlier extraction is done by searching windows in the depth map, calculating deviations within the windows, and extracting pixels with large deviations. Areas containing object boundaries are excluded from the outlier extraction. This refines the depth map by removing anomalies due to sensor positional differences.
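A window-deviation sketch using a median/MAD test as the deviation measure; the patent's exact statistic and its exclusion of object-boundary areas are not reproduced here:

```python
import numpy as np

def remove_depth_outliers(depth, win=5, k=2.0, min_valid=4):
    """Flag sparse-depth-map pixels that deviate strongly from their
    local window, typical of lidar points that 'see past' a foreground
    object because of the lidar/camera baseline.

    depth: (H, W) sparse depth map, 0 where no lidar point projected.
    Returns a boolean outlier mask over the valid pixels.
    """
    H, W = depth.shape
    r = win // 2
    outlier = np.zeros((H, W), dtype=bool)
    ys, xs = np.nonzero(depth > 0)
    for y, x in zip(ys, xs):
        patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        vals = patch[patch > 0]
        if len(vals) < min_valid:
            continue
        med = np.median(vals)
        mad = np.median(np.abs(vals - med)) + 1e-6
        # 1.4826 * MAD approximates a standard deviation
        if abs(depth[y, x] - med) > k * 1.4826 * mad:
            outlier[y, x] = True
    return outlier
```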
47. Lidar Data Processing System with Image and Geometric Input Integration for Object Detection
DENSO CORPORATION, 2023
Improved object detection using lidar data from vehicles. The technique involves generating lidar inputs with both image-based and geometric-based portions. It processes the image part using a CNN to generate outputs. It processes the geometric part using an echo assignment routine. The outputs and assignments are concatenated and used to identify objects. This multi-modality approach provides enhanced accuracy in detecting and identifying objects in the environment around the vehicle.
48. 3D Imaging System Calibration Using Facial Feature-Based Alignment of Video and LiDAR Subsystems
Aeva, Inc., 2023
Calibrating the video and LiDAR subsystems of a 3D imaging system using facial features to improve the accuracy of mapping 3D coordinates to 2D images. The calibration process involves mapping measurements of facial features obtained by each subsystem to align their coordinate systems. This allows combining LiDAR range measurements with video images to generate accurate 3D images of a target.
49. LiDAR Data Processing for Vehicle Size Estimation Using Clustering and Height Analysis
Zoox, Inc., 2023
Estimating vehicle size from LiDAR data in autonomous vehicles to avoid collisions. The technique uses LiDAR data clustering and analysis to estimate object heights. The LiDAR data is processed by associating it with a 2D representation, removing ground points, clustering remaining points to identify objects, and estimating object heights based on the vertical extent and distances between LiDAR beams. The estimated heights are used to control the autonomous vehicle.
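A compact sketch of the pipeline, with a flat-ground assumption and coarse grid-cell grouping standing in for proper ground removal and clustering:

```python
import numpy as np

def estimate_heights(points, ground_z=0.0, cell=0.5, min_pts=5):
    """Remove ground points, group the rest on a coarse XY grid, and
    estimate each group's height from its vertical extent.

    points: (N, 3) lidar points; the ground is assumed planar at
    ground_z (a RANSAC ground fit would replace this in practice).
    Returns {grid cell: estimated height above ground}.
    """
    obj = points[points[:, 2] > ground_z + 0.2]    # drop ground returns
    cells = np.floor(obj[:, :2] / cell).astype(int)

    clusters = {}
    for c, pt in zip(map(tuple, cells), obj):
        clusters.setdefault(c, []).append(pt)

    heights = {}
    for c, pts in clusters.items():
        pts = np.asarray(pts)
        if len(pts) >= min_pts:
            # Vertical extent above ground approximates object height;
            # beam spacing at range bounds how much of the top is missed.
            heights[c] = pts[:, 2].max() - ground_z
    return heights
```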
50. Real-Time High-Frequency LiDAR and Camera Sensor Synchronization System with Dynamic Time Delay Adjustment
CYNGN, INC., 2023
A system and method for synchronizing LiDAR and camera sensors on autonomous vehicles. The synchronization is done in real-time at high frequencies to provide accurate and synchronized LiDAR and camera data for object detection and tracking. The method involves dynamically determining the time delay between capturing data from the LiDAR and camera sensors based on properties like FOV and packet capture timings. This allows precise alignment of the sensor data capture timings.
The patents examined here demonstrate developments in the field of LiDAR sensor fusion. Some inventions concentrate on enhancing fundamental capabilities, such as combining LiDAR data with inertial measurements to improve vehicle positioning. Others use neural networks to analyze and aggregate multi-modal sensor data for 3D object detection and tracking.
