50 patents in this list

Updated: February 06, 2024

This page covers recent patents on the use of LiDAR technology and sensor fusion in various applications.

LiDAR, which stands for Light Detection and Ranging, is a remote sensing method that uses laser beams to measure distances and create detailed 3D maps of the surrounding environment. Sensor fusion, on the other hand, involves combining data from multiple sensors to enhance accuracy and reliability. The combination of LiDAR and sensor fusion has revolutionized numerous industries by enabling advanced perception systems and autonomous operations.

Despite their immense potential, LiDAR and sensor fusion present certain technological challenges. One major challenge is processing the large volume of data that LiDAR and other sensors generate simultaneously, and integrating and synchronizing multiple sensors so their data can be fused seamlessly is itself a complex task. Additionally, LiDAR hardware has historically been expensive, making it less accessible for some applications; however, recent advancements have brought about more affordable options without compromising performance.

1. Integrating Inertial Measurements with LiDAR for Enhanced Autonomous Vehicle Positioning

APOLLO INTELLIGENT DRIVING TECHNOLOGY (BEIJING) CO., LTD., 2023

(Summary) Accurate, robust positioning for autonomous vehicles, using data from onboard sensors without relying on external maps. The method integrates inertial measurements with lidar point clouds and local maps to determine a vehicle's position. It compensates for vehicle motion using inertial data, matches lidar points to the local maps, and probabilistically combines the data sources to optimize the position estimate.
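
As a rough illustration of the probabilistic combination step, the sketch below fuses an IMU-predicted position with a lidar-to-map matched position by weighting each with its inverse covariance (an information-filter-style combination; the patent does not specify this exact scheme, and the 2D poses and covariances here are assumed inputs).

```python
import numpy as np

def fuse_pose_estimates(pose_imu, cov_imu, pose_lidar, cov_lidar):
    """Combine an IMU-predicted pose and a lidar/map-matched pose.

    Each pose is a 2D position (x, y); each covariance is a 2x2 matrix.
    The fusion weights each source by its inverse covariance, so the
    more certain source dominates the combined estimate.
    """
    info_imu = np.linalg.inv(cov_imu)
    info_lidar = np.linalg.inv(cov_lidar)
    fused_cov = np.linalg.inv(info_imu + info_lidar)
    fused_pose = fused_cov @ (info_imu @ pose_imu + info_lidar @ pose_lidar)
    return fused_pose, fused_cov

# Example: the lidar match is more certain than the IMU prediction,
# so the fused position lands closer to the lidar estimate.
pose_imu = np.array([10.0, 5.0])
pose_lidar = np.array([10.4, 5.2])
fused, cov = fuse_pose_estimates(pose_imu, np.eye(2) * 0.5,
                                 pose_lidar, np.eye(2) * 0.1)
print(fused)
```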

2. Neural Network-Based Sensor Fusion for Enhanced Object Detection and Tracking in 3D Space

Zoox, Inc., 2023

(Summary) Associating an object detection in a 2D image with 3D point cloud data from multiple sensors to accurately locate and track objects in 3D space. The technique uses neural networks to analyze subsets of sensor data from different modalities associated with the object detection. It combines the network outputs to generate probability maps indicating which points in the point clouds are likely to belong to the detected object. This allows associating the object detection with 3D data and generating a more accurate 3D region of interest.
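
The sketch below illustrates the association step in simplified form: lidar points are projected into the image, and points landing inside the 2D detection box receive a high probability. The camera matrix, extrinsics, and fixed scores are assumptions; in the patented technique, neural networks produce the per-point probability maps.

```python
import numpy as np

def associate_box_with_points(points_lidar, box_xyxy, K, T_cam_from_lidar):
    """Score how likely each lidar point belongs to a 2D-detected object.

    points_lidar: (N, 3) points in the lidar frame.
    box_xyxy: (x_min, y_min, x_max, y_max) detection box in pixels.
    K: 3x3 camera intrinsics (no skew); T_cam_from_lidar: 4x4 extrinsics.
    Returns an (N,) array of probabilities; a fixed geometric score stands
    in here for the learned per-point scores described in the patent.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    z = np.clip(pts_cam[:, 2], 1e-6, None)        # guard against division by zero
    u = K[0, 0] * pts_cam[:, 0] / z + K[0, 2]     # pinhole projection
    v = K[1, 1] * pts_cam[:, 1] / z + K[1, 2]

    x_min, y_min, x_max, y_max = box_xyxy
    inside = (u >= x_min) & (u <= x_max) & (v >= y_min) & (v <= y_max)
    in_front = pts_cam[:, 2] > 0.1

    return np.where(in_front & inside, 0.9, 0.05)
```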

3. Synchronized Rotating LiDAR Sensor System for Enhanced Environmental Mapping

Waymo LLC, 2023

(Summary) Syncing multiple rotating LiDAR sensors on a vehicle to capture overlapping environments by adjusting their scanning directions and rotations. The system accounts for differences in mounting positions and phase-aligns the scans so the sensor data can be combined into a coherent representation. This involves adjusting each sensor's rotation so that the sensors scan the same targets simultaneously.
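
A minimal sketch of the phase-alignment idea, assuming each spinning lidar's world-frame beam azimuth is simply its mounting yaw plus its rotation angle; the offset applied to the second sensor compensates for the difference in mounting yaw so both scan the same azimuth at the same instant.

```python
import math

def phase_offset_deg(mount_yaw_a_deg, mount_yaw_b_deg):
    """Phase offset to apply to sensor B so that, at any instant, both
    rotating lidars point toward the same world azimuth as sensor A.
    """
    return (mount_yaw_a_deg - mount_yaw_b_deg) % 360.0

def beam_world_azimuth(mount_yaw_deg, rotation_deg):
    """World-frame azimuth of a spinning lidar's beam."""
    return (mount_yaw_deg + rotation_deg) % 360.0

# Example: sensor A is mounted facing 0 deg, sensor B facing 90 deg.
offset = phase_offset_deg(0.0, 90.0)             # 270 deg phase lead for B
rotation_a = 123.0                               # sensor A's current angle
a = beam_world_azimuth(0.0, rotation_a)
b = beam_world_azimuth(90.0, (rotation_a + offset) % 360.0)
assert math.isclose(a, b)                        # both scan the same azimuth
```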

Patent drawing: US11656358B2

4. Integrated LiDAR and Thermal Imaging Device for Enhanced Object Detection

OWL AUTONOMOUS IMAGING, INC., 2023

(Summary) An integrated imaging device combining LiDAR and thermal imaging to overcome the limitations of conventional camera and LiDAR systems in applications like autonomous vehicles and military reconnaissance. The key features are: 1) co-locating LiDAR and thermal photodetectors on a single focal plane array (FPA) to correlate object detection between the two sensing modes; 2) using separate wavebands for LiDAR (e.g., NIR) and thermal (e.g., LWIR) imaging to avoid interference; 3) configurable readout circuitry to optimize FPA operation between 2D thermal and 3D LiDAR imaging.
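
A minimal sketch of correlating the two sensing modes, assuming the co-located thermal and LiDAR photodetectors on the shared focal plane array yield pixel-aligned frames; the thresholds and the element-wise test are illustrative stand-ins for the device's readout and detection logic.

```python
import numpy as np

def correlate_thermal_lidar(thermal_frame, depth_frame,
                            temp_threshold, max_range_m):
    """Flag pixels where a warm object (thermal) also has a lidar return.

    Because both photodetector types share one focal plane array, the
    thermal image and the lidar depth image are assumed to be pixel-aligned,
    so correlation reduces to an element-wise test.
    """
    warm = thermal_frame > temp_threshold                    # LWIR response
    near = (depth_frame > 0) & (depth_frame < max_range_m)   # valid NIR returns
    return warm & near                                       # candidate object mask

# Tiny synthetic example on a 4x4 pixel grid.
thermal = np.array([[20, 20, 35, 36],
                    [20, 21, 34, 35],
                    [19, 20, 20, 21],
                    [20, 20, 20, 20]], dtype=float)
depth = np.array([[0, 0, 12, 11],
                  [0, 0, 12, 12],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=float)
mask = correlate_thermal_lidar(thermal, depth, temp_threshold=30, max_range_m=50)
print(mask)
```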

5. Enhanced 3D Imaging through Lidar and Video Measurement Fusion

Aeva, Inc., 2023

(Summary) System for combining lidar and video measurements to generate 3D images of targets and refining the 3D images to account for errors. The system uses lidar measurements and video images to resolve the motion trajectory of a target. It then refines the 3D images by reducing the errors in the transformation parameters between video frames.
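
As a simplified view of how lidar and video measurements combine into a 3D image, the sketch below back-projects a video pixel along its viewing ray to the lidar-measured range; the camera intrinsics are an assumed, pre-calibrated input, and the trajectory and error-refinement steps are not modeled here.

```python
import numpy as np

def pixel_and_range_to_3d(u, v, range_m, K):
    """Back-project a video pixel (u, v) with a lidar range measurement
    into a 3D point in the camera frame.

    The pixel fixes a viewing ray through the pinhole model; the lidar
    range fixes how far along that ray the surface lies. K is the 3x3
    camera intrinsic matrix (an assumed, pre-calibrated input).
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray = ray / np.linalg.norm(ray)          # unit-length viewing ray
    return ray * range_m                     # 3D point at the measured range

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
point = pixel_and_range_to_3d(400, 260, 12.5, K)
print(point)
```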

6. Roadside Solid-State LiDAR Data Filtering for Autonomous Vehicle Road User Detection

Guangdong University of Technology, 2023

(Summary) Method and system to accurately filter background from solid-state roadside lidar data to extract road user information for autonomous vehicles. It uses a roadside solid-state lidar to extract background frames by aggregating individual channel point clouds. Then, in the real-time data, channel point clouds are extracted and compared against their corresponding background channels to identify road users. The resulting road user point clouds from each channel are combined into a complete road user point cloud. This filters out the static background and provides accurate road user information for self-driving vehicles.
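
A simplified sketch of the per-channel background filtering, representing each channel's point cloud as a range-per-azimuth-bin array (an assumption made for brevity): live returns markedly closer than the stored background are kept as road users, and the per-channel results can then be merged into the complete road-user point cloud.

```python
import numpy as np

def extract_road_users(live_ranges, background_ranges, threshold_m=0.5):
    """Per-channel background filtering for a roadside solid-state lidar.

    live_ranges / background_ranges: dict mapping channel id -> 1D array of
    ranges indexed by azimuth bin (a simplified stand-in for the per-channel
    point clouds). A live return is kept as a road user if it is closer to
    the sensor than the static background by more than threshold_m.
    Returns a dict of boolean masks, one per channel.
    """
    road_user_masks = {}
    for ch, live in live_ranges.items():
        bg = background_ranges[ch]
        valid = live > 0                         # drop empty returns
        closer = (bg - live) > threshold_m       # occludes the static background
        road_user_masks[ch] = valid & closer
    return road_user_masks

# Example with two channels and eight azimuth bins each.
background = {0: np.full(8, 30.0), 1: np.full(8, 25.0)}
live = {0: np.array([30, 30, 12, 12, 30, 30, 30, 30], dtype=float),
        1: np.array([25, 25, 25, 9, 9, 25, 25, 25], dtype=float)}
masks = extract_road_users(live, background)
print(masks[0], masks[1])
```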

Patent drawing: US11645759B1

7. Sequential Fusion Architecture for Enhancing Object Detection in Autonomous Vehicles

Motional AD LLC, 2023

(Summary) Perception processing pipeline for object detection in self-driving cars that fuses image semantic data (e.g., semantic segmentation scores) with LiDAR points to improve detection accuracy. The pipeline uses a sequential fusion architecture that accepts LiDAR point clouds and camera images as input and estimates oriented 3D bounding boxes for all relevant object classes. It consists of three stages: 1) semantic segmentation to compute semantic data, 2) fusion to combine the data with LiDAR points, and 3) 3D object detection using a network that takes the fused point cloud as input.
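
The fusion stage can be sketched as "painting" each lidar point with the semantic scores of the pixel it projects to, as shown below; the camera intrinsics and extrinsics are assumptions, and the actual segmentation and 3D detection networks of stages 1 and 3 are outside this snippet.

```python
import numpy as np

def paint_points(points_lidar, seg_scores, K, T_cam_from_lidar):
    """Fusion stage of a sequential fusion pipeline: append per-class
    semantic scores to each lidar point.

    points_lidar: (N, 3) lidar points.
    seg_scores: (H, W, C) per-pixel class scores from the segmentation stage.
    K, T_cam_from_lidar: assumed camera intrinsics (no skew) and extrinsics.
    Returns (N, 3 + C) painted points; points that project outside the
    image keep zero scores.
    """
    H, W, C = seg_scores.shape
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    z = np.clip(pts_cam[:, 2], 1e-6, None)                    # avoid divide-by-zero
    u = np.round(K[0, 0] * pts_cam[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_cam[:, 1] / z + K[1, 2]).astype(int)

    valid = (pts_cam[:, 2] > 0.1) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    painted = np.zeros((len(points_lidar), 3 + C))
    painted[:, :3] = points_lidar
    painted[valid, 3:] = seg_scores[v[valid], u[valid]]
    return painted        # fed to the downstream 3D detection network
```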

Patent drawing: US11634155B2

8. Machine Learning Model Training for Enhanced Object Detection Using Sparse LiDAR Data

Zoox, Inc., 2023

(Summary) Training an ML model to detect object velocity and center even when training data is sparse. The technique involves using subsets of image and point cloud data associated with an object detection to train the ML model. The model outputs velocity and center information that can be used to predict future object positions. The model parameters are adjusted based on differences between predictions and ground truth, improving accuracy.
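
A minimal PyTorch sketch of the training step, using a small regression head as a stand-in for the full model: it predicts velocity and center from a fused feature vector, and its parameters are adjusted from the difference between predictions and ground truth. The feature dimension, loss choice, and synthetic batch are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class VelocityCenterHead(nn.Module):
    """Maps a fused image/point-cloud feature vector to a predicted
    2D velocity and 3D center for a detected object."""
    def __init__(self, feature_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 5),          # 2 velocity + 3 center values
        )

    def forward(self, features):
        out = self.net(features)
        return out[:, :2], out[:, 2:]   # (velocity, center)

model = VelocityCenterHead()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a synthetic batch (real training would use fused
# image/point-cloud features and labeled detections).
features = torch.randn(8, 64)
gt_velocity = torch.randn(8, 2)
gt_center = torch.randn(8, 3)

pred_velocity, pred_center = model(features)
loss = nn.functional.smooth_l1_loss(pred_velocity, gt_velocity) \
     + nn.functional.smooth_l1_loss(pred_center, gt_center)

optimizer.zero_grad()
loss.backward()       # parameters are adjusted based on the
optimizer.step()      # prediction vs. ground-truth difference
```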

Patent drawing: US11628855B1

9. Facial Feature-Based Calibration for Enhanced 3D Imaging in LiDAR Systems

Aeva, Inc., 2023

(Summary) Calibrating the video and lidar subsystems of a 3D imaging system using facial features to improve the accuracy of mapping 3D coordinates to 2D images. The calibration process involves mapping measurements of facial features obtained by each subsystem to align their coordinate systems. This allows combining lidar range measurements with video images to generate accurate 3D images of a target.
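
A minimal sketch of the coordinate-system alignment, assuming each subsystem yields 3D estimates of the same facial landmarks (e.g., eye corners, nose tip); the rigid transform is recovered with the Kabsch algorithm, which stands in for the mapping procedure the patent leaves unspecified.

```python
import numpy as np

def align_coordinate_systems(landmarks_lidar, landmarks_video):
    """Estimate the rigid transform (R, t) mapping lidar-frame facial
    landmarks onto the video subsystem's frame via the Kabsch algorithm.

    Both inputs are (N, 3) arrays of corresponding facial feature points.
    """
    mu_l = landmarks_lidar.mean(axis=0)
    mu_v = landmarks_video.mean(axis=0)
    A = landmarks_lidar - mu_l
    B = landmarks_video - mu_v

    # SVD of the cross-covariance gives the optimal rotation; the sign
    # correction avoids reflections.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_v - R @ mu_l
    return R, t   # maps lidar points into the video frame: R @ p + t
```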

10. Lidar Data Clustering for Vehicle Size Estimation in Autonomous Driving Systems

Zoox, Inc., 2023

(Summary) Estimating vehicle size from lidar data in autonomous vehicles to avoid collisions. The technique uses lidar data clustering and analysis to estimate object heights. The lidar data is processed by associating it with a 2D representation, removing ground points, clustering remaining points to identify objects, and estimating object heights based on vertical extent and distances between lidar beams. The estimated heights are used to control the autonomous vehicle.
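
A rough sketch of the clustering and height-estimation steps, using a plain z-threshold for ground removal and scikit-learn's DBSCAN for clustering as stand-ins for the patented processing; the frame convention and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_object_heights(points, ground_z=0.2, eps=0.7, min_samples=5):
    """Cluster non-ground lidar points and estimate each object's height.

    points: (N, 3) lidar points in a vehicle-aligned frame with z up.
    A simple z-threshold stands in for ground removal, and DBSCAN stands
    in for the clustering step; heights are taken as each cluster's
    vertical extent above the ground plane.
    Returns a list of (cluster_label, estimated_height_m).
    """
    above_ground = points[points[:, 2] > ground_z]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above_ground)

    heights = []
    for label in set(labels) - {-1}:          # -1 marks noise points
        cluster = above_ground[labels == label]
        height = cluster[:, 2].max() - ground_z
        heights.append((label, float(height)))
    return heights
```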

11. Real-Time Synchronization Method for LiDAR and Camera Sensors on Autonomous Vehicles

12. Point Clouds Registration System for High Definition 3D Mapping in Autonomous Driving Vehicles

13. Enhanced Sensitivity in Long-Range Autonomous Vehicle Sensing Using Coherent Detection LiDAR System

14. Augmenting Multispectral Imaging with LiDAR for Enhanced Environmental Perception

15. Automated Data Labeling for Autonomous Vehicle Training Using High to Low-End Sensor Fusion

Download a full report with complete details of these +40 patents for offline reading.