50 patents in this list

Updated: July 01, 2024

LiDAR technology is essential for producing detailed 3D maps and enabling sophisticated perception across a wide range of applications. Sensor fusion, which integrates data from multiple sensors including LiDAR, further improves precision and reliability.

 

This combination has transformed many industries, most notably autonomous systems. This page examines recent developments in LiDAR and sensor fusion technologies.

1. Integrating Inertial Measurements with LiDAR for Enhanced Autonomous Vehicle Positioning

APOLLO INTELLIGENT DRIVING TECHNOLOGY (BEIJING) CO., LTD., 2023

Accurate, robust positioning for autonomous vehicles using data from onboard sensors, without relying on external maps. The method integrates inertial measurements with LiDAR point clouds and local maps to determine the vehicle's position. It compensates for vehicle motion using inertial data, matches LiDAR points to the maps, and probabilistically combines the data sources to optimize the position estimate.
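
As a rough illustration of the probabilistic combination step, the sketch below fuses an IMU-predicted position with a LiDAR scan-matching estimate using inverse-covariance weighting. The motion model, covariances, and function names are placeholder assumptions, not the patent's actual formulation.

```python
import numpy as np

# Hypothetical sketch: fuse an IMU-predicted pose with a LiDAR
# scan-to-map position estimate by inverse-covariance (Gaussian) weighting.
# The constant-acceleration model and all numbers are assumptions.

def predict_with_imu(prev_pos, velocity, accel, dt):
    """Propagate position with a simple constant-acceleration model."""
    return prev_pos + velocity * dt + 0.5 * accel * dt ** 2

def fuse_estimates(pos_imu, cov_imu, pos_lidar, cov_lidar):
    """Probabilistically combine two position estimates."""
    w_imu = np.linalg.inv(cov_imu)
    w_lidar = np.linalg.inv(cov_lidar)
    cov_fused = np.linalg.inv(w_imu + w_lidar)
    pos_fused = cov_fused @ (w_imu @ pos_imu + w_lidar @ pos_lidar)
    return pos_fused, cov_fused

prev_pos = np.array([10.0, 5.0])
pos_imu = predict_with_imu(prev_pos, np.array([2.0, 0.0]), np.array([0.1, 0.0]), dt=0.1)
pos_lidar = np.array([10.21, 5.02])            # e.g., from scan-to-map matching
pos, cov = fuse_estimates(pos_imu, np.eye(2) * 0.04, pos_lidar, np.eye(2) * 0.01)
print(pos)
```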

2. Neural Network-Based Sensor Fusion for Enhanced Object Detection and Tracking in 3D Space

Zoox, Inc., 2023

Associating an object detection in a 2D image with 3D point cloud data from multiple sensors to accurately locate and track objects in 3D space. The technique uses neural networks to analyze subsets of sensor data from different modalities associated with the detection, then combines the network outputs to generate probability maps indicating which points in the point clouds are likely to belong to the detected object. This allows the 2D detection to be associated with 3D data and a more accurate 3D region of interest to be generated.
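
The sketch below illustrates the general idea of a per-point probability map, with a simple geometric score standing in for the patented neural networks: projected LiDAR points are scored against a 2D detection box, scores from a second modality are multiplied in, and the surviving points form a 3D region of interest. The projection and all values are made up for illustration.

```python
import numpy as np

# Illustrative stand-in for the patented networks: score projected LiDAR
# points against a 2D detection box, combine with a second modality's
# per-point scores, and keep the points above a probability threshold.

def box_membership_score(points_2d, box):
    """Soft score for how well each projected point falls inside a 2D box."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half = np.array([(x1 - x0) / 2, (y1 - y0) / 2])
    d = np.abs(points_2d - [cx, cy]) / half           # normalized distance to box center
    return np.clip(1.0 - d.max(axis=1), 0.0, 1.0)

points_3d = np.random.uniform(0, 50, size=(1000, 3))
points_2d = points_3d[:, :2] * 10                     # stand-in for camera projection
prob_cam = box_membership_score(points_2d, box=(100, 100, 200, 220))
prob_other = np.random.uniform(0.4, 1.0, size=1000)   # stand-in for a second modality
prob_map = prob_cam * prob_other                      # combined per-point probability map
roi_points = points_3d[prob_map > 0.5]                # 3D region of interest
print(roi_points.shape)
```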

3. Synchronized Rotating LiDAR Sensor System for Enhanced Environmental Mapping

Waymo LLC, 2023

Synchronizing multiple rotating LiDAR sensors on a vehicle so they capture overlapping environments, by adjusting their scanning directions and rotations. The system accounts for differences in mounting positions and phases and aligns the scans so the sensor data can be combined into a coherent representation. This involves adjusting each sensor's rotation so that they scan the same targets simultaneously.

Patent drawing: US11656358B2
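
A simplified sketch of the phase-alignment idea behind this system, with assumed mounting positions and a single shared target: each rotating unit is given a phase offset so that both sweep past the target's bearing at the same instant.

```python
import numpy as np

# Toy phase alignment for two rotating LiDAR units sharing a rotation rate.
# Mounting positions and the target location are made-up values.

def bearing_deg(sensor_xy, target_xy):
    """Azimuth from a sensor mount position to a target, in degrees."""
    d = np.asarray(target_xy) - np.asarray(sensor_xy)
    return np.degrees(np.arctan2(d[1], d[0])) % 360

target = (20.0, 0.0)
mounts = {"front_left": (1.5, 0.8), "front_right": (1.5, -0.8)}
bearings = {name: bearing_deg(xy, target) for name, xy in mounts.items()}

# Use one sensor as the reference; offset the others so all units cross
# the target bearing at the same moment.
ref = bearings["front_left"]
phase_offsets = {name: (b - ref) % 360 for name, b in bearings.items()}
print(bearings, phase_offsets)
```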

4. Integrated LiDAR and Thermal Imaging Device for Enhanced Object Detection

OWL AUTONOMOUS IMAGING, INC., 2023

An integrated imaging device combining LiDAR and thermal imaging to overcome the limitations of conventional camera and LiDAR systems in applications such as autonomous vehicles and military reconnaissance. The key features are: 1) co-locating LiDAR and thermal photodetectors on a single focal plane array (FPA) so that object detections can be correlated between the two sensing modes; 2) using separate wavebands for LiDAR (e.g., NIR) and thermal imaging (e.g., LWIR) to avoid interference; and 3) configurable readout circuitry that optimizes FPA operation for either 2D thermal or 3D LiDAR imaging.
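
A toy sketch of what co-location on a single FPA enables: because both detectors share the same pixel grid, a detection in one channel can be correlated with the other channel at the same pixel index. The thresholds, array sizes, and random data below are purely illustrative assumptions.

```python
import numpy as np

# Same-pixel correlation between a thermal image and a LiDAR range image
# that share one focal plane array. All values are illustrative.

H, W = 64, 64
thermal = np.random.uniform(280, 320, size=(H, W))     # apparent temperature (K)
lidar_range = np.random.uniform(5, 120, size=(H, W))   # range image (m)

hot = thermal > 310                  # candidate warm objects
near = lidar_range < 30              # candidate close returns
correlated = hot & near              # same-pixel agreement between the two modes

ys, xs = np.nonzero(correlated)
for y, x in zip(ys[:5], xs[:5]):
    print(f"pixel ({y},{x}): {thermal[y, x]:.1f} K at {lidar_range[y, x]:.1f} m")
```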

5. Enhanced 3D Imaging through Lidar and Video Measurement Fusion

Aeva, Inc., 2023

System for combining LiDAR and video measurements to generate 3D images of targets, and refining those images to account for errors. The system uses LiDAR measurements and video images to resolve a target's motion trajectory, then refines the 3D images by reducing errors in the transformation parameters between video frames.
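
One small piece of this kind of pipeline can be sketched as frame-to-frame registration: estimating the rigid transform between point features tracked across two video frames, which LiDAR range residuals could then help refine. The closed-form SVD alignment below is a standard technique used for illustration, not the patent's specific refinement method.

```python
import numpy as np

# Estimate the 2D rotation and translation mapping tracked features in one
# video frame onto the next (a standard least-squares alignment).

def estimate_rigid_2d(src, dst):
    """Least-squares rotation and translation mapping src points onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

frame1 = np.random.rand(50, 2) * 100
theta = np.radians(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
frame2 = frame1 @ R_true.T + [1.5, -0.7]      # simulated camera motion between frames
R, t = estimate_rigid_2d(frame1, frame2)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)
```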

6. Roadside Solid-State LiDAR Data Filtering for Autonomous Vehicle Road User Detection

Guangdong University of Technology, 2023

Method and system for accurately filtering background from solid-state roadside LiDAR data to extract road user information for autonomous vehicles. A roadside solid-state LiDAR is used to extract background frames by aggregating the point clouds of individual channels. In real-time data, each channel's point cloud is then extracted and compared against its corresponding background channel to identify road users. The resulting road user point clouds from each channel are combined into a complete road user point cloud, filtering out the static background and providing accurate road user information to self-driving vehicles.

Patent drawing: US11645759B1
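
A hedged sketch of per-channel background subtraction in the spirit of this method: idle frames build a background range profile for each channel, and live points that deviate from it are kept as road users. The channel count, frame shapes, and 0.5 m tolerance are assumptions.

```python
import numpy as np

# Per-channel background filtering for a roadside solid-state LiDAR.
# Shapes and thresholds are illustrative assumptions.

def build_background(frames):
    """Aggregate idle frames into a per-bin background range, channel-wise."""
    return np.median(np.stack(frames), axis=0)

def extract_road_users(live_ranges, background, tol=0.5):
    """Return bin indices whose range differs from the background by > tol meters."""
    return np.nonzero(np.abs(live_ranges - background) > tol)[0]

n_channels, n_bins = 16, 512
idle_frames = [np.random.uniform(10, 80, size=(n_channels, n_bins)) for _ in range(20)]
background = build_background(idle_frames)

live = background.copy()
live[4, 100:120] -= 15.0            # a simulated vehicle appearing in channel 4
per_channel = [extract_road_users(live[c], background[c]) for c in range(n_channels)]
combined = np.concatenate([np.stack([np.full(idx.shape, c), idx], axis=1)
                           for c, idx in enumerate(per_channel) if idx.size])
print(combined.shape)               # (channel, bin) pairs belonging to road users
```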

7. Sequential Fusion Architecture for Enhancing Object Detection in Autonomous Vehicles

Motional AD LLC, 2023

Perception processing pipeline for object detection in self-driving cars that fuses image semantic data (e.g., semantic segmentation scores) with LiDAR points to improve detection accuracy. The pipeline uses a sequential fusion architecture that accepts LiDAR point clouds and camera images as input and estimates oriented 3D bounding boxes for all relevant object classes. It consists of three stages: 1) semantic segmentation to compute semantic data, 2) fusion to combine the data with LiDAR points, and 3) 3D object detection using a network that takes the fused point cloud as input.

Patent drawing: US11634155B2
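
The fusion stage of such a sequential pipeline can be sketched as "painting" each LiDAR point with the semantic scores of the image pixel it projects to, before handing the enriched point cloud to a 3D detector. The camera intrinsics and class count below are placeholder assumptions.

```python
import numpy as np

# Sketch of the fusion stage: project LiDAR points into the image, look up
# per-pixel semantic segmentation scores, and append them to each point.

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                      # assumed camera intrinsics

def paint_points(points_xyz, seg_scores):
    """Concatenate each point with the semantic scores of its image pixel."""
    cam = points_xyz @ K.T                           # points assumed already in camera frame
    uv = (cam[:, :2] / cam[:, 2:3]).astype(int)
    h, w, _ = seg_scores.shape
    valid = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
             (uv[:, 1] >= 0) & (uv[:, 1] < h) & (points_xyz[:, 2] > 0))
    painted = np.hstack([points_xyz[valid], seg_scores[uv[valid, 1], uv[valid, 0]]])
    return painted                                    # (N, 3 + num_classes) fused point cloud

points = np.random.uniform([-10, -5, 1], [10, 5, 40], size=(2000, 3))
seg = np.random.dirichlet(np.ones(4), size=(480, 640))   # per-pixel scores for 4 classes
fused = paint_points(points, seg)
print(fused.shape)
```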

8. Machine Learning Model Training for Enhanced Object Detection Using Sparse LiDAR Data

Zoox, Inc., 2023

Training an ML model to detect object velocity and center even when training data is sparse. The technique involves using subsets of image and point cloud data associated with object detection to train the ML model. The model outputs velocity and center information that can be used to predict future object positions. The model parameters are adjusted based on differences between predictions and ground truth, improving accuracy.

Patent drawing: US11628855B1
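
A toy illustration of the training signal described above: a tiny linear model predicts an object's center and velocity from fused detection features, and its parameters are adjusted from the difference between predictions and ground truth. The features and the model are stand-ins, not the patent's network.

```python
import numpy as np

# Minimal gradient-descent loop: predict (center, velocity) targets and
# update parameters from the prediction error. All data is synthetic.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                  # fused image/point-cloud detection features
true_W = rng.normal(size=(8, 4))
Y = X @ true_W + rng.normal(scale=0.05, size=(256, 4))   # targets: (cx, cy, vx, vy)

W = np.zeros((8, 4))
for step in range(500):
    pred = X @ W
    grad = 2 * X.T @ (pred - Y) / len(X)       # gradient of mean-squared error
    W -= 0.05 * grad                           # adjust parameters toward ground truth
print(float(np.mean((X @ W - Y) ** 2)))        # training error after the updates
```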

9. Facial Feature-Based Calibration for Enhanced 3D Imaging in LiDAR Systems

Aeva, Inc., 2023

Calibrating the video and LiDAR subsystems of a 3D imaging system using facial features to improve the accuracy of mapping 3D coordinates to 2D images. The calibration process involves mapping measurements of facial features obtained by each subsystem to align their coordinate systems. This allows combining LiDAR range measurements with video images to generate accurate 3D images of a target.
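
A hedged sketch of landmark-based calibration: matched facial features measured by both subsystems give point pairs, from which a least-squares affine transform maps LiDAR coordinates into the video subsystem's frame. The landmark values and the affine model are illustrative assumptions.

```python
import numpy as np

# Fit an affine calibration from matched facial landmarks measured by the
# LiDAR and video subsystems. Landmark positions are made-up values.

lidar_pts = np.array([[0.03, 0.02, 1.20],     # nose tip
                      [-0.03, 0.05, 1.25],    # left eye corner
                      [0.09, 0.05, 1.25],     # right eye corner
                      [0.03, -0.04, 1.30],    # chin
                      [0.03, 0.09, 1.15]])    # forehead
video_pts = lidar_pts + np.array([0.10, -0.02, 0.05])   # simulated frame offset

A = np.hstack([lidar_pts, np.ones((len(lidar_pts), 1))])   # homogeneous coordinates
T, *_ = np.linalg.lstsq(A, video_pts, rcond=None)           # 4x3 affine calibration

def lidar_to_video(p):
    """Map a LiDAR-frame point into the video subsystem's coordinate frame."""
    return np.append(p, 1.0) @ T

print(lidar_to_video(np.array([0.0, 0.0, 1.2])))
```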

10. Lidar Data Clustering for Vehicle Size Estimation in Autonomous Driving Systems

Zoox, Inc., 2023

Estimating vehicle size from LiDAR data in autonomous vehicles to avoid collisions. The technique uses LiDAR data clustering and analysis to estimate object heights. The LiDAR data is processed by associating it with a 2D representation, removing ground points, clustering remaining points to identify objects, and estimating object heights based on the vertical extent and distances between LiDAR beams. The estimated heights are used to control the autonomous vehicle.
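
A rough sketch of the height-estimation flow: drop ground returns, group the remaining points with a simple 2D grid clustering (standing in for the patent's clustering), and take each cluster's vertical extent as its estimated height. The thresholds and cell size are assumptions.

```python
import numpy as np

# Ground removal, coarse clustering, and height estimation on a synthetic scene.

def estimate_object_heights(points, ground_z=0.2, cell=2.0):
    """Return {cell_id: estimated height} for non-ground LiDAR points."""
    elevated = points[points[:, 2] > ground_z]             # remove ground points
    cells = np.floor(elevated[:, :2] / cell).astype(int)   # coarse XY clustering
    heights = {}
    for key in {tuple(c) for c in cells}:
        mask = np.all(cells == key, axis=1)
        heights[key] = elevated[mask, 2].max() - ground_z  # vertical extent above ground
    return heights

# A fake scene: a ground plane plus a box-shaped "vehicle" near (10, 4).
ground = np.column_stack([np.random.uniform(0, 30, (500, 2)), np.random.uniform(0, 0.1, 500)])
vehicle = np.random.uniform([9, 3, 0.3], [13, 5, 1.8], size=(200, 3))
print(estimate_object_heights(np.vstack([ground, vehicle])))
```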

11. Real-Time Synchronization Method for LiDAR and Camera Sensors on Autonomous Vehicles

12. Point Clouds Registration System for High Definition 3D Mapping in Autonomous Driving Vehicles

13. Enhanced Sensitivity in Long-Range Autonomous Vehicle Sensing Using Coherent Detection LiDAR System

14. Augmenting Multispectral Imaging with LiDAR for Enhanced Environmental Perception

15. Automated Data Labeling for Autonomous Vehicle Training Using High to Low-End Sensor Fusion

Download a full report with complete details of these and 40 more patents for offline reading.

The patents examined here demonstrate developments in the field of LiDAR sensor fusion. Some inventions concentrate on enhancing fundamental capabilities, such as combining LiDAR data with inertial measurements to improve vehicle positioning. Others use neural networks to analyze and aggregate sensor data for 3D object detection and tracking.