50 patents in this list

Updated: July 01, 2024

LiDAR technology is essential for producing detailed 3D maps and enabling sophisticated perception across a range of applications. Sensor fusion, which integrates data from multiple sensors such as LiDAR, further improves precision and reliability.

This powerful combination has transformed many industries, most notably autonomous operations. This page examines developments in LiDAR and sensor fusion technologies.

1.  Integrating Inertial Measurements with LiDAR for Enhanced Autonomous Vehicle Positioning

APOLLO INTELLIGENT DRIVING TECHNOLOGY (BEIJING) CO., LTD., 2023

Accurate, robust positioning for autonomous vehicles, using data from onboard sensors without relying on external maps. The method integrates inertial measurements with LiDAR point clouds and local maps to determine a vehicle's position. It compensates vehicle motion using inertial data, matches LiDAR points to maps, and probabilistically combines the data sources to optimize positioning.
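The compensate-then-match flow can be sketched in miniature. Below is a simplified 2D "deskew" step that uses inertial data to move each point into the frame at the end of the sweep; the function name and the constant-velocity, constant-yaw-rate assumption are illustrative, not taken from the patent:

```python
import math

def deskew_points(points, velocity, yaw_rate, t_end):
    """Motion-compensate LiDAR points using inertial data (2D sketch).

    points  : list of (x, y, t) tuples in the sensor frame at capture time t
    velocity: forward speed (m/s), assumed constant over the sweep
    yaw_rate: rotation rate (rad/s), assumed constant over the sweep
    t_end   : timestamp of the reference frame (end of the sweep)
    """
    out = []
    for x, y, t in points:
        dt = t_end - t            # how long ago this point was captured
        dyaw = yaw_rate * dt      # vehicle rotated this much since capture
        dx = velocity * dt        # ...and translated this far forward
        c, s = math.cos(dyaw), math.sin(dyaw)
        # shift into the end-of-sweep origin, then rotate by -dyaw
        out.append((c * (x - dx) + s * y, -s * (x - dx) + c * y))
    return out
```

Points deskewed this way can then be matched against the local map as a single, consistent scan.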

2.  Neural Network-Based Sensor Fusion for Enhanced Object Detection and Tracking in 3D Space

Zoox, Inc., 2023

Associating object detection in a 2D image with 3D point cloud data from multiple sensors to accurately locate and track objects in 3D space. The technique uses neural networks to analyze subsets of sensor data from different modalities associated with object detection. It combines the network outputs to generate probability maps indicating which points in the point clouds are likely to belong to the detected object. This allows associating the object detection with 3D data and generating a more accurate 3D region of interest.

3.  Synchronized Rotating LiDAR Sensor System for Enhanced Environmental Mapping

Waymo LLC, 2023

Syncing multiple rotating LiDAR sensors on a vehicle to capture overlapping environments by adjusting their scanning directions and rotations. The system accounts for differences in mounting positions and phases and aligns the scans to combine the sensor data into a coherent representation. This involves adjusting each sensor's rotation so they scan the same targets simultaneously.

Patent drawing: US11656358B2

4.  Integrated LiDAR and Thermal Imaging Device for Enhanced Object Detection

OWL AUTONOMOUS IMAGING, INC., 2023

An integrated imaging device combining LiDAR and thermal imaging to overcome limitations of conventional camera and LiDAR systems in applications such as autonomous vehicles and military reconnaissance. The key features are: 1) co-locating LiDAR and thermal photodetectors on a single focal plane array (FPA) to correlate object detection between the two sensing modes; 2) using separate wavebands for LiDAR (e.g., NIR) and thermal (e.g., LWIR) to avoid interference; and 3) configurable readout circuitry to optimize FPA operation between 2D thermal and 3D LiDAR imaging.

5.  Enhanced 3D Imaging through Lidar and Video Measurement Fusion

Aeva, Inc., 2023

System for combining LiDAR and video measurements to generate 3D images of targets and refining the 3D images to account for errors. The system uses LiDAR measurements and video images to resolve the motion trajectory of a target. It then refines the 3D images by reducing the errors in the transformation parameters between video frames.

6.  Roadside Solid-State LiDAR Data Filtering for Autonomous Vehicle Road User Detection

Guangdong University of Technology, 2023

Method and system to accurately filter the background from solid-state roadside LiDAR data to extract road user information for autonomous vehicles. A roadside solid-state LiDAR first extracts background frames by aggregating the point clouds of individual channels. In real-time data, each channel's point cloud is then compared against its corresponding background channel to identify road users, and the resulting per-channel road user point clouds are combined into a complete road user point cloud. This filters out the static background and provides accurate road user information for self-driving vehicles.
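A minimal sketch of the background-subtraction idea: build a background occupancy set from many frames, then drop live points that fall into background cells. The 2D grid discretization and the 80% occupancy threshold are illustrative choices of mine, not taken from the patent:

```python
from collections import Counter

def build_background(frames, cell=0.5):
    """Aggregate many LiDAR frames into a set of static background cells.

    frames: list of frames; each frame is a list of (x, y) points.
    A grid cell counts as background if it is occupied in >= 80% of frames.
    """
    counts = Counter()
    for frame in frames:
        # count each occupied cell at most once per frame
        counts.update({(int(x // cell), int(y // cell)) for x, y in frame})
    return {c for c, n in counts.items() if n >= 0.8 * len(frames)}

def filter_road_users(frame, background, cell=0.5):
    """Keep only points whose grid cell is not part of the static background."""
    return [(x, y) for x, y in frame
            if (int(x // cell), int(y // cell)) not in background]
```

In the patented system this comparison happens per LiDAR channel, and the surviving points from all channels are merged into one road user cloud.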

Patent drawing: US11645759B1

7.  Sequential Fusion Architecture for Enhancing Object Detection in Autonomous Vehicles

Motional AD LLC, 2023

Perception processing pipeline for object detection in self-driving cars that fuses image semantic data (e.g., semantic segmentation scores) with LiDAR points to improve detection accuracy. The pipeline uses a sequential fusion architecture that accepts LiDAR point clouds and camera images as input and estimates oriented 3D bounding boxes for all relevant object classes. It consists of three stages: 1) semantic segmentation to compute semantic data, 2) fusion to combine the data with LiDAR points, and 3) 3D object detection using a network that takes the fused point cloud as input.
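The fusion stage (stage 2) amounts to "painting" each LiDAR point with the semantic scores of the camera pixel it projects to. A minimal sketch, where the `project` callback and the per-pixel score layout are assumed for illustration:

```python
def paint_points(points, seg_scores, project):
    """Append per-pixel semantic scores to each LiDAR point.

    points    : list of (x, y, z) tuples
    seg_scores: 2D grid (rows x cols) of per-class score tuples
    project   : maps a 3D point to an image (row, col), or None if the
                point falls outside the camera frustum
    """
    painted = []
    for p in points:
        pix = project(p)
        if pix is None:
            continue  # point is not visible to the camera; skip it
        r, c = pix
        painted.append(p + tuple(seg_scores[r][c]))
    return painted
```

The painted cloud then feeds the stage-3 detection network, which estimates oriented 3D bounding boxes from the combined geometry and semantics.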

Patent drawing: US11634155B2

8.  Machine Learning Model Training for Enhanced Object Detection Using Sparse LiDAR Data

Zoox, Inc., 2023

Training an ML model to detect object velocity and center even when training data is sparse. The technique involves using subsets of image and point cloud data associated with object detection to train the ML model. The model outputs velocity and center information that can be used to predict future object positions. The model parameters are adjusted based on differences between predictions and ground truth, improving accuracy.

Patent drawing: US11628855B1

9.  Facial Feature-Based Calibration for Enhanced 3D Imaging in LiDAR Systems

Aeva, Inc., 2023

Calibrating the video and LiDAR subsystems of a 3D imaging system using facial features to improve the accuracy of mapping 3D coordinates to 2D images. The calibration process involves mapping measurements of facial features obtained by each subsystem to align their coordinate systems. This allows combining LiDAR range measurements with video images to generate accurate 3D images of a target.

10.  Lidar Data Clustering for Vehicle Size Estimation in Autonomous Driving Systems

Zoox, Inc., 2023

Estimating vehicle size from LiDAR data in autonomous vehicles to avoid collisions. The technique uses LiDAR data clustering and analysis to estimate object heights. The LiDAR data is processed by associating it with a 2D representation, removing ground points, clustering remaining points to identify objects, and estimating object heights based on the vertical extent and distances between LiDAR beams. The estimated heights are used to control the autonomous vehicle.
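A toy version of the remove-ground / cluster / measure pipeline. The flat-ground height threshold and the simple 1D gap-based clustering are my simplifications, not the patent's method:

```python
def estimate_heights(points, ground_z=0.2, gap=1.0):
    """Estimate object heights from a list of LiDAR points.

    points  : (x, y, z) tuples
    ground_z: points at or below this height are treated as ground
    gap     : a jump in x larger than this starts a new cluster
    """
    # drop ground returns, then sort so clustering can scan along x
    pts = sorted(p for p in points if p[2] > ground_z)
    clusters, cur = [], []
    for p in pts:
        if cur and p[0] - cur[-1][0] > gap:
            clusters.append(cur)  # gap found: close the current cluster
            cur = []
        cur.append(p)
    if cur:
        clusters.append(cur)
    # each object's height is the vertical extent of its cluster
    return [max(z for _, _, z in c) - min(z for _, _, z in c)
            for c in clusters]
```

The real system additionally reasons about the spacing between LiDAR beams to bound the height of partially observed objects.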

11.  Real-Time Synchronization Method for LiDAR and Camera Sensors on Autonomous Vehicles

CYNGN, INC., 2023

A system and method for synchronizing LiDAR and camera sensors on autonomous vehicles. The synchronization is done in real-time at high frequencies to provide accurate and synchronized LiDAR and camera data for object detection and tracking. The method involves dynamically determining the time delay between capturing data from the LiDAR and camera sensors based on properties like FOV and packet capture timings. This allows precise alignment of the sensor data capture timings.
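One piece of such a scheme can be sketched: given the LiDAR's rotation period and the camera's azimuth, compute the delay from the start of a revolution at which to trigger the camera so the beam crosses the camera's optical axis during exposure. This is a hedged sketch; the actual method also accounts for properties like FOV and packet capture timings:

```python
def camera_trigger_delay(lidar_period, lidar_start_azimuth, camera_azimuth):
    """Delay (s) after the start of a LiDAR revolution at which to trigger
    the camera, so the sweeping beam is aligned with the camera axis.

    lidar_period       : seconds per full 360-degree revolution
    lidar_start_azimuth: azimuth (deg) where the revolution begins
    camera_azimuth     : azimuth (deg) of the camera's optical axis
    """
    # angle the beam must sweep through, wrapped into [0, 360)
    sweep = (camera_azimuth - lidar_start_azimuth) % 360.0
    return lidar_period * sweep / 360.0
```

Recomputing this delay continuously lets the system keep the two sensors aligned at high frequency as timings drift.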

12.  Point Clouds Registration System for High Definition 3D Mapping in Autonomous Driving Vehicles

BAIDU USA LLC, 2023

A point clouds registration system for autonomous driving vehicles (ADVs) to generate high definition 3D maps of the driving environment. It partitions and registers captured point clouds to create an accurate map.

Patent drawing: US11608078B2

13.  Enhanced Sensitivity in Long-Range Autonomous Vehicle Sensing Using Coherent Detection LiDAR System

GM CRUISE HOLDINGS LLC, 2023

A coherent detection lidar sensor system for long-range autonomous vehicle sensing. The system uses a coherent detection scheme instead of direct detection to enhance sensitivity. A semiconductor optical amplifier (SOA) modulates an input optical signal from a laser source and amplifies a portion of the signal. This modulated signal is transmitted and reflected back from targets. A balanced detector coherently mixes the reflected signal with a local oscillator. This enables coherent detection of the modulated signal even at low power levels, improving long-range detection.

Patent drawing: US11592558B2

14.  Augmenting Multispectral Imaging with LiDAR for Enhanced Environmental Perception

OSR ENTERPRISES AG, 2023

Fusing information about a vehicle's environment using a Lidar sensor and a multispectral camera. The method involves capturing an image using the multispectral camera that includes both visible light and the specific wavelength emitted by the Lidar. By matching points of the Lidar light in the image to Lidar distance readings, objects in the image can be associated with their distances. This provides a way to augment the image with accurate distance information.
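A minimal sketch of the spot-to-reading association, assuming the LiDAR returns have already been detected as pixel positions in the multispectral image; the nearest-neighbour matching and the pixel tolerance are illustrative choices:

```python
def augment_with_depth(image_spots, lidar_readings, tol=2):
    """Associate LiDAR-wavelength spots in an image with range readings.

    image_spots   : list of (u, v) pixel positions of detected LiDAR spots
    lidar_readings: list of ((u, v), range_m) expected spot positions
    tol           : max pixel distance for a match
    Returns a list of ((u, v), range_m) pairs for matched spots.
    """
    out = []
    for su, sv in image_spots:
        # nearest expected spot by squared pixel distance
        best = min(lidar_readings,
                   key=lambda r: (r[0][0] - su) ** 2 + (r[0][1] - sv) ** 2)
        (u, v), rng = best
        if (u - su) ** 2 + (v - sv) ** 2 <= tol ** 2:
            out.append(((su, sv), rng))
    return out
```

Objects detected around the matched pixels can then be annotated directly with their measured distances.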

Patent drawing: US11592557B2

15.  Automated Data Labeling for Autonomous Vehicle Training Using High to Low-End Sensor Fusion

BAIDU USA LLC, 2023

Leveraging high-end perception sensors to automatically label data from low-end sensors for training autonomous vehicle perception systems. The approach involves using the output from a neural network processing high-quality sensor data as ground truth to label corresponding low-quality sensor data. This enables efficient training by reducing manual labeling of low-quality data.

Patent drawing: US11592570B2

16.  Enhancing LiDAR Resolution through Camera and LiDAR Data Fusion for Autonomous Vehicles

Volkswagen Aktiengesellschaft, 2023

A method to improve autonomous vehicle perception by fusing camera images with sparse LiDAR data to generate higher-resolution LiDAR data. Machine learning models identify features of interest in both the images and the LiDAR data, fuse them, and generate new LiDAR data with a depth map and location mask. This leverages the high resolution of camera images to enhance the sparse LiDAR data.

17.  Targetless Calibration Method for LiDAR and Camera Alignment in Vehicles

GM GLOBAL TECHNOLOGY OPERATIONS LLC, 2023

Accurately aligning a lidar sensor with a camera on a vehicle without needing external targets or calibration objects. The alignment is achieved by removing dynamic objects from the lidar and camera data, aggregating lidar scans over time, and iteratively updating the lidar pose and color until the rendered lidar points onto the camera image match.

18.  Online Multi-LiDAR Dynamic Occupancy Mapping for Autonomous Vehicle Safety

HONDA MOTOR CO., LTD., 2023

Method for providing online multi-LiDAR dynamic occupancy mapping that improves safety and reliability of autonomous vehicles. The method segments dynamic and static objects in the vehicle's environment using LiDAR data. It computes a static occupancy map using phase congruency to detect objects. It then computes a dynamic occupancy map to detect moving objects. The dynamic occupancy map is used to control the vehicle in real-time.

19.  Synchronized Rotating LiDAR Sensor Fusion for Coherent Point Cloud Generation

Samsung Electronics Co., Ltd., 2023

Synchronizing rotating LiDAR sensors to provide a consistent fused point cloud without artifacts or smearing due to sensor misalignment. The rotation of each sensor is separated into slices that are timestamped independently, and the slices are fused separately to produce a coherent point cloud. The rotation of multiple LiDARs mounted on a platform is triggered in phase.
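The slice-and-timestamp idea can be sketched as follows, assuming points arrive as (azimuth, range) pairs and each slice is stamped at its mid-time; these details are assumed for illustration:

```python
def slice_and_stamp(points, t0, period, n_slices):
    """Split one LiDAR revolution into azimuth slices, each with its own
    timestamp, so slices from different sensors can be fused by matching
    timestamps rather than whole revolutions.

    points  : (azimuth_deg, range_m) tuples from one revolution
    t0      : time at which the revolution started
    period  : seconds per revolution
    n_slices: number of azimuth slices
    Returns a list of (timestamp, slice_points) pairs.
    """
    width = 360.0 / n_slices
    slices = [[] for _ in range(n_slices)]
    for az, rng in points:
        slices[int((az % 360.0) / width)].append((az, rng))
    # stamp each slice at the middle of its sweep interval
    stamps = [t0 + (i + 0.5) * period / n_slices for i in range(n_slices)]
    return list(zip(stamps, slices))
```

Slices with matching timestamps from phase-triggered sensors can then be merged without the smearing that whole-revolution fusion produces.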

20.  Sensor Calibration for Enhanced Object Detection in Autonomous Vehicles

NIO Technology (Anhui) Co., Ltd., 2022

Calibrating sensors like cameras and lidar on autonomous vehicles to improve their perception accuracy and robustness. It involves detecting objects in the environment using each sensor, constructing 3D models of the objects, matching the models to find corresponding points, and computing the sensor transformation that aligns them.

21. Low-Cost Sensor Fusion Device for Enhanced Object Detection in Autonomous Vehicles

22. Enhanced Mobile Robot Navigation through Combined Camera and LiDAR Sensor Fusion

23. LiDAR-Enhanced Annotation for Training Autonomous Driving Systems

24. LiDAR Point Cloud-Based Camera Localization for Accurate 3D Pose Estimation

25. Parallax Error Correction in LiDAR and Camera Data Fusion for Autonomous Vehicles

Request the full report with complete details of these +30 patents for offline reading.

The patents examined here demonstrate developments in the field of LiDAR sensor fusion. Some inventions focus on enhancing fundamental capabilities, such as combining LiDAR data with inertial measurements to improve vehicle positioning. Others use neural networks to analyze and aggregate sensor data for 3D object detection and tracking.