50 patents in this list

In the realm of autonomous vehicles, precise environmental perception is essential for safe navigation. LiDAR technology, with its ability to create detailed 3D maps, plays a crucial role. However, relying solely on LiDAR can lead to inaccuracies due to environmental factors like weather and lighting. To address these issues, advanced sensor fusion techniques are being developed, integrating LiDAR with other sensor data to provide a more comprehensive understanding of the surroundings.

The challenge lies in effectively combining data from disparate sources, such as cameras and inertial sensors, to enhance object detection and tracking. This involves overcoming issues like data synchronization, varying resolutions, and computational complexity. Professionals in this field strive to create seamless integration methods that ensure reliable and accurate real-time data processing, even in dynamic environments.

This page explores solutions from recent research, including neural network-based fusion for 3D object tracking, and synchronized systems for coherent point cloud generation. These approaches improve environmental mapping and object detection, ultimately enhancing the safety and efficiency of autonomous vehicles. By leveraging these techniques, the industry moves closer to achieving robust and reliable autonomous navigation systems.

1. Autonomous Vehicle Positioning System Integrating Inertial Measurements with LiDAR Point Cloud Data

APOLLO INTELLIGENT DRIVING TECHNOLOGY (BEIJING) CO., LTD., 2023

Accurate, robust positioning for autonomous vehicles, using data from onboard sensors without relying on external maps. The method integrates inertial measurements with LiDAR point clouds and local maps to determine a vehicle's position. It compensates vehicle motion using inertial data, matches LiDAR points to maps, and probabilistically combines the data sources to optimize positioning.
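
The motion-compensation step described above can be sketched in a few lines. This is an illustrative simplification, not the patented algorithm: it assumes a constant IMU-reported velocity over one sweep and ignores rotation, and the function name and values are made up.

```python
import numpy as np

def deskew_sweep(points, timestamps, velocity, sweep_start):
    """Express each LiDAR return in the sweep-start frame, assuming the
    vehicle moved at a constant IMU-reported `velocity` (m/s): a target
    measured later in the sweep sits farther ahead in the start frame."""
    dt = timestamps - sweep_start              # seconds since sweep start
    return points + dt[:, None] * velocity[None, :]

points = np.array([[10.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0]])          # same range, two capture times
ts = np.array([0.0, 0.1])
v = np.array([5.0, 0.0, 0.0])                  # 5 m/s forward, from the IMU
corrected = deskew_sweep(points, ts, v, sweep_start=0.0)
```

Only after this compensation are the points matched against the local map and combined probabilistically with the inertial estimate.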

2. Multimodal Sensor Data Integration for 3D Object Localization Using Neural Network-Generated Probability Maps

Zoox, Inc., 2023

Associating an object detection in a 2D image with 3D point cloud data from multiple sensors to accurately locate and track objects in 3D space. The technique uses neural networks to analyze subsets of sensor data from different modalities associated with the detection. It combines the network outputs to generate probability maps indicating which points in the point clouds are likely to belong to the detected object. This allows the object detection to be associated with 3D data and a more accurate 3D region of interest to be generated.
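
A toy version of the probability-map idea, with made-up scores and a hypothetical threshold: per-point probabilities from two modality networks are multiplied (treated as independent), and the confident points bound an axis-aligned 3D region of interest.

```python
import numpy as np

def fuse_and_localize(points, lidar_probs, image_probs, threshold=0.5):
    """Combine per-point object probabilities from two modality networks,
    keep the confident points, and return a 3D ROI around them."""
    fused = lidar_probs * image_probs          # independence assumption
    keep = points[fused >= threshold]
    return keep.min(axis=0), keep.max(axis=0)  # ROI corners

pts = np.array([[1.0, 1.0, 0.0],
                [1.2, 0.9, 0.1],
                [9.0, 9.0, 0.0]])              # last point is background
lo, hi = fuse_and_localize(pts,
                           np.array([0.9, 0.8, 0.9]),
                           np.array([0.9, 0.9, 0.1]))
```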

3. System for Synchronizing Rotating LiDAR Sensors with Adjustable Scanning Directions and Rotations

Waymo LLC, 2023

Synchronizing multiple rotating LiDAR sensors on a vehicle to capture overlapping environments by adjusting their scanning directions and rotations. The system accounts for differences in mounting positions and phases and aligns the scans so that the sensor data can be combined into a coherent representation. This involves adjusting each sensor's rotation so that they scan the same targets simultaneously.

Patent drawing: US11656358B2
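
The phase-alignment idea can be illustrated with a small helper. This is illustrative only; the sign convention and direction of rotation are assumptions, not taken from the patent.

```python
def trigger_delay_s(yaw_a_deg, yaw_b_deg, rotation_hz):
    """Extra rotation phase, expressed as a trigger delay for sensor B,
    so that two LiDARs with different mounting yaws sweep the same world
    azimuth at the same instant."""
    gap_deg = (yaw_b_deg - yaw_a_deg) % 360.0  # angular offset to make up
    return gap_deg / (360.0 * rotation_hz)     # seconds of delay

# Sensors mounted 90 degrees apart, spinning at 10 Hz:
delay = trigger_delay_s(0.0, 90.0, rotation_hz=10.0)
```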

4. Integrated Imaging Device with Co-Located LiDAR and Thermal Photodetectors on Single Focal Plane Array

OWL AUTONOMOUS IMAGING, INC., 2023

An integrated imaging device combining LiDAR and thermal imaging to overcome limitations of conventional camera and LiDAR systems for applications like autonomous vehicles and military reconnaissance. The key features are: 1) co-locating LiDAR and thermal photodetectors on a single focal plane array (FPA) to correlate object detection between the two sensing modes; 2) using separate wavebands for LiDAR (e.g., NIR) and thermal (e.g., LWIR) imaging to avoid interference; 3) configurable readout circuitry to optimize FPA operation between 2D thermal and 3D LiDAR imaging.

5. LiDAR and Video Measurement Integration System with 3D Image Error Correction and Motion Trajectory Resolution

Aeva, Inc., 2023

System for combining LiDAR and video measurements to generate 3D images of targets and refining the 3D images to account for errors. The system uses LiDAR measurements and video images to resolve the motion trajectory of a target. It then refines the 3D images by reducing the errors in the transformation parameters between video frames.

6. Background Filtering Method and System for Solid-State Roadside LiDAR Data Extraction

Guangdong University of Technology, 2023

Method and system to accurately filter the background from solid-state roadside LiDAR data to extract road user information for autonomous vehicles. A roadside solid-state LiDAR is used to build background frames by aggregating the point clouds of individual channels. In real-time operation, each channel's point cloud is extracted and compared against its corresponding background channel to identify road users. The road user point clouds from all channels are then combined into a complete road user point cloud. This filters out the static background and provides accurate road user information for self-driving vehicles.

Patent drawing: US11645759B1
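
A minimal sketch of the per-channel background comparison, assuming ranges are organized as channel-by-azimuth grids; the tolerance value is a made-up parameter, not from the patent.

```python
import numpy as np

def road_user_mask(frame_ranges, background_ranges, tol_m=0.3):
    """Per-channel background subtraction: a return noticeably closer
    than the learned background surface is treated as a road user;
    everything else is static background."""
    return frame_ranges < (background_ranges - tol_m)

background = np.full((4, 6), 20.0)     # 4 channels x 6 azimuth bins, 20 m wall
frame = background.copy()
frame[2, 3] = 8.0                      # a vehicle appears in channel 2
mask = road_user_mask(frame, background)
```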

7. Sequential Fusion Architecture for LiDAR and Image Data in 3D Object Detection

Motional AD LLC, 2023

Perception processing pipeline for object detection in self-driving cars that fuses image semantic data (e.g., semantic segmentation scores) with LiDAR points to improve detection accuracy. The pipeline uses a sequential fusion architecture that accepts LiDAR point clouds and camera images as input and estimates oriented 3D bounding boxes for all relevant object classes. It consists of three stages: 1) semantic segmentation to compute semantic data, 2) fusion to combine the data with LiDAR points, and 3) 3D object detection using a network that takes the fused point cloud as input.

Patent drawing: US11634155B2
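
The fusion stage amounts to "painting" each LiDAR point with the semantic scores at the pixel it projects to. A hedged sketch, with the camera projection assumed to be precomputed and all array shapes illustrative:

```python
import numpy as np

def paint_points(points_xyz, pixel_uv, seg_scores):
    """Fusion stage: append to each LiDAR point (x, y, z) the per-class
    semantic segmentation scores found at its projected pixel (u, v).
    The painted cloud then feeds the 3D detection network."""
    u = pixel_uv[:, 0]
    v = pixel_uv[:, 1]
    return np.hstack([points_xyz, seg_scores[v, u]])

scores = np.zeros((4, 4, 3))           # tiny 4x4 image, 3 classes
scores[1, 2] = [0.1, 0.7, 0.2]         # 'vehicle-ish' pixel
pts = np.array([[5.0, 1.0, 0.5]])
uv = np.array([[2, 1]])                # this point projects to pixel (u=2, v=1)
painted = paint_points(pts, uv, scores)
```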

8. Machine Learning Model Utilizing Image and Point Cloud Subsets for Object Velocity and Center Detection

Zoox, Inc., 2023

Training an ML model to detect object velocity and center even when training data is sparse. The technique involves using subsets of image and point cloud data associated with object detection to train the ML model. The model outputs velocity and center information that can be used to predict future object positions. The model parameters are adjusted based on differences between predictions and ground truth, improving accuracy.

Patent drawing: US11628855B1

9. 3D Imaging System Calibration Using Facial Feature-Based Alignment of Video and LiDAR Subsystems

Aeva, Inc., 2023

Calibrating the video and LiDAR subsystems of a 3D imaging system using facial features to improve the accuracy of mapping 3D coordinates to 2D images. The calibration process involves mapping measurements of facial features obtained by each subsystem to align their coordinate systems. This allows combining LiDAR range measurements with video images to generate accurate 3D images of a target.

10. LiDAR Data Processing for Vehicle Size Estimation Using Clustering and Height Analysis

Zoox, Inc., 2023

Estimating vehicle size from LiDAR data in autonomous vehicles to avoid collisions. The technique uses LiDAR data clustering and analysis to estimate object heights. The LiDAR data is processed by associating it with a 2D representation, removing ground points, clustering remaining points to identify objects, and estimating object heights based on the vertical extent and distances between LiDAR beams. The estimated heights are used to control the autonomous vehicle.
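
The height-estimation step might look like the following simplified sketch, which drops near-ground returns and takes the vertical extent of the remainder; the beam-spacing refinement mentioned in the abstract is omitted, and the tolerance is a made-up value.

```python
import numpy as np

def estimate_height(cluster_points, ground_z=0.0, ground_tol=0.2):
    """After ground removal, take the cluster's vertical extent above the
    ground plane as the object height estimate."""
    above = cluster_points[cluster_points[:, 2] > ground_z + ground_tol]
    if len(above) == 0:
        return 0.0
    return float(above[:, 2].max() - ground_z)

cluster = np.array([[0.0, 0.0, 0.05],  # ground return, removed
                    [0.0, 0.0, 0.80],
                    [0.1, 0.0, 1.60]]) # highest return on the object
h = estimate_height(cluster)
```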

11. Real-Time High-Frequency LiDAR and Camera Sensor Synchronization System with Dynamic Time Delay Adjustment

CYNGN, INC., 2023

A system and method for synchronizing LiDAR and camera sensors on autonomous vehicles. The synchronization is done in real-time at high frequencies to provide accurate and synchronized LiDAR and camera data for object detection and tracking. The method involves dynamically determining the time delay between capturing data from the LiDAR and camera sensors based on properties like FOV and packet capture timings. This allows precise alignment of the sensor data capture timings.
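
One simple way to apply such a dynamically estimated delay is nearest-timestamp pairing. This sketch is illustrative and not the patented method; the timestamps and delay are invented values.

```python
def pair_frames(lidar_ts, camera_ts, delay_s):
    """Match each LiDAR capture time to the camera frame whose timestamp
    is closest to (lidar time + estimated delay)."""
    return [min(camera_ts, key=lambda c: abs(c - (t + delay_s)))
            for t in lidar_ts]

lidar = [0.00, 0.10, 0.20]             # LiDAR sweep start times (s)
camera = [0.03, 0.13, 0.23, 0.33]      # camera frame times (s)
pairs = pair_frames(lidar, camera, delay_s=0.03)
```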

12. Point Cloud Registration System with Partitioning for 3D Map Generation in Autonomous Driving Vehicles

BAIDU USA LLC, 2023

A point cloud registration system for autonomous driving vehicles (ADVs) that generates high-definition 3D maps of the driving environment. It partitions and registers captured point clouds to create an accurate map.

Patent drawing: US11608078B2

13. Coherent Detection Lidar System with Semiconductor Optical Amplifier and Balanced Detector for Long-Range Sensing

GM CRUISE HOLDINGS LLC, 2023

A coherent detection lidar sensor system for long-range autonomous vehicle sensing. The system uses a coherent detection scheme instead of direct detection to enhance sensitivity. A semiconductor optical amplifier (SOA) modulates an input optical signal from a laser source and amplifies a portion of the signal. This modulated signal is transmitted and reflected back from targets. A balanced detector coherently mixes the reflected signal with a local oscillator. This enables coherent detection of the modulated signal even at low power levels, improving long-range detection.

Patent drawing: US11592558B2

14. Method for Fusing Lidar and Multispectral Camera Data by Wavelength Matching

OSR ENTERPRISES AG, 2023

Fusing information about a vehicle's environment using a Lidar sensor and a multispectral camera. The method involves capturing an image using the multispectral camera that includes both visible light and the specific wavelength emitted by the Lidar. By matching points of the Lidar light in the image to Lidar distance readings, objects in the image can be associated with their distances. This provides a way to augment the image with accurate distance information.

Patent drawing: US11592557B2

15. Automated Data Labeling System Using High-End Perception Sensors for Low-End Sensor Training

BAIDU USA LLC, 2023

Leveraging high-end perception sensors to automatically label data from low-end sensors for training autonomous vehicle perception systems. The approach involves using the output from a neural network processing high-quality sensor data as ground truth to label corresponding low-quality sensor data. This enables efficient training by reducing manual labeling of low-quality data.

Patent drawing: US11592570B2

16. Method for Generating Enhanced LiDAR Data via Fusion of Camera Images and Sparse LiDAR Using Machine Learning Models

Volkswagen Aktiengesellschaft, 2023

A method to improve autonomous vehicle perception by fusing camera images with sparse LiDAR data to generate higher-resolution LiDAR data. Machine learning models identify features of interest in both the images and the LiDAR data; the features are fused, and new LiDAR data is generated with a depth map and location mask. This leverages the high resolution of camera images to enhance the sparse LiDAR data.

17. Lidar-Camera Alignment System Using Dynamic Object Removal and Iterative Pose Adjustment

GM GLOBAL TECHNOLOGY OPERATIONS LLC, 2023

Accurately aligning a lidar sensor with a camera on a vehicle without external targets or calibration objects. Alignment is achieved by removing dynamic objects from the lidar and camera data, aggregating lidar scans over time, and iteratively updating the lidar pose and color until the lidar points rendered onto the camera image match it.
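
The quantity such an iterative pose update drives down is a reprojection error. A minimal pinhole-model sketch (the intrinsics are made-up values, and the optimization loop itself is left out):

```python
import numpy as np

def project_to_image(points_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-frame lidar points to pixel coords."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)

def reprojection_error(points_cam, observed_uv, fx, fy, cx, cy):
    """Mean pixel distance between projected lidar points and their image
    correspondences -- the residual an iterative pose update minimizes."""
    diff = project_to_image(points_cam, fx, fy, cx, cy) - observed_uv
    return float(np.linalg.norm(diff, axis=1).mean())

pts = np.array([[1.0, 0.0, 10.0]])     # 1 m right of axis, 10 m ahead
uv = project_to_image(pts, fx=500, fy=500, cx=320, cy=240)
err = reprojection_error(pts, uv, 500, 500, 320, 240)
```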

18. Method for Multi-LiDAR Dynamic Occupancy Mapping with Phase Congruency-Based Object Segmentation

HONDA MOTOR CO., LTD., 2023

Method for providing online multi-LiDAR dynamic occupancy mapping that improves safety and reliability of autonomous vehicles. The method segments dynamic and static objects in the vehicle's environment using LiDAR data. It computes a static occupancy map using phase congruency to detect objects. It then computes a dynamic occupancy map to detect moving objects. The dynamic occupancy map is used to control the vehicle in real-time.

19. Rotating LiDAR Sensor Synchronization with Independent Slice Timestamping and Phased Trigger Rotation

Samsung Electronics Co., Ltd., 2023

Synchronization of rotating LiDAR sensors to provide a consistent fused point cloud without artifacts or smearing due to sensor misalignment. The sensor rotation is separated into slices that are timestamped independently, and the slices are then fused separately into a coherent point cloud. Phased trigger rotation coordinates multiple LiDARs mounted on a platform.

20. Sensor Calibration System Using 3D Model Alignment for Autonomous Vehicles

NIO Technology (Anhui) Co., Ltd., 2022

Calibrating sensors like cameras and lidar on autonomous vehicles to improve their perception accuracy and robustness. It involves detecting objects in the environment using each sensor, constructing 3D models of the objects, matching the models to find corresponding points, and computing the sensor transformation that aligns them.
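
The final step, computing the transformation that aligns matched model points, is classically solved with the Kabsch algorithm. The patent abstract does not name a solver, so this choice is illustrative:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    via the Kabsch algorithm over matched 3D model points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(src, dst)
```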

21. Electronic Device with Dual Lidar and Camera Configuration for Depth Image Generation and Sensor Fusion

22. Mobile Robot with Integrated Camera and Lidar Data for Simultaneous Mapping and Localization

23. Sensor Frame Annotation System Utilizing LiDAR-Based 3D Bounding Boxes for Object Detection Training

24. Camera Pose Estimation via Iterative Depth Projection Sampling from LiDAR Point Clouds

25. Image Grid-Based LIDAR Point Deletion for Parallax Error Correction


The patents examined here demonstrate developments in the field of LiDAR sensor fusion. Some inventions concentrate on enhancing fundamental capabilities, such as combining LiDAR data with inertial measurements to improve vehicle positioning. Others analyze and aggregate sensor data for 3D object tracking and detection using neural networks.