Neuromorphic Vision Control for Autonomous Drone Flight
Neuromorphic vision systems operate at thresholds where conventional imaging struggles, specifically in low-light conditions where illuminance drops below 10 lux. Flight tests demonstrate that while standard vision systems experience detection failures at signal-to-noise ratios below 3 dB, event-based neuromorphic sensors maintain operational capability down to -2 dB by processing only luminance changes rather than full frames. This differential sensitivity enables critical distinctions between navigation obstacles and operational targets in environments where photon counts are sparse.
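The event-generation principle behind that low-light advantage is simple to state: each pixel compares the log of its current luminance against a stored reference and emits a signed event only when the difference crosses a contrast threshold. The following Python sketch models that behavior; the threshold value, data layout, and function names are illustrative rather than drawn from any specific sensor.

```python
import numpy as np

def dvs_events(frames, threshold=0.2):
    """Toy event-camera model: a pixel emits a signed event whenever its
    log-luminance moves more than `threshold` away from the level stored
    at its last event; no full frame is ever transmitted."""
    log_ref = np.log(frames[0] + 1e-6)      # per-pixel reference level
    events = []                             # (t, y, x, polarity) tuples
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + 1e-6)
        diff = log_now - log_ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, y, x, int(np.sign(diff[y, x]))))
        log_ref[fired] = log_now[fired]     # reset reference where fired
    return events

# A dim scene where one pixel doubles in brightness still yields an event,
# because the comparison is relative (logarithmic), not absolute.
frames = np.full((3, 4, 4), 0.01)
frames[1:, 2, 2] = 0.02
print(dvs_events(frames))   # [(1, 2, 2, 1)]
```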
The fundamental challenge lies in balancing the asynchronous, sparse data advantages of neuromorphic sensing against the computational demands of real-time flight control and object recognition in dynamic, poorly illuminated environments.
This page brings together solutions from recent research, including event-driven image reconstruction techniques, dual-stage illuminance-dependent processing methods, attention-aware sparse learning architectures, and systems that integrate polarization sensing with neuromorphic computing. These and other approaches focus on practical implementation for UAV operations where power constraints, processing latency, and environmental variability must be managed simultaneously.
1. Optoelectronic Device with Integrated Surface-Emitting Laser Array, Dynamic Vision Sensor, and Neuromorphic Computing Platform
BROWN UNIVERSITY, 2025
A compact optoelectronic device for noninvasive imaging of targets obscured by dense turbid media, integrating a high-density surface-emitting laser array source, a dynamic vision sensor detector, and a chip-scale neuromorphic computing platform into a single functional whole. The integration enables real-time image reconstruction of targets at 100 μm spatial resolution and 100 ms temporal resolution, with the neuromorphic platform processing the dynamic vision sensor's spike-train data to achieve asynchronous, low-latency reconstruction.
2. Real-Time Object Detection Method with Dual-Stage Illuminance-Dependent Processing and Line Clustering
TWINNY CO LTD, 2025
A real-time object detection method for robots that operates across varying illuminance levels. The method employs a two-stage approach: a high-illuminance loop using conventional object detection techniques, and a low-illuminance loop using line clustering of depth image data. The method determines object location based on the median of the detected cluster, with the line clustering stage providing robustness in low-light conditions.
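A minimal sketch of that two-stage control flow follows. The lux switch point, the per-scan-line depth clustering rule, and all numeric tolerances are assumptions chosen for illustration; the patent specifies only the structure (conventional detection in one loop, line clustering with a median readout in the other).

```python
import numpy as np

LUX_SWITCH = 10.0  # illustrative switch point between the two loops

def detect_object(rgb_image, depth_image, illuminance, detector):
    """Two-stage flow described above: a conventional detector when the
    scene is bright enough, line clustering of depth data otherwise."""
    if illuminance >= LUX_SWITCH:
        return detector(rgb_image)                    # high-illuminance loop
    # Low-illuminance loop: along each scan line, group contiguous pixels
    # whose depth varies smoothly, then report the median of the largest
    # cluster as the object location.
    clusters = []
    for y, row in enumerate(depth_image):
        valid = np.flatnonzero((row > 0.1) & (row < 5.0))  # plausible range (m)
        if valid.size < 2:
            continue
        breaks = np.flatnonzero(np.abs(np.diff(row[valid])) > 0.2) + 1
        for seg in np.split(valid, breaks):
            if seg.size >= 5:                         # ignore tiny fragments
                clusters.append([(y, x, row[x]) for x in seg])
    if not clusters:
        return None
    ys, xs, zs = map(np.array, zip(*max(clusters, key=len)))
    return float(np.median(ys)), float(np.median(xs)), float(np.median(zs))
```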
3. Robotic Order Fulfillment System with Event Camera for Real-Time Item Detection and Neuromorphic Computing
DEMATIC GMBH, 2025
A method and system for robotic order fulfillment in a warehouse that enables picking and placement of items while in motion. The system employs an event camera that captures pixel changes as the item moves, allowing for real-time item detection, localization, and pose calculation. The event camera's asynchronous data stream and low latency enable processing of visual scene analysis on-the-fly, eliminating the need for stationary picking targets and enabling continuous robotic movement. The system further utilizes neuromorphic computing and specialized hardware acceleration to optimize processing and achieve high-speed item recognition and robotic control.
4. Neuromorphic Optical Computing System with Attention-Aware Sparse Learning and Multi-Channel Representation
TSINGHUA UNIVERSITY, 2024
A neuromorphic optical computing architecture system that achieves high-performance machine vision applications through attention-aware sparse learning. The system employs a multi-channel representation module, an attention-aware optical neural network module, and an output module to process complex tasks at light speed. The architecture leverages the inherent sparsity and parallelism of light to optimize optical computation, enabling efficient processing of large-scale machine vision tasks.
5. In-Pixel Processing System with Photogate Sensors and Saccadic Algorithm for Image Acquisition and Analysis
ADAPTIVE COMPUTATION LLC, 2024
A brain-like in-pixel intelligent processing system for image acquisition and processing that emulates the human visual pathway to achieve rapid and accurate object recognition. The system comprises an in-pixel processing array whose units each contain a photogate sensor, an averaging circuit, a subtraction circuit, and an absolute-value circuit, processing raw grayscale information at the pixel level. A saccadic eye-movement algorithm selects output from the processing units based on the acquired image, enabling efficient object detection, classification, and tracking.
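The per-unit signal chain (average, subtract, absolute value) amounts to computing a local-contrast magnitude at every pixel, and the saccadic step then decides where to "look" next. A hedged Python sketch of that interpretation follows; the window size and the winner-take-all selection rule are assumptions, not claims from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def in_pixel_response(gray):
    """Emulate one processing unit per pixel: local average (average
    circuit), subtraction from the raw value, then absolute value."""
    local_mean = uniform_filter(gray.astype(float), size=3)
    return np.abs(gray - local_mean)      # local-contrast magnitude

def saccade(gray, window=16):
    """Illustrative saccadic selection: return the window whose summed
    in-pixel response is largest, mimicking a gaze shift toward the most
    salient region of the acquired image."""
    resp = in_pixel_response(gray)
    h, w = resp.shape
    best, best_score = None, -1.0
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            score = resp[y:y + window, x:x + window].sum()
            if score > best_score:
                best, best_score = (y, x), score
    return best   # top-left corner of the selected fixation window
```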
6. BGF-YOLOv10: Small Object Detection Algorithm from Unmanned Aerial Vehicle Perspective Based on Improved YOLOv10
Junhui Mei, Wenqiu Zhu - MDPI AG, 2024
With the rapid development of deep learning, unmanned aerial vehicles (UAVs) have acquired intelligent perception capabilities, demonstrating efficient data collection across various fields. In UAV perspective scenarios, captured images often contain small and unevenly distributed objects, and are typically high-resolution. This makes object detection in UAV imagery more challenging compared to conventional detection tasks. To address this issue, we propose a lightweight object detection algorithm, BGF-YOLOv10, specifically designed for small object detection, based on an improved version of YOLOv10n. First, we introduce a novel YOLOv10 architecture tailored for small objects, incorporating BoTNet, variants of C2f and C3 in the backbone, along with an additional small object detection head, to enhance detection performance for small objects. Second, we embed GhostConv into both the backbone and head, effectively reducing the number of parameters by nearly half. Finally, we insert a Patch Expanding Layer module in the neck to restore the feature spatial resolution. Experimental result... Read More
7. Optical Synapse Device with Transparent Electrode and Double-Oxide Layer for Integrated Sensing and Processing
ZHANG BAIZHOU, 2024
An optical synapse device for machine vision systems that integrates sensing, processing, and computing functions into a single device. The device comprises a transparent conductive electrode, a double-oxide active layer with resistive switching properties, and an electrically conductive layer. The double-oxide layer generates a current in response to light and voltage, enabling convolutional processing and neuromorphic computing. The device can be arranged in a matrix to perform convolution operations, and its transconductance can be modulated by different light wavelengths to implement various image processing kernels.
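In circuit terms, the claimed convolution is Kirchhoff current summation: each device contributes a photocurrent roughly proportional to its transconductance times the incident optical power, and wiring a patch of devices to one output line sums those currents into a single kernel response. A behavioral Python model of that readout, with all values illustrative:

```python
import numpy as np

def synapse_convolution(light_pattern, transconductance):
    """Each device in the matrix contributes a current proportional to
    incident optical power times its (wavelength-tuned) transconductance;
    summing device currents over a patch yields one convolution output."""
    k = transconductance.shape[0]
    h, w = light_pattern.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = light_pattern[y:y + k, x:x + k]
            out[y, x] = np.sum(transconductance * patch)  # summed currents
    return out

edge_kernel = np.array([[1, 0, -1]] * 3) / 3.0  # one wavelength-programmed kernel
image = np.random.rand(8, 8)                    # incident optical power map
print(synapse_convolution(image, edge_kernel).shape)  # (6, 6)
```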
8. Neuromorphic Vision Sensor Array with Event-Driven Image Reconstruction in Low-Light Conditions
SENSORS UNLIMITED INC, 2024
Neuromorphic vision (NMV) sensing in low-light environments enables efficient image reconstruction through passive sensing of light by an array of NMV sensors. The NMV array integrates light from each sensor, outputs event signals upon exceeding a threshold, and combines these signals to reconstruct images. The system achieves this by exploiting the unique characteristics of NMV sensors, including their ability to capture all available light information without saturation. The reconstruction process can be performed using compressed sensing techniques, enabling real-time image processing in low-light environments.
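Compressed sensing enters because each integrated event signal can be treated as a linear measurement of a scene that is sparse in some basis, so reconstruction reduces to a sparse inverse problem. A minimal iterative soft-thresholding (ISTA) sketch of that recovery step, with a random matrix standing in for the sensor's actual sampling operator:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=200):
    """Basic compressed-sensing solver: recover a sparse scene x from
    linear measurements y = A @ x by iterating a gradient step on
    ||y - Ax||^2 with soft-thresholding (L1 shrinkage)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x) / L          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(100); x_true[rng.choice(100, 5)] = 1.0  # sparse scene
A = rng.standard_normal((40, 100))     # stand-in measurement matrix
y = A @ x_true                         # integrated event signals
# Approximate sparse support recovered from far fewer measurements
# than pixels (compare with x_true.nonzero()).
print(np.round(ista(A, y), 2).nonzero()[0])
```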
9. An All-Time Detection Algorithm for UAV Images in Urban Low Altitude
Yuzhuo Huang, Jingyi Qu, Li Wang - MDPI AG, 2024
With the rapid development of urban air traffic, Unmanned Aerial Vehicles (UAVs) are gradually being used widely in cities. Since UAVs are prohibited over important places in Urban Air Mobility (UAM), such as government sites and airports, it is important to develop air-ground non-cooperative UAV surveillance for air security day and night. In this paper, an all-time UAV detection algorithm based on visible images during the day and infrared images at night is proposed. We construct a UAV dataset for urban visible backgrounds (UAVvisible) and a UAV dataset for urban infrared backgrounds (UAVinfrared). In the daytime, visible images are less reliable for UAV detection in foggy environments; we therefore couple a defogging algorithm with the detection network, ensuring undistorted image output for detection once defogging is performed. At night, infrared images suffer from low resolution, unclear object contours, and complex backgrounds. We integrate attention and spatial feature-map transformations…
10. Bionic Vision Platform with Integrated Polarization, Lidar, Inertial Sensors, and Deep Learning Detection
SOUTHEAST UNIVERSITY, 2024
A bionic vision multi-source information intelligent perception unmanned platform that integrates polarization vision, lidar, inertial sensors, and deep learning-based target detection for autonomous navigation and obstacle avoidance in complex environments. The platform combines bionic polarization vision with lidar and inertial sensors for accurate navigation, while a deep learning-based target detection module utilizes a monocular vision camera to detect targets and obstacles. The system enables high-precision positioning, autonomous navigation, and covert operation in both field and closed environments.
11. Event-Based Sensor Data Processing Method with Connectivity-Based Event Grouping for Feature Extraction
PROPHESEE, 2024
Method for processing event-based sensor data to extract features for computer vision applications. The method groups events based on connectivity criteria, including luminance differences, without requiring a complete image frame. It enables online clustering and feature extraction from event-based data, overcoming limitations of traditional frame-based approaches.
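A hedged sketch of what such online grouping can look like: each incoming event joins an existing cluster when it falls within spatial, temporal, and luminance-difference tolerances of that cluster's most recent event, and otherwise seeds a new cluster. All thresholds and data layouts below are assumptions, not values from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    events: list = field(default_factory=list)
    def last(self):
        return self.events[-1]

def connected(ev, cluster, r=2, dt=5_000, dl=0.3):
    """Connectivity test (illustrative): within r pixels spatially,
    within dt microseconds temporally, and with a luminance change
    within dl of the cluster's most recent event."""
    t, x, y, dlum = ev
    t2, x2, y2, dlum2 = cluster.last()
    return (abs(x - x2) <= r and abs(y - y2) <= r
            and t - t2 <= dt and abs(dlum - dlum2) <= dl)

def group_events(stream):
    """Online grouping: no image frame is ever assembled; events arrive
    in timestamp order and are clustered on the fly."""
    clusters = []
    for ev in stream:
        for c in clusters:
            if connected(ev, c):
                c.events.append(ev)
                break
        else:
            clusters.append(Cluster([ev]))
    return clusters
```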
12. Drone Navigation System with Dual-Mode Camera and Simulated Infrared Image Processing for Low Light Obstacle Avoidance
SKYDIO INC, 2024
Enabling autonomous aerial navigation for drones in low-light conditions without disabling obstacle avoidance. The drone uses a learning model trained on simulated infrared images for obstacle avoidance in night mode. In day mode, the drone still uses the same cameras but filters out the infrared data to improve image quality for navigation. This allows the drone to navigate autonomously in low light without relying solely on GPS, avoiding the need to disable obstacle avoidance or fall back to manual piloting in low-light environments.
13. Neuromorphic Sensor with Spatiotemporal Filter-Based Multiple Pathways for Event-Based Pixel Processing
UNIVERSITY OF PITTSBURGH - OF THE COMMONWEALTH SYSTEM OF HIGHER EDUCATION, 2024
A neuromorphic programmable multiple pathways event-based sensor that extends the notion of events to spatiotemporal filters acting on neighboring pixels, outputting different pathways to mimic biological retinas. The sensor enables low-bandwidth, precisely timed information extraction from scenes, with architectures and devices that implement this approach.
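One way to read "multiple pathways" is that the same event stream fans out through different spatiotemporal filters, for example a center-surround spatial filter and a temporal-derivative filter, much as retinal ON/OFF and motion channels share one photoreceptor layer. An illustrative Python sketch under that reading, with filter scales chosen arbitrarily:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinal_pathways(event_counts, dt):
    """Two illustrative pathways computed from one event stream
    (event_counts: T x H x W histograms of events per time slice):
    a spatial center-surround response and a temporal-change response."""
    latest = event_counts[-1].astype(float)
    center = gaussian_filter(latest, sigma=1.0)
    surround = gaussian_filter(latest, sigma=3.0)
    spatial_pathway = center - surround     # difference-of-Gaussians
    temporal_pathway = (event_counts[-1] - event_counts[-2]) / dt
    return spatial_pathway, temporal_pathway

counts = np.random.poisson(0.2, size=(4, 32, 32))  # toy event histograms
on_off, motion = retinal_pathways(counts, dt=1e-3)
```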
14. Zero-referenced Enlightening and Restoration for UAV Nighttime Vision
Yuezhou Li, Yuzhen Niu, Rui Xu - Institute of Electrical and Electronics Engineers (IEEE), 2024
Unmanned aerial vehicle (UAV) based visual systems suffer from poor perception at nighttime. Enlightening nighttime vision for UAVs poses three challenges. First, UAV nighttime images differ from generic underexposed images in their statistical characteristics, limiting the performance of general low-light image enhancement (LLIE) methods. Second, when nighttime images are enlightened, artifacts tend to be amplified, distracting the visual perception of UAVs. Third, owing to the inherent scarcity of paired data in the real world, it is difficult for UAV nighttime vision to benefit from supervised learning. To meet these challenges, we propose a zero-referenced enlightening and restoration network (ZERNet) for improving the perception of UAV vision at nighttime. Specifically, a nighttime enlightening map (NE-map) is estimated, and a pixel-to-pixel transformation is then conducted to enlighten the dark pixels while suppressing overbright pixels. Furthermore, we propose self-regularized restoration to preserve semantic content and restrict artifacts in the final result…
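The enlightening step resembles zero-reference curve estimation in the Zero-DCE family: a network predicts a per-pixel map, and a bounded quadratic curve is applied iteratively so dark pixels rise while values already near 1 barely move. A sketch of just that curve idea, with a constant map standing in for the learned NE-map:

```python
import numpy as np

def enlighten(image, ne_map, iterations=4):
    """Zero-DCE-style pixelwise curve as a stand-in for the paper's
    NE-map transformation: x <- x + a*x*(1-x) lifts dark and mid values
    but has zero gain at x = 0 and x = 1, so for a <= 1 the output stays
    in [0, 1] and overbright pixels are not pushed further."""
    x = image.astype(float)
    for _ in range(iterations):          # repeated gentle adjustments
        x = x + ne_map * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

night = np.random.rand(8, 8) * 0.2       # toy underexposed image
ne_map = np.full_like(night, 0.8)        # stand-in for the predicted map
print(enlighten(night, ne_map).mean() > night.mean())  # True: brighter
```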
15. LDHD‐Net: A Lightweight Network With Double Branch Head for Feature Enhancement of UAV Targets in Complex Scenes
Cong Zhang, Qi Gao, Rui Shi - Wiley, 2024
The development of small-UAV technology has led to new challenges in UAV countermeasures. Timely detection of UAVs can effectively prevent potential infringements on airspace and privacy. Methods based on deep learning currently demonstrate excellent performance in target detection; however, in complex scenes, false alarms (FAs) and misdetections tend to occur at a higher rate. To solve these problems, we propose a lightweight infrared small-target detection algorithm, LDHD-Net. First, we design a novel GhostShuffle module in the backbone network to enhance feature extraction, while removing redundant layers to make the backbone more lightweight. Second, we design a hierarchical attention enhancement module in the neck network to improve the saliency of UAV targets and reduce background noise interference. In addition, we design a novel small-target detection structure and prediction heads in the shallow layers of the network to improve small-target detection accuracy. Finally, we design a…
16. Enhancing Nighttime UAV Tracking with Light Distribution Suppression
Liangliang Yao, Chang‐Hong Fu, Yiheng Wang, 2024
Visual object tracking has enabled a wide range of intelligent applications for unmanned aerial vehicles (UAVs). However, state-of-the-art (SOTA) enhancers for nighttime UAV tracking tend to neglect the uneven light distribution in low-light images, inevitably leading to excessive enhancement in scenarios with complex illumination. To address these issues, this work proposes a novel enhancer, LDEnhancer, which improves nighttime UAV tracking with light distribution suppression. Specifically, a novel image content refinement module is developed to decompose light distribution information and image content information in the feature space, allowing targeted enhancement of the image content. The work then designs a new light distribution generation module to capture light distribution effectively. The features carrying light distribution information and image content information are fed into separate parameter estimation modules for parameter map prediction. Finally, leveraging the two parameter maps, an innovative interweave iteration adjustment…
17. Fusion flow-enhanced graph pooling residual networks for Unmanned Aerial Vehicles surveillance in day and night dual visions
Alam Noor, Kai Li, Eduardo Tovar - Elsevier BV, 2024
Recognizing unauthorized Unmanned Aerial Vehicles (UAVs) within designated no-fly zones throughout the day and night is of paramount importance, as unauthorized UAVs pose a substantial threat to both civil and military aviation safety. However, recognizing UAVs day and night with dual-vision cameras is nontrivial: red-green-blue (RGB) images suffer from a low detection rate under insufficient light, such as on cloudy or stormy days, while black-and-white infrared (IR) images struggle to capture UAVs that overlap with the background at night. In this paper, we propose a new optical-flow-assisted graph-pooling residual network (OF-GPRN), which significantly enhances the UAV detection rate in day and night dual visions. The proposed OF-GPRN develops a new optical fusion to remove superfluous backgrounds, improving RGB/IR imaging clarity. Furthermore, OF-GPRN extends optical fusion by incorporating a graph residual split attention network and a feature pyramid, refining the perception of UAVs and leading to a higher success rate in UAV detection…
18. Real-Time Neuromorphic Navigation: Integrating Event-Based Vision and Physics-Driven Planning on a Parrot Bebop2 Quadrotor
Amogh Joshi, S. K. Sanyal, Kaushik Roy, 2024
In autonomous aerial navigation, real-time and energy-efficient obstacle avoidance remains a significant challenge, especially in dynamic and complex indoor environments. This work presents a novel integration of neuromorphic event cameras with physics-driven planning algorithms implemented on a Parrot Bebop2 quadrotor. Neuromorphic event cameras, characterized by their high dynamic range and low latency, offer significant advantages over traditional frame-based systems, particularly in poor lighting conditions or during high-speed maneuvers. We use a DVS camera with a shallow Spiking Neural Network (SNN) for event-based object detection of a moving ring in real-time in an indoor lab. Further, we enhance drone control with physics-guided empirical knowledge inside a neural network training mechanism, to predict energy-efficient flight paths to fly through the moving ring. This integration results in a real-time, low-latency navigation system capable of dynamically responding to environmental changes while minimizing energy consumption. We detail our hardware setup, control loop, and…
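At its core, a shallow SNN of the kind described reduces to leaky integrate-and-fire dynamics over binned events. A minimal sketch of one LIF layer follows; the sizes, time constants, and random weights are illustrative, whereas the paper's network is trained for its specific ring-detection task.

```python
import numpy as np

class LIFLayer:
    """Leaky integrate-and-fire layer: membrane potentials accumulate
    weighted input spikes, leak each step, and fire past a threshold."""
    def __init__(self, n_in, n_out, leak=0.9, thresh=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((n_in, n_out)) * 0.1
        self.v = np.zeros(n_out)
        self.leak, self.thresh = leak, thresh

    def step(self, spikes_in):
        self.v = self.leak * self.v + spikes_in @ self.w
        spikes_out = (self.v >= self.thresh).astype(float)
        self.v[spikes_out == 1] = 0.0      # reset neurons that fired
        return spikes_out

# Events binned into per-step binary vectors drive the network; output
# spikes flag detections that the planner can act on with low latency.
layer = LIFLayer(n_in=64, n_out=2)
for _ in range(10):
    events = (np.random.rand(64) < 0.3).astype(float)
    out = layer.step(events)
```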
19. Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors
Jakub Mandula, Jonas Kühne, Luca Pascarella, 2024
Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications. However, uncontrolled access to restricted areas threatens privacy and security, so prevention and detection of UAVs are pivotal to guarantee confidentiality and safety. Although active scanning, mainly based on radar, is one of the most accurate technologies, it can be expensive and less versatile than passive inspection, e.g., object recognition. Dynamic vision sensors (DVS) are bio-inspired, event-based vision devices that leverage timestamped pixel-level brightness changes in fast-moving scenes, making them well suited to low-latency object detection. This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection. In particular, we propose a setup that exploits DVS as an alternative to RGB cameras in a real-time, low-power configuration. Our approach leverages the high dynamic range (HDR) and background suppression of DVS and, when trained with various fast-moving drones, outperforms RGB input in suboptimal ambient conditions…
20. Infrared UAV Target Detection Based on Continuous-Coupled Neural Network
Zhuoran Yang, Jing Lian, Jizhao Liu - MDPI AG, 2023
The detection of unmanned aerial vehicles (UAVs) is of great significance to social communication security. Infrared detection technology has the advantage of being unaffected by environmental and other interfering factors and can detect UAVs in complex environments. Since infrared detection equipment is expensive and data collection is difficult, few UAV infrared images exist, making it difficult to train deep neural networks; in addition, infrared images contain background clutter and noise, such as heavy clouds and buildings, so both the signal-to-clutter ratio and the signal-to-noise ratio are low. These challenges make infrared UAV detection difficult for traditional methods. To solve these problems, this work draws on the visual processing mechanism of the human brain to propose an effective framework for UAV detection in infrared images. The framework first determines the relevant parameters of the continuous-coupled neural network (CCNN) through the image…
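The CCNN belongs to the pulse-coupled neural network family: each "neuron" is a pixel whose feeding and linking inputs couple it to its neighbors, and a dynamic threshold makes coherent regions fire together; the continuous variant replaces the hard firing step with a smooth one. A hedged sketch of one such iteration (all constants and the linking kernel are illustrative; the paper derives its parameters from image statistics):

```python
import numpy as np
from scipy.ndimage import convolve

def ccnn(stimulus, steps=10, beta=0.2, v_e=5.0, a_f=0.3, a_l=0.3, a_e=0.1):
    """PCNN-style iteration with a continuous (sigmoid) firing output,
    in the spirit of a continuous-coupled neural network."""
    k = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])  # linking kernel
    F = np.zeros_like(stimulus); L = np.zeros_like(stimulus)
    E = np.ones_like(stimulus);  Y = np.zeros_like(stimulus)
    for _ in range(steps):
        F = np.exp(-a_f) * F + stimulus + convolve(Y, k)  # feeding input
        L = np.exp(-a_l) * L + convolve(Y, k)             # linking input
        U = F * (1.0 + beta * L)                          # internal activity
        Y = 1.0 / (1.0 + np.exp(-(U - E)))                # continuous firing
        E = np.exp(-a_e) * E + v_e * Y                    # dynamic threshold
    return Y

ir = np.random.rand(32, 32) * 0.2
ir[14:18, 14:18] = 1.0            # bright, small UAV-like target
print(ccnn(ir).max())             # the target region fires most strongly
```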