Neuromorphic vision systems operate at thresholds where conventional imaging struggles—specifically in low-light conditions where illuminance drops below 10 lux. Flight tests demonstrate that while standard vision systems experience detection failures at signal-to-noise ratios below 3 dB, event-based neuromorphic sensors maintain operational capability down to -2 dB by processing only luminance changes rather than full frames. These differential sensitivities enable critical distinctions between navigation obstacles and operational targets in environments where photon counts are sparse.

The fundamental challenge lies in balancing the asynchronous, sparse data advantages of neuromorphic sensing against the computational demands of real-time flight control and object recognition in dynamic, poorly illuminated environments.
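The event-generation principle behind such sensors can be sketched in a few lines: a pixel emits a signed spike whenever its log-luminance has changed by more than a contrast threshold since its last event, so sparse photon changes still produce output when absolute levels are low. The toy model below is a minimal sketch; the threshold value, sample format, and function name are illustrative assumptions, not any specific sensor's pipeline.

```python
import math

def dvs_events(samples, theta=0.2):
    """Toy DVS pixel: emit +1/-1 events when the log-luminance change
    since the last event exceeds the contrast threshold theta.
    `samples` is a list of (timestamp, luminance) pairs, luminance > 0.
    (Illustrative model only; real sensors add noise, jitter, and
    refractory effects.)"""
    events = []
    ref = math.log(samples[0][1])      # reference log level at last event
    for t, lum in samples[1:]:
        delta = math.log(lum) - ref
        while abs(delta) >= theta:     # a large step may emit several events
            polarity = 1 if delta > 0 else -1
            events.append((t, polarity))
            ref += polarity * theta
            delta = math.log(lum) - ref
    return events

# A dim scene whose luminance merely doubles still yields events,
# because the trigger is relative (log) change, not absolute level:
stream = [(0, 0.05), (1, 0.05), (2, 0.10), (3, 0.10)]
```

Because the comparison is in log space, a doubling from 0.05 to 0.10 fires the same number of events as a doubling from 500 to 1000, which is one intuition for the low-light robustness described above.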

This page brings together solutions from recent research—including event-driven image reconstruction techniques, dual-stage illuminance-dependent processing methods, attention-aware sparse learning architectures, and systems that integrate polarization sensing with neuromorphic computing. These and other approaches focus on practical implementation for UAV operations where power constraints, processing latency, and environmental variability must be simultaneously managed.

1. Optoelectronic Device with Integrated Surface-Emitting Laser Array, Dynamic Vision Sensor, and Neuromorphic Computing Platform

BROWN UNIVERSITY, 2025

A compact optoelectronic device for noninvasive imaging of targets obscured by dense turbid media, comprising a high-density surface-emitting laser array source, a dynamic vision sensor detector, and a chip-scale neuromorphic computing platform. The device integrates these components into a single functional whole, enabling real-time image reconstruction of targets with 100 μm spatial resolution and 100 ms time resolution. The system uses neuromorphic computing to process the dynamic vision sensor's spike train data, achieving asynchronous and low-latency image reconstruction.

Patent drawing: US2025106530A1

2. Real-Time Object Detection Method with Dual-Stage Illuminance-Dependent Processing and Line Clustering

TWINNY CO LTD, 2025

A real-time object detection method for robots that operates across varying illuminance levels. The method employs a two-stage approach: a high-illuminance loop using conventional object detection techniques, and a low-illuminance loop using line clustering of depth image data. The method determines object location based on the median of the detected cluster, with the line clustering stage providing robustness in low-light conditions.
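The two-stage policy can be sketched as follows. The lux threshold, the `max_gap` cluster tolerance, and all function names are assumptions for illustration, not values taken from the patent.

```python
def detect(illuminance, rgb_frame, depth_rows, lux_threshold=10.0,
           detector=None, max_gap=0.1):
    """Dual-stage detection sketch (thresholds and names are assumptions).
    High illuminance: defer to a conventional detector on the RGB frame.
    Low illuminance: cluster consecutive depth readings along a scan line
    and report the median of the largest cluster as the object location."""
    if illuminance >= lux_threshold:
        return detector(rgb_frame)          # conventional high-light path
    # Line clustering: split the depth scan where adjacent readings jump.
    clusters, current = [], [depth_rows[0]]
    for d in depth_rows[1:]:
        if abs(d - current[-1]) <= max_gap:
            current.append(d)
        else:
            clusters.append(current)
            current = [d]
    clusters.append(current)
    largest = max(clusters, key=len)
    largest.sort()
    return largest[len(largest) // 2]       # median depth of the cluster
```

Using the median of the cluster (rather than the mean) keeps the reported location robust to the occasional outlier reading that depth sensors produce in the dark.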

3. Robotic Order Fulfillment System with Event Camera for Real-Time Item Detection and Neuromorphic Computing

DEMATIC GMBH, 2025

A method and system for robotic order fulfillment in a warehouse that enables picking and placement of items while in motion. The system employs an event camera that captures pixel changes as the item moves, allowing for real-time item detection, localization, and pose calculation. The event camera's asynchronous data stream and low latency enable processing of visual scene analysis on-the-fly, eliminating the need for stationary picking targets and enabling continuous robotic movement. The system further utilizes neuromorphic computing and specialized hardware acceleration to optimize processing and achieve high-speed item recognition and robotic control.

Patent drawing: WO2025008055A1


4. Neuromorphic Optical Computing System with Attention-Aware Sparse Learning and Multi-Channel Representation

TSINGHUA UNIVERSITY, 2024

A neuromorphic optical computing architecture system that achieves high-performance machine vision applications through attention-aware sparse learning. The system employs a multi-channel representation module, an attention-aware optical neural network module, and an output module to process complex tasks at light speed. The architecture leverages the inherent sparsity and parallelism of light to optimize optical computation, enabling efficient processing of large-scale machine vision tasks.

5. In-Pixel Processing System with Photogate Sensors and Saccadic Algorithm for Image Acquisition and Analysis

ADAPTIVE COMPUTATION LLC, 2024

A brain-like in-pixel intelligent processing system for image acquisition and processing that emulates the human visual pathway to achieve rapid and accurate object recognition. The system comprises an in-pixel processing array whose processing units (each containing a photogate sensor, an averaging circuit, a subtraction circuit, and an absolute-value circuit) process raw gray-level information at the pixel level. A saccadic eye-movement algorithm selects output from the processing units based on the acquired image, enabling efficient object detection, classification, and tracking.

Patent drawing: US2024404097A1

6. BGF-YOLOv10: Small Object Detection Algorithm from Unmanned Aerial Vehicle Perspective Based on Improved YOLOv10

Junhui Mei, Wenqiu Zhu - MDPI AG, 2024

With the rapid development of deep learning, unmanned aerial vehicles (UAVs) have acquired intelligent perception capabilities, demonstrating efficient data collection across various fields. In UAV perspective scenarios, captured images often contain small and unevenly distributed objects, and are typically high-resolution. This makes object detection in UAV imagery more challenging than conventional detection tasks. To address this issue, we propose a lightweight object detection algorithm, BGF-YOLOv10, specifically designed for small object detection, based on an improved version of YOLOv10n. First, we introduce a novel YOLOv10 architecture tailored for small objects, incorporating BoTNet and variants of C2f and C3 in the backbone, along with an additional small object detection head, to enhance detection performance for small objects. Second, we embed GhostConv into both the backbone and head, effectively reducing the number of parameters by nearly half. Finally, we insert a Patch Expanding Layer module in the neck to restore the feature spatial resolution. Experimental results…

7. Optical Synapse Device with Transparent Electrode and Double-Oxide Layer for Integrated Sensing and Processing

ZHANG BAIZHOU, 2024

An optical synapse device for machine vision systems that integrates sensing, processing, and computing functions into a single device. The device comprises a transparent conductive electrode, a double-oxide active layer with resistive switching properties, and an electrically conductive layer. The double-oxide layer generates a current in response to light and voltage, enabling convolutional processing and neuromorphic computing. The device can be arranged in a matrix to perform convolution operations, and its transconductance can be modulated by different light wavelengths to implement various image processing kernels.
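The convolution that such a device matrix performs in the optical domain is the familiar weighted sum, with device transconductances playing the role of kernel weights. A pure-Python sketch of the equivalent digital operation (valid-mode, correlation orientation; all names are illustrative):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution sketch: each output value is the weighted
    sum that a matrix of optical synapse devices would produce, with the
    kernel weights standing in for wavelength-modulated transconductances.
    Correlation orientation (no kernel flip), as is common in vision code."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out
```

Swapping the kernel values corresponds, in the device, to re-modulating the transconductances with a different light wavelength to select a different image processing kernel.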

Patent drawing: US2024331334A1

8. Neuromorphic Vision Sensor Array with Event-Driven Image Reconstruction in Low-Light Conditions

SENSORS UNLIMITED INC, 2024

Neuromorphic vision (NMV) sensing in low-light environments enables efficient image reconstruction through passive sensing of light by an array of NMV sensors. The NMV array integrates light from each sensor, outputs event signals upon exceeding a threshold, and combines these signals to reconstruct images. The system achieves this by exploiting the unique characteristics of NMV sensors, including their ability to capture all available light information without saturation. The reconstruction process can be performed using compressed sensing techniques, enabling real-time image processing in low-light environments.
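Stripped of the compressed-sensing machinery, the core of event-driven reconstruction is integrating signed events back into an intensity estimate. The stand-in below is deliberately simplified: it assumes each event contributes a fixed ±theta step in log space, which is an illustrative assumption rather than the patent's reconstruction method.

```python
def reconstruct(events, width, height, theta=0.2):
    """Accumulate events into a log-intensity image, relative to an
    unknown per-pixel offset. Each event (x, y, polarity) with polarity
    +/-1 contributes +/-theta in log space. Plain integration shown here;
    the patent describes compressed-sensing reconstruction instead."""
    img = [[0.0] * width for _ in range(height)]
    for x, y, polarity in events:
        img[y][x] += polarity * theta
    return img
```

Because only changes are recorded, the absolute brightness offset of each pixel is unrecoverable from events alone; practical systems anchor it with an occasional intensity frame or a prior.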

Patent drawing: US2024292074A1

9. An All-Time Detection Algorithm for UAV Images in Urban Low Altitude

Yuzhuo Huang, Jingyi Qu, Li Wang - MDPI AG, 2024

With the rapid development of urban air traffic, Unmanned Aerial Vehicles (UAVs) are gradually being widely used in cities. Since UAVs are prohibited over important places in Urban Air Mobility (UAM), such as government buildings and airports, it is important to develop air-ground non-cooperative UAV surveillance for air security day and night. In this paper, an all-time UAV detection algorithm based on visible images during the day and infrared images at night is proposed. We construct a UAV dataset for urban visible backgrounds (UAVvisible) and a UAV dataset for urban infrared backgrounds (UAVinfrared). In the daytime, visible images are less reliable for UAV detection in foggy environments; we therefore incorporate a defogging algorithm into the detection network so that undistorted, defogged images are passed to the detector. At night, infrared images suffer from low resolution, unclear object contours, and complex backgrounds. We integrate attention mechanisms and the transformation of spatial feature maps…

10. Bionic Vision Platform with Integrated Polarization, Lidar, Inertial Sensors, and Deep Learning Detection

SOUTHEAST UNIVERSITY, 2024

A bionic vision multi-source information intelligent perception unmanned platform that integrates polarization vision, lidar, inertial sensors, and deep learning-based target detection for autonomous navigation and obstacle avoidance in complex environments. The platform combines bionic polarization vision with lidar and inertial sensors for accurate navigation, while a deep learning-based target detection module utilizes a monocular vision camera to detect targets and obstacles. The system enables high-precision positioning, autonomous navigation, and covert operation in both field and closed environments.

11. Event-Based Sensor Data Processing Method with Connectivity-Based Event Grouping for Feature Extraction

PROPHESEE, 2024

Method for processing event-based sensor data to extract features for computer vision applications. The method groups events based on connectivity criteria, including luminance differences, without requiring a complete image frame. It enables online clustering and feature extraction from event-based data, overcoming limitations of traditional frame-based approaches.
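Connectivity-based grouping can be illustrated with a small breadth-first clustering over a spatiotemporal neighborhood. The distance and time bounds below are illustrative assumptions, and the patented method additionally weighs luminance differences, which this sketch omits.

```python
from collections import deque

def group_events(events, max_dist=1, max_dt=2):
    """Group events (x, y, t) by connectivity: two events join the same
    cluster when they lie within `max_dist` pixels and `max_dt` time units
    of each other, directly or through a chain of neighbors. No frame is
    ever assembled; clusters grow as events are linked."""
    unvisited = set(range(len(events)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            xi, yi, ti = events[i]
            linked = [j for j in unvisited
                      if abs(events[j][0] - xi) <= max_dist
                      and abs(events[j][1] - yi) <= max_dist
                      and abs(events[j][2] - ti) <= max_dt]
            for j in linked:
                unvisited.remove(j)
            queue.extend(linked)
            cluster.extend(linked)
        clusters.append(sorted(cluster))
    return clusters
```

Each resulting cluster is a candidate feature (e.g., a moving object's edge) extracted without waiting for a complete image frame.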

Patent drawing: US11995878B2

12. Drone Navigation System with Dual-Mode Camera and Simulated Infrared Image Processing for Low Light Obstacle Avoidance

SKYDIO INC, 2024

Enabling autonomous aerial navigation for drones in low-light conditions without disabling obstacle avoidance. The drone uses a learning model trained on simulated infrared images for obstacle avoidance in night mode. In day mode, the drone still uses the cameras but filters out the infrared data to improve image quality for navigation. This allows the drone to navigate autonomously in low light without relying solely on GPS, avoiding the need to disable obstacle avoidance or fall back to manual piloting in low-light environments.

Patent drawing: US2024169719A1

13. Neuromorphic Sensor with Spatiotemporal Filter-Based Multiple Pathways for Event-Based Pixel Processing

UNIVERSITY OF PITTSBURGH - OF THE COMMONWEALTH SYSTEM OF HIGHER EDUCATION, 2024

A neuromorphic programmable multiple pathways event-based sensor that extends the notion of events to spatiotemporal filters acting on neighboring pixels, outputting different pathways to mimic biological retinas. The sensor enables low-bandwidth, precisely timed information extraction from scenes, with architectures and devices that implement this approach.

14. Zero-referenced Enlightening and Restoration for UAV Nighttime Vision

Yuezhou Li, Yuzhen Niu, Rui Xu - Institute of Electrical and Electronics Engineers (IEEE), 2024

Unmanned aerial vehicle (UAV) based visual systems suffer from poor perception at nighttime. There are three challenges in enlightening nighttime vision for UAVs. First, UAV nighttime images differ from underexposed images in their statistical characteristics, limiting the performance of general low-light image enhancement (LLIE) methods. Second, when enlightening nighttime images, artifacts tend to be amplified, distracting the visual perception of UAVs. Third, owing to the inherent scarcity of paired data in the real world, it is difficult for UAV nighttime vision to benefit from supervised learning. To meet these challenges, we propose a zero-referenced enlightening and restoration network (ZERNet) for improving the perception of UAV vision at nighttime. Specifically, by estimating a nighttime enlightening map (NE-map), a pixel-to-pixel transformation is conducted to enlighten dark pixels while suppressing overbright pixels. Furthermore, we propose self-regularized restoration to preserve semantic content and restrict artifacts in the final result…

15. LDHD‐Net: A Lightweight Network With Double Branch Head for Feature Enhancement of UAV Targets in Complex Scenes

Cong Zhang, Qi Gao, Rui Shi - Wiley, 2024

The development of small UAV technology has led to the emergence of new challenges in UAV countermeasures. Timely detection of UAVs can effectively prevent potential infringements on airspace and privacy. Methods based on deep learning currently demonstrate excellent performance in target detection; in complex scenes, however, false alarms (FAs) and misdetections tend to occur at a higher rate. To solve these problems, we propose a lightweight infrared small target detection algorithm, LDHD-Net. First, we design a novel GhostShuffle module in the backbone network to enhance feature extraction capability, and we remove redundant layers to make the backbone more lightweight. Second, we design a hierarchical attention enhancement module in the neck network to improve the saliency of UAV targets and reduce background noise interference. In addition, we design a novel small target detection structure and prediction heads in the shallow layers of the network to improve small target detection accuracy. Finally, we design a…

16. Enhancing Nighttime UAV Tracking with Light Distribution Suppression

Liangliang Yao, Chang‐Hong Fu, Yiheng Wang, 2024

Visual object tracking has boosted extensive intelligent applications for unmanned aerial vehicles (UAVs). However, the state-of-the-art (SOTA) enhancers for nighttime UAV tracking always neglect the uneven light distribution in low-light images, inevitably leading to excessive enhancement in scenarios with complex illumination. To address these issues, this work proposes a novel enhancer, i.e., LDEnhancer, enhancing nighttime UAV tracking with light distribution suppression. Specifically, a novel image content refinement module is developed to decompose the light distribution information and image content information in the feature space, allowing for the targeted enhancement of the image content information. Then this work designs a new light distribution generation module to capture light distribution effectively. The features with light distribution information and image content information are fed into the different parameter estimation modules, respectively, for the parameter map prediction. Finally, leveraging two parameter maps, an innovative interweave iteration adjustment…

17. Fusion flow-enhanced graph pooling residual networks for Unmanned Aerial Vehicles surveillance in day and night dual visions

Alam Noor, Kai Li, Eduardo Tovar - Elsevier BV, 2024

Recognizing unauthorized Unmanned Aerial Vehicles (UAVs) within designated no-fly zones throughout the day and night is of paramount importance, as unauthorized UAVs pose a substantial threat to both civil and military aviation safety. However, recognizing UAVs day and night with dual-vision cameras is nontrivial, since red-green-blue (RGB) images suffer from a low detection rate under insufficient light, such as on cloudy or stormy days, while black-and-white infrared (IR) images struggle to capture UAVs that overlap with the background at night. In this paper, we propose a new optical flow-assisted graph-pooling residual network (OF-GPRN), which significantly enhances the UAV detection rate in day and night dual visions. The proposed OF-GPRN develops a new optical fusion to remove superfluous backgrounds, which improves RGB/IR imaging clarity. Furthermore, OF-GPRN extends optical fusion by incorporating a graph residual split attention network and a feature pyramid, which refines the perception of UAVs, leading to a higher success rate in UAV detection.

18. Real-Time Neuromorphic Navigation: Integrating Event-Based Vision and Physics-Driven Planning on a Parrot Bebop2 Quadrotor

Amogh Joshi, S. K. Sanyal, Kaushik Roy, 2024

In autonomous aerial navigation, real-time and energy-efficient obstacle avoidance remains a significant challenge, especially in dynamic and complex indoor environments. This work presents a novel integration of neuromorphic event cameras with physics-driven planning algorithms implemented on a Parrot Bebop2 quadrotor. Neuromorphic event cameras, characterized by their high dynamic range and low latency, offer significant advantages over traditional frame-based systems, particularly in poor lighting conditions or during high-speed maneuvers. We use a DVS camera with a shallow Spiking Neural Network (SNN) for event-based object detection of a moving ring in real-time in an indoor lab. Further, we enhance drone control with physics-guided empirical knowledge inside a neural network training mechanism, to predict energy-efficient flight paths to fly through the moving ring. This integration results in a real-time, low-latency navigation system capable of dynamically responding to environmental changes while minimizing energy consumption. We detail our hardware setup, control loop, and…

19. Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors

Jakub Mandula, Jonas Kühne, Luca Pascarella, 2024

Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications. However, uncontrolled access to restricted areas threatens privacy and security, so prevention and detection of UAVs are pivotal to guarantee confidentiality and safety. Although active scanning, mainly based on radar, is one of the most accurate technologies, it can be expensive and less versatile than passive inspection, e.g., object recognition. Dynamic vision sensors (DVS) are bio-inspired, event-based vision devices that timestamp pixel-level brightness changes in fast-moving scenes, a property well suited to low-latency object detection. This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection. In particular, we propose a setup that exploits a DVS as an alternative to RGB cameras in a real-time, low-power configuration. Our approach leverages the high dynamic range (HDR) and background suppression of the DVS and, when trained with various fast-moving drones, outperforms RGB input in suboptimal ambient conditions…

20. Infrared UAV Target Detection Based on Continuous-Coupled Neural Network

Zhuoran Yang, Jing Lian, Jizhao Liu - MDPI AG, 2023

The detection of unmanned aerial vehicles (UAVs) is of great significance to social communication security. Infrared detection has the advantage of being largely immune to interference from environmental and other factors, and can detect UAVs in complex environments. Since infrared detection equipment is expensive and data collection is difficult, few UAV infrared image datasets exist, making it difficult to train deep neural networks. In addition, infrared images contain background clutter and noise, such as heavy clouds and buildings, so both the signal-to-clutter ratio and the signal-to-noise ratio are low, and traditional methods struggle with the detection task. These challenges make infrared UAV detection difficult. To solve these problems, this work drew upon the visual processing mechanism of the human brain to propose an effective framework for UAV detection in infrared images. The framework first determines the relevant parameters of the continuous-coupled neural network (CCNN) through the image…

21. Low-Light Aerial Image Enhancement Algorithm Based on Retinex Theory

Chenying Ma, Xu Cheng, Peng Zhou - IEEE, 2023

The existing low-light image enhancement algorithms are ineffective for UAV aerial images due to challenges such as mountain shading and insufficient illumination during cloud imaging. To overcome these challenges, we propose a reflection image enhancement algorithm based on the principles of Retinex theory, integrating illumination estimation networks and attention mechanisms. This approach improves the quality of aerial images with uneven illumination while avoiding overexposure. We employ a lightweight convolutional neural network to estimate low-light image illumination and enhance reflection images. Furthermore, we incorporate an attention mechanism into the reflection image enhancement process to mitigate overexposure and reduce noise. Experimental results confirm the effectiveness of the proposed image enhancement method in mitigating image degradation caused by mountain shading and inadequate illumination during cloud imaging, consequently improving the quality of UAV-captured aerial images.
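Retinex-style enhancement rests on the decomposition I = R × L (image = reflectance × illumination): dividing out an illumination estimate recovers reflectance, usually in log space. The 1-D sketch below substitutes a box blur for the paper's CNN illumination estimator, purely to illustrate the decomposition; the kernel size and function name are assumptions.

```python
import math

def single_scale_retinex(row, kernel=3):
    """1-D single-scale Retinex sketch: estimate illumination with a box
    blur over a local window, then take the log-ratio of the signal to the
    illumination to recover reflectance. Input values must be > 0.
    (The paper uses a lightweight CNN, not a blur, for this estimate.)"""
    half = kernel // 2
    out = []
    for i, v in enumerate(row):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        illum = sum(row[lo:hi]) / (hi - lo)     # local illumination estimate
        out.append(math.log(v) - math.log(illum))
    return out
```

A uniformly dim scene yields a flat reflectance (all zeros in log space), which is exactly why the method brightens unevenly lit regions without overexposing already-bright ones.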

22. Neuromorphic Vision Sensor System with Pixel-Level Controlled Light Source Array Integration

LUMILEDS LLC, 2023

A system that integrates a neuromorphic vision sensor with a light source array, where the sensor's output controls the light source's intensity, duration, or color at a pixel-level granularity, enabling adaptive illumination and improved sensor performance.

Patent drawing: US11800233B2

23. In-flight testing of wide-angle low-light for UAS automated navigation

Julie Buquet, Simon-Gabriel Beauvais, Patrice Roulet - SPIE, 2023

Automated navigation of Unmanned Aircraft Systems (UAS) across a broad range of illumination scenarios requires improved, real-time depth estimation and long-distance obstacle detection. We present a lightweight ultra-wide-angle camera optimized for low-light illumination (down to < 1 lux) mounted on a drone and compare its optical performance with other modules on the market. We also capture images from the drone in flight, test them on monocular depth estimation neural networks, and show that our camera module is suitable for low-light navigation.

24. A Spatial-temporal Detecting Method for Low-altitude Slow and Small Target

Feng Chen, Shengjie Wang, Yikun Lyu - IEEE, 2023

UAVs, as rapidly developing and widely applied aircraft, bring many benefits but also pose a great threat to low-altitude safety. The complexity of low-altitude environments and the small size, low speed, and agility of UAVs make their detection challenging. Visual methods are real-time and flexible, making them suitable for fast detection at close range, but extracting UAV features from low-contrast, low-resolution images remains an open problem. This paper presents a detection method that combines morphological filtering in the spatial domain with correlation in the temporal domain. Image contrast is enhanced by extracting the background with morphological filtering, which improves the efficiency and reliability of detection. Because object positions are continuous across frames, incorrect results are corrected by computing the correlation of positions in the temporal domain. Theoretical analysis and experiments show that this method can detect UAVs successfully…
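The spatial-domain half of such a pipeline is commonly a white top-hat transform: subtracting the morphological opening (erosion followed by dilation) removes the slowly varying background so a small bright target stands out. The 1-D stdlib sketch below stands in for the 2-D filtering described in the paper; the window size is an illustrative assumption.

```python
def top_hat(row, k=3):
    """1-D white top-hat: signal minus its morphological opening.
    The opening (erode then dilate with a window of size k) keeps the
    background; subtracting it isolates small bright structures.
    A stand-in for the paper's 2-D spatial morphology filtering."""
    def erode(s):
        return [min(s[max(0, i - k // 2):i + k // 2 + 1]) for i in range(len(s))]
    def dilate(s):
        return [max(s[max(0, i - k // 2):i + k // 2 + 1]) for i in range(len(s))]
    opening = dilate(erode(row))
    return [a - b for a, b in zip(row, opening)]
```

The temporal half would then check that candidate peaks occupy correlated positions across consecutive frames, rejecting isolated noise spikes that the spatial filter lets through.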

25. Using Image Enhancement for Target Tracking

Xin Li, Huiling Chen, Yanhua Shao - IEEE, 2023

In UAV aerial photography scenes, lighting changes, image blur, low resolution, and other factors strongly degrade tracking performance, yet previous target tracking algorithms have mainly studied robust tracking under sufficient light and high resolution. This article proposes an adaptive image enhancement algorithm for UAV target tracking that achieves robust tracking under dark and changing illumination. First, an adaptive image enhancement module is constructed to identify dark-light scenes and compensate image brightness and contrast accordingly. Second, a dynamic constraint strategy is added to constrain the difference in tracking response, enabling the tracker to adapt over time. Finally, experiments on two UAV benchmarks demonstrate the superiority of our method over nine other state-of-the-art trackers.

26. All-Day Object Tracking for Unmanned Aerial Vehicle

Bowen Li, Changhong Fu, Fangqiang Ding - Institute of Electrical and Electronics Engineers (IEEE), 2023

Unmanned aerial vehicles (UAVs) have facilitated a wide range of real-world applications and attracted extensive research in the mobile computing field. In particular, developing real-time, robust visual onboard trackers for all-day aerial maneuvers can remarkably broaden the scope of intelligent UAV deployment. However, prior tracking methods have focused on robust tracking in well-illuminated scenes, ignoring trackers' ability to be deployed in the dark. In darkness, conditions can be more complex and harsh, often causing poor robustness or outright tracking failure. To this end, this work proposes a novel discriminative correlation filter-based tracker with illumination-adaptive and anti-dark capability, namely ADTrack. ADTrack first exploits image illuminance information to adapt the model to the given light condition. Then, by virtue of an efficient enhancer, ADTrack carries out image pretreatment in which a target-aware mask is generated. Benefiting from the mask, ADTrack solves a novel dual regression problem in which dual filters…

27. Modified Siamese Network Based on Feature Enhancement and Dynamic Template for Low-Light Object Tracking in UAV Videos

Lifan Sun, Shuaibing Kong, Zhe Yang - MDPI AG, 2023

Visual object tracking by unmanned aerial vehicles (UAVs) under low-light conditions is a crucial component of applications such as night surveillance, indoor searches, night combat, and all-weather tracking. However, the majority of existing tracking algorithms are designed for optimal lighting conditions. In low-light environments, images captured by UAVs typically exhibit reduced contrast, brightness, and signal-to-noise ratio, which hampers the extraction of target features. Moreover, the target's appearance in low-light UAV video sequences often changes rapidly, rendering traditional fixed-template tracking mechanisms inadequate and resulting in poor tracker accuracy and robustness. This study introduces a low-light UAV object tracking algorithm (SiamLT) that leverages image feature enhancement and a dynamic template-updating Siamese network. Initially, the algorithm employs a low-light enhancer built on an iterative noise-filtering framework to boost the features of low-light images prior to feature extraction…

28. Neuromorphic Vision System with Retinomorphic Array and Neural Network for Parallel Signal Processing

NANJING UNIVERSITY OF TECHNOLOGY, 2023

A neuromorphic vision system that mimics the human visual system's parallel and low-energy processing mode. The system integrates a retinomorphic array and a neural network, where the array converts visual information into electrical signals and the network processes these signals for visual cognition. The array is composed of N×N optoelectronic devices with independently regulated gates, and the neural network performs information processing on the input electrical signals. The system achieves real-time and low-power processing of visual information, enabling applications such as human-computer interaction, autonomous driving, and intelligent security.

29. ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing UAV-Platform with Event-Based and Frame-Based Cameras

Sizhen Bian, Lukas Schulthess, G Rutishauser - IEEE, 2023

Interest in dynamic vision sensor (DVS)-powered unmanned aerial vehicles (UAVs) is rising, especially due to the microsecond-level reaction time of the bio-inspired event sensor, which increases robustness and reduces the latency of perception tasks compared to an RGB camera. This work presents ColibriUAV, a UAV platform with both frame-based and event-based camera interfaces for efficient perception and near-sensor processing. The platform is designed around Kraken, a novel low-power RISC-V System on Chip with two hardware accelerators targeting spiking neural networks and deep ternary neural networks. Kraken can efficiently process both event data from a DVS camera and frame data from an RGB camera; a key feature is its integrated, dedicated DVS camera interface. This paper benchmarks the end-to-end latency and power efficiency of the neuromorphic, event-based UAV subsystem, demonstrating state-of-the-art event throughput of 7200 event frames per second at a power consumption of 10.7 mW, over 6.6 times faster…

30. Development of Mathematical Methods and Algorithms for Filtering Images Obtained from Unmanned Aerial Vehicle Camera

Hein Htet Zaw, Hein Zaw, Portnov E. Mikhailovich - IEEE, 2023

Recently, the use of unmanned aerial vehicles (UAVs) has become an active research topic in the field of aerial photography. This growing popularity raises the tasks of finding optimal hardware and software configurations for UAVs, developing systems that compensate for adverse environmental effects, building navigation systems, and so on; the key factors behind the trend are the relatively low cost of implementing such projects and the speed of obtaining data. The main problems preventing full automation of UAV imagery processing are motion blur with a still camera, full-image blur with camera movement, and camera focus blur. When shooting from a UAV in bad weather, blurry video frames often occur and require filtering: the blur can interfere with visual analysis and interpretation of data, cause errors, and reduce the accuracy of automatic photogrammetric processing algorithms. This article describes an algorithm that performs element-by-element image filtering based on neural-like structures.

31. Fast and Lightweight UAV-based Road Image Enhancement Under Multiple Low-Visibility Conditions

Chaitanya Kapoor, Aadith Warrier, Mohit Singh - IEEE, 2023

The amalgamation of Unmanned Aerial Vehicle (UAV) based systems with models built on Artificial Intelligence (AI) and Computer Vision approaches has enabled several applications in urban planning and smart cities, such as remote health monitoring of roads and infrastructure. However, most such existing models are trained and evaluated for clear lighting conditions and do not perform well under low visibility. This work proposes a fast and lightweight approach for deployment on UAV-based systems that can (i) detect the low-visibility condition in a road image captured by a UAV, and (ii) alleviate it and enhance the quality of the road image. The proposed approach achieves state-of-the-art results and thus establishes itself as an essential precursor to downstream Computer Vision tasks related to remote monitoring of roads, such as identification of different distress conditions.

32. Integrated Frame-Rate and Neuromorphic Vision System with Single Optics Module and Shared Readout for Dynamic Target Detection and Tracking

SENSORS UNLIMITED INC, 2023

Combining frame-rate imaging and neuromorphic vision for real-world target detection and tracking from high altitude with constrained resources. The system uses a single optics module to focus light from the scene onto both a frame-rate sensor and an asynchronous neuromorphic sensor, and reads out both types of data using a common ROIC. Initially, the frame-rate sensor acquires images at a low frame rate; when an event is detected asynchronously, the frame rate is increased to capture more detail. This leverages the strengths of both sensors, frame-rate for spatial resolution and neuromorphic for temporal resolution, in a complementary way. Additionally, template matching using trained event and intensity data helps identify targets.
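A minimal sketch of the event-triggered readout policy described above. The thresholds, rates, and hold logic are illustrative assumptions, not values from the patent: the frame-rate sensor idles at a low rate, and a burst of asynchronous events temporarily raises the capture rate.

```python
# Hypothetical adaptive-frame-rate controller: all constants are
# illustrative assumptions, not taken from the patent.
LOW_FPS = 5
HIGH_FPS = 120
EVENT_BURST_THRESHOLD = 500   # events per readout interval
HOLD_INTERVALS = 10           # stay at high rate this long after a burst

class AdaptiveFrameRateController:
    def __init__(self):
        self.fps = LOW_FPS
        self.hold = 0

    def update(self, events_in_interval: int) -> int:
        """Return the frame rate to use for the next readout interval."""
        if events_in_interval >= EVENT_BURST_THRESHOLD:
            # Event burst detected by the neuromorphic sensor: capture detail.
            self.fps = HIGH_FPS
            self.hold = HOLD_INTERVALS
        elif self.hold > 0:
            self.hold -= 1
            if self.hold == 0:
                self.fps = LOW_FPS   # decay back to the idle rate
        return self.fps
```

The hold counter keeps the sensor at the high rate for a short window after the burst ends, so brief pauses in event activity do not cause rate thrashing.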

33. Cone-Rod Dual-Modality Neuromorphic Vision Sensor with Integrated Voltage-Mode and Current-Mode Active Pixel Sensors

TSINGHUA UNIVERSITY, 2023

A cone-rod dual-modality neuromorphic vision sensor that integrates both voltage-mode and current-mode active pixel sensors (APS) to mimic the human retina's cone and rod cells. The sensor combines high-precision light intensity information from voltage-mode APS with high-speed spatial gradient information from current-mode APS, enabling simultaneous acquisition of both modalities. This dual-modality output mode enables the sensor to achieve higher image quality, wider dynamic range, and faster shooting speeds compared to traditional single-modal neuromorphic vision sensors.

Patent drawing: US2023050794A1

34. Dual-Modality Neuromorphic Vision Sensor with Integrated Current-Mode and Voltage-Mode Pixel Circuits

UNIV TSINGHUA, 2023

A dual-modality neuromorphic vision sensor that combines current-mode and voltage-mode APS circuits to mimic both rod and cone cells of the human retina. The current-mode circuit perceives light intensity gradient information for high-speed and wide dynamic range imaging, while the voltage-mode circuit provides precise light intensity information for high-quality imaging. The sensor integrates both modalities into a single pixel array, enabling simultaneous capture of both types of visual information.
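A toy one-dimensional sketch of how the two modalities described in entries 33 and 34 could complement each other: where the precise voltage-mode intensity reading saturates, the pixel value is rebuilt by integrating the current-mode spatial gradient from the nearest valid neighbor. This is an illustrative assumption, not the patents' circuit-level method.

```python
import numpy as np

def reconstruct_from_gradient(intensity, gradient, saturated):
    """Rebuild saturated voltage-mode pixels (1-D sketch) by integrating
    the current-mode spatial gradient from the last unsaturated pixel.
    gradient[i] is assumed to approximate intensity[i+1] - intensity[i]."""
    out = intensity.astype(float).copy()
    for i in range(1, len(out)):
        if saturated[i]:
            out[i] = out[i - 1] + gradient[i - 1]
    return out
```

A real sensor would solve the equivalent 2-D integration (a Poisson problem), but the 1-D case shows why gradient information extends the usable dynamic range.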

35. Neuromorphic System for 3D Object Tracking with Event-Driven Depth and Spatial Processing Units

The Open University, 2023

A neuromorphic system for 3D object tracking using event cameras, comprising an event camera, depth estimation unit, spatial tracking unit, and error correction unit. The system generates depth and spatial tracking data from pixel-level luminance-transient events, and processes these signals to produce error-correcting data for tracking. The error-correcting data is then processed by neural networks to generate control signals for tracking, enabling real-time 3D motion correction in adaptive robotic systems.

36. TF-Net: Deep Learning Empowered Tiny Feature Network for Night-Time UAV Detection

Maham Misbah, Misha Urooj Khan, Zhaohui Yang - Springer Nature Switzerland, 2023

Technological advancements have normalized the usage of unmanned aerial vehicles (UAVs) in every sector, spanning from military to commercial, but they also pose serious security concerns due to their enhanced functionalities and easy access to private and highly secured areas. Several incidents involving UAVs have raised security concerns, prompting UAV detection research. Visual techniques are widely adopted for UAV detection, but they perform poorly at night, in complex backgrounds, and in adverse weather conditions. Therefore, a robust night-vision-based drone detection system is required that can efficiently tackle this problem. Infrared cameras are increasingly used for nighttime surveillance due to their wide application in night vision equipment. This paper uses a deep-learning-based TinyFeatureNet (TF-Net), an improved version of YOLOv5s, to accurately detect UAVs at night using infrared (IR) images. In the proposed TF-Net, we introduce architectural changes in the neck and backbone of YOLOv5s. We also simulated four different YOLOv5 models…

37. Enhancement algorithm of low illumination image for UAV images inspired by biological vision

Dianwei Wang, Wang LIU, Jie Fang - EDP Sciences, 2023

To address the issues of low brightness, high noise, and obscured details in UAV aerial low-light images, this paper proposes a UAV aerial low-light image enhancement algorithm based on a dual-path design inspired by the dual-path model of the human visual system. First, a U-Net network based on residual units is constructed to decompose the UAV aerial low-light image into a structural path and a detail path. Then, an improved generative adversarial network (GAN) is proposed to enhance the structural path, with an edge enhancement module added to strengthen the edge information of the image. In the detail path, a noise suppression strategy is adopted to reduce the influence of noise on the image. Finally, the outputs of the two paths are fused to obtain the enhanced image. Experimental results show that the proposed algorithm visually improves the brightness and detail information of the image, and its objective evaluation indices are better than those of the comparison algorithms. In addition, this paper also verifies the influence of the proposed algorithm on the target detection algorithm under low illumination.

38. Low-Light Enhancer for UAV Night Tracking Based on Zero-DCE++

Yihong Zhang, Yinjian Li, Qin Lin - Scientific Research Publishing, Inc., 2023

Unmanned aerial vehicle (UAV) target tracking tasks can currently be completed successfully in daytime scenes with sufficient lighting, but not in nighttime scenes with inadequate lighting, poor contrast, and low signal-to-noise ratio. This letter presents an enhanced low-light enhancer for UAV nighttime tracking based on Zero-DCE++, chosen for its low processing cost and quick inference. We developed a lightweight UCBAM capable of integrating channel information and spatial features, and offered a carefully considered curve projection model in light of the low signal-to-noise ratio of night scenes. This method significantly improved the tracking performance of the UAV tracker in night scenes when tested on the public UAVDark135 benchmark and compared with other cutting-edge low-light enhancers. By applying our work to different trackers, this research shows how broadly applicable it is.
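The core operation Zero-DCE-style enhancers apply is an iterative pixel-wise light-enhancement curve, LE(x) = x + a·x·(1 − x), where each per-pixel map a in [−1, 1] is predicted by a small network. The sketch below stands in plain arrays for those network outputs; it shows the curve mechanics only, not this paper's modified projection model.

```python
import numpy as np

def apply_light_enhancement_curves(x, alphas):
    """Zero-DCE-style iterative curve adjustment. Each iteration applies
    LE(x) = x + a * x * (1 - x) with a per-pixel map a in [-1, 1];
    values in [0, 1] stay in [0, 1], so no clipping is needed."""
    out = x
    for a in alphas:
        out = out + a * out * (1.0 - out)
    return out
```

With a > 0 dark pixels are brightened more than bright ones, which is why stacking a handful of these curves lifts underexposed night frames without blowing out highlights.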

39. Hardware Implementation of Ultra‐Fast Obstacle Avoidance Based on a Single Photonic Spiking Neuron

Shuang Gao, Shuiying Xiang, Ziwei Song - Wiley, 2023

Visual obstacle avoidance is widely applied in the unmanned aerial vehicle (UAV) and mobile robot fields. A simple system architecture, low power consumption, optimized processing, and real-time performance are strongly needed due to the limited payload of some mini UAVs. To address these issues, an obstacle avoidance system harnessing the rate-encoding features of a photonic spiking neuron based on a Fabry–Pérot (FP) laser is proposed, which simulates monocular vision. Here, time to collision is used to describe the distance of obstacles. The experimental results show that the FP laser excites ultra-fast spike responses in real time, facilitating the generation of control commands by motor neurons to realize accurate decision-making. Four mobile obstacle avoidance scenarios (constant-velocity approach, approach and retreat, motion involving stationary phases, and approach at different velocities), along with obstacle avoidance problems involving multiple stationary obstacles appearing simultaneously, are experimentally analyzed.…
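The monocular looming cue behind the time-to-collision measure is standard: tau = theta / (d theta / dt), the angular size of the obstacle divided by its rate of expansion. The rate-encoding mapping below is an illustrative assumption (the paper's photonic neuron encodes this optically, not with this formula).

```python
def time_to_collision(angular_size, angular_expansion_rate):
    """Monocular looming cue: tau = theta / (d theta / dt), in seconds."""
    return angular_size / angular_expansion_rate

def spike_rate_from_ttc(ttc, max_rate=1000.0, tau_ref=1.0):
    """Illustrative rate encoding (an assumption, not the paper's exact
    mapping): the spike rate grows as time-to-collision shrinks, and
    saturates at max_rate for imminent collisions."""
    return min(max_rate, max_rate * tau_ref / max(ttc, 1e-6))
```

An obstacle subtending 0.1 rad and expanding at 0.05 rad/s is two seconds from collision regardless of its absolute size or distance, which is what makes the cue usable with a single camera.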

40. Visual Image Design of the Internet of Things Based on AI Intelligence

Tian Tian - Elsevier BV, 2023

Visual object detection has emerged as a critical technology for Unmanned Arial Vehicle (UAV) use due to advances in computer vision. However, the detection performance is much worse for small targets than for big ones, and failed detection is commonplace when attempting to identify such objects for small, low-power devices. UAVs play a crucial role in delivering IoT services. Because of their limited battery life, these gadgets are limited in their range of communication. Because of the IoT, UAVs can be seen as terminal devices connected to a large network where a swarm of other UAVs is coordinating their motions, directing one another, and maintaining watch over locations outside its visual range. One of the essential components of UAV-based applications is the ability to recognize objects of interest in aerial photographs taken by UAVs. While aerial photos might be useful, object detection is challenging. As a result, capturing aerial photographs with UAVs is a unique challenge since the size of things in these images might vary greatly. The study proposal included specific inform... Read More

41. Dynamic Obstacle Avoidance for Unmanned Aerial Vehicle Using Dynamic Vision Sensor

Xiangyu Zhang, Junbo Tie, Jian Feng Li - Springer Nature Switzerland, 2023

Obstacle avoidance in dynamic environments is a critical issue in unmanned aerial vehicle (UAV) applications. Current solutions rely on deep reinforcement learning (DRL), which demands significant computing power and energy, limiting UAVs with constrained onboard computing resources. A combination of a dynamic vision sensor (DVS) and a spiking neural network (SNN) can be used for fast perception and low energy consumption. This work proposes an obstacle avoidance framework that uses DVS and SNN-based object detection algorithms to identify obstacles and a lightweight action decision algorithm to generate action commands. Simulation experiments show that the UAV can avoid 70% of dynamic obstacles, with an estimated power consumption of 1.5 to 13.5 milliwatts, and an overall delay up to 7% lower than that of reinforcement learning methods.
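The basic unit in SNN pipelines like the one above is the leaky integrate-and-fire (LIF) neuron; a minimal discrete-time step is sketched below (the leak and threshold values are illustrative assumptions).

```python
def lif_step(v, input_current, v_thresh=1.0, leak=0.9):
    """One discrete step of a leaky integrate-and-fire neuron.
    Returns (new_membrane_potential, spike), resetting on a spike."""
    v = leak * v + input_current     # leaky integration of input
    if v >= v_thresh:
        return 0.0, 1                # fire and reset
    return v, 0
```

Because the membrane potential decays between inputs, the neuron only fires on sufficiently dense event activity, which is what makes DVS+SNN perception both sparse and low-power.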

42. A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid

Ning Ma, Yanlong Cao, Zhi Zhang - Cambridge University Press (CUP), 2023

Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAVs) recently. However, Sense and Avoid (SAA) capability is largely limited by environmental visibility, which brings hazards to flight safety in low-illumination or nighttime conditions. To solve this critical problem, an image enhancement approach is proposed in this paper to improve image quality in low-illumination conditions. Considering the complementarity of visible and infrared images, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising solution to improve the SAA ability of UAVs. First, the source image is decomposed into a texture layer and a structure layer, since infrared images are good at characterising structural information and visible images have richer texture information. Both the structure and texture layers are transformed into the sparse convolutional domain through the CSR mechanism, and the CSR coefficient maps are then fused via activity-level assessment. Finally, the image is synthesised…
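A toy version of the activity-level fusion step: at each position, keep the coefficient with the larger magnitude, so the strongest response from either sensor survives into the fused layer. This stands in for the CSR-coefficient fusion; the paper's actual activity measure may aggregate over neighborhoods.

```python
import numpy as np

def fuse_by_activity(coef_visible, coef_infrared):
    """Max-absolute-value fusion of two coefficient maps: a simple
    activity-level rule that preserves visible texture detail and
    infrared structural responses wherever each is stronger."""
    return np.where(np.abs(coef_visible) >= np.abs(coef_infrared),
                    coef_visible, coef_infrared)
```

Max-abs selection is a common baseline for sparse-coefficient fusion because coefficient magnitude is a direct proxy for local salience in either modality.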

43. Directly-trained Spiking Neural Networks for Deep Reinforcement Learning: Energy efficient implementation of event-based obstacle avoidance on a neuromorphic accelerator

Luca Zanatta, Alfio Di Mauro, Francesco Barchi - Elsevier BV, 2023

Spiking Neural Networks (SNN) promise extremely low-power and low-latency inference on neuromorphic hardware. Recent studies demonstrate the competitive performance of SNNs compared with Artificial Neural Networks (ANN) in conventional classification tasks. In this work, we present an energy-efficient implementation of a Reinforcement Learning (RL) algorithm using SNNs to solve an obstacle avoidance task performed by an Unmanned Aerial Vehicle (UAV), taking a Dynamic Vision Sensor (DVS) as event-based input. We train the SNN directly, improving upon state-of-the-art implementations based on hybrid (not directly trained) SNNs. For this purpose, we devise an adaptation of the Spatio-Temporal Backpropagation algorithm (STBP) for RL. We then compare the SNN with a state-of-the-art Convolutional Neural Network (CNN) designed to solve the same task. To this aim, we train both networks by exploiting a photorealistic training pipeline based on AirSim. To achieve a realistic latency and throughput assessment for embedded deployment, we designed and trained three different embedded SNN versions to be…
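Direct training methods like STBP work around the non-differentiable spike by using a surrogate gradient: the Heaviside spike function is kept in the forward pass, but its zero-almost-everywhere derivative is replaced by a smooth or box-shaped approximation near the threshold. A minimal version (the rectangular surrogate with illustrative width) is:

```python
def heaviside_spike(v, v_th=1.0):
    """Forward pass: a spike fires when the potential crosses threshold."""
    return 1.0 if v >= v_th else 0.0

def surrogate_grad(v, v_th=1.0, a=0.5):
    """Backward pass: rectangular surrogate derivative used in
    STBP-style training. Within a band of width 2a around the threshold
    the 'derivative' is 1/(2a); outside it, zero. a is an assumed
    hyperparameter, not the paper's value."""
    return 1.0 / (2 * a) if abs(v - v_th) < a else 0.0
```

This is what lets gradients flow backward through spike times, so the SNN can be trained end-to-end rather than converted from a pre-trained ANN.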

44. Design of Airborne Large Aperture Infrared Optical System Based on Monocentric Lens

Jiyan Zhang, Teng Qin, Zhexin Xie - MDPI AG, 2022

Conventional reconnaissance camera systems have been flown on manned aircraft, where the weight, size, and power requirements are not stringent. Today, however, these parameters are important for unmanned aerial vehicles (UAVs). This article provides a solution for the design of airborne large-aperture infrared optical systems, based on a monocentric lens, that can meet the strict criteria of aerial reconnaissance UAVs for a wide field of view (FOV) and light weight in airborne electro-optical pod cameras. A monocentric lens has a curved image plane, consisting of an array of microsensors, which can provide an image with 368 megapixels over a 100° FOV. We obtained the initial structure of a five-glass (5GS) asymmetric monocentric lens with an air gap using ray tracing and global optimization algorithms. According to the design results, the ground sampling distance (GSD) of the system is 0.33 m at 3000 m altitude. The full-field modulation transfer function (MTF) value of the system is more than 0.4 at a Nyquist frequency of 70 lp/mm. We present a primary thermal control method…
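The quoted GSD figure follows from the standard nadir-view relation GSD = H · p / f (altitude times pixel pitch over focal length). The pitch and focal length below are hypothetical values chosen only to show the arithmetic reproducing 0.33 m at 3000 m; the paper's actual optical parameters may differ.

```python
def ground_sampling_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground footprint of one pixel for a nadir view: GSD = H * p / f."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical parameters: 1.1 um pixel pitch, 10 mm focal length.
gsd = ground_sampling_distance(3000.0, 1.1e-6, 0.01)
```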

45. Next-generation of sUAS 360 surround vision cameras designed for automated navigation in low-light conditions

Julie Buquet, Simon-Gabriel Beauvais, Jocelyn Parent - SPIE, 2022

The next generation of sUAS (small Unmanned Aircraft Systems) for automated navigation will have to perform in challenging conditions: bad weather, high and low temperatures, and from dusk to dawn. The paper presents experimental results from a new wide-angle vision camera module specially optimized for low light. We present the optical characteristics of this system as well as experimental results obtained for different sense-and-avoid functionalities. We also show preliminary results from using our camera module's images in neural networks for different scene-understanding tasks.

46. Image Processing Method with Neural Network Emulating Synaptic Connectivity for Drone-Captured Data

X DEV LLC, 2022

A method for processing drone-captured images using a neural network that emulates the brain's synaptic connectivity. The network receives an image representation, processes it through a brain-emulation sub-network with architecture based on biological neuron connections, and generates a prediction characterizing the image. The prediction is used for tasks such as image segmentation and safe landing zone identification.

47. Visual Navigation Algorithm for Night Landing of Fixed-Wing Unmanned Aerial Vehicle

Zhaoyang Wang, Dan Zhao, Yunfeng Cao - MDPI AG, 2022

In recent years, visual navigation has been considered an effective mechanism for achieving autonomous landing of Unmanned Aerial Vehicles (UAVs). Nevertheless, given the limitations of visual cameras, the effectiveness of visual algorithms is significantly constrained by lighting conditions. Therefore, a novel vision-based autonomous landing navigation scheme is proposed for night-time autonomous landing of fixed-wing UAVs. First, because low-light images make the runway difficult to detect, a strategy of visible and infrared image fusion is adopted. Objective functions relating the fused image to the visible image, and the fused image to the infrared image, are established. The fusion problem is then transformed into optimization of the objective function, and the optimal solution is obtained by gradient descent to produce the fused image. Second, to improve runway detection from the enhanced image, a runway detection algorithm based on an improved Faster region-based convolutional neural network (Faster R-CNN) is proposed.…
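A deliberately simplified version of the fusion-by-optimization idea above: minimize a quadratic objective that pulls the fused image toward both sources, ||F − V||² + λ·||F − I||², by gradient descent. The paper's actual objective terms are richer; λ, the learning rate, and the step count here are assumptions.

```python
import numpy as np

def fuse_by_gradient_descent(visible, infrared, lam=1.0, lr=0.2, steps=200):
    """Minimize ||F - V||^2 + lam * ||F - I||^2 over the fused image F
    by plain gradient descent (simplified stand-in for the paper's
    objective). The closed-form optimum is (V + lam*I) / (1 + lam)."""
    f = np.zeros_like(visible, dtype=float)
    for _ in range(steps):
        grad = 2.0 * (f - visible) + 2.0 * lam * (f - infrared)
        f = f - lr * grad
    return f
```

With λ = 1 the iterate converges to the pixel-wise average of the two sources, which makes the role of λ easy to see: it trades visible fidelity against infrared fidelity.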

48. Autonomous Aerial Navigation System with Infrared-Based Obstacle Avoidance and Mode-Switching Capability

SKYDIO INC, 2022

Autonomous aerial navigation system for unmanned aerial vehicles (UAVs) that enables obstacle avoidance and navigation in low-light and no-light conditions by utilizing infrared data from onboard cameras, rather than conventional infrared filtering. The system switches between day and night modes based on ambient light conditions, using infrared data for obstacle avoidance in night mode and filtered images for navigation in day mode.
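The day/night mode switching described above can be sketched as a hysteretic threshold on ambient light; the lux thresholds below are illustrative assumptions, not values from the patent.

```python
class NavigationModeSwitch:
    """Hysteretic day/night mode selection sketch. In night mode,
    infrared data drives obstacle avoidance; in day mode, filtered
    visible imagery is used. Two thresholds prevent mode thrashing
    when ambient light hovers near a single cutoff."""
    def __init__(self, to_night_lux=5.0, to_day_lux=15.0):
        self.mode = "day"
        self.to_night_lux = to_night_lux
        self.to_day_lux = to_day_lux

    def update(self, ambient_lux):
        if self.mode == "day" and ambient_lux < self.to_night_lux:
            self.mode = "night"
        elif self.mode == "night" and ambient_lux > self.to_day_lux:
            self.mode = "day"
        return self.mode
```

The gap between the two thresholds is the hysteresis band: readings inside it leave the current mode unchanged.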

49. Multichannel Object Detection System Integrating Static and Event Camera Inputs for Enhanced Scene Analysis

INTEL CORP, 2022

Object detection and classification for autonomous vehicles that combines inputs from static cameras and event cameras to improve object detection in challenging conditions. The system receives images from static cameras and event-based cameras as separate channels. It processes the static images and event data together to accurately identify objects in a scene. This leverages the advantages of both camera types: static cameras provide stable views and event cameras have high temporal resolution and dynamic range. The combined inputs enhance object detection in low light, low contrast, and motion blur conditions compared to using just static or event cameras alone.
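One common way to feed a detector both inputs, as the system above describes, is to rasterize the event stream into a count histogram and stack it with the static frame as separate channels. The event-tuple format and channel layout below are illustrative assumptions.

```python
import numpy as np

def build_multichannel_input(static_frame, events, shape):
    """Assemble a two-channel detector input: channel 0 is the static
    image, channel 1 is a signed event-count histogram built from
    (x, y, polarity) tuples accumulated over one frame interval."""
    hist = np.zeros(shape, dtype=float)
    for x, y, p in events:
        hist[y, x] += 1.0 if p > 0 else -1.0
    return np.stack([static_frame, hist], axis=0)
```

The detector then sees stable appearance from the frame channel and high-temporal-resolution motion cues from the event channel, which is what improves robustness under low light and motion blur.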

Patent drawing: US11455793B2

50. A Fusion-based Enhancement Method for Low-light UAV Images

Haolin Liu, Yongfu Li, Hao Zhu - IEEE, 2022

This paper focuses on the enhancement of low-light UAV images. There are some differences between low-light UAV images and general low-light images. Specifically, low-light UAV images are underexposed as a whole but contain overexposed areas produced by lamplight. In addition, these images have a larger field of view and therefore contain more information. Based on these characteristics, we propose an enhancement method for low-light UAV images. First, we adopt two different enhancement methods, one improving the global brightness and the other enhancing the local contrast, and then fuse them with appropriate weights to retain their respective advantages. Second, a new detail enhancement strategy is designed to preserve more details of these images. Finally, a brightness and chrominance optimization operation based on linear stretching is used to further optimize the enhanced images. We test the proposed method on three different datasets, including a public UAV dataset, a self-made UAV dataset, and a widely used image enhancement dataset. Besides, our enhancement method is co…
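The weighted-fusion step above reduces to a pixel-wise convex combination of the two enhancement branches. The sketch below uses a plain weight argument; in practice the weight map would be derived from local exposure so that overexposed lamplight regions lean on the contrast branch (all specifics here are assumptions).

```python
import numpy as np

def fuse_enhancements(global_enh, local_enh, weight):
    """Pixel-wise weighted fusion of a global-brightness branch and a
    local-contrast branch. weight may be a scalar or a per-pixel array;
    it is clipped to [0, 1] so the result stays a convex combination."""
    w = np.clip(weight, 0.0, 1.0)
    return w * global_enh + (1.0 - w) * local_enh
```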

51. Visual Tracking Based Deep Learning and Control Design Onboard Small-Sized Quadrotor UAV

52. Neuromorphic System with Event-Based Stereo Disparity Calculation Using Temporal Scale Ring Buffer and Distributed Processing

53. Autonomous Machine Navigation System with Low-Light Detection and Illumination Mechanism

54. Target Detection of Low-Altitude UAV Based on Improved YOLOv3 Network

55. Real-Time Object Detection in UAV Vision based on Neural Processing Units
