AI Decision Systems for Real-Time UAV Morphing
Current UAV morphing systems operate under complex parametric constraints. Field data shows morphing transitions requiring up to 6.2 seconds for full reconfiguration, with state estimation errors accumulating during transformation as sensor-frame relationships shift. Programmatic decision systems must process 35-45 sensor inputs simultaneously while maintaining flight control through multiple physical configurations that alter aerodynamic properties, thrust vectors, and mass distribution.
The engineering challenge lies in developing decision architectures that can anticipate environmental factors while simultaneously managing the transitional instabilities inherent in physical reconfiguration.
This page brings together solutions from recent research—including real-time gain scheduling via machine learning models, multi-objective flight decision optimization with adaptive hyperparameters, programmable polymer-based morphing mechanisms, and reinforcement learning frameworks that incorporate environmental data. These and other approaches demonstrate practical implementations for morphing UAVs that maintain stability and mission effectiveness throughout transformation sequences.
1. Unmanned Aerial Vehicle with Reversible Motors and Single Servo-Driven Transformation Mechanism for Mode Switching
ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY, 2025
Transformable unmanned aerial vehicle (UAV) that can switch between coplanar and omnidirectional motion capabilities. Instead of pairing motors in opposing directions, the design uses reversible motors that generate bidirectional thrust, and a transformation mechanism driven by a single servo motor switches between the coplanar and omnidirectional modes. This allows the UAV to operate as an underactuated hexacopter in the coplanar mode for efficient flight, then transition to a fully actuated omnidirectional mode for complex maneuvers.
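A minimal sketch of the mode-switch logic this implies, assuming a single tilt servo with two detents; the angle values, field names, and command structure are hypothetical illustrations, not the patent's design:

```python
from enum import Enum

class FlightMode(Enum):
    COPLANAR = "coplanar"          # underactuated hexacopter, efficient cruise
    OMNIDIRECTIONAL = "omni"       # fully actuated, complex maneuvers

# Hypothetical geometry: one servo tilts the rotor arms between two detents.
SERVO_ANGLE_DEG = {FlightMode.COPLANAR: 0.0, FlightMode.OMNIDIRECTIONAL: 35.0}

def transition(mode: FlightMode) -> dict:
    """Return actuator commands for the requested configuration."""
    return {
        "servo_angle_deg": SERVO_ANGLE_DEG[mode],
        # Reversible motors allow negative thrust, replacing opposing motor pairs.
        "allow_reverse_thrust": mode is FlightMode.OMNIDIRECTIONAL,
    }

print(transition(FlightMode.OMNIDIRECTIONAL))
```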
2. Closed-Loop Controller Gain Scheduling via Real-Time Machine Learning for UAVs
PERFORMANCE DRONE WORKS LLC, 2024
Online gain scheduling of a closed-loop controller for an unmanned aerial vehicle (UAV) using a trained machine learning model. The closed-loop controller, such as a PID controller, adjusts motor output to control the UAV. The machine learning model takes real-time motor data as input and predicts optimal controller gains from that data, enabling adaptive gain scheduling without a detailed UAV model or prior payload knowledge: the model infers weight characteristics from motor data and derives gains suited to the changing payload.
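A sketch of the gain-scheduling idea, with a simple linear map standing in for the trained model; the calibration constants, telemetry fields, and gain formulas are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class PIDGains:
    kp: float
    ki: float
    kd: float

def predict_gains(mean_rpm: float, mean_current_a: float) -> PIDGains:
    """Stand-in for the trained model: maps motor telemetry to gains.

    The patent's scheme infers effective vehicle weight from motor data;
    here a toy linear calibration plays that role (a real model would
    also exploit the rpm signal).
    """
    est_weight_kg = 0.8 + 0.05 * mean_current_a      # hypothetical calibration
    return PIDGains(kp=2.0 / est_weight_kg, ki=0.4 / est_weight_kg, kd=0.1)

class PID:
    def __init__(self, gains: PIDGains):
        self.g, self.i, self.prev = gains, 0.0, 0.0

    def step(self, error: float, dt: float) -> float:
        self.i += error * dt
        d = (error - self.prev) / dt
        self.prev = error
        return self.g.kp * error + self.g.ki * self.i + self.g.kd * d

# Reschedule gains whenever fresh motor telemetry arrives.
pid = PID(predict_gains(mean_rpm=7200.0, mean_current_a=11.5))
print(pid.step(error=0.3, dt=0.01))
```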
3. Drone-Based Forest Fire Management System with Cloud Integration and Autonomous Machine Learning Algorithms
DROVID TECHNOLOGIES SPA, 2024
A system for forest fire prevention and management using drones, featuring a cloud-based platform for data storage, processing, and analysis. The system integrates machine learning algorithms for autonomous drone operation, target path prediction, and automatic detection of fire risks. It enables real-time monitoring, instant warnings, and data-driven decision-making for forest fire control and prevention.
4. Method for Generating Flight Decisions in UAVs Using Multi-Objective Optimization with Adaptive Hyperparameters
SOUTHERN UNIVERSITY OF SCIENCE AND TECHNOLOGY, 2023
A method for generating flight decisions for unmanned aerial vehicles (UAVs) that improves flexibility in autonomous navigation tasks by optimizing multiple objectives simultaneously. The method constructs a flight decision model based on mission requirements, defines a target learning function to optimize the model, and updates the model's hyperparameters to obtain a target flight decision that balances competing objectives such as flight time and risk.
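The inner/outer-loop structure can be illustrated with a toy scalarized objective over two competing goals; the cost functions and the hyperparameter update rule below are assumptions for the sketch, not the patent's formulas:

```python
import random

random.seed(0)

def flight_time(speed): return 100.0 / speed            # toy 100 m route
def risk(speed): return (speed / 10.0) ** 2             # risk grows with speed

def target_fn(speed, w_risk):
    """Target learning function: balances flight time against risk."""
    return flight_time(speed) + w_risk * risk(speed)

w_risk = 1.0                                            # adaptive hyperparameter
for epoch in range(5):
    # Inner loop: pick the best decision (cruise speed) under current weights.
    best = min((random.uniform(1, 15) for _ in range(200)),
               key=lambda s: target_fn(s, w_risk))
    # Outer loop: tighten the risk weight if the chosen plan is too risky.
    if risk(best) > 1.0:
        w_risk *= 1.5
    print(f"epoch {epoch}: speed={best:.2f} m/s, w_risk={w_risk:.2f}")
```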
5. Autonomous UAV Navigation System with Decision Agent-Based High-Level Planner and Probabilistic Event Processor
US GOV AIR FORCE, 2023
Autonomous drone navigation system that enables unmanned aerial vehicles (UAVs) to operate independently in complex environments. The system employs a high-level planner that generates decision agents to select appropriate actions based on changing conditions, and incorporates a probabilistic event processor to analyze sensor data and predict outcomes. The planner considers contingent state transitions and generates plans that can adapt to unexpected situations, enabling the UAV to operate autonomously in uncertain environments.
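A toy illustration of a decision agent choosing actions by expected utility over a probabilistic outcome model; the states, transition probabilities, and utilities are invented for the example:

```python
# Hypothetical outcome model: P(next_state | state, action), as a probabilistic
# event processor might estimate from sensor data.
TRANSITIONS = {
    ("enroute", "continue"): [("enroute", 0.7), ("weather_block", 0.3)],
    ("enroute", "reroute"):  [("enroute", 0.95), ("weather_block", 0.05)],
    ("weather_block", "hold"):    [("enroute", 0.5), ("weather_block", 0.5)],
    ("weather_block", "reroute"): [("enroute", 0.9), ("weather_block", 0.1)],
}
UTILITY = {"enroute": 1.0, "weather_block": -2.0}

def expected_utility(state: str, action: str) -> float:
    return sum(p * UTILITY[s] for s, p in TRANSITIONS[(state, action)])

def decision_agent(state: str) -> str:
    """Select the action with the best predicted outcome for this state."""
    actions = [a for (s, a) in TRANSITIONS if s == state]
    return max(actions, key=lambda a: expected_utility(state, a))

print(decision_agent("enroute"))        # -> "reroute"
```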
6. Automated Aircraft Control Method Utilizing Dynamic Action Sequence Selection Based on Real-Time Mission State Analysis
THE BOEING CO, 2022
A method for controlling an aircraft using automated systems that determines a sequence of actions to reach a target state based on the current mission state, rather than relying on pre-programmed rules. The system identifies the target state and current mission state, selects a sequence of actions from a pool of potential actions, and performs the actions in the selected order as long as their preconditions are met. This approach enables dynamic adaptation to changing circumstances without requiring human intervention.
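The precondition-gated sequencing can be sketched as a small forward planner over state flags; the action pool and states below are hypothetical:

```python
# Each action: (name, preconditions, effects) over a set of state flags.
ACTIONS = [
    ("climb",   {"airborne"},                {"at_altitude"}),
    ("takeoff", set(),                       {"airborne"}),
    ("survey",  {"airborne", "at_altitude"}, {"area_surveyed"}),
]

def plan(current: set, target: set, max_steps: int = 10) -> list:
    """Greedily chain actions whose preconditions hold until target is reached."""
    state, sequence = set(current), []
    for _ in range(max_steps):
        if target <= state:
            return sequence
        for name, pre, eff in ACTIONS:
            if pre <= state and not eff <= state:
                state |= eff
                sequence.append(name)
                break
        else:
            break  # no applicable action remains
    return sequence if target <= state else []

print(plan(current=set(), target={"area_surveyed"}))
# -> ['takeoff', 'climb', 'survey']
```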
7. Neural Network-Based Aircraft Control System Utilizing Simulator-Driven State-Dependent Reward Updates and Candidate Input Selection with Randomized Offset
THE BOEING CO, 2022
Training a neural network to control an aircraft using a flight simulator. The network is updated from reward values computed on the state data the simulator generates in response to control inputs, and each control input is selected from two options: a candidate input generated by the network and a randomized-offset input derived from it.
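A toy training loop in the same spirit, with a one-parameter stand-in for the network; the plant model, offset distribution, selection probability, and update rule are all illustrative assumptions:

```python
import random

random.seed(2)

def simulator(state: float, control: float) -> float:
    """Toy plant: state relaxes toward the applied control input."""
    return state + 0.1 * (control - state)

def reward(state: float, target: float = 1.0) -> float:
    return -abs(state - target)

weight = 0.0                                    # one-parameter stand-in network

def network(state: float) -> float:
    return weight * state + 0.5                 # candidate control input

state = 0.0
for step in range(300):
    candidate = network(state)
    offset = candidate + random.gauss(0.0, 0.2)     # randomized-offset input
    # Select between the network's candidate and the offset input.
    chosen = candidate if random.random() < 0.8 else offset
    # Update toward whichever input the simulator rewards more highly.
    if reward(simulator(state, offset)) > reward(simulator(state, candidate)):
        weight += 0.05 * (offset - candidate) * state
    state = simulator(state, chosen)

print(f"state after training: {state:.3f}")     # drifts toward the target 1.0
```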
8. Reinforcement Learning Device with Environmental and Control Information Acquisition for UAV Flight Control
RAKUTEN GROUP INC, 2022
A learning device for reinforcement learning of a control model that outputs flight-control information for an unmanned aerial vehicle (UAV). The device comprises four units: an environmental information acquisition unit that gathers environmental data including weather information, a control information acquisition unit that captures the control information output by the control model, a reward identifying unit that scores the UAV's action based on that control information, and a learning control unit that drives the reinforcement learning of the control model using the identified reward.
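A skeletal rendering of how those units might fit together, with stub implementations; the wind-weighted reward and all numeric constants are assumptions made for the sketch:

```python
import random

random.seed(3)

def acquire_environment() -> dict:
    """Environmental-information unit: weather feed (stubbed here)."""
    return {"wind_mps": random.uniform(0, 12)}

def control_model(obs: dict, env: dict) -> float:
    """Control-information unit: the policy under training (stub)."""
    return obs["target_alt"] - obs["alt"] - 0.05 * env["wind_mps"]

def identify_reward(obs: dict, env: dict, action: float) -> float:
    """Reward unit: penalize altitude error more harshly in strong wind."""
    err = abs(obs["alt"] + action - obs["target_alt"])
    return -(1.0 + env["wind_mps"] / 10.0) * err

obs = {"alt": 40.0, "target_alt": 50.0}
for episode in range(3):
    env = acquire_environment()
    action = control_model(obs, env)
    r = identify_reward(obs, env, action)
    # The learning-control unit would update the model from (obs, env, action, r).
    print(f"wind={env['wind_mps']:.1f} m/s, action={action:.2f}, reward={r:.2f}")
```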
9. Unmanned Vehicle System with Programmable Polymer-Based Morphing and Detachable Drone Nesting Mechanism
UNITED SERVICES AUTOMOBILE ASSOCIATION, 2022
Morphing unmanned vehicles that can transform into smaller drones, change shape, or reorient sensors in response to conditions such as damage or confined spaces. The vehicles can also nest together and detach for cooperative missions with shared flight paths. Morphing is achieved with programmable polymers that transform when trigger conditions are detected, letting the vehicles adapt, maneuver in constrained areas, and avoid damage. Triggers can include factors like moisture, smoke, temperature, time, and weather.
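A minimal sketch of the trigger logic, assuming simple threshold tests per sensor channel; the channel names and thresholds are invented:

```python
# Hypothetical trigger thresholds for a programmable-polymer activation signal.
TRIGGERS = {
    "temperature_c": lambda v: v > 60.0,
    "smoke_ppm":     lambda v: v > 300.0,
    "moisture_pct":  lambda v: v > 85.0,
}

def should_morph(readings: dict) -> bool:
    """Fire the morph command when any trigger condition is met."""
    return any(test(readings[k]) for k, test in TRIGGERS.items() if k in readings)

print(should_morph({"temperature_c": 72.0, "moisture_pct": 40.0}))  # True
```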
10. Autonomous Aerial Drone with AI-Driven Navigation and Sensor-Integrated Obstacle Avoidance
KARBASI ARDAVAN, 2022
Autonomous aerial drone that enables self-sustaining flight through advanced AI-driven navigation and obstacle avoidance. The drone integrates multiple sensors, including cameras, to detect and respond to environmental changes, while its onboard AI system continuously optimizes flight paths and collision-avoidance strategies. The system enables autonomous flight, route planning, and real-time obstacle avoidance without human operator intervention.
11. High-Altitude Platform with Neural Network for UAV Monitoring and Anomaly Detection
STEIN EYAL, 2022
A high-altitude platform for safe navigation of unmanned aerial vehicles (UAVs) using machine learning. The platform, equipped with a neural network, monitors UAVs in its airspace and predicts potential flight hazards, such as collisions or communication disruptions. It can automatically adjust UAV flight plans to avoid hazards and optimize operations, while also enabling real-time communication and surveillance services. The platform can detect and classify anomalies, such as passive intermodulation (PIM) on telecommunications structures, and enable autonomous UAV inspection and detection of PIM sources.
12. Flight Control Method Utilizing Multi-Layer Zeroing Neural Network for Real-Time Motor Control in Unmanned Aircraft
UNIV SOUTH CHINA TECH, 2022
A flight control method for stable unmanned aircraft flight, comprising: acquiring real-time flight operation data using sensors; solving motor control quantities using a multi-layer zeroing neural network; and obtaining a corresponding power allocation scheme. The neural network is designed based on the aircraft's differential equations, enabling stable flight control through real-time data processing.
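A compact numerical sketch of zeroing-neural-network dynamics for a time-varying linear system A(t)x(t) = b(t), where x stands in for the motor control quantities and b for the commanded wrench; the matrices, gain, and step size are illustrative choices, not the patent's:

```python
import numpy as np

# Zeroing neural network: drive the error e = A x - b to zero via
# e_dot = -gamma * e, which gives
#   x_dot = A^{-1} ( b_dot - A_dot x - gamma * (A x - b) ).
gamma, dt = 20.0, 1e-3

def A(t):     return np.array([[2.0 + 0.1 * np.sin(t), 0.3], [0.3, 2.0]])
def A_dot(t): return np.array([[0.1 * np.cos(t), 0.0], [0.0, 0.0]])
def b(t):     return np.array([np.sin(t), np.cos(t)])
def b_dot(t): return np.array([np.cos(t), -np.sin(t)])

x, t = np.zeros(2), 0.0
for k in range(5000):
    t = k * dt
    e = A(t) @ x - b(t)
    x += dt * np.linalg.solve(A(t), b_dot(t) - A_dot(t) @ x - gamma * e)

t += dt
print("tracking residual:", np.linalg.norm(A(t) @ x - b(t)))  # small after convergence
```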
13. Autonomous Drone Flight System with Big Data-Driven Route Generation and One-Point Control Integration
RGBLAB CO LTD, 2021
A big data-based autonomous flight system for drones that generates optimal flight routes using spatial information and enables one-point autonomous flight control. The system comprises a drone, ground control system, drone IoT server, and AI big data server that collaborate to determine the best route to a destination based on real-time spatial data, eliminating the need for manual control or predefined flight paths.
14. Adaptive Learning Control Model for Autonomous Aircraft Utilizing Real-Time Environmental Data Integration
RAKUTEN INC, 2021
Learning control model for autonomous aircraft that adapts to dynamic environments by incorporating real-time environmental data. The model learns to optimize flight control strategies based on changing object positions and movements, enabling more flexible and efficient flight operations. The learning process incorporates rewards that adapt to environmental conditions, such as terrain features or object velocity, to enable the model to learn optimal control policies that balance performance and safety.
15. Unmanned Aerial Vehicle Control System Utilizing Deep Neural Network Trained via Reinforcement Learning
BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO LTD, 2021
Training an unmanned aerial vehicle (UAV) control system to improve precision and performance. A deep neural network trained with reinforcement learning outputs control signals from UAV sensor data. The network is trained in both simulated and real environments to minimize the difference between actual and targeted UAV states, and is then used to control the UAV.
16. Unmanned Aerial Vehicle Mission System with Real-Time Flight Plan and Parameter Adjustment Based on In-Mission Data Analysis
DRONEDEPLOY INC, 2021
Adaptive mission execution for unmanned aerial vehicles (UAVs) that adjusts flight plans and parameters during a mission to ensure successful completion. The method involves analyzing UAV data during a mission to detect issues like low battery, instability, or component failures. It then makes real-time adjustments like altering flight paths, speeds, or areas covered based on the detected issues to compensate and complete the mission. This enables autonomous UAVs to adapt and recover from unexpected problems mid-flight.
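A rule-based sketch of this kind of in-mission compensation; the issue thresholds and adjustments are invented for illustration:

```python
def adjust_mission(telemetry: dict, plan: dict) -> dict:
    """Apply in-flight compensations based on detected issues (toy rules)."""
    plan = dict(plan)
    if telemetry["battery_pct"] < 30:
        plan["area_km2"] *= 0.6                     # shrink coverage area
        plan["speed_mps"] = min(plan["speed_mps"], 8.0)
    if telemetry["gust_mps"] > 10:
        plan["altitude_m"] = max(plan["altitude_m"] - 20, 30)
    return plan

plan = {"area_km2": 2.0, "speed_mps": 12.0, "altitude_m": 100}
print(adjust_mission({"battery_pct": 24, "gust_mps": 12}, plan))
```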
17. Finite-Time Neurodynamics-Based Flight Control Method for Multi-Rotor UAVs with Real-Time Dynamics Model Integration
SOUTH CHINA UNIVERSITY OF TECHNOLOGY, 2021
A stable flight control method for multi-rotor unmanned aerial vehicles (UAVs) based on finite-time neurodynamics. The method uses real-time flight data to establish a dynamics model of the UAV, which is then solved using a finite-time varying-parameter convergence differential neural network. The solution is transmitted to the UAV's motor speed regulators to control its motion. The method enables fast and accurate tracking of time-varying targets, such as aerial photography orbits, and provides robustness against disturbances and parameter variations.
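In this family of methods, the finite-time behavior typically comes from the activation function together with a time-varying convergence parameter. A scalar sketch, with an assumed sign-power activation and exponential parameter schedule (illustrative choices, not the patent's design):

```python
import numpy as np

def finite_time_activation(e, p=0.5):
    """Sign-power activation commonly used for finite-time zeroing dynamics."""
    return np.sign(e) * (np.abs(e) ** p + np.abs(e) ** (1.0 / p)) / 2.0

def varying_gamma(t, gamma0=10.0):
    """Time-varying convergence parameter, growing to accelerate convergence."""
    return gamma0 * np.exp(0.5 * t)

# Scalar demo: drive e_dot = -gamma(t) * phi(e) from e(0) = 1 toward zero.
e, dt = 1.0, 1e-4
for k in range(20000):
    e -= dt * varying_gamma(k * dt) * finite_time_activation(e)

print(f"error after {20000 * dt:.1f} s: {e:.2e}")
```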
18. System for Generating Flight Policies via Deep Reinforcement Learning with Meta-Learning and Multi-Network Training
LOON LLC, 2021
A system for generating optimal flight policies for aerial vehicles using deep reinforcement learning. The system includes a simulation module, a replay buffer, and a learning module that processes input frames to output a neural network encoding a learned flight policy. The system can train multiple neural networks simultaneously using a meta-learning approach, and deploy the learned policies in an operational navigation system to control aerial vehicle movement according to desired objectives.
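A minimal sketch of the replay buffer plus side-by-side training of several candidate policies, each reduced to a single scalar gain for the toy; the objective and update rule are assumptions for illustration:

```python
import random
from collections import deque

random.seed(4)

class ReplayBuffer:
    def __init__(self, capacity: int = 10000):
        self.buf = deque(maxlen=capacity)

    def add(self, frame):            # frame: (obs, action, reward, next_obs)
        self.buf.append(frame)

    def sample(self, n: int):
        return random.sample(self.buf, min(n, len(self.buf)))

# Several candidate policies trained side by side, each represented here
# by one scalar parameter (a stand-in for a neural network).
policies = [random.uniform(-1, 1) for _ in range(4)]
buffer = ReplayBuffer()

for step in range(100):
    obs = random.uniform(-1, 1)
    i = step % len(policies)                 # round-robin over the networks
    action = policies[i] * obs
    reward = -(action - 0.5 * obs) ** 2      # toy objective: match 0.5 * obs
    buffer.add((obs, action, reward, None))
    # Replay: gradient-style updates from sampled past frames.
    for obs_b, act_b, rew_b, _ in buffer.sample(8):
        policies[i] += 0.05 * (0.5 * obs_b - policies[i] * obs_b) * obs_b

print("learned gains:", [round(p, 2) for p in policies])   # each -> ~0.5
```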
19. Unmanned Aerial Vehicle Control System Utilizing Execution Blocks for Real-Time Command Generation
IBM, 2021
A system for dynamically controlling unmanned aerial vehicles (UAVs) using execution blocks. The system receives media and events from a deployed UAV, sends them to an AI service for analysis, and generates execution blocks based on the insights. These blocks are then sent to an edge device for generating vehicle-specific commands, enabling real-time control of the UAV in response to changing conditions or detected objects.
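A sketch of the execution-block pattern, with made-up insight names and placeholder command strings standing in for vehicle-specific protocols:

```python
# An execution block pairs a trigger (an insight from the AI service) with an
# abstract maneuver; the edge device translates it per airframe.
EXECUTION_BLOCKS = [
    {"when": "person_detected", "do": "orbit", "radius_m": 15},
    {"when": "low_visibility",  "do": "hold",  "duration_s": 30},
]

EDGE_TRANSLATORS = {
    "orbit": lambda blk: f"cmd: loiter radius={blk['radius_m']}",
    "hold":  lambda blk: f"cmd: hold {blk['duration_s']}s",
}

def on_insight(insight: str) -> list:
    """Edge device: turn matching blocks into vehicle-specific commands."""
    return [EDGE_TRANSLATORS[b["do"]](b)
            for b in EXECUTION_BLOCKS if b["when"] == insight]

print(on_insight("person_detected"))   # -> ['cmd: loiter radius=15']
```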
20. Unmanned Aerial Vehicle with Mode Transition Capability and Lift-Generating Flight Mechanism
AEROVIRONMENT, 2021
An unmanned aerial vehicle (UAV) that transitions from a terminal homing mode to a separate mode, such as a target search mode, in response to a mode transition signal. In the separate mode the UAV can sustain level flight, and it can generate lift greater than its weight to transition between modes. The transition can be initiated by an external signal, an operator command, or autonomous onboard processing.