Revolutionizing drone navigation: AI algorithms take flight

A leading-edge project led by University of Missouri researchers aims to equip drones with autonomous visual navigation capabilities, potentially transforming the way drones operate and assist in critical scenarios like natural disasters.

University of Missouri students spent a month at Yuma Proving Grounds in Arizona, one of the largest military installations in the world, working to collect visible and infrared video data using custom-built drones. Their project helped build the foundation for this two-year project supported by the U.S. Army Engineer Research and Development Center. Photo courtesy U.S. Department of Defense.

March 26, 2024
Contact: Eric Stann, 573-882-3346, StannE@missouri.edu  

Thanks to smart algorithms powered by artificial intelligence (AI), drones could one day pilot themselves — no humans needed — using visual landmarks to help them navigate from one point to another. That’s the aim of a two-year project led by University of Missouri researchers and supported by a $3.3 million grant from the U.S. Army Engineer Research and Development Center (ERDC), the premier research and development center for the U.S. Army Corps of Engineers.

The ability to operate autonomously becomes critical in situations where the GPS navigation signal is interrupted or lost, such as after a natural disaster or in military settings, said Kannappan Palaniappan, a Curators’ Distinguished Professor of electrical engineering and computer science and principal investigator on the project.

“This typically occurs in the aftermath of natural disasters, from occlusions in the built environment and terrain, or from human intervention,” Palaniappan said. “Most drones operating today require GPS navigation to fly, so when they lose that signal, they aren’t able to find their way around and will typically just land wherever they are. Unlike ground-based GPS navigation apps, which can reroute you if you miss a turn, there’s currently no option for airborne drones to re-route in these situations.”

Currently, someone must manually fly a drone and maintain a high level of situational awareness to keep it clear of obstacles in its surroundings, like buildings, trees, mountains, bridges, signs or other prominent structures, while staying within the drone pilot’s line of sight. Now, through a combination of visual sensors and algorithms, Palaniappan and his team are developing software that will allow drones to fly on their own — independently perceiving and interacting with their environment while achieving specific goals or objectives.

“We want to take the range of skills, attributes, contextual scene knowledge, mission planning and other capacities that drone pilots possess and incorporate them — along with weather conditions — into the drone’s software so it can make all of those decisions independently,” Palaniappan said.

Advancing intelligent scene perception

In recent years, advancements in visual sensor technology like light detection and ranging, or lidar, and thermal imaging have allowed drones to perform limited advanced-level tasks such as object detection and visual recognition. When combined with the team’s algorithms — powered by machine learning and deep learning, subsets of AI — drones could help develop advanced 3D and 4D imagery for mapping and monitoring applications.

“As humans, we’ve been incorporating 3D models and dynamical knowledge of movement patterns in our surroundings using our visual system since we were little kids,” Palaniappan said. “Now, we’re trying to decode the salient features of the human visual system and build those capabilities into autonomous vision-based aerial and ground-based navigation algorithms.”

Developing advanced imagery capabilities requires computing resources, such as processing power, memory and time, that exceed what is typically available on board a drone. So, the MU-led team is investigating how to leverage the strengths of cloud, high-performance and edge computing methods for a potential solution.

“After a severe storm or a natural disaster, there will be damage to buildings, waterways and other forms of infrastructure,” Palaniappan said. “A 3D reconstruction of the area could help first responders and government officials understand how much damage has taken place. By allowing the drone to collect the raw data and transmit that information to the cloud, high-performance computing software in the cloud can complete the analysis and develop the 3D digital twin model without the need for additional software to be physically installed and accessible on the drone.”

MU’s team includes Prasad Calyam, Filiz Bunyak and Joshua Fraser. The team also includes researchers from Saint Louis University, the University of California, Berkeley, and the University of Florida.
