Drones Like Us: Creating UAVs That Avoid Collisions Like a Human Pilot

In 2012, Congress embedded a deadline in its funding package for the Federal Aviation Administration requiring the agency to open the national airspace to commercial and privately owned Unmanned Aerial Vehicles, or UAVs (better known as drones), by September 30, 2015. In preparation, the FAA has outlined the capabilities it will require of UAVs before they can be flown in the same areas as other aircraft. Chief among them is the ability to sense and avoid oncoming objects as effectively as a human pilot.

Most UAVs take their commands from people on the ground and rely entirely on an operator to steer them clear of obstacles. As a consequence, the FAA has kept tight restrictions on who can operate UAVs domestically. In 2004, the agency began allowing some research and military groups to fly them, but always with the caveat that the aircraft remain within view of the ground controller and far away from piloted planes.

Last summer, NASA enlisted a team of computer engineers from Boston University to build software for the next generation of UAVs, with the specific aim of enabling them to avoid collisions without direction from ground-based controllers. The algorithms they are using closely simulate visual processing in the human brain.

At NASA’s Langley Research Center, Mark Motter has taken a very different approach from that operator-dependent model, using mounted cameras and sophisticated algorithms to endow UAVs with some of the awareness and intelligence of a human pilot.

A “synthetic brain”

The project has brought Motter into collaboration with Massimiliano Versace of the Neuromorphics Laboratory, part of the Center for Computational Neuroscience and Neural Technology (CompNet) at Boston University. The overarching goal of Versace’s work is to replicate neural processing in algorithms at a biological scale, creating, in essence, a synthetic brain.

For the project with NASA, Versace and his colleagues at BU are modeling aspects of visual awareness in the brain, isolating those processes that enable humans to instinctively spot and dodge oncoming objects.

Humans do this, in part, by calculating how their own movement changes the apparent size and direction of objects in the visual field; this pattern of apparent motion is known as optic flow.

“When a person’s walking forward, for example, the world sort of grows and things move by,” says Tim Barnes, a postdoctoral researcher in the Neuromorphics Lab who also designs software for autonomous navigation in Mars rovers. These changes can add detail to the scene we’re looking at. “One thing that optic flow would be particularly useful for in the forest case would be understanding of which branches in the canopy belong to which trees,” explains Barnes. “It’s hard to trace them when they overlap each other. But once you start walking, one branch moves left and another moves right, which helps you match them to their respective trunks.”

Similar changes can also give you a sense of the direction and speed your own body is moving relative to the objects around you.

By recording electrical activity in the brain, researchers have been able to isolate the specific cells that respond to changes in optic flow. No one knows for sure how these cells interact with each other, so it’s not yet possible to replicate the process directly. But the algorithms that Barnes and Versace use are closely inspired by this neural strategy.
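The lab’s neural models aren’t spelled out here, but the basic computation they approximate can be sketched with off-the-shelf computer-vision tools. The snippet below is a minimal illustration, assuming the OpenCV library rather than anything the Neuromorphics Lab actually uses: it estimates a dense optic-flow field between two consecutive camera frames, the same per-pixel motion pattern a pilot’s visual system picks up on.

```python
# Minimal sketch: dense optic flow between two camera frames using OpenCV's
# Farneback method (an off-the-shelf stand-in, not the lab's neural model).
import cv2

def dense_optic_flow(prev_frame, next_frame):
    """Return a per-pixel (dx, dy) motion field between two BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] gives how far the pixel at (x, y) appears to have moved.
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# An object that is looming (roughly on a collision course) produces flow
# vectors spreading outward from a point; an object drifting past produces
# vectors that all point in roughly the same direction.
```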

Replicating optic flow

This upgrade to UAV capability is critical given the aircraft’s propensity to crash. FAA records show an Eagle Eye drone going down in Graford, Texas, in 2006, and two Cobra drones crashing in Arizona in 2008: one collided with a light pole, and the other careened into the ground after its engine failed.

An accurate estimation of optic flow would give UAVs the smarts to see another plane in the distance and calculate whether it’s on a collision course. But the system still must decide what, if anything, to do about it. The answer may seem straightforward (swerve!) until you consider that the UAV will be tracking multiple objects at once: a plane coming toward it, a hot air balloon going up, a mountain staying still but getting bigger. The system is taking in a great deal of information, but it can only react in one way at a given moment.
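One way to make that calculation concrete is the classic “tau” rule from vision research: an object’s time-to-contact is roughly its apparent size divided by the rate at which that size is growing. The snippet below is a simplified, hypothetical illustration of that rule, not the actual code used by NASA or BU.

```python
# Hypothetical illustration of the "tau" rule: time-to-contact is roughly the
# object's apparent size divided by how fast that size is growing.
def time_to_contact(size_prev_px, size_next_px, dt_seconds):
    """Estimate seconds until collision from an object's apparent size in two frames."""
    expansion_rate = (size_next_px - size_prev_px) / dt_seconds  # pixels per second
    if expansion_rate <= 0:
        return float("inf")  # not growing in the image, so not closing in
    return size_next_px / expansion_rate

# Example: an aircraft whose image grows from 20 to 22 pixels across in 0.5 s
# is roughly 22 / ((22 - 20) / 0.5) = 5.5 seconds away.
```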

Humans have the same problem. We deal with it by shifting our attention. “We have many tendencies, and at a certain point they want to take control of your behavior. But there’s only one allowable outcome that is appropriate for you. You can think of attention as a way in which many different brain areas compete for action,” says Versace.

When a car comes charging at us, we instinctively attend to that one stimulus. But for software, this takes training. So Versace has built a reward-and-punishment system into his algorithm that mimics the human experience of pain. During test runs in a flight simulator called FlightGear, the algorithm takes control of a virtual airplane and must decide how to react to various obstacles: whether to stay on course or maneuver to avoid a collision. When it makes the right choice, it is rewarded. When it doesn’t, it is punished. The trials hone the system’s instincts, pressing it to pay attention and respond to potential threats. The goal is to design a system that does the bulk of its training before an actual flight.
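The article doesn’t describe the algorithm’s internals, but the reward-and-punishment scheme resembles standard reinforcement learning, in which candidate actions effectively compete and the one with the best learned value wins. The toy sketch below uses made-up states, actions, and rewards to show the general pattern: over many simulated trials, “swerve” comes to dominate whenever a threat is closing in.

```python
# Toy sketch of reward-and-punishment training (not the actual FlightGear setup).
import random

ACTIONS = ["hold_course", "swerve"]
STATES = ["clear", "distant_object", "closing_object"]  # hypothetical threat levels

# Learned value of each (state, action) pair, starting from indifference.
value = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    """Hypothetical payoff: punish collisions and needless evasive maneuvers."""
    if state == "closing_object":
        return 1.0 if action == "swerve" else -1.0   # reward avoiding the crash
    return 1.0 if action == "hold_course" else -0.2  # penalize swerving for nothing

learning_rate, exploration = 0.1, 0.1
for _ in range(10_000):  # simulated training trials
    state = random.choice(STATES)
    if random.random() < exploration:
        action = random.choice(ACTIONS)  # occasionally try something new
    else:
        action = max(ACTIONS, key=lambda a: value[(state, a)])  # pick the best so far
    # Nudge the estimate toward the reward actually received.
    value[(state, action)] += learning_rate * (reward(state, action) - value[(state, action)])

print(max(ACTIONS, key=lambda a: value[("closing_object", a)]))  # -> "swerve"
```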

Finally, there is the weight problem. The software will have to run on hardware small enough and light enough for UAVs to carry. Motter, of NASA, conducts field tests with a 5- to 10-pound airplane and says that a collision-avoidance unit would have to weigh under a pound and be about the size of a typical digital camera to be practical.

With the mandate from Congress, the deadline is tight for systems like this to be up and running.

Morgan E. Peck is a contributing writer to Tech Page One.

Morgan Peck covers technology with a focus on digital currencies and neurotechnology. Her work has appeared in Scientific American and on BBC Future.