How Drones May Avoid Collisions by Sharing Knowledge
If the U.S. Federal Aviation Administration allows the widespread use of commercial drones, the skies could soon buzz with swarms of unmanned aerial vehicles, especially in dense urban cores. That means drones will need to avoid collisions autonomously, since their numbers will be too high for human air-traffic controllers to manage at all times.
The Stanford Intelligent Systems Laboratory is just one of more than 130 teams working with NASA on the problem of managing drone traffic. The traffic-management system, which will be under development for the next few years, will help drones communicate with one another and avoid potential collisions.
“They’re going to be doing much more unusual missions that will require them to fly in flight paths that are curvy,” says Mykel Kochenderfer, director of the Stanford laboratory. “Being robust to that uncertainty is very, very important.”
A recent paper published by Kochenderfer and mechanical engineering graduate student Hao Yi Ong describes a quick decision process the traffic-management system can use to reroute drones and avoid a collision. Their team ran more than a million simulations of conflict situations involving between two and 10 drones. Drones were given varying levels of information about the other drones in the system and were then evaluated on their response time and how often they ran into conflict.
The Stanford researchers found that drones could make the quickest decisions when each was paired with its closest neighbor and the two considered only each other's behavior. The slowest response occurred when drones assessed their own surroundings and then fed their results into a central system that sent decisions back to the entire group. Decision time always increased as more drones entered the simulation, but the system was always able to make a rerouting decision within 50 milliseconds.
Feeding data into a central decision-making system may have been the slowest approach, but it was also the safest: drones were least likely to encounter conflict that way. Drones that merely received location data about other drones and assumed they would stay on the same path were the most likely to encounter conflict.
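The fastest strategy described above pairs each drone with its nearest neighbor so the two can deconflict locally. The following is a minimal, hypothetical sketch of that pairing step only (it is not the Stanford system, and the function name and positions are invented for illustration): given 2-D drone positions, it finds each drone's closest partner.

```python
import math

def nearest_neighbor_pairs(positions):
    """Map each drone's index to the index of its closest other drone.

    positions: list of (x, y) tuples, one per drone.
    """
    pairs = {}
    for i, (xi, yi) in enumerate(positions):
        best_j, best_dist = None, float("inf")
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue  # a drone is never paired with itself
            dist = math.hypot(xi - xj, yi - yj)
            if dist < best_dist:
                best_j, best_dist = j, dist
        pairs[i] = best_j
    return pairs

# Two loose clusters of two drones each: pairing falls out within clusters.
drones = [(0.0, 0.0), (1.0, 0.5), (5.0, 5.0), (5.5, 4.5)]
print(nearest_neighbor_pairs(drones))  # {0: 1, 1: 0, 2: 3, 3: 2}
```

Once paired, each drone would only have to reason about one other vehicle's behavior, which is why this scheme can decide faster than one that aggregates every drone's data centrally, at the cost of the lower conflict rates the centralized approach achieved.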
The Stanford lab also works with autonomous cars and air-traffic control for conventional planes. One of its projects, which Kochenderfer developed in part with former colleagues at MIT, involved using a small amount of computing power to decide how a plane should avoid a collision. Traditionally, collision avoidance has been guided by nearly 2,000 pages of documents that detail every possible scenario and how to react. Stanford and MIT’s solution is currently being standardized for use on all large aircraft.
NASA plans to spend 2016 testing the drone-traffic-management systems it has developed thus far at the drone test sites set up across the U.S. by the FAA. Back in November, a NASA team flew a drone at Moffett Field in California while simulating conflicts with computer-generated drones, triggering an early version of the traffic-management system to alert the drones about the potential collisions. The FAA also tested similar systems developed by drone software and services company PrecisionHawk (see “FAA Will Test Drones’ Ability to Steer Themselves Out of Trouble”).
“To allow large-scale UAS [unmanned aircraft systems] with a mix of beyond visual line of sight and within visual line of sight, we need a system that consists of technologies to manage airspace and capabilities on the UAS itself, rules of the airspace, and procedures for managing contingencies and emergencies,” says Parimal Kopardekar, who leads NASA’s drone-traffic-control program.
Kochenderfer says the Stanford researchers have tested their work in simulations, but have yet to see it operate with real drones. Validating that it works in the air is one of the final steps.
“This is one of the most exciting areas of aerospace right now—the use of drones,” Kochenderfer says. “Many of the applications they enable can lead to new economic models, but the potential for saving lives and improving efficiency, I think that’s really quite interesting.”