Doomsday Grants Will Advance Important AI Research

Machines have shown some impressive flashes of intelligence in recent years. Do we need to start teaching them right from wrong?

Even if you don’t fear the imminent rise of super-intelligent machines, there’s reason to cheer the new funding going toward research on the topic, as it could help make artificial intelligence more practical in the near term.

Recent progress in computer science, especially machine learning, has coincided with some remarkably forthright speculation about where artificial intelligence could be taking us. Last year, billionaire entrepreneur Elon Musk openly warned that AI research risked “summoning the demon” and could pose the “biggest existential threat” to humanity. Other prominent figures, including Bill Gates and Stephen Hawking, have also expressed concern about the potential risks of developing truly intelligent machines.

The Future of Life Institute, an organization founded in Cambridge, Massachusetts, to mitigate the potential existential risks posed by AI, this week announced $7 million in grants for projects dedicated to “keeping AI robust and beneficial.” The grants were funded in large part by Musk, who has given $10 million to the institute.

Perhaps it does make sense to consider such undesirable outcomes, but we’re still a long way from creating anything we might consider genuinely intelligent. While some of the 37 projects receiving funding from the Future of Life Institute explore pretty far-out scenarios involving extremely powerful AI, others address important efforts to make software more dependable, accountable, and useful in complex or ambiguous contexts.

For example:

* Fuxin Li, a research scientist at Georgia Tech, will study ways to understand and predict errors in deep-learning systems. This is a very worthwhile effort: while these advanced neural networks have produced spectacular results in recent years in areas such as image and voice recognition, they can fail in surprising ways.

* Stefano Ermon, an assistant professor at Stanford University, will investigate ways to make autonomous agents behave rationally in complex situations. Eventually, this might, for example, help an automated car weigh the risks posed by different actions in a complex situation, enabling it to ultimately act in a way that we would find more responsible and ethically acceptable.

* Seth Herd, a researcher at the University of Colorado, will seek to apply neuroscience research on human decision-making to efforts to build computer hardware with so-called neuromorphic systems, inspired by the brain. This might yield important insights on designing these systems and point to some novel applications.

* Manuela Veloso, a professor at Carnegie Mellon University, will lead an effort to develop ways for machines to explain their behavior to humans. It can be difficult even for experts to understand why a machine behaves in an unexpected way. Improving this situation could be especially important as robots start to work more closely alongside humans.

Ironically, these research projects also show just how far we are from building something that could conceivably take over the world. And while perhaps it makes sense to consider future risks while there’s still plenty of time, let’s hope that anxiety over futuristic scenarios doesn’t pose a risk to meaningful technological progress.