Even if you don’t fear the imminent rise of super-intelligent machines, there’s reason to cheer new funding going into researching the topic, as it could help make artificial intelligence more practical in the near term.
Recent progress in computer science, especially machine learning, has coincided with some remarkably forthright speculation about where artificial intelligence could be taking us. Last year, billionaire entrepreneur Elon Musk openly warned that AI research risked “summoning the demon” and could pose the “biggest existential threat” to humanity. Other prominent figures, including Bill Gates and Stephen Hawking, have also expressed concern about the potential risks of developing truly intelligent machines.
The Future of Life Institute, an organization founded in Cambridge, Massachusetts, to mitigate the potential existential risks posed by AI, this week announced $7 million in grants for projects dedicated to “keeping AI robust and beneficial.” The grants were funded in large part by Musk, who has given $10 million to the institute.
Perhaps it does make sense to consider such undesirable outcomes, but we’re still a long way from creating anything we might consider genuinely intelligent. While some of the 37 projects to receive funding from the Future of Life Institute explore pretty far-out scenarios involving extremely powerful AI, others address important efforts to make software more dependable, accountable, and useful in complex or ambiguous contexts.
* Fuxin Li, a research scientist at Georgia Tech, will study ways to understand and predict errors in deep-learning systems. This is a very worthwhile effort. While these advanced neural networks have produced spectacular results in recent years in areas such as image and voice recognition, they can fail in surprising ways.
* Stefano Ermon, an assistant professor at Stanford University, will investigate ways to make autonomous agents behave rationally in complex situations. Eventually, this might, for example, help an automated car weigh the risks posed by different actions, enabling it to act in a way that we would find more responsible and ethically acceptable.
* Seth Herd, a researcher at the University of Colorado, will seek to apply neuroscience research on human decision-making to the design of so-called neuromorphic computer hardware, which is inspired by the brain. This might yield important insights into designing these systems and point to some novel applications.
* Manuela Veloso, a professor at Carnegie Mellon University, will lead an effort to develop ways for machines to explain their behavior to humans. It can be difficult even for experts to understand why a machine behaves in an unexpected way. Improving this situation could be especially important as robots start to work more closely alongside humans.
Ironically, these research projects also show just how far we are from building something that could conceivably take over the world. And while perhaps it makes sense to consider future risks while there’s still plenty of time, let’s hope that anxiety over futuristic scenarios doesn’t pose a risk to meaningful technological progress.