“Mastering the Game of Go with Deep Neural Networks and Tree Search”
By David Silver et al.
Nature, January 28, 2016
The history of AI has been marked by ambitious timelines followed by disappointment, so it was heartening news when a program developed by Google’s DeepMind group defeated a champion-level Go player a full decade before such a feat was thought possible. Go had long been viewed as the ultimate challenge for game-playing AI systems, but the researchers behind the program told reporters that the milestone’s significance goes beyond games: “Our hope is that one day [our methods] could be extended to help address some of society’s most pressing problems, from medical diagnostics to climate modeling.”
Personal Challenge 2016: Simple AI
By Mark Zuckerberg, January 3, 2016
If a run-of-the-mill programmer declared a New Year’s resolution to build a virtual personal assistant, it would not be news, but when the multibillionaire CEO of Facebook set himself that challenge for 2016, people took notice. Facebook has invested heavily in artificial-intelligence research, and Zuckerberg’s vision for a system “kind of like Jarvis in Iron Man” will build on the company’s recent advances in voice recognition. He hopes to control his home through simple voice commands and facial recognition so that, for example, friends and family can come and go without needing a key.
The Future of the Professions: How Technology Will Transform the Work of Human Experts
By Richard Susskind and Daniel Susskind
Oxford University Press, January 1, 2016
As expert systems become increasingly capable of doing things like providing medical and legal advice, drawing up building plans, and teaching students, the authors predict, these and other artificial-intelligence technologies will affect white-collar professions in the 21st century in much the same way blue-collar work was transformed by automation in the 20th century. In anticipation of these changes, they propose a fundamental rethinking of how expertise is produced and distributed in society.
“Can This Man Make AI More Human?”
By Will Knight
MIT Technology Review, December 17, 2015
Instead of feeding computers reams of data, as in the traditional approach to artificial intelligence, NYU researcher Gary Marcus is attempting to train them to behave more intelligently by closely following the way infants and adolescents pick up concepts. Knight chronicles how Marcus’s startup, Geometric Intelligence, is developing systems that are more flexible than traditional deep-learning algorithms in complex environments.
“Human-Level Concept Learning through Probabilistic Program Induction”
Brenden M. Lake et al.
Science, December 11, 2015
The Turing test is usually viewed as a conversational challenge for AI systems, but researchers at NYU, the University of Toronto, and MIT report that a new algorithm based on probabilistic program induction can pass a visual Turing test by drawing handwritten characters in a way that is indistinguishable from human writing. With their algorithm, the researchers have created a system that can learn a new concept from just a single example in a classification task, rather than the hundreds of examples machine-learning algorithms usually require.
Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
By John Markoff
Ecco, August 25, 2015
In his latest book, Pulitzer Prize–winning New York Times science writer John Markoff charts the rise of automation from the first industrial robots of the postwar era to the sophisticated machines ever more prevalent in our workplaces, public spaces, and homes. Markoff focuses particularly on the minds behind the machines at places like Google and Apple, exploring the dichotomy between those who seek to build robots to replace humans in certain tasks, like Andy Rubin, former head of robotics at Google, and those who aim to develop intelligent machines to augment human intelligence in day-to-day life, like Siri developer Tom Gruber.
Open Letter on Autonomous Weapons
Future of Life Institute, July 18, 2015
An open letter signed by more than 3,000 of the world’s top scientists and AI researchers calls for a ban on autonomous weapons that select and engage targets without human intervention and beyond meaningful human control. The signatories acknowledge the potential advantages of removing humans from the front lines of war but argue that a “global AI arms race” in the coming decades would ultimately be bad for humanity.
“Our Fear of Artificial Intelligence”
By Paul Ford
MIT Technology Review, February 11, 2015
Responding to ideas in Oxford philosopher Nick Bostrom’s 2014 book Superintelligence, writer Paul Ford looks at whether it’s reasonable to fear that runaway AI machines will become self-aware and act in their own interests. Some prominent members of the AI community argue that these anxieties are based on a fundamental misunderstanding of how close researchers are to achieving anything resembling sentient machines. But others argue that even if thinking machines are a long way off, researchers working toward that goal must anticipate problems and contain them if possible.
“The Errors, Insights, and Lessons of Famous AI Predictions”
By Stuart Armstrong et al.
Journal of Experimental & Theoretical Artificial Intelligence, June 24, 2014
From the start, the AI field has been marked by a series of notable predictions about exactly when machines will exhibit something approaching human-level intelligence. This paper analyzes a few of the more famous predictions, beginning with the claim, before AI’s founding conference at Dartmouth in 1956, that 10 scientists could make “a significant advance” toward simulated intelligence in just two months. The authors go on to break down the ideas in Ray Kurzweil’s 1999 book The Age of Spiritual Machines into dozens of testable predictions for the year 2009, calculating a success rate of around 50 percent.
Our Final Invention: Artificial Intelligence and the End of the Human Era
By James Barrat
Thomas Dunne Books, October 1, 2013
This book by a longtime chronicler of AI research asks whether self-aware machines will be as benevolent as their engineers intend them to be. Noting that computer intelligence will inevitably be unpredictable and inscrutable to humans, Barrat argues, “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans.”
International Conference on Robotics and Artificial Intelligence
April 20–22, 2016
International Conference on Artificial Intelligence and Statistics
May 9–11, 2016
International AAAI Conference on Web and Social Media
May 17–20, 2016
International Conference on Distributed Computing and Artificial Intelligence
June 1–3, 2016
International Conference on Machine Learning
June 19–24, 2016
Conference on Uncertainty in Artificial Intelligence
June 25–29, 2016
International Joint Conference on Artificial Intelligence
July 9–15, 2016
IEEE World Congress on Computational Intelligence
July 24–29, 2016
European Conference on Artificial Intelligence
August 29–September 2, 2016
The Hague, Netherlands
Conference on Neural Information Processing Systems
December 5–10, 2016