A Closer Look at Artificial Intelligence

Industry resources and upcoming events.
March 28, 2016

OUTSIDE READING

“Mastering the Game of Go with Deep Neural Networks and Tree Search”
By David Silver et al.
Nature, January 18, 2016

The history of AI has been marked by ambitious time lines for success followed by disappointments, so it was heartening news when a program developed by Google’s DeepMind group was able to defeat a champion-level Go player a full decade before such a feat was thought possible. Go had been viewed as the ultimate challenge for game-playing AI systems. But the researchers behind the program told reporters that the milestone was even more significant: “Our hope is that one day [our methods] could be extended to help address some of society’s most pressing problems, from medical diagnostics to climate modeling.”

Personal Challenge 2016: Simple AI
By Mark Zuckerberg, January 3, 2016

If a run-of-the-mill programmer declared a New Year’s resolution to build a virtual personal assistant, it would not be news. But when the multibillionaire CEO of Facebook set himself that challenge for 2016, people took notice. Facebook has invested heavily in artificial-intelligence research, and Zuckerberg’s vision for a system “kind of like Jarvis in Iron Man” will build on the company’s recent advances in voice recognition. He hopes to control his home through simple commands and facial recognition so that, for example, friends and family can come and go without needing a key.

The Future of the Professions: How Technology Will Transform the Work of Human Experts
By Richard Susskind and Daniel Susskind
Oxford University Press, January 1, 2016

As expert systems become increasingly capable of doing things like providing medical and legal advice, drawing up building plans, and teaching students, the authors predict, these and other artificial-intelligence technologies will affect white-collar professions in the 21st century in much the same way blue-collar work was transformed by automation in the 20th century. In anticipation of these changes, they propose a fundamental rethinking of how expertise is produced and distributed in society.

“Can This Man Make AI More Human?”
By Will Knight
MIT Technology Review, December 17, 2015

Instead of feeding computers reams of data in the traditional approach to artificial intelligence, NYU researcher Gary Marcus is attempting to train them to behave more intelligently by closely following the way infants and adolescents pick up concepts. Tech Review’s AI correspondent Will Knight chronicles how Marcus’s startup Geometric Intelligence is developing systems that are more flexible than traditional deep-learning algorithms in complex environments.

“Human-Level Concept Learning through Probabilistic Program Induction”
By Brenden M. Lake et al.
Science, December 11, 2015

The Turing test is usually viewed as a conversational challenge for AI systems, but researchers at NYU, the University of Toronto, and MIT report that a new probabilistic program-induction algorithm can pass a visual Turing test by drawing handwritten characters in a way that is indistinguishable from human writing. With their algorithm, the researchers have created a system that can learn a new concept from just a single example in a classification task, rather than the hundreds of examples machine-learning algorithms usually require.

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
By John Markoff
Ecco, August 25, 2015

In his latest book, Pulitzer Prize–winning New York Times science writer John Markoff charts the rise of automation from the first industrial robots of the postwar era to the increasingly sophisticated machines ever more prevalent in our workplaces, public spaces, and homes. Markoff focuses particularly on the minds behind the machines at places like Google and Apple, exploring the dichotomy between those who seek to build robots to replace humans in certain tasks, like Andy Rubin, former head of robotics at Google, and those who aim to develop intelligent machines to augment human intelligence in day-to-day life, like Siri developer Tom Gruber.

“Our Fear of Artificial Intelligence”
By Paul Ford
MIT Technology Review, February 11, 2015

Responding to ideas in Oxford philosopher Nick Bostrom’s 2014 book Superintelligence, writer Paul Ford looks at whether it’s reasonable to fear that runaway AI machines will become self-aware and act in their own interests. Some prominent members of the AI community argue that these anxieties are based on a fundamental misunderstanding of how close researchers are to achieving anything resembling sentient machines. But others argue that even if thinking machines are a long way off, researchers working toward that goal must anticipate problems and contain them if possible.

Open Letter on Autonomous Weapons
Future of Life Institute, July 28, 2015

An open letter signed by more than 3,000 of the world’s top scientists and AI researchers calls for a ban on autonomous weapons that select and engage targets without human intervention and beyond meaningful human control. The letter writers acknowledge the potential advantages of removing humans from the front lines of war but argue that a “global AI arms race” in the coming decades would ultimately be bad for humanity.

“The Errors, Insights, and Lessons of Famous AI Predictions”
By Stuart Armstrong et al.
Journal of Experimental & Theoretical Artificial Intelligence, June 24, 2014

From the start, the AI field has been marked by a series of notable predictions about exactly when machines will exhibit something approaching human-level intelligence. This paper analyzes a few of the more famous predictions, beginning with the claim, made in the proposal for AI’s founding conference at Dartmouth in 1956, that just 10 scientists could make “a significant advance” toward simulated intelligence in only two months. The authors go on to break down the ideas in Ray Kurzweil’s 1999 book The Age of Spiritual Machines into dozens of testable predictions for the year 2009, calculating a success rate of around 50 percent.

Our Final Invention: Artificial Intelligence and the End of the Human Era
By James Barrat
Thomas Dunne Books, October 1, 2013

This book by a longtime chronicler of AI research asks whether self-aware machines will be as benevolent as their engineers intend them to be. Noting that computer intelligence will inevitably be unpredictable and inscrutable to humans, Barrat argues, “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans.”


CALENDAR

International Conference on Robotics and Artificial Intelligence
April 20–22, 2016
Los Angeles
icrai.org

International Conference on Artificial Intelligence and Statistics
May 9–11, 2016
Cadiz, Spain
aistats.org

International AAAI Conference on Web and Social Media
May 17–20, 2016
Cologne, Germany
icwsm.org

International Conference on Distributed Computing and Artificial Intelligence
June 1–3, 2016
Seville, Spain
dcai-conference.net

International Conference on Machine Learning
June 19–24, 2016
New York
icml.cc/2016

Conference on Uncertainty in Artificial Intelligence
June 25–29, 2016
New York
auai.org/uai2016

International Joint Conference on Artificial Intelligence
July 9–15, 2016
New York
ijcai-16.org

IEEE World Congress on Computational Intelligence
July 24–29, 2016
Vancouver, Canada
wcci2016.org

European Conference on Artificial Intelligence
August 29–September 2, 2016
The Hague, Netherlands
ecai2016.org

Conference on Neural Information Processing Systems
December 5–10, 2016
Barcelona, Spain
nips.cc/Conferences/2016
