AI’s Very Disruptive Time
Ryan Adams knows his timing has been perfect. A professor of computer science at Harvard since 2011 and a cohost of the machine-learning podcast Talking Machines, Adams was leading a group doing research on intelligent algorithms when his 15-month-old machine-learning startup, Whetlab, was purchased by Twitter last summer.
Whetlab’s technology automates some of the hardest parts of building large-scale machine-learning systems. It was created to take on difficult machine-learning challenges like visual object recognition and speech processing.
Harvard researchers started using the tool in a wide range of projects, from biomedical robots to chemistry problems, and Netflix used an early open-source version to experiment with deep learning as well.
Now on leave from Harvard, Adams spoke to Business Reports senior editor Nanette Byrnes at Twitter’s offices in Cambridge, Massachusetts, about the exploding interest in machine learning.
Artificial intelligence has moved from being a focus of academic study to a commercial tool. What’s driving that? New algorithms, fast computers, tons of available data?
As much as anything else, I think investment in AI has made a big difference. At this point, there’s been billions of dollars of investment by tech companies, and that makes things go faster.
Like Twitter buying your company. How can machine learning make Twitter better? Can you give an example?
There are immense opportunities for improving the way that Twitter content is organized, helping you find the new things that are going on, helping you discover communities that you can interact with, and just ways that Twitter can be a better experience for its users. One of the challenges you can imagine is combining the interesting information people provide links to and trying to understand that content as it relates to the content that’s on Twitter.
To what extent are AI techniques like deep learning still a mystery?
Right now deep learning is very much on the empirical end of things. Important stuff is clearly going on. These [deep-learning systems] are doing cool stuff. We understand very little about how they do it, but they do work.
It can be difficult to define AI, and even the proper test of artificial intelligence is up for debate.
Part of the challenge is that we anthropomorphize the concept of intelligence. We use the phrase “artificial” intelligence, as though intelligence weren’t a property of the world. We don’t call airplanes artificial birds, and they don’t have artificial flight. They have actual flight, right?
That’s a very anthropocentric view—that if there were another intelligent thing, it would be artificial. So I think it’s very hard to come up with a definition of intelligence that is not anthropocentric, and I don’t have one.
If you went back and you said to an early thinker about AI, 50 or 60 years ago, “You’re going to have with you at all times a device, and essentially it can answer any question that you’d like to answer across a huge range of topics; it can understand your voice and provide a view on any place in the world, tell you how to get from point A to point B”—if you explained in the abstract what your smartphone is capable of doing via Google and various kinds of mapping tools and Siri—I think that person would say, “That’s AI.” Yet what we expect from the tools that we use just changes massively over time.
So far companies have been remarkably open about sharing AI insights, releasing open-source software, allowing staff to publish papers and speak at conferences, and so on. How long do you think that will last?
Opening up the code is good for contributing back to the community, helps recruit top machine-learning talent, and also lets companies take advantage of improvements the larger community makes to the tools.
Why don’t these companies feel like they’re giving away the farm when they give away their code and their ideas? Because other companies don’t have Google’s computing power, they don’t have Twitter’s computing power, and they don’t have the data, right? So you can have the ideas. You can have the code. But if you don’t have the data and you don’t have the horsepower, what are you going to do with them?
What form do you think AI will take?
To me, AI doesn’t look much like a robot that’s suddenly very smart. I think AI just looks like tools that get better and better all the time.
One thing I do worry about is that I think we’re on the cusp of having the ability (with machine learning and AI) to synthesize media to create something that’s very difficult to distinguish from the real thing. These are very dangerous tools to have in a society that depends increasingly on things like video to represent truth.