
Bill Gates isn’t too scared about AI

“The best reason to believe that we can manage the risks is that we have done it before.”

Bill Gates attends the World Leaders' Summit at COP26. (Getty Images)

Bill Gates has joined the chorus of big names in tech who have weighed in on the question of risk around artificial intelligence. The TL;DR? He’s not too worried; we’ve been here before.

The optimism is refreshing after weeks of doomsaying—but it comes with few fresh ideas. 

The billionaire business magnate and philanthropist made his case in a post on his personal blog, GatesNotes, today. “I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them,” he writes.

According to Gates, AI is “the most transformative technology any of us will see in our lifetimes.” That puts it above the internet, smartphones, and the personal computer, the technology he did more than most to bring into the world. (It also suggests that nothing to rival it will be invented in the next few decades.)

Gates was one of dozens of high-profile figures to sign a statement put out by the San Francisco–based Center for AI Safety a few weeks ago, which reads, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But there’s no fearmongering in today’s blog post. In fact, existential risk doesn’t get a look-in. Instead, Gates frames the debate as one pitting “longer-term” against “immediate” risk, and chooses to focus on “the risks that are already present, or soon will be.”

“Gates has been plucking on the same string for quite a while,” says David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in the UK. Gates was one of several public figures who talked about the existential risk of AI a decade ago, when deep learning first took off, says Leslie: “He used to be more concerned about superintelligence way back when. It seems like that might have been watered down a bit.”

Gates doesn’t dismiss existential risk entirely. He wonders what may happen “when”—not if—“we develop an AI that can learn any subject or task,” often referred to as artificial general intelligence, or AGI.

He writes: “Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all? But thinking about these longer-term risks should not come at the expense of the more immediate ones.”

Gates has staked out a kind of middle ground between deep-learning pioneer Geoffrey Hinton, who quit Google and went public with his fears about AI in May, and others like Yann LeCun and Joelle Pineau at Meta AI (who think talk of existential risk is “preposterously ridiculous” and “unhinged”) or Meredith Whittaker at Signal (who thinks the fears shared by Hinton and others are “ghost stories”).

It’s interesting to ask what contribution Gates makes by weighing in now, says Leslie: “With everybody talking about it, we’re kind of saturated.”

Like Gates, Leslie doesn’t dismiss doomer scenarios outright. “Bad actors can take advantage of these technologies and cause catastrophic harms,” he says. “You don’t need to buy into superintelligence, apocalyptic robots, or AGI speculation to understand that.”

“But I agree that our immediate concerns should be in addressing the existing risks that derive from the rapid commercialization of generative AI,” says Leslie. “It serves a positive purpose to sort of zoom our lens in and say, ‘Okay, well, what are the immediate concerns?’”

In his post, Gates notes that AI is already a threat in many fundamental areas of society, from elections to education to employment. Of course, such concerns aren’t news. What Gates wants to tell us is that although these threats are serious, we’ve got this: “The best reason to believe that we can manage the risks is that we have done it before.”

In the 1970s and ’80s, calculators changed how students learned math, allowing them to focus on what Gates calls the “thinking skills behind arithmetic” rather than the basic arithmetic itself. He now sees apps like ChatGPT doing the same with other subjects.

In the 1980s and ’90s, word processing and spreadsheet applications changed office work—changes that were driven by Gates’s own company, Microsoft.

Again, Gates looks back at how people adapted and claims that we can do it again. “Word processing applications didn’t do away with office work, but they changed it forever,” he writes. “The shift caused by AI will be a bumpy transition, but there is every reason to think we can reduce the disruption to people’s lives and livelihoods.”

Similarly with misinformation: we learned how to deal with spam, so we can do the same for deepfakes. “Eventually, most people learned to look twice at those emails,” Gates writes. “As the scams got more sophisticated, so did many of their targets. We’ll need to build the same muscle for deepfakes.”

Gates urges fast but cautious action to address all the harms on his list. The problem is that he doesn’t offer anything new. Many of his suggestions are tired; some are facile.

Like others in the last few weeks, Gates calls for a global body to regulate AI, similar to the International Atomic Energy Agency. He thinks this would be a good way to control the development of AI cyberweapons. But he does not say what those regulations should curtail or how they should be enforced.

He says that governments and businesses need to offer support such as retraining programs to make sure people do not get left behind in the job market. Teachers, he says, should also be supported in the transition to a world in which apps like ChatGPT are the norm. But Gates does not specify what this support would look like.

And he says that we need to get better at spotting deepfakes, or at least use tools that detect them for us. But the latest crop of tools cannot detect AI-generated images or text well enough to be useful. As generative AI improves, will the detectors keep up?

Gates is right that “a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.” But he often falls back on a conviction that AI will solve AI’s problems—a conviction that not everyone will share.

Yes, immediate risks should be prioritized. Yes, we have steered through (or bulldozed over) technological upheavals before and we could do it again. But how?

“One thing that’s clear from everything that has been written so far about the risks of AI—and a lot has been written—is that no one has all the answers,” Gates writes.

That’s still the case.
