
That wasn’t Google I/O — it was Google AI

If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.

May 11, 2023
Dan Deacon performing at Google I/O, assisted by generative AI tools and a person in a duck suit. (Mat Honan)

Things got weird at yesterday’s Google I/O conference right from the jump, when the duck hit the stage.  

The day began with a musical performance described as a “generative AI experiment featuring Dan Deacon and Google’s MusicLM, Phenaki, and Bard AI tools.” It wasn’t clear exactly how much of it was machine-made and how much was human. There was a long, lyrically rambling dissertation about meeting a duck with lips. Deacon informed the audience that we were all in a band called Chiptune and launched into a song with various chiptune riffs layered on top of one another. Later he had a song about oat milk? I believe the lyrics were entirely AI-generated. Someone wearing a duck suit with lipstick came out and danced on stage. It was all very confusing.

Then again, everything about life in the AI era is a bit confusing and weird. And this was, no doubt, the AI show. It was Google I/O as Google AI. So much so that on Twitter, the internet’s comment section, person after person used #GoogleIO to complain about all the AI talk, and exhorted Google to get on with it and get to the phones. (There was an eagerly anticipated new phone, the Pixel Fold. It folds.) 

Yet when Google CEO Sundar Pichai, who once ran the company’s efforts with Android, stepped on stage, he made it clear what he was there to talk about. It wasn’t a new phone—it was AI. He opened by going straight at the ways AI is in everything the company does now. With generative AI, he said, “we are reimagining all our core products, including Search.” 

I don’t think that’s quite right. 

At Google in 2023, it seems pretty clear that AI itself is now the core product. Or at least it’s the backbone of that product, a key ingredient that manifests itself in different forms. As my colleague Melissa Heikkilä put it in her report on the company’s efforts: Google is throwing generative AI at everything.

The company made this point in one demo after another, all morning long. A Gmail demo showed how generative AI can compose an elaborate email to an airline to help you get a refund. The new Magic Editor in Google Photos will not only remove unwanted elements but reposition people and objects in photos, make the sky brighter and bluer, and then adjust the lighting in the photo so that all that doctoring looks natural. 

In Docs, the AI will create a full job description from just a few words. It will generate spreadsheets. Help you plan your vacation in Search, adjust the tone of your text messages to be more professional (or more personable), give you an “immersive view” in Maps, summarize your email, write computer code, seamlessly translate and lip-sync videos. It is so deeply integrated into not only the Android operating system but the hardware itself that Google now makes “the only phone with AI at its center,” as Google’s Rick Osterloh said in describing the Tensor G2 chip. Phew.

Google I/O is a highly, highly scripted event. For months now the company has faced criticism that its AI efforts were being outpaced by the likes of OpenAI’s ChatGPT and Microsoft’s Bing. Alarm bells were sounding internally, too. Yesterday felt like a long-planned answer to that. Taken together, the demos came across as a kind of flex—a way to show what the company has under the hood and how it can deploy that technology throughout its existing, massively popular products (Pichai noted that the company has five different products with more than 2 billion users). 

And yet at the same time, it is clearly trying to walk a line, showing off what it can do but in ways that won’t, you know, freak everyone out.

Three years ago, the company forced out Timnit Gebru, the co-lead of its ethical AI team, essentially over a paper that raised concerns about the dangers of large language models. Gebru’s concerns have since become mainstream. Her departure, and the fallout from it, marked a turning point in the conversation about the dangers of unchecked AI. One would hope Google learned from it; from her. 

And then, just last week, Geoffrey Hinton announced he was stepping down from Google, in large part so he’d be free to sound the alarm bell about the dire consequences of rapid advancements in AI that he fears could soon enable it to surpass human intelligence. (Or, as Hinton put it, it is “quite conceivable that humanity is just a passing phase in the evolution of intelligence.”) 

And so, I/O yesterday was a far cry from the event in 2018, when the company gleefully demonstrated Duplex, showcasing how Google Assistant could make automated calls to small businesses without ever letting the people on those calls know they were interacting with an AI. It was an incredible demo. And one that made very many people deeply uneasy.

Again and again at this year’s I/O, we heard about responsibility. James Manyika, who leads the company’s technology and society program, opened by talking about the wonders AI has wrought, particularly around protein folding, but was quick to transition to the ways the company is thinking about misinformation, noting how it would watermark generated images and alluding to guardrails to prevent their misuse. 

There was a demo of how Google can deploy image provenance to counter misinformation, effectively debunking an image (in the example on stage, a fake photo purporting to show that the moon landing was a hoax) by showing the first time it was indexed. It was a little bit of grounding amid all the awe and wonder, operating at scale. 

And then … on to the phones. The new Google Pixel Fold scored the biggest applause line of the day. People like gadgets.

The phone may fold, but for me it was among the least mind-bending things I saw all day. And in my head, I kept returning to one of the earliest examples we saw: a photo of a woman standing in front of some hills and a waterfall.

Magic Editor erased her backpack strap. Cool! And it also made the cloudy sky look a lot more blue. Reinforcing this, in another example—this time with a child sitting on a bench holding balloons—Magic Editor once again made the day brighter and then adjusted all the lighting in the photo so the sunshine would look more natural. More real than real.

How far do we want to go here? What’s the end goal we are aiming for? Ultimately, do we just skip the vacation altogether and generate some pretty, pretty pictures? Can we supplant our memories with sunnier, more idealized versions of the past? Are we making reality better? Is everything more beautiful? Is everything better? Is this all very, very cool? Or something else? Something we haven’t realized yet?
