Skip to Content
Artificial intelligence

Why Big Tech’s bet on AI assistants is so risky

Tech companies have not solved some of the persistent problems with AI language models.

an Ai genie emerges from a lamp
Stephanie Arnett/MITTR | Envato

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Since the beginning of the generative AI boom, tech companies have been feverishly trying to come up with the killer app for the technology. First it was online search, with mixed results. Now it’s AI assistants. Last week, OpenAI, Meta, and Google launched new features for their AI chatbots that allow them to search the web and act as a sort of personal assistant. 

OpenAI unveiled new ChatGPT features that include the ability to have a conversation with the chatbot as if you were making a call, allowing you to instantly get responses to your spoken questions in a lifelike synthetic voice, as my colleague Will Douglas Heaven reported. OpenAI also revealed that ChatGPT will be able to search the web.  

Google’s rival bot, Bard, is plugged into most of the company’s ecosystem, including Gmail, Docs, YouTube, and Maps. The idea is that people will be able to use the chatbot to ask questions about their own content—for example, by getting it to search through their emails or organize their calendar. Bard will also be able to instantly retrieve information from Google Search. In a similar vein, Meta too announced that it is throwing AI chatbots at everything. Users will be able to ask AI chatbots and celebrity AI avatars questions on WhatsApp, Messenger, and Instagram, with the AI model retrieving information online from Bing search. 

This is a risky bet, given the limitations of the technology. Tech companies have not solved some of the persistent problems with AI language models, such as their propensity to make things up or “hallucinate.” But what concerns me the most is that they are a security and privacy disaster, as I wrote earlier this year. Tech companies are putting this deeply flawed tech in the hands of millions of people and allowing AI models access to sensitive information such as their emails, calendars, and private messages. In doing so, they are making us all vulnerable to scams, phishing, and hacks on a massive scale. 

I’ve covered the significant security problems with AI language models before. Now that AI assistants have access to personal information and can simultaneously browse the web, they are particularly prone to a type of attack called indirect prompt injection. It’s ridiculously easy to execute, and there is no known fix. 

In an indirect prompt injection attack, a third party “alters a website by adding hidden text that is meant to change the AI’s behavior,” as I wrote in April. “Attackers could use social media or email to direct users to websites with these secret prompts. Once that happens, the AI system could be manipulated to let the attacker try to extract people’s credit card information, for example.” With this new generation of AI models plugged into social media and emails, the opportunities for hackers are endless. 

I asked OpenAI, Google, and Meta what they are doing to defend against prompt injection attacks and hallucinations. Meta did not reply in time for publication, and OpenAI did not comment on the record. 

Regarding AI’s propensity to make things up, a spokesperson for Google did say the company was releasing Bard as an “experiment,” and that it lets users fact-check Bard’s answers using Google Search. “If users see a hallucination or something that isn’t accurate, we encourage them to click the thumbs-down button and provide feedback. That’s one way Bard will learn and improve,” the spokesperson said. Of course, this approach puts the onus on the user to spot the mistake, and people have a tendency to place too much trust in the responses generated by a computer.

For prompt injection, Google confirmed it is not a solved problem and remains an active area of research. The spokesperson said the company is using other systems, such as spam filters, to identify and filter out attempted attacks, and is conducting adversarial testing and red teaming exercises to identify how malicious actors might attack products built on language models. “We’re using specially trained models to help identify known malicious inputs and known unsafe outputs that violate our policies,” the spokesperson said.  

Now, I get that there will always be early teething pains with every new product launch. But it’s saying a lot when even early cheerleaders of AI language model products have not been that impressed. Kevin Roose, a New York Times columnist, found that Google’s assistant was good at summarizing emails but also told him about emails that weren’t in his inbox. 

TL;DR? Tech companies shouldn’t be so complacent about the purported “inevitability” of AI tools. Ordinary people don’t tend to adopt technologies that keep failing in annoying and unpredictable ways, and it’s only a matter of time until we see the hackers using these new AI assistants maliciously. Right now, we are all sitting ducks. 

I don’t know about you, but I intend to wait a little longer before letting this generation of AI systems snoop around in my email. 

Deeper Learning

This robotic exoskeleton can help runners sprint faster

Now this is cool. An exoskeleton can help runners increase their speed by encouraging them to take more steps, allowing them to cover short distances more quickly. A team of researchers at Chung-Ang University in Seoul, South Korea, built a lightweight exosuit that helps people run faster by assisting their hip extension—the powerful motion that propels a runner forward. The suit’s sensors feed data into algorithms that track each runner’s individual running style and speed.

Harder, better, faster, stronger: The team tested the exosuit on nine young male runners, none of whom were considered to be elite athletes. The men sprinted outside in a straight line for 200 meters twice, once wearing the exosuit and once without. On average, the participants managed to run the distance 0.97 seconds faster when they were wearing the suit than when they weren’t.  Read more from Rhiannon Williams here

Bits and Bytes

Hollywood writers and studios reached a deal on the use of AI
The Writers Guild of America and the Alliance of Motion Picture and Television Producers have reached a deal ending the Hollywood writers’ strike and agreeing on terms of use for AI. The deal  stipulates that AI systems can’t be used to write or rewrite any scripts, and that studios must disclose when they give writers AI-generated materials. Writers will also be able to decide whether their scripts are used to train AI models. This move ensures that people can use AI as a tool, rather than simply being replaced by it. (Wired)

A French AI company launched an AI chatbot that gives detailed instructions on murder and ethnic cleansing
Eugh. Mistral, a French startup founded by former Meta and DeepMind people, has launched an open-source AI language model, which outperforms Meta’sLlama in some metrics. But unlike Llama, Mistral has no content filters and spews toxic content with no restrictions. (404 Media

OpenAI is making plans for a consumer device
OpenAI is in “advanced talks” with former Apple designer Sir Jony Ive and SoftBank to build the “iPhone of artificial intelligence.” It’s unclear what that device would look like or do. Consumer hardware is tricky to get right. Many tech companies have announced—and scrapped—ambitious plans to roll out consumer tech. My money is on a voice-controlled AI assistant. (The Information

These 183,000 books are fueling the biggest fight in publishing and tech
This will come in handy in copyright lawsuits against tech companies—a searchable database that lets you see which books and authors have been scraped into data sets to train generative AI systems. (The Atlantic

Update: This story has been updated since the newsletter was published yesterday. It now includes Google’s response to how it mitigates against prompt injection attacks.

Deep Dive

Artificial intelligence

Large language models can do jaw-dropping things. But nobody knows exactly why.

And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models.

Google DeepMind’s new generative model makes Super Mario–like games from scratch

Genie learns how to control games by watching hours and hours of video. It could help train next-gen robots too.

What’s next for generative video

OpenAI's Sora has raised the bar for AI moviemaking. Here are four things to bear in mind as we wrap our heads around what's coming.

Stay connected

Illustration by Rose Wong

Get the latest updates from
MIT Technology Review

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

Explore more newsletters

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.