
Here’s how Microsoft could use ChatGPT

Plus: Roomba testers feel misled after intimate images ended up on Facebook.

"""

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion in OpenAI. Some of these features might roll out as early as March, according to The Information.

This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions about how they plan to integrate AI into Microsoft’s products, even though that work must be well underway. However, we know enough to make some informed guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.

Let’s start with online search, the application that has received the most attention. ChatGPT’s popularity has shaken Google, which reportedly considers it a “code red” for the company’s ubiquitous search engine. Microsoft is reportedly hoping to integrate ChatGPT into its (much maligned) search engine, Bing.

It could work as a front end to Bing that answers people’s queries in natural language, according to Melanie Mitchell, a researcher at the Santa Fe Institute, a research nonprofit. AI-powered search could mean that when you ask something, instead of getting a list of links, you get a complete paragraph with the answer. 
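To make that concrete, here is a minimal sketch of how such a front end could work: fetch a few search result snippets, then ask a language model to compose a one-paragraph answer from them. Everything here is an illustrative assumption, not Microsoft’s actual design. ChatGPT itself has no public API at the time of writing, so the sketch uses OpenAI’s general-purpose completions endpoint, and the search_web() helper is a stand-in for a real search API.

```python
# Hypothetical sketch of a chat front end to web search: retrieve snippets,
# then have a language model synthesize a one-paragraph answer from them.
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

def search_web(query: str) -> list[str]:
    # In a real system this would call a search API (e.g., Bing) and return
    # the top result snippets. Canned results keep the sketch self-contained.
    return [
        "ChatGPT is a conversational AI model released by OpenAI in November 2022.",
        "Microsoft has invested in OpenAI and offers its models through Azure.",
    ]

def answer_query(query: str) -> str:
    snippets = "\n".join(f"- {s}" for s in search_web(query))
    prompt = (
        f"Using only these search results:\n{snippets}\n\n"
        f"Answer in one paragraph: {query}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family model available via the API
        prompt=prompt,
        max_tokens=200,
        temperature=0.2,  # keep the answer close to the supplied sources
    )
    return response.choices[0].text.strip()

print(answer_query("What is ChatGPT?"))
```

Instead of a ranked list of links, the user sees a single synthesized paragraph, which is exactly what makes the accuracy problems described below so consequential.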

However, there’s a good reason why Google hasn’t already gone ahead and incorporated its own powerful language models into Search. Models like ChatGPT have a notorious tendency to spew biased, harmful, and factually incorrect content. They are great at generating slick language that reads as if a human wrote it. But they have no real understanding of what they are generating, and they state both facts and falsehoods with the same high level of confidence.

When people search for information online today, they are presented with an array of options, and they can judge for themselves which results are reliable. A chat AI like ChatGPT removes that “human assessment” layer and forces people to take results at face value, says Chirag Shah, a computer science professor at the University of Washington who specializes in search engines. People might not even notice when these AI systems generate biased content or misinformation, and then end up spreading it further, he adds.

When asked, OpenAI was cryptic about how it trains its models to be more accurate. A spokesperson said that ChatGPT was a research demo, and that it’s updated on the basis of real-world feedback. But it’s not clear how that will work in practice, and accurate results will be crucial if Microsoft wants people to stop “googling” things. 

In the meantime, it’s more likely that we are going to see apps such as Outlook and Office get an AI injection, says Shah. ChatGPT’s potential to help people write more fluently and more quickly could be Microsoft’s killer application. 

Language models could be integrated into Word to make it easier for people to summarize reports, write proposals, or generate ideas, Shah says. They could also give email programs and Word better autocomplete tools, he adds. And it’s not all text-based: Microsoft has already said it will use OpenAI’s text-to-image generator DALL-E to create images for PowerPoint presentations.
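As a rough illustration of the DALL-E piece, the sketch below generates an image from a text prompt and places it on a slide. It uses OpenAI’s public image API and the open-source python-pptx library; the prompt, layout, and file names are made up, and none of this reflects how Microsoft would actually wire the feature into PowerPoint.

```python
# Hypothetical sketch: generate a DALL-E image and drop it onto a slide.
import requests
import openai                   # pip install openai
from pptx import Presentation   # pip install python-pptx
from pptx.util import Inches

# 1. Ask DALL-E for an illustration.
image = openai.Image.create(
    prompt="a friendly robot sorting a pile of emails, flat illustration",
    n=1,
    size="1024x1024",
)
image_url = image["data"][0]["url"]

# 2. Download the generated image to disk.
with open("slide_art.png", "wb") as f:
    f.write(requests.get(image_url).content)

# 3. Insert it into a new presentation on a blank slide.
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # layout 6 = blank
slide.shapes.add_picture("slide_art.png", Inches(1), Inches(1), width=Inches(6))
prs.save("deck.pptx")
```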

We are also not too far from the day when large language models can respond to voice commands or read out text, such as emails, Shah says. This might be a boon for people with learning disabilities or visual impairments.

Online search isn’t the only kind of search this technology could improve, either. Microsoft could also use it to help people search their emails and documents.
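One plausible way to build that kind of search is semantic ranking with text embeddings, where documents are matched to a query by meaning rather than by exact keywords. The minimal sketch below uses OpenAI’s public embeddings endpoint; the sample emails and the choice of model are assumptions for illustration, not anything Microsoft has announced.

```python
# Minimal sketch: rank emails against a query using text embeddings.
import numpy as np
import openai  # pip install openai

def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

emails = [
    "Quarterly budget review moved to Friday at 3pm.",
    "Reminder: submit your travel expenses by the end of the month.",
    "Lunch on Thursday to celebrate the product launch?",
]

doc_vecs = embed(emails)
query_vec = embed(["when is the budget meeting?"])[0]

# Cosine similarity: the highest score is the best semantic match.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(emails[int(np.argmax(scores))])
```

Note that the query “when is the budget meeting?” matches the budget email even though the two share almost no exact words, which is what a keyword search would miss.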

But here’s the important question people aren’t asking enough: Is this a future we really want? 

Blindly adopting these technologies and automating our communications and creative ideas could cause humans to lose agency to machines. And there is a risk of “regression to the meh,” where the personality is sucked out of our messages, says Mitchell.

“​The bots will be writing emails to the bots, and the bots will be responding to other bots,” she says. “That doesn't sound like a great world to me.” 

Language models are also great copycats. Every prompt entered into ChatGPT helps train it further. In the future, as these technologies are embedded ever deeper into our daily tools, they could learn our personal writing styles and preferences. They could even manipulate us into buying stuff or acting in certain ways, warns Mitchell.

It’s also unclear whether this will actually improve productivity, since people will still have to edit and double-check the accuracy of AI-generated content. Alternatively, there’s a risk that people will blindly trust it, a well-documented problem with automated systems sometimes called automation bias.

“We'll all be the beta testers for these things,” Mitchell says.

Deeper Learning

Roomba testers feel misled after intimate images ended up on Facebook

Late last year, we published a bombshell story about how sensitive images of people collected by Roomba vacuum cleaners ended up leaking online. These people had volunteered to test the products, but it had never remotely occurred to them that their data could end up leaking in this way. The story offered a fascinating peek behind the curtain at how the AI algorithms that control smart home devices are trained. 

The human cost: In the weeks since the story’s publication, nearly a dozen Roomba testers have come forward. They feel misled and dismayed about how iRobot, Roomba’s creator, handled their data. They say it wasn’t clear to them that the company would share test users’ data in a sprawling, global data supply chain, where everything (and every person) captured by the devices’ front-facing cameras could be seen, and perhaps annotated, by low-paid contractors outside the United States who could screenshot and share images at will. Read more from my colleague Eileen Guo.

Bits and Bytes

Alarmed by AI chatbots, universities have started revamping how they teach
The college essay is dead, long live ChatGPT. Professors have started redesigning their courses to take into account that AI can write passable essays, shifting toward oral exams, group work, and handwritten assignments. (The New York Times)

Artists have filed a class action lawsuit against Stable Diffusion
A group of artists has filed a class action lawsuit against Stability AI, DeviantArt, and Midjourney for using Stable Diffusion, an open-source text-to-image AI model. The artists claim these companies stole their work to train the model. If successful, the lawsuit could force AI companies to compensate artists for using their work.

The artist's lawyers argue that the “mis­ap­pro­pri­a­tion” of copyrighted works could be worth roughly $5 bil­lion. By way of comparison, the thieves who carried out the biggest art heist ever made off with works worth a mere $500 million. 

Why are so many AI systems named after Muppets?
Finally, an answer to the biggest minor mystery around language models. ELMo, BERT, ERNIEs, KERMIT: a surprising number of large language models are named after Muppets. Many thanks to James Vincent for answering a question that has been bugging me for years. (The Verge)

Before you go... A new MIT Technology Review report about how industrial design and engineering firms are using generative AI is set to come out soon. Sign up to get notified when it’s available.

