
Is the Chatbot Trend One Big Misunderstanding?

China’s messaging services show that conversational interfaces are not always desirable.
April 25, 2016

Picture a nightmarish future in which half-witted “conversational user interfaces” drive us insane with mindless conversation and misunderstandings. As U.S. tech companies rush to emulate the success of China’s messaging platforms, this could be where we’re headed. 

Dan Grover, a product manager for China’s wildly successful WeChat messaging service, argues that the chatbots and chatbot tools being developed by U.S. companies, such as Microsoft and Facebook, are inspired, in large part, by the many chat-based services now available in China. These chat services can be used for all sorts of tasks, including transferring money, paying restaurant bills, and looking up flight information. 

The chatbots being developed in the U.S. are designed to perform tasks, such as searching for a flight or ordering a pizza, via a friendly back-and-forth with users. But, as Grover notes, China’s most successful chat interfaces forgo natural language in favor of more conventional input mechanisms, such as multiple-choice answers or buttons that appear in chat bubbles.

That might seem backward, but it’s actually a more efficient way to order a pizza through a messaging platform than conversing with a chatbot. Grover points out that conversational user interfaces require far more actions from a user than a simple messaging interface (73 taps for a conversational interface versus 16 for a button-based one when ordering a pizza, for example).

Nearly 700 million people use WeChat to do everything from paying bills to chatting with friends.

I fear Grover is right, to some extent: chatbots may prove incredibly annoying. And they may be even more annoying if app designers fail to appreciate the significant challenges that remain in getting computers to parse and respond to natural language.

But it would also be a mistake to give up on conversational interfaces altogether. For more open-ended tasks, some sort of dialogue may well be more desirable. And as Amazon’s voice-controlled Echo shows, when you speak to a device you really need something that can converse, even if only on a very basic level.

(Read more: Dan Grover, "How to Prevent a Plague of Dumb Chatbots")

