Artificial intelligence

Baidu shows off its instant pocket translator

The Chinese internet giant says it’s made significant strides in machine translation thanks to neural networks.
March 27, 2018

Baidu showed off the speed of its pocket translator for the first time in the United States during an afternoon presentation at MIT Technology Review's EmTech Digital conference in San Francisco. 

The Chinese internet giant has made significant strides in machine translation since 2015, using an advanced form of artificial intelligence known as deep learning, said Hua Wu, the company’s chief scientist focused on natural-language processing. On stage, the internet-connected device almost instantly translated a short conversation between Wu and senior editor Will Knight. It easily rendered Knight’s questions—including “Where can I buy this device?” and “When will machines replace humans?”—into Mandarin, and relayed Wu’s responses in clear, if machine-inflected, English.

(Knight’s own rough Mandarin, however, seemed to be a challenge beyond the device’s current ability.)

The product taps into Baidu’s cloud-based translation software and doubles as a Wi-Fi hotspot. The company designed the gadget, which currently converts only between English, Chinese, and Japanese, to help tourists more easily navigate foreign cities. Baidu launched the device in December, but so far it can only be leased at travel agencies and airports in China.

Additional languages and markets are likely to come in the future.
