
China’s Baidu Releases Its AI Code

“China’s Google” is joining U.S. tech giants in giving away some of its code.
January 14, 2016

Google and Facebook aren’t the only ones vying to be the standard-bearer for the hottest AI technique around. China’s leading Internet search company, Baidu, which is also investing heavily in the popular and powerful machine-learning technology called deep learning, today released some key code it uses to make this AI software run very efficiently.

Baidu’s code was recently used to build an impressive speech-recognition system called Deep Speech 2. For some short sentences, this system is better than most humans at transcribing speech correctly (see “Baidu’s Deep-Learning System Rivals People at Speech Recognition”). This is an especially useful technology for Baidu, because it offers a better way for the company’s many millions of users to access its services, especially on mobile. Typing Chinese characters on a smartphone is tricky, and many people in China already prefer to use their voice to send short messages or to search the Web for information.

Deep learning allows computers to perform impressive feats, such as transcribing speech or recognizing objects in images almost flawlessly. A large simulated neural network is fed many labeled examples, such as audio clips of a certain word or images showing a particular object, and over time the network “learns” to recognize almost any new example.
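To make that training process concrete, here is a minimal sketch in PyTorch. It is not Baidu’s code; the network, the random stand-in data, and the hyperparameters are invented purely for illustration of how a network is fed labeled examples and gradually adjusts its weights.

```python
import torch
from torch import nn

# Toy stand-in for labeled training data: 100 feature vectors, each
# assigned one of 10 classes (real systems use audio clips or images).
inputs = torch.randn(100, 32)
labels = torch.randint(0, 10, (100,))

# A small simulated neural network: two layers of weighted connections.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# "Learning": repeatedly compare the network's guesses with the labels
# and nudge the weights to reduce the error.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
```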

Baidu’s code, called Warp-CTC, is essentially a fast implementation of connectionist temporal classification (CTC), a deep-learning algorithm developed some years ago, engineered to run very quickly on modern processors, particularly graphics chips. A startup called Nervana, which offers a deep-learning framework to companies that cannot or do not want to develop their own, is already using Warp-CTC as part of its technology.
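Warp-CTC itself exposes C and GPU interfaces, which are not reproduced here. As a rough sketch of what the underlying CTC objective computes (scoring every possible alignment between a long run of per-frame network outputs and a much shorter transcript), here is the equivalent loss in PyTorch; the shapes and data are invented for illustration.

```python
import torch
from torch import nn

# Invented shapes: 50 audio frames, a batch of 4 utterances,
# 28 output symbols (one "blank" plus 27 characters).
T, N, C, S = 50, 4, 28, 12

# Per-frame log-probabilities over the output symbols, as a speech
# network would emit (random here, just to exercise the loss).
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.randint(1, C, (N, S))              # transcript labels (0 = blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

# CTC sums over all ways of aligning the frame sequence to the shorter
# transcript, so no frame-by-frame alignment of the audio is needed.
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```

The point of Warp-CTC is that this same computation, which sits inside every training step of a system like Deep Speech 2, runs far faster when implemented carefully for parallel hardware.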

Baidu and other tech companies are making the code they use for deep learning freely available (see “Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology”) because they are eager to benefit from public research. They also hope to encourage researchers, and startups they might eventually acquire, to develop machine-learning systems that are compatible with their own technology.

The key ingredient needed to create powerful AI applications using such software is the data owned by the likes of Facebook, Google, and Baidu, but even some of this data might increasingly be released (see “Giant Yahoo Data Dump Aims to Help Computers Know What You Want”).
