Chinese Search Giant Baidu Hires Man Behind the “Google Brain”

Leading AI researcher Andrew Ng, who previously helped launch Google's deep-learning work, will lead a new effort by China's Baidu to create software that understands the world.

Baidu has long been referred to as “China’s Google” because it dominates Web search in the country. Today the comparison grew more apt: Baidu has opened a new artificial-intelligence research lab in Silicon Valley that will be overseen by Andrew Ng, a Stanford professor who played a key role at Google in a field called deep learning. He was also a cofounder of the online education company Coursera.

Recent advances have triggered a technological arms race in Silicon Valley, with big Web companies competing for the best academic talent. Like Google, Facebook, and other companies rushing to invest in deep learning, Baidu is motivated by the promise of dramatic advances in artificial intelligence.

Deep learning makes it possible for machines to process large amounts of data using simulated networks of simple neurons, crudely modeled on those found in biological brains. The approach has yielded dramatically improved software for tasks such as image and speech recognition (see “Deep Learning”), and it could ultimately allow apps, devices, and Internet services to understand things like images and text as well as humans do.
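To make that idea concrete, here is a minimal sketch in Python with NumPy of two stacked layers of simulated neurons turning an input into category scores. The layer sizes, random weights, and the layer function are illustrative assumptions, not any company's actual system; a real deep-learning model learns its weights from large amounts of data.

```python
import numpy as np

# A toy "deep" network: two stacked layers of simulated neurons.
# Weights here are random; a real system learns them from large
# amounts of data (images, audio, text).

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer of simple neurons: a weighted sum followed by a nonlinearity."""
    return np.maximum(0.0, x @ weights + biases)  # ReLU activation

# Hypothetical sizes: a 784-pixel image mapped through 128 hidden neurons
# to scores for 10 object categories.
W1, b1 = rng.normal(size=(784, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)) * 0.01, np.zeros(10)

image = rng.random(784)          # stand-in for a flattened input image
hidden = layer(image, W1, b1)    # first layer extracts simple features
scores = hidden @ W2 + b2        # second layer combines them into class scores
print(scores.argmax())           # index of the most likely category
```

Stacking many more such layers, each building on the features extracted by the one below, is what makes a network "deep."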

Although the recent boom in deep learning has its origins in academia, interest exploded in 2012 after Google researchers collaborating with Ng announced a breakthrough on a project dubbed “Google Brain.” They built software that analyzed 10 million photos taken from YouTube videos and learned to recognize thousands of objects, including human and cat faces, without human guidance (see “Self-Taught Software”).

Since then, U.S. tech giants have competed to hire leading figures in the relatively small field (see “Is Google Cornering the Market on Deep Learning?” and “Facebook Launches Advanced AI Effort”), and they have started to demonstrate how the approach can advance the technology they offer consumers. Google and Microsoft have used deep learning to improve speech recognition and translation (see “Google Puts Its Virtual Brain Technology to Work” and “Microsoft Brings Star Trek’s Voice Translator to Life”). Meanwhile, Facebook’s deep-learning researchers recently demonstrated face-processing software that comes close to matching human performance (see “Facebook Software Matches Faces Almost as Well as You Do”).

Before deciding to open the new lab, located in Sunnyvale, Baidu had been adding deep learning to several products since late 2012 with good results, says Kai Yu, director of the company's deep-learning lab in Beijing. The technology powers Baidu's translation app, which labels objects photographed with a smartphone with their Chinese and English names, and it is also used in the company's ad-targeting system. "We got an immediate return from adding deep learning to our ad system," says Yu. "It increased the click-through rate significantly."

Yu’s Beijing lab is focused on applying deep learning to existing Baidu products and those that will be introduced soon. The new Silicon Valley lab will work on more fundamental research, he says. The hope is that this broad remit and Ng’s star quality, combined with Baidu’s capacious stores of images, text, and video, will lure leading talent. “In Silicon Valley there’s a huge talent pool that is so unique,” says Yu. “We really want something revolutionary to come from the lab.”

Ng will guide that effort in his new position as Baidu’s head of research, overseeing the Silicon Valley lab, Yu’s lab, and another lab in Beijing that’s dedicated to big data. He will work out of the Sunnyvale lab, in which Baidu says it will invest $300 million over five years.

The lab’s research is led by Adam Coates, previously a PhD student and postdoctoral researcher in Ng’s Stanford research group. Coates says a major focus will be on building software that learns without human input, as the Google Brain system did—an approach known as unsupervised learning.

Unsupervised systems require less effort from programmers, but so far they have relatively poor accuracy, at least compared with humans. Google’s cat-recognition system reached around 70 percent accuracy, for example. “The biggest open question is ‘How can you use unsupervised learning to get to human-level performance?’” says Coates. But the payoff for improving even a little should be big. “So many of the products that we want to build are things that we want to interact with the world,” he says. “It’s applicable to robots and autonomous cars and mobile apps.”
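As a rough illustration of the unsupervised approach, the sketch below trains a tiny linear autoencoder in Python with NumPy: it learns to compress and reconstruct unlabeled data, using only reconstruction error as its training signal. The data, layer size, and learning rate are made up for the example; the Google Brain and Baidu systems are far larger and more sophisticated.

```python
import numpy as np

# A tiny autoencoder: it learns to compress and reconstruct unlabeled
# data, so no human-provided labels are needed. This is a toy stand-in
# for unsupervised feature learning, not Google's or Baidu's systems.

rng = np.random.default_rng(0)
X = rng.random((500, 20))             # 500 unlabeled 20-dimensional examples

n_hidden = 5                          # compress 20 features down to 5
W_enc = rng.normal(size=(20, n_hidden)) * 0.1
W_dec = rng.normal(size=(n_hidden, 20)) * 0.1
lr = 0.05

for step in range(2000):
    H = X @ W_enc                     # encode: learned compact features
    X_hat = H @ W_dec                 # decode: reconstruct the input
    err = X_hat - X                   # reconstruction error is the only signal
    # Gradient descent on mean squared reconstruction error
    # (the constant factor of 2 is folded into the learning rate).
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(float(np.mean(err ** 2)))       # reconstruction error falls, no labels involved
```

The hidden layer ends up capturing the structure of the data on its own; in a deep system, such learned features become the inputs to further layers.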

Eugenio Culurciello, a researcher at Purdue University who works on chips with neural networks built in (see “AI Chip to Help Computers Understand Images”), says the excitement about deep learning is justified. He points to how its methods have toppled the benchmarks that researchers use to rank machine-learning software. “Usually you improve by 2 percent on what came before,” he says. “These guys have been improving by 10 or 20 percent.”

Such results are why Facebook CEO Mark Zuckerberg made a surprise appearance at the NIPS conference for neural-network research last year. However, Michael Mozer, a professor at the University of Colorado, Boulder, and a board member of the NIPS Foundation, points out that the core algorithms these neural networks use are much the same as those that triggered a surge of optimism about artificial intelligence in the late 1980s. Recent breakthroughs have come from finding "tricks" that allow those algorithms to run at much larger scale, says Mozer. "The people that stuck with it are deservedly reaping the benefits now," he says, but deep learning is not as big a leap forward for the field as it is sometimes made out to be.

For now, relatively few people are versed in the tricks needed to get deep learning to work well, says Culurciello. “If you want to beat the crowd now, you have to try and buy the people that really know this stuff—otherwise you’ll be a few years behind,” he says.
