Nature. The Proceedings of the National Academy of Sciences. The Journal of the American Medical Association.
These are some of the most elite academic journals in the world. And last year, one tech company, Alphabet’s Google, published papers in all of them.
The unprecedented run of scientific results by the Mountain View search giant touched on everything from ophthalmology to computer games to neuroscience and climate models. For Google, 2016 was an annus mirabilis during which its researchers cracked the top journals and set records for sheer volume.
Behind the surge is Google’s growing investment in artificial intelligence, particularly “deep learning,” a technique whose ability to make sense of images and other data is enhancing services like search and translation (see “10 Breakthrough Technologies 2013: Deep Learning”).
According to the tally Google provided to MIT Technology Review, it published 218 journal or conference papers on machine learning in 2016, nearly twice as many as it did two years ago.
We sought similar data from the Web of Science, a service of Clarivate Analytics, which confirmed the upsurge. Clarivate said that the impact of Google’s publications, according to a measure of publication strength it uses, was four to five times the world average. Among all companies that publish prolifically on artificial intelligence, Clarivate ranks Google No. 1 by a wide margin.
The publication explosion is no accident. Google has more than tripled the number of machine learning researchers working for the company over the last few years, according to Yoshua Bengio, a deep-learning specialist at the University of Montreal. “They have recruited like crazy,” he says.
And to capture the first-round picks from computational labs, companies can’t just offer a Silicon Valley-sized salary. “It’s hard to hire people just for money,” says Konrad Kording, a computational neuroscientist at Northwestern University. “The top people care about advancing the world, and that means writing papers the world can use, and writing code the world can use.”
At Google, the scientific charge has been spearheaded by DeepMind, the high-concept British AI company started by neuroscientist and programmer Demis Hassabis. Google acquired it for $400 million in 2014.
Hassabis has left no doubt that he’s holding onto his scientific ambitions. In a January blog post, he said DeepMind has a “hybrid culture” between the long-term thinking of an academic department and “the speed and focus of the best startups.” Aligning with academic goals is “important to us personally,” he writes. Kording, one of whose post-doctoral students, Mohammad Azar, was recently hired by DeepMind, says that “it’s perfectly understood that the bulk of the projects advance science.”
Last year, DeepMind published twice in Nature, the same storied journal where the structure of DNA and the sequencing of the human genome were first reported. One DeepMind paper concerned its program AlphaGo, which defeated top human players in the ancient game of Go; the other described how a neural network with a working memory could understand and adapt to new tasks.
Then, in December, scientists from Google’s research division published the first deep-learning paper ever to appear in JAMA, the august journal of America’s physicians. In it, they showed a deep-learning program could diagnose a cause of blindness from retina images as well as a doctor. That project was led by Google Brain, a different AI group, based out of the company’s California headquarters. It also says it prioritizes publications, noting that researchers there “set their own agenda.”
The contest to develop more powerful AI now involves hundreds of companies, with competition most intense between the top tech giants such as Google, Facebook, and Microsoft. All see the chance to reap new profits by using the technology to wring more from customer data, to get driverless cars on the road, or to advance medicine. Research is occurring in a hothouse atmosphere reminiscent of the early days of computer chips, or of the first biotech plants and drugs, times when notable academic firsts also laid the foundation stones of new industries.
That explains why publication score-keeping matters. The old academic saw “publish or perish” is starting to define the AI race, leaving companies that have weak publication records at a big disadvantage. Apple, famous for strict secrecy around its plans and product launches, found that its culture was hurting its efforts in AI, which have lagged those of Google and Facebook.
So when Apple hired computer scientist Russ Salakhutdinov from Carnegie Mellon last year as its new head of AI, he was immediately allowed to break Apple’s code of secrecy by blogging and giving talks. At a major machine-learning science conference late last year in Barcelona, Salakhutdinov made a point of announcing that Apple would start publishing, too. He showed a slide: “Can we publish? Yes.”
Salakhutdinov will speak at MIT Technology Review’s EmTech Digital event on artificial intelligence next week in San Francisco.