Generally, computers slow down as they age. Their processors struggle to handle newer software. Apple even deliberately slows its iPhones as their batteries degrade. But Google researchers have published details of a project that could let a laptop or smartphone learn to do things better and faster over time.
The researchers tackled a common problem in computing called prefetching. A processor can crunch data far faster than that data can be fetched from main memory. To avoid bottlenecks, computers try to predict which information is likely to be needed next and pull it into fast memory in advance. As computers get more powerful, making those predictions well becomes progressively harder.
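Today's prefetchers typically rely on simple hand-built heuristics. As a rough illustration (not a mechanism from the paper), here is a minimal sketch in Python of a stride prefetcher: it watches the gaps between successive memory addresses and, when the gap is stable, guesses the next address.

```python
# Minimal sketch of a classic stride prefetcher (illustrative only, not
# the mechanism described in the Google paper). It tracks the delta
# between successive memory addresses and, once a delta repeats,
# predicts that the next access will continue the same stride.

class StridePrefetcher:
    def __init__(self):
        self.last_addr = None
        self.last_delta = None

    def access(self, addr):
        """Record a memory access; return a predicted next address or None."""
        prediction = None
        if self.last_addr is not None:
            delta = addr - self.last_addr
            if delta == self.last_delta:
                # The stride is stable, so prefetch one step ahead.
                prediction = addr + delta
            self.last_delta = delta
        self.last_addr = addr
        return prediction


prefetcher = StridePrefetcher()
for addr in [100, 108, 116, 124, 4096, 4104, 4112]:
    guess = prefetcher.access(addr)
    print(f"accessed {addr}, prefetch {guess}")
```

A real hardware prefetcher tracks many such streams at once, but the limitation is the same: the heuristic breaks down as soon as access patterns stop being regular, which is where learned prediction could help.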
In a paper posted online this week, the Google team describes using deep learning—an AI method that employs a large simulated neural network—to improve prefetching. Although the researchers haven’t shown how much this speeds things up, the boost could be big, given what deep learning has brought to other tasks.
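One way to frame prefetching as a learning problem, broadly in the spirit of this line of research, is to treat the sequence of address deltas as tokens and train a recurrent network to predict the next one. The sketch below, in PyTorch, shows that framing; the vocabulary size, model dimensions, and toy training step are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: prefetching framed as next-token prediction over
# memory-address deltas, using a small LSTM. The delta vocabulary and
# hyperparameters are assumptions for the example, not values from the
# Google paper.
import torch
import torch.nn as nn

VOCAB = 512        # assume address deltas are bucketed into 512 classes
EMBED, HIDDEN = 32, 64

class DeltaPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, deltas):
        # deltas: (batch, seq_len) integer-coded address deltas
        out, _ = self.lstm(self.embed(deltas))
        return self.head(out)   # (batch, seq_len, VOCAB) next-delta logits

model = DeltaPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step on a fake access trace: predict delta t+1 from deltas <= t.
trace = torch.randint(0, VOCAB, (8, 33))          # (batch, seq_len + 1)
inputs, targets = trace[:, :-1], trace[:, 1:]
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

At inference time, the highest-probability delta gives a candidate address to fetch ahead of the processor; in practice a system might prefetch the few most likely predictions rather than just one.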
“The work that we did is only the tip of the iceberg,” says Heiner Litz of the University of California, Santa Cruz, a visiting researcher on the project. Litz believes it should be possible to apply machine learning to every part of a computer, from the low-level operating system to the software that users interact with.
Such advances would be opportune. Moore’s Law is finally slowing down, and the fundamental design of computer chips hasn’t changed much in recent years. Tim Kraska, an associate professor at MIT who is also exploring how machine learning can make computers work better, says the approach could be useful for high-level algorithms, too. A database might automatically learn how to handle financial data as opposed to social-network data, for instance. Or an application could teach itself to respond to a particular user’s habits more effectively.
“We tend to build general-purpose systems and hardware,” Kraska says. “Machine learning makes it possible that the system is automatically customized, to its core, to the specific data and access patterns of a user.”
Kraska cautions that using machine learning remains computationally expensive, so computer systems won’t change overnight. “However, if it is possible to overcome these limitations,” he says, “the way we develop systems might fundamentally change in the future.”
Litz is more optimistic. “The grand vision is a system that is constantly monitoring itself and learning,” he says. “It’s really the start of something really big.”