Your next computer could improve with age

Generally, computers slow down as they age. Their processors struggle to handle newer software. Apple even deliberately slows its iPhones as their batteries degrade. But Google researchers have published details of a project that could let a laptop or smartphone learn to do things better and faster over time.
The researchers tackled a common problem in computing known as prefetching. Computers can process information much faster than they can pull it from memory. To avoid bottlenecks, they try to predict which information is likely to be needed and fetch it in advance. As computers get more powerful, this prediction becomes progressively harder.
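To make that prediction concrete: the prefetchers in today's chips rely on hand-built heuristics, the classic example being a stride detector that notices a program marching through memory at a fixed step and fetches the next line early. The toy Python below sketches that heuristic; the real logic lives in silicon, and the class name here is illustrative.

```python
# A minimal sketch of a stride prefetcher, the classic hand-built
# heuristic: watch recent addresses, and when the same step repeats,
# fetch the next line before the program asks for it.
class StridePrefetcher:
    def __init__(self):
        self.last_addr = None
        self.last_stride = None

    def access(self, addr):
        """Record a memory access; return an address to prefetch, if any."""
        prefetch = None
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride != 0 and stride == self.last_stride:
                prefetch = addr + stride  # same step twice in a row: extrapolate
            self.last_stride = stride
        self.last_addr = addr
        return prefetch

p = StridePrefetcher()
for a in [100, 108, 116, 124]:   # walking an array in 8-byte steps
    print(p.access(a))           # None, None, 124, 132
```

Patterns like pointer-chasing through a linked structure defeat this kind of fixed rule, which is where a learned predictor could help.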
In a paper posted online this week, the Google team describes using deep learning—an AI method that employs a large simulated neural network—to improve prefetching. Although the researchers haven’t shown how much this speeds things up, the boost could be big, given what deep learning has brought to other tasks.
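The paper treats prefetching as a sequence-prediction problem, much like predicting the next word in a sentence: the model sees the stream of deltas between successive memory addresses and learns to predict the delta that comes next. Below is a minimal sketch of that framing in PyTorch; the vocabulary size, dimensions, and the DeltaPrefetcher name are illustrative assumptions, not details from the paper.

```python
# A sketch of a learned prefetcher: an LSTM predicts the next address
# delta from recent history, like a tiny language model over memory
# accesses. Sizes and names here are illustrative, not from the paper.
import torch
import torch.nn as nn

VOCAB = 50_000            # IDs for the most common deltas; rare ones -> <unk>
EMBED, HIDDEN = 128, 256

class DeltaPrefetcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)   # a score for each candidate delta

    def forward(self, delta_ids):               # (batch, seq_len) of delta IDs
        out, _ = self.lstm(self.embed(delta_ids))
        return self.head(out)                   # (batch, seq_len, VOCAB) logits

model = DeltaPrefetcher()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on a fake trace: predict each delta from those before it.
trace = torch.randint(0, VOCAB, (32, 64))       # stand-in for a real access trace
logits = model(trace[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB), trace[:, 1:].reshape(-1))
loss.backward()
opt.step()

# At run time, the top-k predicted deltas, added to the current address,
# become candidate cache lines to pull in early.
```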
“The work that we did is only the tip of the iceberg,” says Heiner Litz of the University of California, Santa Cruz, a visiting researcher on the project. Litz believes it should be possible to apply machine learning to every part of a computer, from the low-level operating system to the software that users interact with.
Such advances would be opportune. Moore’s Law is finally slowing down, and the fundamental design of computer chips hasn’t changed much in recent years. Tim Kraska, an associate professor at MIT who is also exploring how machine learning can make computers work better, says the approach could be useful for high-level algorithms, too. A database might automatically learn how to handle financial data as opposed to social-network data, for instance. Or an application could teach itself to respond to a particular user’s habits more effectively.
“We tend to build general-purpose systems and hardware,” Kraska says. “Machine learning makes it possible that the system is automatically customized, to its core, to the specific data and access patterns of a user.”
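Kraska's group has published one concrete version of this idea, the "learned index": instead of a general-purpose structure such as a B-tree, a model learns the distribution of keys in a sorted table and predicts where a record sits, with a bounded search to correct its error. The sketch below is a toy linear version of that concept, not code from his work; the published designs use hierarchies of models.

```python
# A toy "learned index": fit a model to the key -> position mapping of a
# sorted array, then look keys up by prediction plus a bounded local search.
# Real learned indexes use staged models; this linear version is only the idea.
import numpy as np

keys = np.sort(np.random.lognormal(mean=10, sigma=1, size=100_000))
positions = np.arange(len(keys), dtype=float)

# The key -> position mapping is the data's empirical CDF; fit a line to it.
slope, intercept = np.polyfit(keys, positions, deg=1)

# Record the model's worst-case error so every lookup can bound its search.
preds = (slope * keys + intercept).astype(int)
max_err = int(np.max(np.abs(preds - np.arange(len(keys)))))

def lookup(key):
    guess = int(slope * key + intercept)              # model's predicted slot
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = lo + np.searchsorted(keys[lo:hi], key)        # cheap local correction
    return i if i < len(keys) and keys[i] == key else None

assert lookup(keys[12_345]) == 12_345    # the error bound guarantees a hit
```

The point is the one Kraska makes above: the structure molds itself to the specific data it stores rather than assuming a worst case.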
Kraska cautions that using machine learning remains computationally expensive, so computer systems won’t change overnight. “However, if it is possible to overcome these limitations,” he says, “the way we develop systems might fundamentally change in the future.”
Litz is more optimistic. “The grand vision is a system that is constantly monitoring itself and learning,” he says. “It’s really the start of something really big.”