MIT News: Under the dome

The recommender revolution

Can algorithms help us know ourselves better?

Illustration by Rose Wong

As Amazon’s Jeff Bezos would cheerfully agree, insightful recommendations make for great business. People like you really do want to know what people like you like. That knowledge proves ideal for training ever-smarter algorithms; the more data, the merrier. From TikTok to Twitter to Meta to LinkedIn, the world’s most influential digital platforms constantly (machine) learn to offer better recommendations and advice.

Much as steam engines energized the Industrial Age, recommendation engines are the prime movers digitally driving 21st-century advice worldwide. And it’s becoming increasingly clear that the best advice we now receive is more likely to come from smart machines than clever people. These ingenious mechanisms relentlessly convert data into relevant, diverse, novel, and even serendipitous options. They learn from the choices people make, explore, and ignore. That means they are the irresistible—and inevitable—future of innovative advice. “Recommender systems are the most important AI system of our time,” Nvidia CEO and cofounder Jensen Huang said in 2021. “It is the engine for search, ads, online shopping, music, books, movies, user-generated content, news.”

Spotify’s Discover Weekly, for example, promises music lovers worldwide a personalized playlist of songs they’ve never heard before but will surely enjoy. Netflix algorithms make binge-worthy video programming not just probable but also predictable. Google Maps advises the surest, safest, and swiftest way to get to where you’re going. Indeed, Alibaba’s Taobao—the world’s biggest e-commerce site—insists it knows its online shoppers even better than they know themselves.

Netflix’s data scientists and designers reengineered the company’s user experience around the motto, “Everything is a recommendation.” The streaming video pioneer configured its interface to implicitly or explicitly advise users on which series to watch next, often for hours at a time. Binge viewing—and binge viewers—became Netflix’s new normal. When “everything is a recommendation,” recommendations are everything.

Recommendation engines represent a global revolution in how choice can be personalized, packaged, presented, experienced, and understood. But that revolution—those choice architectures—needs to be better understood. It frames people’s future.

Commercial recommenders like Amazon’s and Alibaba’s offer good advice on what people might purchase. But the real recommender revolution revolves less around what you want to buy than around who you want to be. That distinction is not subtle; challenging choices force people to think twice about what they really want—and want to do. 

Truly well-designed, algorithmically attuned, and data-rich recommenders invariably promote greater self-awareness. And the recommendations, suggestions, and advice people choose to ignore about what to watch, where to go, and who to follow can be every bit as revelatory as what they heed. Both in theory and in practice, better choices invite better outcomes.

As Aristotle observed more than 2,000 years ago, “Choice, not chance, determines your destiny.” Do these Aristotle 2.0 choice architectures—as I like to call them—ultimately inspire and empower billions of people? Or do they determine destinies, turning users into puppets who obligingly default to algorithmic agendas? Ultimately, people decide whether to accept the recommendations or not. And history suggests the destinies they choose largely depend on who they think they really are. 

The recommendation engine’s true power goes beyond self-awareness to self-discovery. People experience effective recommenders not just as bespoke, customized, and personalized advisors but as gateways to greater self-knowledge. Their advice becomes a digital mirror, enabling and inspiring introspection and reflection. That’s not just powerful, it’s empowering. Again, Aristotle anticipated and appreciated this: “Knowing yourself is the beginning of all wisdom.”

Three essential, interrelated elements elevate recommendation engines into self-discovery engines. The first is obvious: recommenders reliably, simply, and easily offer measurably better choices. That is, users quickly recognize that they’d not easily have discovered these choices on their own. Similarly, they know that friends, colleagues, and family members would have been unlikely to suggest this artist or that video or these kinds of job opportunities. But these recommendation engines also help the companies that employ them when their advice inspires curiosity and further discovery—the desire to learn more about that artist or see other videos in that genre or get greater insight into those employers.

The architectural and technical genius of recommender-system design lies in its compelling blend of data gathering, ongoing algorithmic innovation, and network effects. The more people use these systems, the more valuable they become; the more valuable they become, the more people use them. Machine-learning capabilities accelerate that virtuous cycle to ensure recommendations and advice become ever more relevant and compelling. People can better see themselves in the proffered choices. That is, these options and opportunities best reflect what they want to do next.
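The feedback loop described above can be made concrete with a minimal sketch of item-based collaborative filtering, one common technique behind such systems. Everything here—the toy ratings matrix, the function names—is hypothetical and for illustration only, not any particular platform’s implementation:

```python
import numpy as np

def item_similarity(ratings):
    """Cosine similarity between the item columns of a user-item matrix."""
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0  # avoid division by zero for unrated items
    unit = ratings / norms
    return unit.T @ unit

def recommend(ratings, user, k=2):
    """Score each unseen item for `user` by its similarity to items they rated."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf  # never re-recommend items already seen
    return np.argsort(scores)[::-1][:k]

# Rows are users, columns are items (say, songs); values are ratings.
# As more users rate more items, the similarity estimates sharpen --
# the network effect the passage describes.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 0, 1],
    [0, 0, 5, 4],
], dtype=float)

print(recommend(ratings, user=0, k=1))  # the unseen item to surface first
```

Each new rating reshapes the similarity matrix, so the system’s advice improves as its audience grows—usage feeding value, value feeding usage.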

The pleasure of empowerment is the next essential element that promotes self-discovery. People can act upon their better choices; they have agency. They can play that song or view that video or see whether public transportation or Uber is faster. They can indulge their curiosity and explore their options. They can go social and share a recommended action with a friend: “What do you think?” That empowerment leads to interaction, and those interactions can empower: “Alexa, play me a sad song … No, not that sad!” People become partners in producing ever-better choices.

Digital distinctions between self-awareness, self-discovery, and self-improvement blur. Someone who learns higher mathematics better through, say, visualizations and simulations rather than equations might get a curated YouTube mini-curriculum featuring explanatory videos from Numberphile, 3Blue1Brown, and Mathologer. Like a talented math tutor, this digital advisor would quickly learn what imagery and examples generated the greatest engagement, exploration, and learning—and make recommendations accordingly.
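One way such a digital tutor might adapt is with a simple multi-armed bandit: usually serve whichever content format has drawn the most engagement so far, but occasionally explore the alternatives. This is a hypothetical sketch, not how YouTube’s recommender actually works; the format names and engagement numbers are invented for illustration:

```python
import random

def pick_format(engagement, counts, epsilon=0.1, rng=random):
    """Epsilon-greedy choice of a content format.

    engagement: total engagement observed per format (e.g. minutes watched)
    counts:     how many times each format has been shown
    epsilon:    probability of exploring a random format instead of the best
    """
    if rng.random() < epsilon:
        return rng.choice(list(engagement))  # explore
    # Exploit: highest average engagement per showing (unseen formats score 0).
    avg = {f: (engagement[f] / counts[f] if counts[f] else 0.0)
           for f in engagement}
    return max(avg, key=avg.get)

# Toy running totals for one hypothetical learner.
engagement = {"visualization": 9.0, "simulation": 4.0, "equation": 1.0}
counts = {"visualization": 3, "simulation": 2, "equation": 2}

# With exploration turned off, the tutor picks the best-performing format.
print(pick_format(engagement, counts, epsilon=0.0))
```

The epsilon term matters: without occasional exploration, the tutor could lock in on an early favorite and never discover that a learner’s tastes have changed.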

Finally, better choices and greater empowerment yield keener self-insight. People can learn more about themselves just as their recommenders learn more about them. The music playlists they build, the shows they binge, the books they annotate—what do those choices say about who they are and what they want to become? Recommendation engines can become introspection engines.

If one could scan, aggregate, and analyze all the recommenders, recommendations, and digital advice one came across, what life-changing insights would emerge? We’ve moved from an era in which humans gave good advice about good advice to one in which machines give great recommendations about great recommendations. 

My bet is that the key to understanding who we really are and who we really want to become will depend on the machines that learn with us, from us, and for us. Their software will become our “selfware.”

Michael Schrage is a visiting scholar at the MIT Sloan School of Management and the author of Recommendation Engines (MIT Press, 2022).
