
Pricing algorithms can learn to collude with each other to raise prices

February 12, 2019

If you shop on Amazon, an algorithm rather than a human probably set the price of the item or service you bought. Pricing algorithms have become ubiquitous in online retail as automated systems have grown increasingly affordable and easy to implement. But while industries like airlines and hotels have long used machines to set their prices, those systems have evolved: they have moved from rule-based programs to reinforcement-learning ones, in which the logic of setting a product’s price is no longer under direct human control.

If you recall, reinforcement learning is a subset of machine learning that uses penalties and rewards to push an AI agent toward a specific goal. AlphaGo famously used it to beat the best human players at the ancient board game Go. In a pricing context, such a system is given a goal, such as maximizing overall profit, and then experiments with different strategies in a simulated environment to find the optimal one. A new paper suggests that these systems could pose a huge problem: they quickly learn to collude.

Researchers at the University of Bologna in Italy created two simple reinforcement-learning-based pricing algorithms and set them loose in a controlled environment. They found that the two completely autonomous algorithms learned to respond to one another’s behavior and quickly pulled the price of goods above the level where competition alone would have set it.

“What is most worrying is that the algorithms leave no trace of concerted action,” the researchers wrote. “They learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude.” This risks driving up the price of goods and ultimately harming consumers.
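To make the mechanics concrete, here is a minimal sketch in the spirit of the setup the researchers describe: two independent Q-learning agents repeatedly set prices in a simulated market, each observing only the previous round’s prices and its own profit. The price grid, demand model, and learning parameters below are illustrative assumptions, not the paper’s actual specification.

```python
# A minimal sketch of the experiment's flavor, not the authors' actual code:
# two independent Q-learning agents set prices in a repeated game with a
# toy logit demand model. All parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

PRICES = np.linspace(1.0, 2.0, 5)      # discrete price grid (assumed)
N = len(PRICES)
ALPHA, GAMMA, EPS = 0.15, 0.95, 0.05   # learning rate, discount, exploration

def profits(p):
    """Toy logit demand with an outside option; marginal cost = 1.0."""
    a, mu, cost = 2.0, 0.25, 1.0
    u = np.exp((a - p) / mu)
    share = u / (u.sum() + 1.0)        # the "+1" is the no-purchase option
    return (p - cost) * share

# Each firm keeps its own Q-table; the state is last round's price pair.
# Crucially, the agents never exchange anything beyond observing prices.
Q = [np.zeros((N, N, N)) for _ in range(2)]
state = (0, 0)

for t in range(300_000):
    # epsilon-greedy action for each firm
    acts = [rng.integers(N) if rng.random() < EPS
            else int(np.argmax(Q[i][state])) for i in range(2)]
    r = profits(PRICES[np.array(acts)])
    nxt = tuple(acts)
    for i in range(2):
        td_target = r[i] + GAMMA * Q[i][nxt].max()
        Q[i][state][acts[i]] += ALPHA * (td_target - Q[i][state][acts[i]])
    state = nxt

print("prices the agents settle on:",
      [PRICES[int(np.argmax(Q[i][state]))] for i in range(2)])
```

Because each agent conditions its choice on the rival’s last price, it can learn to “punish” undercutting and “reward” high prices, which is how supracompetitive pricing can emerge with no explicit communication. In the paper’s full setup, with much longer runs and decaying exploration, the learned prices settle well above the competitive level; this toy version only shows the moving parts.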

This originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.
