
Having mastered Space Invaders, chess, and Go, AI tackles video soccer

Google’s artificial-intelligence researchers have created a football simulator for training the next generation of machine-learning algorithms.
Soccer field. Credit: Vienna Reyes | Unsplash

Google leads the world in research on machine intelligence. Its DeepMind subsidiary, in particular, has an impressive list of achievements under its belt. DeepMind’s neural networks have achieved superhuman performance in a wide range of games, from Atari video games such as Pong, Breakout, and Space Invaders to more complex challenges such as the online multiplayer strategy game StarCraft II.

DeepMind has also had remarkable success with more traditional games. In 2016, its AlphaGo machine famously beat Lee Sedol, one of the world’s strongest professional Go players, the first time a machine had beaten a player of that caliber. In the process, AlphaGo found entirely new ways of playing that have revolutionized the way humans think about the game.

Not content to rest on its laurels, Google is now turning its attention to more open-ended games where unpredictability plays a more important role. And its next target is video soccer.

Karol Kurach and colleagues at Google Research’s Brain Team have created a soccer video game called the Google Research Football Environment to allow researchers to test their algorithms in a world that is physics-based, customizable, easy to use, and endlessly reproducible. They’ve made this world available with an open-source license so that researchers anywhere can use it to develop better soccer-playing algorithms.

First some background. One of the challenges for AI researchers is to find tasks that offer new problems for machine-learning algorithms. Straightforward video games like Pong or Breakout are sometimes just too easy for these algorithms, which can achieve superhuman performance after just a few hours of training.

But some of the more complex video games, such as StarCraft II, are too challenging. StarCraft II is a real-time strategy game for multiple players that takes place in a large online universe. AI researchers have become interested in it because it allows them to play against other humans and against game-based AI systems in complex environments.

However, the game is so vast and intricate that it requires huge computational resources to gather relevant data and to train a machine-learning system. And these resources are not available to most researchers.

Another problem is that many promising online environments run on proprietary code that researchers cannot change or even see. That makes it impossible to know how the game makes important decisions or to experiment with different decision-making processes.

Finally, many games are entirely deterministic: given the same inputs, they play out in exactly the same way every time. That makes them straightforward for learning algorithms to beat, since an agent can simply memorize a sequence of actions that works rather than learn a robust strategy.

But that’s not how things work in the real world, where the ability to cope with unexpected actions is an important skill. The only way for machines to learn this skill is by training in unpredictable environments. But the unpredictability must be controllable—too little and the game is too easy, while too much makes learning too hard. Creating such an environment is tricky.

That’s where soccer simulators come in. They offer a degree of predictability rooted in the physics of the game, but also plenty of unpredictability arising from the tactics of opposing players, mismatches between players in situations such as tackles, and so on.

So Kurach and colleagues have built their own simulator. As a base, they used a publicly available game called Gameplay Football, which allows full soccer games complete with goals, fouls, corners, penalties, offsides, and so on. “The Football Environment provides a physics-based 3D football simulation where agents have to control their players, learn how to pass in between them and how to overcome their opponent’s defense in order to score goals,” say the Google team.
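
The released environment follows the interaction pattern familiar from reinforcement-learning toolkits: an agent repeatedly observes the game state, picks an action, and receives a reward. Here is a minimal sketch of that loop, assuming the Gym-style Python interface (gfootball) of the open-source release; a trained policy would replace the random action sampling.

```python
import gfootball.env as football_env

# Create a full 11-vs-11 match against the built-in game AI.
env = football_env.create_environment(env_name="11_vs_11_stochastic")

observation = env.reset()
done = False
episode_reward = 0.0
while not done:
    # A learning agent would choose from the discrete action set
    # (movement directions, passes, shots, sprint, ...) using its policy;
    # here we simply sample at random.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    episode_reward += reward

print("episode reward:", episode_reward)
```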

The researchers have modified this to provide a measure of success for machines, based on how closely the machine can maneuver the ball to the opponent’s goal in a controlled fashion. This is necessary because the standard measure of success—goals—reflects a relatively rare event and does not provide a way for machines to monitor their progress from moment to moment. 
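The released environment includes a shaped "checkpoint" reward along these lines: bands of decreasing distance to the opponent's goal, each paying a small one-time bonus when the ball first reaches it. The function below is a hypothetical sketch of that idea for illustration only; the coordinate convention, band count, and bonus size are assumptions, not the paper's exact values.

```python
import math

def checkpoint_reward(ball_x, ball_y, collected, n_bands=10, max_dist=1.1):
    """Pay a small one-time bonus for each distance band the ball reaches.

    Assumes normalized pitch coordinates: x in [-1, 1], with the
    opponent's goal at (1, 0). `collected` is a per-episode set of
    bands already rewarded.
    """
    if ball_x <= 0.0:
        return 0.0  # only progress in the opponent's half counts
    dist = math.hypot(1.0 - ball_x, ball_y)
    # Band 0 is farthest from the goal; band n_bands - 1 is the closest.
    closest = min(n_bands - 1, int(n_bands * max(0.0, 1.0 - dist / max_dist)))
    bonus = 0.0
    for band in range(closest + 1):
        if band not in collected:  # each band pays out only once per episode
            collected.add(band)
            bonus += 0.1
    return bonus

# Usage: keep one `collected` set per episode.
collected = set()
print(checkpoint_reward(0.5, 0.0, collected))  # midfield run: several bands pay out
print(checkpoint_reward(0.9, 0.0, collected))  # near the goal: the remaining bands pay
print(checkpoint_reward(0.9, 0.0, collected))  # revisiting the same spot: 0.0
```

The point of this shaping is that the agent receives frequent, incremental feedback as it advances the ball, rather than a single rare signal when a goal is finally scored.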

Google v the world

The team has also created several standard environments of varying complexity in which to train and test AI machines. The tasks the machine faces include scoring into an empty goal, running at and beating a keeper, converting a three-versus-one break in a way that rewards passing, and so on. The overall test is a standard game with all the usual rules, played against a machine-based opponent.
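
In the open-source release, these drills appear as named scenarios selected when the environment is created. The sketch below uses scenario names taken from the public repository; treat them as assumptions if the release has since changed.

```python
import gfootball.env as football_env

# "Academy" scenarios isolate single skills; the 11-vs-11 scenario is the full game.
for scenario in [
    "academy_empty_goal",                # score into an undefended goal
    "academy_run_to_score_with_keeper",  # run at goal and beat the keeper
    "academy_3_vs_1_with_keeper",        # three attackers vs one defender plus keeper
    "11_vs_11_stochastic",               # full match against the built-in AI
]:
    env = football_env.create_environment(env_name=scenario)
    print(scenario, env.action_space)
    env.close()
```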

The learning algorithm can play against other machines or against humans. This gives it experience with a broad range of strategies. And it avoids the scenario in which the machine simply learns the weaknesses of a machine-based opponent, which may not be applicable to games in general. “This provides a challenging reinforcement learning problem as football requires a natural balance between short-term control, learned concepts such as passing, and high level strategy,” say Kurach and colleagues.
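
The release also lets a single environment expose players on both teams to learning agents, which is the hook for machine-versus-machine and self-play training. A minimal sketch follows; the keyword argument names are taken from the public repository and should be treated as assumptions.

```python
import gfootball.env as football_env

# Control one player on each team, so two policies can compete directly.
env = football_env.create_environment(
    env_name="11_vs_11_stochastic",
    number_of_left_players_agent_controls=1,
    number_of_right_players_agent_controls=1,
)

obs = env.reset()
done = False
while not done:
    # step() now expects one action per controlled player; in self-play,
    # these would come from two copies of the learned policy.
    actions = env.action_space.sample()
    obs, rewards, done, info = env.step(actions)
```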

This is interesting work that could help machine learning cope with more realistic environments. It also raises the possibility that machines will learn new soccer strategies that humans have never considered, just as they did for Go.

Such tactics might one day play out in robo-soccer tournaments, or even in games between humans.

Whether these strategies work just as well for real soccer as they do for the simulated variety will be an interesting question to watch. That will be fascinating for AI researchers and football fans alike.

Ref: arxiv.org/abs/1907.11180: "Google Research Football: A Novel Reinforcement Learning Environment"
