A VR film/game with AI characters can be different every time you watch or play

Agence is neither a movie nor a game, which has frustrated some critics, but it gives a taste of what the future of AI filmmaking could be.

The square-faced, three-legged alien shoves and jostles to get at the enormous plant taking over its tiny planet. But each bite just makes the forbidden fruit grow bigger. Suddenly the plant’s weight flips the whole sphere upside down and all the little creatures drop into space.

Quick! Reach in and catch one!

Agence, a short interactive VR film from Toronto-based studio Transitional Forms and the National Film Board of Canada, won’t be breaking any box office records. Falling somewhere in the no-man’s-land between movies and video games, it may struggle to find an audience at all. But as the first example of a film that uses reinforcement learning to control its animated characters, it could be a glimpse into the future of filmmaking.

“I am super passionate about artificial intelligence because I believe that AI and movies belong together,” says the film’s director, Pietro Gagliano.

Gagliano previously won the first-ever Emmy for a VR experience in 2015. Now he and producer David Oppenheim at the National Film Board of Canada are experimenting with a kind of storytelling they call dynamic film. “We see Agence as a sort of silent-era dynamic film,” says Oppenheim. “It’s a beginning, not a blockbuster.”

Agence debuted at the Venice International Film Festival last month and was released this week on Steam, an online video-game platform, where it can be watched or played. The basic plot revolves around a group of creatures and their appetite for a mysterious plant that appears on their planet. Can they control their desire, or will they destabilize the planet and get tipped to their doom? Survivors ascend to another world, and after several ascensions there is a secret ending, says Oppenheim.

Gagliano and Oppenheim want viewers to have the option of sitting back and watching a story unfold, with the AI characters left to their own devices, or getting involved and changing the action on the fly. There’s a broad spectrum of interactivity, says Gagliano: “A lot of interactive films have decision moments, when you can branch the narrative, but I wanted to create something that let you transform the story at any point.”

A certain degree of interactivity comes from choosing the type of AI that controls each character. Some characters can be set to rule-based AI, which guides them with simple heuristics: if this happens, do that. Others can be switched to reinforcement-learning agents, trained to seek rewards however they can, such as by fighting for a bite of the fruit. Characters that follow rules stick closer to Gagliano’s direction; RL agents inject some chaos.
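To make the distinction concrete, here is a minimal, hypothetical sketch of the two control styles, not Transitional Forms’ actual code. The class names, actions, and randomly initialized values are invented for illustration; a real reinforcement-learning agent would learn its action values through training rather than start from random numbers.

```python
import random

# Illustrative only: a scripted character versus a reward-seeking agent.

class RuleBasedCreature:
    """Follows hand-written heuristics: if this happens, do that."""
    def act(self, sees_plant, planet_is_tilting):
        if planet_is_tilting:
            return "move_to_balance"   # scripted safety behavior
        if sees_plant:
            return "eat_plant"         # scripted desire
        return "wander"

class RLCreature:
    """Picks whichever action its learned value estimates favor."""
    def __init__(self, actions=("eat_plant", "move_to_balance", "wander")):
        self.actions = actions
        # Stand-in for values a trained agent would have learned.
        self.q_values = {a: random.random() for a in actions}

    def act(self, *observations):
        # Greedy choice: the agent chases reward however it likes,
        # which is where the on-screen unpredictability comes from.
        return max(self.actions, key=self.q_values.get)

# The same situation can play out differently depending on the controller.
scripted, learned = RuleBasedCreature(), RLCreature()
print(scripted.act(sees_plant=True, planet_is_tilting=True))  # always "move_to_balance"
print(learned.act())  # whatever its current value estimates prefer
```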

But you can also lean in. Using VR controls or a game pad, you can grab characters and move them around, plant more giant flowers, and help balance the planet. The characters carry on with their business around you, seeking their rewards as best they can.

The film got some interest in Venice, says Oppenheim: “A lot of people come looking for that mix of story and interactivity. Introducing AI into the mix was something that people responded really well to.”

Gagliano’s mother also likes it. When he showed it to her, she spent the whole time breaking up fights between the creatures. “She was like, ‘You behave! You go back here and you play nicely,’” he says. “That was a storyline I wasn’t expecting.”

But people expecting a game have had a cooler response. “Gamers treat it more as a puzzle,” says Oppenheim. And the short running time and lack of challenge have put off some online reviewers.

Still, the pair see Agence as a work in progress. They want to collaborate with other AI developers to give their characters different desires, which would lead to different stories. In the long run, they think, they could use AI to generate all parts of a film, from character behavior to dialogue to entire environments. It could create surprising, dreamlike experiences for all of us, says Oppenheim. 
