MIT Technology Review

A Game of Civilization May Help People Understand AI’s Existential Threat

Researchers at the University of Cambridge built a game mod for Civilization that allows people to grow—and tame—a human-beating superintelligence.

The game: Shahar Avin from the university’s Centre for the Study of Existential Risk says the game allows players to build AI R&D capacity in their virtual world. Success is a smarter-than-human AI. Simultaneously, rogue AI risk builds and is mitigated only through safety research.


The goal: “We wanted to test our understanding of the concepts relating to existential risk from artificial superintelligence by deploying them in a restricted yet rich world simulator, [and also test how] players respond … when given the opportunity” to make AI decisions themselves, Avin told me.

Fear or acceptance: The project is also an outreach tool, taking “serious issues to a wider audience.” Avin is “aware” that it could perpetuate fears about AI. He argues, though, that players “see the risk, but also see the range of tools at their disposal to mitigate it.”

