Artificial intelligence

Could We Build a Machine with Consciousness?

October 26, 2017

Not quite yet, but neuroscience research is giving us some clues about how it may be possible in the not-too-distant future.

In a paper published in Science today, a trio of neuroscientists led by Stanislas Dehaene of the Collège de France in Paris tries to pin down exactly what we mean by “consciousness” in order to work out whether machines could ever possess it. As they see it, there are three kinds of consciousness, and computers have so far mastered only one of them.

One is subconsciousness, the huge range of processes in the brain where most human intelligence lies. That's what powers our ability to, say, determine a chess move or spot a face without really knowing how we did it. That, the researchers say, is broadly comparable to the kind of processing that modern-day AIs, such as DeepMind’s AlphaGo or Face++’s facial recognition algorithms, are good at.

When it comes to actual consciousness, the team splits it into two distinct types. The first is the way we hold a huge range of thoughts in mind at once, all accessible to other parts of the brain, making abilities like long-term planning possible. The second is the ability to obtain and process information about ourselves, which allows us to do things like reflect on mistakes. Neither form of consciousness, the researchers say, is yet present in machine-learning systems.
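To make the first idea concrete, here is a minimal, hypothetical sketch in Python of a “global workspace”: independent modules post information to a shared store that any other module can read, so the same piece of information can inform planning, reporting, and self-monitoring. The module names and data below are invented for illustration; this is a loose analogy, not the researchers' model.

```python
# Loose illustration of "global availability": independent modules post
# results to a shared workspace that every other module can read.
# All module names and contents here are invented for illustration.

class GlobalWorkspace:
    def __init__(self):
        self.contents = {}  # information currently "broadcast" system-wide

    def broadcast(self, source, info):
        self.contents[source] = info  # make info available to all modules

    def read(self):
        return dict(self.contents)  # any module can inspect everything

workspace = GlobalWorkspace()
workspace.broadcast("vision", {"object": "chess board"})
workspace.broadcast("planner", {"goal": "win in three moves"})

# Because the state is globally readable, a separate "self-monitoring"
# module could now reflect on what the rest of the system is doing.
state = workspace.read()
print("planner sees:", state["vision"])
```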

But glimmers are beginning to emerge in some avenues of research. Last year, for instance, DeepMind developed a deep-learning system that can keep some data on hand for use during its ruminations, which is a step toward global information availability. And the generative adversarial networks dreamed up by Ian Goodfellow (one of our 35 Innovators Under 35 of 2017), in which one network judges whether another's AI-generated data looks realistic, are headed in the direction of self-awareness.
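As a rough illustration of that adversarial setup, here is a minimal sketch in PyTorch: a generator learns to produce samples from a toy one-dimensional distribution while a discriminator learns to score how realistic each sample looks. The network sizes, data, and training schedule are illustrative assumptions, not Goodfellow's or DeepMind's actual systems.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a fake "data" point.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how realistic a point looks (1 = real, 0 = generated).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: a toy Gaussian centered at 3.0 (an invented stand-in dataset).
    real = torch.randn(64, 1) + 3.0
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real samples from generated ones.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The discriminator's output can be read as the system's own "realism" judgment
# about data it has generated -- the glimmer of self-evaluation noted above.
print(D(G(torch.randn(1, 8))).item())
```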

Those are, admittedly, small advances toward the kinds of processes that the researchers say give rise to human consciousness. But if a machine could be endowed with fuller, functional versions of them, the researchers conclude, it “would behave as though it were conscious ... it would know that it is seeing something, would express confidence in it, would report it to others ... and may even experience the same perceptual illusions as humans.”
