
Could We Build a Machine with Consciousness?

October 26, 2017

Not quite yet, but neuroscience research is giving us some clues about how it may be possible in the not-too-distant future.

In a paper published in Science today, three neuroscientists, led by Stanislas Dehaene of the Collège de France in Paris, try to pin down exactly what we mean by “consciousness” in order to work out whether machines could ever possess it. As they see it, there are three kinds of consciousness, and computers have so far mastered only one of them.

One is subconsciousness, the huge range of processes in the brain where most human intelligence lies. That’s what powers our ability to, say, pick a chess move or spot a face without really knowing how we do it. That, the researchers say, is broadly comparable to the kind of processing that modern-day AIs, such as DeepMind’s AlphaGo or Face++’s facial-recognition algorithms, are good at.

When it comes to actual consciousness, the team splits it into two distinct types. The first is the ability to hold a wide range of thoughts in mind at once, all accessible to other parts of the brain, which makes abilities like long-term planning possible. The second is the ability to obtain and process information about ourselves, which allows us to do things like reflect on our mistakes. These two forms of consciousness, the researchers say, have yet to appear in machines.

But glimmers are beginning to emerge in some avenues of research. Last year, for instance, DeepMind developed a deep-learning system that can keep some data on hand for use during its ruminations, a step toward global information availability. And the generative adversarial networks dreamed up by Ian Goodfellow (one of our 35 Innovators Under 35 of 2017), in which one network judges whether another’s AI-generated output looks realistic, are headed in the direction of self-awareness.
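To make that adversarial idea concrete, here is a minimal sketch of the setup in PyTorch: one network generates samples while a second network scores how realistic they look, and the two are trained against each other. The toy data (a one-dimensional Gaussian), the layer sizes, and the training loop are illustrative assumptions chosen for this sketch, not details of Goodfellow’s original models or of any DeepMind system.

# A toy adversarial pair: G invents samples, D judges whether they look like
# the "real" data. Everything below (the 1-D Gaussian target, layer sizes,
# learning rates, step count) is an assumption made to keep the sketch small.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-dimensional noise to a single candidate value.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs a probability that its input came from the real data.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))             # generator's attempt at realistic samples

    # Train the discriminator to label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator call its output real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean of 3.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())

In this toy setup, the discriminator’s score is the closest analogue to the self-evaluation the researchers describe: the system produces an explicit judgment about the quality of its own output.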

Those are, admittedly, small advances toward the kinds of processes that the researchers say would give rise to human consciousness. But if a machine could be endowed with fuller versions of these two capacities, the researchers conclude, it “would behave as though it were conscious ... it would know that it is seeing something, would express confidence in it, would report it to others ... and may even experience the same perceptual illusions as humans.”
