Not quite yet, but neuroscience research is giving us some clues about how it may be possible in the not-too-distant future.

In a paper published in Science today, a trio of neuroscientists, led by Stanislas Dehaene from Collège de France in Paris, try to pin down exactly what we mean by “consciousness” in order to work out whether machines could ever possess it. As they see it, there are three kinds of consciousness—and computers have so far mastered only one of them.

The first is unconscious processing: the huge range of operations in the brain where most human intelligence lies. That’s what powers our ability to, say, determine a chess move or spot a face without really knowing how we did it. That kind of processing, the researchers say, is broadly comparable to what modern-day AIs, such as DeepMind’s AlphaGo or Face++’s facial recognition algorithms, are good at.

When it comes to actual consciousness, the team splits it into two distinct types. The first is the ability to hold selected information in mind and make it available to the rest of the brain, which underpins abilities like long-term planning. The second is the ability to obtain and process information about ourselves, which allows us to do things like reflect on mistakes. Neither form of consciousness, say the researchers, is yet present in machine learning.

But glimmers are beginning to emerge in some avenues of research. Last year, for instance, DeepMind developed a deep-learning system that can keep some data on hand for use during its ruminations, which is a step toward global information availability. And the generative adversarial networks dreamed up by Ian Goodfellow (one of our 35 Innovators Under 35 of 2017), in which one network learns to judge whether another’s output is realistic, are headed in the direction of self-awareness.
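To make the adversarial idea concrete: a generator proposes samples, and a discriminator scores how realistic each one looks, a crude form of a system evaluating its own output. The sketch below is purely illustrative, not code from any of the systems mentioned; the one-dimensional "real" data, the linear models, and the learning rate are all arbitrary choices made for the example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp the input so math.exp never overflows.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

# Discriminator: d(x) = sigmoid(w*x + b), its confidence that x is "real".
w, b = 0.1, 0.0
# Generator: g(z) = a*z + c, mapping random noise z to a candidate sample.
a, c = 1.0, 0.0

REAL_MEAN = 4.0   # "real" data: noise centered on 4.0 (arbitrary)
LR = 0.05

for step in range(2000):
    # --- Discriminator update: push d(real) toward 1, d(fake) toward 0 ---
    x_real = REAL_MEAN + random.gauss(0, 0.2)
    x_fake = a * random.gauss(0, 1) + c
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += LR * ((1 - d_real) * x_real - d_fake * x_fake)
    b += LR * ((1 - d_real) - d_fake)

    # --- Generator update: adjust g so the discriminator is fooled ---
    z = random.gauss(0, 1)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    grad = (1 - d_fake) * w   # gradient nudging x_fake toward "looks real"
    a += LR * grad * z
    c += LR * grad

# The generator's offset c should have drifted toward REAL_MEAN.
samples = [a * random.gauss(0, 1) + c for _ in range(500)]
print(sum(samples) / len(samples))
```

The discriminator here is the part the researchers gesture at: a component whose whole job is to assess whether the system's own productions are believable.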

Those are, admittedly, small advances toward the kinds of processes that the researchers say would give rise to human consciousness. But if a machine could be endowed with fully functional versions of them, conclude the researchers, it “would behave as though it were conscious ... it would know that it is seeing something, would express confidence in it, would report it to others ... and may even experience the same perceptual illusions as humans.”