Artificial intelligence

Neural networks don’t understand what optical illusions are

Machine-vision systems can match humans at recognizing faces and can even create realistic synthetic faces. But researchers have discovered that the same systems cannot recognize optical illusions, which means they also can’t create new ones.
Image of black and white optical illusion (Pixabay)

Human vision is an extraordinary facility. Although it evolved in specific environments over many millions of years, it is capable of tasks that early visual systems never experienced. Reading is a good example, as is identifying artificial objects such as cars, planes, road signs, and so on.

But the visual system also has a well-known set of shortcomings that we experience as optical illusions. Indeed, researchers have identified many ways in which these illusions cause humans to misjudge color, size, alignment, and movement.

The illusions themselves are interesting because they provide insight into the nature of the visual system and perception. So ways of finding new illusions that explore these limits would be hugely useful.

Image of concentric circle optical illusion
Concentric circles?

Which is where deep learning comes in. In recent years, machines have learned to recognize objects and faces in images and then to create similar images themselves. So it’s easy to imagine that a machine-vision system ought to be able to learn to recognize illusions and then to create its own.

Enter Robert Williams and Roman Yampolskiy at the University of Louisville in Kentucky. These guys have attempted this feat but found that things aren’t so simple. Current machine-learning systems cannot generate their own optical illusions—at least not yet. Why not?

First, some background. The recent progress in deep learning rests on two developments. The first is the availability of powerful neural networks, along with one or two programming tricks that make them good at learning.

The second is the creation of huge annotated databases that machines can learn from. Teaching a machine to recognize faces, for example, requires many tens of thousands of images containing faces that are clearly labeled. With that information, a neural net can learn to spot characteristic facial patterns—two eyes, a nose, and a mouth, for example. And even more impressive, a pair of them—called a generative adversarial network—can teach each other to create realistic, but totally synthetic, images of faces.
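The adversarial setup described above can be sketched in miniature. The toy example below is a pure-Python illustration, not the researchers' code: the 1-D Gaussian "data," the affine generator, and the logistic discriminator are all simplifying assumptions chosen to make the two-player training loop visible. The generator learns to produce samples the discriminator cannot distinguish from the real ones.

```python
import math
import random
import statistics

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def real_batch(n):
    # "Real" data: samples from N(4, 1); the generator must learn to mimic them.
    return [random.gauss(4.0, 1.0) for _ in range(n)]

# Generator: an affine map from noise z to a sample, g(z) = g_w * z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(d_a * x + d_c),
# the estimated probability that x came from the real data.
d_a, d_c = 0.1, 0.0
lr = 0.01

for step in range(2000):
    zs = [random.gauss(0.0, 1.0) for _ in range(32)]
    fake = [g_w * z + g_b for z in zs]
    real = real_batch(32)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    for batch, label in ((real, 1.0), (fake, 0.0)):
        grads = [sigmoid(d_a * x + d_c) - label for x in batch]
        d_a -= lr * statistics.mean(g * x for g, x in zip(grads, batch))
        d_c -= lr * statistics.mean(grads)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    # Gradients flow through the discriminator's logit via the chain rule.
    grads = [(sigmoid(d_a * x + d_c) - 1.0) * d_a for x in fake]
    g_w -= lr * statistics.mean(g * z for g, z in zip(grads, zs))
    g_b -= lr * statistics.mean(grads)
```

After training, the generator's offset `g_b` has drifted from 0 toward the real data's mean, even though the generator never sees a real sample directly; all its feedback arrives through the discriminator. That indirect feedback loop is exactly what breaks down for illusions, where the "discriminator" would have to experience the illusion the way a human does.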

Williams and Yampolskiy set out to teach a neural network to identify optical illusions in the same way. The computing horsepower is easily available, but the necessary databases are not. So the researchers’ first task was to create a database of optical illusions for training.

That turns out to be hard. “The number of static optical illusion images is in the low thousands, and the number of unique kinds of illusions is certainly very low, perhaps only a few dozen,” they say.

That represents a challenge for current machine-learning systems. “Creating a model capable of learning from such a small and limited dataset would represent a huge leap in generative models and understanding of human vision,” they say.

So Williams and Yampolskiy compiled a database of over 6,000 images of optical illusions and then trained a neural network to recognize them. Then they built a generative adversarial network to create optical illusions for itself.

The results were disappointing. “Nothing of value was created after 7 hours of training on an Nvidia Tesla K80,” say the researchers, who have made their database available for others to use.

Nevertheless, this is an interesting result. “The only optical illusions known to humans have been created by evolution (for instance, eye patterns in butterfly wings) or by human artists,” they point out.

In both cases, humans play a crucial role by providing valuable feedback—humans can see the illusion.

But machine-vision systems cannot. “It seems unlikely that [a generative adversarial network] could learn to trick human vision without being able to understand the principles behind these illusions,” say Williams and Yampolskiy.

That understanding may not be easy to come by, because there are crucial differences between machine-vision systems and the human visual system. Nevertheless, various researchers are developing neural networks that resemble the human visual system ever more closely. Perhaps an interesting test will be whether those networks can see illusions or not.

In the meantime, Williams and Yampolskiy are not optimistic. “It seems that a dataset of illusion images might not be sufficient to create new illusions,” they say. So for the moment, optical illusions are a bastion of human experience that machines cannot conquer.

Ref: Optical Illusions Images Dataset
