Thank a new approach to spoofing image recognition AIs, developed by a team from Kyushu University in Japan, for that joke.

Trying to catch out AIs is a popular pastime for many researchers, and we've reported on machine-learning spoofs in the past. The general approach is to add features to images that will incorrectly trigger a neural network and have it identify what it sees as something else entirely.

The new research, published on the arXiv, describes an algorithm that can efficiently identify the best pixels to alter in order to confuse an AI into mislabeling a picture. By changing just one pixel in a 1,024-pixel image, the software can trick an AI about 74 percent of the time. That figure rises to around 87 percent if five pixels are tweaked.
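The researchers frame the search as an optimization problem and solve it with differential evolution, an algorithm that evolves a population of candidate changes and keeps the ones that hurt the classifier most. Here's a minimal sketch of that idea in Python using SciPy's off-the-shelf optimizer and a toy stand-in "classifier" (a real attack would query an actual neural network; the scoring function and helper names here are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in classifier: scores how confident we are in the "true" class.
# Here, confidence is just the mean brightness of the blue channel; a real
# attack would feed the image to a neural network and read its output.
def true_class_confidence(img):
    return img[:, :, 2].mean() / 255.0

def one_pixel_attack(img, confidence_fn, maxiter=20):
    """Search for the single pixel change (x, y, r, g, b) that most
    reduces the classifier's confidence in the true class."""
    h, w, _ = img.shape
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]

    def apply_pixel(p):
        x, y, r, g, b = (int(round(v)) for v in p)
        out = img.copy()
        out[y, x] = (r, g, b)
        return out

    # Differential evolution minimizes this: lower confidence = more fooled.
    def objective(p):
        return confidence_fn(apply_pixel(p))

    result = differential_evolution(objective, bounds,
                                    maxiter=maxiter, seed=0)
    return apply_pixel(result.x), result.fun

img = np.full((32, 32, 3), 200, dtype=np.uint8)  # a bright 1,024-pixel image
adv, conf = one_pixel_attack(img, true_class_confidence)
```

One appeal of this approach is that differential evolution is a black-box method: it only needs the classifier's output scores, not its gradients or internals, which is part of what makes such attacks hard to defend against.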

You can see some examples above, which spoofed an AI into calling an airplane a dog, a frog a truck, and a horse a car, among other things.

As the Register points out, 1,024-pixel images are pretty damn small, which means that larger images would need hundreds of pixels tweaked. But the push to make AIs fall over using the smallest possible number of changes is interesting and worrying in equal measure. As we've explained in the past, finding a way to protect AIs from these kinds of tricks is quite difficult, because we still don't truly understand the inner workings of deep neural networks.

Until we do, such hacks will be hard to avoid.