The new research, published on the arXiv, describes an algorithm that can efficiently identify the best pixels to alter in order to confuse an AI into mislabeling a picture. By changing just one pixel in a 1,024-pixel image, the software can trick an AI about 74 percent of the time. That figure rises to around 87 percent if five pixels are tweaked.
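The search behind the attack is evolutionary: candidate single-pixel changes compete, and better-performing ones survive. The sketch below is a toy illustration only, not the researchers' code; the names are invented, and a fixed random linear scorer stands in for a real neural network, whose confidence in the true label the attack tries to drive down.

```python
import numpy as np

# Toy stand-in for a classifier: a fixed linear scorer over a 32x32 RGB
# image (1,024 pixels, as in the article). A real attack would query a
# trained deep network instead. All names here are illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(32, 32, 3))  # weights of the toy scorer


def true_class_score(img):
    """Confidence that `img` belongs to its true class (toy version)."""
    return float((W * img).sum())


def one_pixel_attack(img, score_fn, pop_size=40, iters=30):
    """Differential-evolution-style search for one (x, y, r, g, b) change
    that minimizes the true-class score, i.e. best confuses the scorer."""
    h, w, _ = img.shape
    # Each candidate encodes [x, y, r, g, b], all in [0, 1).
    pop = rng.random((pop_size, 5))

    def fitness(cand):
        x, y = int(cand[0] * h) % h, int(cand[1] * w) % w
        perturbed = img.copy()
        perturbed[x, y] = cand[2:5]   # overwrite exactly one pixel
        return score_fn(perturbed)

    scores = np.array([fitness(c) for c in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Combine three random candidates into a trial vector.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c), 0.0, 0.999)
            f = fitness(trial)
            if f < scores[i]:         # lower score = more fooled
                pop[i], scores[i] = trial, f

    best = pop[scores.argmin()]
    x, y = int(best[0] * h) % h, int(best[1] * w) % w
    adv = img.copy()
    adv[x, y] = best[2:5]
    return adv


img = rng.random((32, 32, 3))
adv = one_pixel_attack(img, true_class_score)
changed = int((adv != img).any(axis=-1).sum())
print(changed, true_class_score(adv) < true_class_score(img))
```

The key constraint is that only one pixel ever differs from the original image; the evolutionary loop just searches for which pixel, and which color, hurts the classifier's confidence most.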
You can see some examples above, which spoofed an AI into calling an airplane a dog, a frog a truck, and a horse a car, among other things.
As The Register points out, 1,024-pixel images are pretty damn small, which means that larger images would need hundreds of pixels tweaked. But the push to make AIs fall over using the smallest possible number of changes is interesting and worrying in equal measure. As we've explained in the past, finding a way to protect AIs from these kinds of tricks is quite difficult, because we still don't truly understand the inner workings of deep neural networks.