AI researchers develop a way to trick facial recognition systems

Subtle distortions to images make them invisible to facial recognition.

The original image (left) can be recognized by facial recognition. The centre image has been distorted slightly and can't be recognized. The image on the right shows the subtle changes that have been made. (P. Aarabi, A. Bose)

Using AI to fool AI

Researchers from the University of Toronto have been fighting fire with fire to protect the privacy of online pictures: they've used one artificial intelligence system to defeat the AI face recognition systems used to identify people in photos.

Photo sites like Google Photos and social media platforms have algorithms that scan uploaded pictures and identify the faces in them. The platforms use these systems to suggest "tagging" you, or to sort photos into albums. Privacy advocates, however, are concerned that face recognition can also be used for other purposes, like generating "big data" profiles for targeted advertising.

An image humans can see but computers can't recognize

Parham Aarabi and his student Avishek Bose, from the department of electrical and computer engineering, have found a way to beat the system. Their algorithm makes tiny changes to images, which are barely visible to humans, but which make the faces in the images unrecognizable to current face recognition algorithms.

The image on the left can be recognized by the computer. Subtle alterations to the image on the right disrupt the face recognition algorithm, but are nearly invisible to humans. (P. Aarabi, A. Bose)

The researchers used "adversarial learning," pitting a face recognition algorithm against a second AI system that searched for the image changes needed to defeat it. The attacking algorithm refined its search toward changes that would be minimally detectable to human eyes, so people could still recognize the images even when the computers couldn't.
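The article doesn't publish the researchers' code, but the core adversarial idea can be sketched with a toy example: compute the gradient of a detector's "face score" with respect to the pixels, then nudge each pixel a small, fixed amount against it. Below is a minimal numpy sketch against a made-up linear detector; every name and number here is illustrative, not the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face detector: a linear scorer over 64 "pixels",
# where score > 0 means "face detected". (The real research attacks a
# deep face detection network; this is purely illustrative.)
w = rng.normal(size=64)                        # hypothetical detector weights
image = 0.1 * rng.normal(size=64) + 0.5 * w    # an image the detector flags

def detects_face(x):
    return float(w @ x) > 0.0

# Gradient-sign perturbation: shift every pixel by a fixed step in the
# direction that lowers the detection score. The step size is exaggerated
# here so the toy demo flips reliably; the actual method tunes the
# perturbation to stay nearly invisible to humans.
epsilon = 2.0
gradient = w                                   # d(w @ x)/dx for a linear scorer
adversarial = image - epsilon * np.sign(gradient)

assert detects_face(image)           # the original image is recognized
assert not detects_face(adversarial) # the perturbed image is not
```

The same loop, run iteratively with a learned attacker instead of a single gradient step, is the "adversarial learning" setup the paragraph above describes.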

Exploiting the weakness of AI face recognition

The method exploits the differences between the way humans and computers recognize faces. Face recognition algorithms often focus on particular features. 

According to Aarabi, the AI attempting to disable face recognition found a weakness. "If you just adjust the pixels at the corner of the eye or the edge of the lips, just the right amount, or just change the colour slightly, then the main detection AI is not able to find a face."

These subtle distortions disrupted the algorithm's ability to see faces. Human brains, on the other hand, seem to use a much more generalized and robust strategy that tolerates these small distortions.

Aarabi hopes that within a year or so, he could make this algorithm available as a kind of filter to apply to photos. "One way this could be applied is a website or an app, where anyone can upload their image and then get an image back that prevents most face detectors from recognizing it. At that point, they could then upload it to social media or send it to friends, knowing that their photo and their face would be a bit safer than if they hadn't done this process."