Evaluating Anti-Facial Recognition Tools

May 30, 2023
Kevin Bryson

In 2020, Clearview AI scraped freely available pictures of people from Facebook, YouTube, and other websites to create a database of more than 3 billion photos and then sold its software to police and law enforcement agencies. The ACLU brought a lawsuit arguing that this violated privacy laws, which led to a settlement limiting sales of Clearview’s databases. In the wake of such stories, researchers across the world are working to develop anti-facial recognition tools that can combat unregulated and unauthorized uses of facial recognition technologies.

In a study presented May 23-26 at the 44th IEEE Symposium on Security and Privacy, researchers at the University of Chicago developed a framework to systematically evaluate existing anti-facial recognition tools and described how creators of similar tools can design them to be more resilient to the evolving landscape of facial recognition.

“Facial recognition systems, in particular, have highlighted a legitimate need for personal agency over data use, issues of misidentification, and significant racial disparities in their accuracy,” said graduate student Emily Wenger, first author on the paper and a member of the SAND Lab at UChicago. “Anti-facial recognition (AFR) technologies are valuable tools that help users counteract unwanted facial recognition. By systematizing state-of-the-art AFR tools, our work provides much-needed structure and organization of this burgeoning field, and identifies viable research directions in this space.”

Anti-facial recognition tools

Several anti-facial recognition tools are already on the market. For instance, Invisible Mask is a hat designed to project infrared light that disrupts the way a camera sees your face. Another approach modifies the facial features in a photo based on a different facial image, so the altered face no longer matches the original identity.

Modifications of facial features can be made given another facial image. Image from “A Systematical Solution for Face De-identification,” https://arxiv.org/pdf/2107.08581.pdf

In 2020, Wenger and collaborators Shawn Shan, Jiayun Zhang, Huiying Li, and UChicago professors Ben Zhao and Heather Zheng, motivated by Clearview AI, developed an app called Fawkes, which puts a ‘filter’ on your photos before they are uploaded, corrupting them for facial recognition models that scrape online sources. Since launch, the app has seen 840,000 total downloads and several updates to account for changes in facial recognition technologies.
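The general idea behind such cloaking-style tools can be sketched briefly. The toy example below is not the actual Fawkes algorithm: it substitutes a fixed random projection for a trained feature extractor and computes a small, bounded perturbation that nudges a photo’s feature vector toward that of a different face. All function names, sizes, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64 * 64))  # stand-in "feature extractor": a fixed random projection

def features(img):
    return W @ img.ravel()

def cloak(img, target_img, budget=0.03, steps=100, lr=1e-4):
    """Return img plus a small perturbation (L-infinity norm <= budget)
    whose feature vector drifts toward that of target_img."""
    delta = np.zeros_like(img)
    target = features(target_img)
    for _ in range(steps):
        diff = features(img + delta) - target
        grad = (W.T @ diff).reshape(img.shape)   # gradient of 0.5 * ||features(img + delta) - target||^2
        delta -= lr * grad                       # step toward the decoy identity's features
        delta = np.clip(delta, -budget, budget)  # keep the change visually small
    return np.clip(img + delta, 0.0, 1.0)

me = rng.random((64, 64))            # placeholder grayscale "photo" with values in [0, 1]
someone_else = rng.random((64, 64))  # photo of a different (decoy) identity
protected = cloak(me, someone_else)  # upload `protected` instead of `me`
```

A real cloaking tool optimizes against deep feature extractors and constrains the perturbation far more carefully, but the structure, a bounded image change chosen to mislead the feature extractor, is the same.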

The ever-evolving landscape of facial recognition technology, alongside the varying approaches of anti-facial recognition tools, led Wenger and her collaborators to create a framework for evaluating existing anti-facial recognition tools’ efficacy.

“We wanted to understand if tools like Fawkes would be effective in the long-term,” said Wenger.

The facial recognition process

To understand what it would mean for a tool to be effective long-term, Wenger and her collaborators developed a framework for evaluating the tools. They approached facial recognition as an assembly line of processes. 

The first step in the process is data collection, which can mean scraping the web for images of faces or pulling images from surveillance cameras. Second, images must be processed to remove background elements and/or cropped to contain mostly a face. Third, a machine learning algorithm called a feature extractor is trained to identify notable facial features, transforming each face into a vector, or list, of numbers. Fourth, the processed face images and feature vectors are stored in a reference database to allow for the final step, query matching, in which a facial recognition system extracts a feature vector from an unidentified picture and tries to match it to a face and vector in the reference database.
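As a rough illustration of the last two stages, the sketch below builds a reference database of feature vectors and answers a query by cosine similarity. The extract_features function here is only a placeholder for a trained feature extractor, and the names and threshold are assumptions for illustration, not details from the paper.

```python
import numpy as np

def extract_features(face_img):
    # Placeholder feature extractor: flatten the image and normalize to unit length.
    # A real system would use a trained neural network here.
    v = np.asarray(face_img, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

# Stage 4: reference database mapping identities to feature vectors
rng = np.random.default_rng(0)
reference_db = {
    "alice": extract_features(rng.random((64, 64))),
    "bob":   extract_features(rng.random((64, 64))),
}

# Stage 5: query matching -- extract a feature vector from an unidentified image
# and return the closest enrolled identity if the match is strong enough.
def identify(query_img, threshold=0.8):
    q = extract_features(query_img)
    best_name, best_score = max(
        ((name, float(q @ vec)) for name, vec in reference_db.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else None
```

Anti-facial recognition tools differ mainly in which of these stages they disrupt.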

Data flow of the facial recognition process. Image provided by Wenger and collaborators

Within this framework of processes, they compared and categorized the features and design properties of the swath of existing anti-facial recognition tools. Invisible Mask, for example, is a physical device that prevents unauthorized query matching. Fawkes, on the other hand, digitally corrupts the reference database images.

Evaluating efficacy

The next step in the research was to evaluate how tools disrupt each stage and to determine which qualities of a tool best support privacy and usability for users.

 "Anti-facial recognition (AFR) technologies are valuable tools that help users counteract unwanted facial recognition. By systematizing state-of-the-art AFR tools, our work provides much-needed structure and organization of this burgeoning field, and identifies viable research directions in this space,” Wenger said.

Example of the evaluation framework. Image provided by Wenger and collaborators

The researchers then evaluated anti-facial recognition tools against five design properties relating to technical effectiveness, usability, and accessibility.

The first property is long-term robustness against evolutions of facial recognition systems, a guarantee that no current tool can yet provide. The researchers also acknowledge that, while tools like Fawkes that target a system’s reference image collection can corrupt its understanding of face images, they cannot consistently corrupt or block images flowing in from every source feeding a facial recognition system. On the other hand, a tool like the Invisible Mask offers only one-time protection and does not address scenarios in which a facial recognition system adds your distorted face to its database and attempts to recognize you at a later time.

Second, they consider broad protection coverage, which means providing protection for people whose images may already be contained in a facial recognition database. Certain tools like the Invisible Mask have this property by definition, since they aim to disrupt facial recognition as it happens in a live scenario. For tools that disrupt other stages, this property is much harder to design for.

Next, they shift focus to usability and accessibility for users. The researchers propose that anti-facial recognition tools should rely on third parties as little as possible.

“Even in a hypothetical situation in which Facebook designed an anti-facial recognition system that helped prevent images uploaded on Facebook from being used to train effective facial recognition models, an individual has to trust the entity,” said Wenger.

In terms of accessibility, the researchers define two connected properties: minimal disruption to the user and minimal impact on others.

For Fawkes, the disruption is an additional step: if privacy is desired, every photo, even a quick selfie, must be run through the tool before posting. Similarly, the Invisible Mask requires the hat to be worn at all times a facial recognition system may be in use, which could be impractical at best and impossible at worst. Additionally, Fawkes and tools targeting the final three stages could have unintended consequences for other users, such as misidentification, which currently affects people with darker skin tones and women at higher rates.

“There are significant racial disparities in their accuracy,” said Wenger. “In general, if a person’s face isn’t enrolled in the system, it can produce false matches of other people who are enrolled.”

The future of anti-facial recognition tools

By evaluating existing tools within their framework, Wenger and her collaborators reveal important needs in the anti-facial recognition space. For an anti-facial recognition tool to be effective, it has to be resilient to future facial recognition techniques not yet employed.

For this reason, Wenger said, "In a vacuum, yes, Fawkes should work in the long term. However, the security world is not a vacuum. Countermeasures against Fawkes have been and will continue to be developed.”

For example, there is currently no way of providing long-term guarantees of protection. A person’s face generally doesn’t change much over time, and a facial recognition system that can overcome an anti-facial recognition tool’s disruption will eventually render that tool ineffective.

Adding to the challenges, Microsoft Azure recently cut off research access to its facial recognition tool, making it prohibitively expensive for researchers to evaluate these tools at all.

“Given this reality, I have come to view Fawkes as more of a normative statement about the importance of giving users agency against unwanted facial recognition, rather than a silver bullet solution to this problem," said Wenger.

Despite the rising adoption of facial recognition and mounting challenges, the researchers think that, with more focus on tools that disrupt the data collection and reference database creation stages, wider-reaching and longer-term protection will eventually follow.

“I remain hopeful about the development of practical user-centric tools,” said Wenger.
