
Can Master Faces Beat Face Recognition Systems?

A recent experiment by a group in Tel Aviv has sent discussion boards buzzing with talk of master faces: computer-generated faces that can allegedly impersonate many real ones and beat facial recognition systems.

By Kavi Lan · Published 4 years ago · 3 min read
Image by Unsplash

Face recognition has seen a steady rise in adoption in the last decade. Thanks largely to phone manufacturers like Apple and Samsung which have started embedding face recognition systems in their handsets, consumers like us have grown increasingly familiar with this technology. In fact, thanks to face recognition, you now only have to raise your phone to unlock it – no more passwords.

Face-based biometrics gained even more mainstream usage when the pandemic hit in 2020 and left much of the world with an aversion to touch. Almost overnight, demand for touchless biometrics, such as face recognition terminals for access control, time and attendance, border control, law enforcement, and financial services, hit record levels. Users quickly took to the convenience, accuracy, and speed that face recognition offers. Face recognition has also been marketed as one of the most hygienic biometric modalities, which has further driven its widespread use across many applications.

Master Faces vs. Face Recognition Systems

Yet the rapid adoption of face recognition has also opened it to greater scrutiny. Aside from privacy, the other hot-button issue is security. Some argue that facial recognition is an inherently flawed technology. Critics and industry insiders have raised awareness of racial bias: because most algorithms are trained predominantly on white faces, they struggle when asked to recognize darker-skinned ones.

A recent experiment by a group in Tel Aviv sent discussion boards buzzing with talk of master faces and how they can allegedly impersonate real faces to beat face recognition systems.

Using AI (Nvidia’s StyleGAN generator), the researchers created computer-generated fake faces that they hoped would serve as “master keys” capable of bypassing face recognition systems.

The researchers compared each generated face against real photos in the University of Massachusetts’ Labeled Faces in the Wild (LFW) dataset, using a face-matching classifier to score how many real identities each fake resembled. High-scoring fakes were kept, and those scores guided an evolutionary algorithm that generated better candidates, with the whole process repeated over many iterations.
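The loop described above can be sketched in miniature. This is a toy illustration only, not the researchers' actual pipeline: the random linear map stands in for StyleGAN plus a real face matcher, the random unit vectors stand in for LFW embeddings, and the similarity threshold, dimensions, and mutation scale are all made-up values chosen for the demo.

```python
# Toy sketch of a "master face" search: evolve a latent vector whose
# generated face "matches" as many enrolled identities as possible.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, EMBED_DIM, GALLERY_SIZE = 32, 64, 200
THRESHOLD = 0.3  # made-up cosine-similarity match threshold

# Stand-in for "generator + face matcher": a fixed random projection
# from latent space to a unit-length face embedding (NOT a real StyleGAN).
W = rng.normal(size=(EMBED_DIM, LATENT_DIM))

def embed(z):
    e = W @ z
    return e / np.linalg.norm(e)

# Enrolled gallery: random unit embeddings standing in for real LFW photos.
gallery = rng.normal(size=(GALLERY_SIZE, EMBED_DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def coverage(z):
    """Fraction of gallery identities the candidate face 'matches'."""
    sims = gallery @ embed(z)
    return float(np.mean(sims > THRESHOLD))

# Simple (1+lambda) evolutionary search: mutate the best latent vector,
# keep a child only if it covers more of the gallery.
best = rng.normal(size=LATENT_DIM)
best_score = coverage(best)
for _ in range(300):
    children = best + 0.3 * rng.normal(size=(20, LATENT_DIM))
    scores = [coverage(c) for c in children]
    i = int(np.argmax(scores))
    if scores[i] > best_score:
        best, best_score = children[i], scores[i]

print(f"master-face coverage: {best_score:.2%} of gallery")
```

The real experiment works the same way in spirit: candidates that fool the matcher on more identities survive and seed the next generation, which is why the search converges toward whatever kind of face the matcher confuses most easily.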

Some of these artificial master faces were indeed able to bypass three face recognition models. Tellingly, however, the successful faces were mostly those of older white men with white hair and mustaches.

The Flaw in the Master Face Experiment

Observers quickly pointed out that a small set of very similar-looking images being able to represent a large share of the dataset may itself indicate a flaw in the design of the experiment.

As it turns out, children, elderly people over 80, women, and non-white ethnicities were poorly represented in the experiment's dataset. Faces of darker-skinned people, women, and the very young or old are matched less accurately and are therefore less likely to be bypassed by a single master face. The lack of diversity in the dataset may have undermined the results of the attempted spoofing exploit, which has prompted observers to suggest that future experiments use a more inclusive dataset to arrive at more conclusive findings.

The Future of Face Recognition

Yet even if the Tel Aviv experiment had design flaws that prevented observers from drawing solid conclusions, it serves a critical purpose: keeping the biometric industry on its toes about rising security threats. Security experts are already hard at work hardening facial recognition systems with multi-factor authentication that combines fingerprint, face, and card reading, plus anti-spoofing features such as liveness detection. Simply put, these are double-checking mechanisms for added security.

With AI advancing at breakneck speed, it may be only a matter of time before an artificially generated master face can impersonate a large proportion of faces in the real world. That is a sobering prospect, and it should be a wake-up call for security experts to rise to the challenge and relentlessly seek ways to keep digital identity systems ten steps ahead of attackers.


About the Creator

Kavi Lan

Hi, I'm Kavi Lan, a writer and marketer focusing on biometrics technology.

