We are now in a position to detect most current Deepfake attacks on Age Estimation Systems: Project DefAI updates


Onur Yürüten

Head of Age Assurance Solutions, Privately SA

Privately is helping build an "age-aware" Internet that enables online and offline services to deliver age-appropriate experiences and protect children and other vulnerable groups from online harms. AI-based age estimation provides the key technology that enables businesses to meet these objectives.

While our on-device age estimation tools offer an easy, accurate, privacy-by-design way for users to validate their age, users commonly try to bypass or spoof the age estimation system. There will always be some children trying to pass themselves off as adults, and some adults trying to pass themselves off as children. The cheapest attempts to spoof a biometric system involve presenting printed photos or photos displayed on a screen:

Genuine face
Standard paper attack
Standard screen attack

There are many mechanisms to detect such spoofing attempts, involving both active and passive liveness technologies. However, with advances in artificial intelligence, increasingly sophisticated tools are becoming available en masse. AI therefore also poses new challenges to biometric systems as a whole.

Generative AI can equip anybody to create new vulnerabilities in the form of “Deepfakes”: methods that either a) manipulate individuals’ faces with digital filters to appear younger or older, or b) generate entirely synthetic faces. Executing such methods against live age estimation systems is becoming ever cheaper, and the industry is developing defence mechanisms against this growing threat.

Production · Presentation attack · Injection attack

Enter DefAI (https://defaiproject.com/), a joint UK–Swiss project to develop prevention mechanisms against AI attacks on age estimation solutions. Our consortium has researched and developed novel ways to prevent presentation and injection attacks that use Deepfake media to fool age estimation systems.

At the halfway mark of the project, we are observing very encouraging results:

  • Our technology can detect most off-the-shelf virtual data injection sources, so most standard attacks are halted before they go any further.
  • Preliminary studies indicated that only 5% of deepfake attempts were caught in the affected industries. Our in-lab tests indicate that our techniques already deliver up to an 18-fold increase in accuracy when identifying deepfake attacks, and our baseline models achieve false-accept and false-reject rates below 5%. These numbers are bound to improve further over the course of the project.
  • The current tech stack’s footprint sits in the double-digit megabyte range, with a clear pathway to reduce it to single digits. This makes it increasingly easy to deploy in on-device SDKs, which would be an unprecedented achievement consistent with our privacy-by-design architecture.
  • Parts of our research have already been published in peer-reviewed scientific venues.
  • We have established an ethical, reusable framework for collecting biometric data to develop presentation and injection attack detectors, which has been subject to legal scrutiny in both Switzerland and the United Kingdom.
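For context on the accuracy figures above: false-accept and false-reject rates are the standard way to evaluate an attack detector. A false accept is an attack sample the detector lets through; a false reject is a genuine sample it wrongly flags. A minimal sketch of how these rates can be computed from binary labels (the function name and label convention here are illustrative, not part of the DefAI tooling):

```python
def attack_detection_rates(labels, predictions):
    """Compute false-accept and false-reject rates for an attack detector.

    labels:      1 = attack (e.g. deepfake) sample, 0 = genuine sample
    predictions: 1 = flagged as attack, 0 = accepted as genuine
    """
    attacks = [p for l, p in zip(labels, predictions) if l == 1]
    genuine = [p for l, p in zip(labels, predictions) if l == 0]

    # False accept: an attack sample the detector accepted as genuine.
    far = sum(1 for p in attacks if p == 0) / len(attacks)
    # False reject: a genuine sample wrongly flagged as an attack.
    frr = sum(1 for p in genuine if p == 1) / len(genuine)
    return far, frr
```

For example, with four attack samples of which one slips through and four genuine samples of which one is wrongly flagged, `attack_detection_rates([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 1])` returns `(0.25, 0.25)`.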

We have great expectations for this project. In its second half, we will focus on the following:

  • We will establish a certification scheme for these novel mechanisms; the standardisation of the certification process itself will be executed by consortium partner AVID.
  • Through our experience in the age estimation market, we know that high accuracy scores alone cannot fully reassure our clients and Internet users, but our absolute commitment to data privacy does. That is why, as we build our product to be as safe and privacy-preserving as possible, we will make the data processing components run completely on device: no personal image or audio shall ever leave the user’s device.
  • We will raise public awareness of such privacy-preserving technologies to protect our children online, allowing parents, users, and regulators to quickly understand which services are safer in this rapidly evolving area.
