Five Minutes of Training Helps People Detect AI-Generated Fake Faces

Researchers have shown that a short, five-minute training session can significantly enhance people’s ability to identify AI-generated faces, providing a fast and practical tool to combat deepfake fraud and digital identity threats.

Figure: Example stimuli. Female synthetic faces (a), male synthetic faces (b), female real faces (c) and male real faces (d). The final row (e) contains some of the synthetic images used in the training task, each of which has rendering artefacts that were highlighted to participants. For example, the first image has hair that is poorly rendered, and the second image has three, rather than four, incisors.

Just five minutes of targeted training can significantly improve people’s ability to identify fake faces created by artificial intelligence, according to new research by scientists from the University of Reading, University of Greenwich, University of Leeds, and University of Lincoln.

Study design and participant performance

The study tested 664 participants on their ability to distinguish genuine human faces from AI-generated ones produced by the generative model StyleGAN3. Without prior training, "super-recognisers" (individuals with exceptionally strong face-recognition skills) correctly identified fake faces only 41% of the time, while participants with typical recognition abilities achieved 31% accuracy. Since random guessing would yield roughly 50%, both untrained groups performed worse than chance.
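To make the comparison with chance concrete, the sketch below shows how an observed detection rate could be checked against the 50% chance level with an exact binomial test. The trial count is a hypothetical placeholder and the use of SciPy is an assumption for illustration; the article does not describe the study's actual statistical analysis.

```python
# Illustrative sketch only: the article does not report per-participant trial
# counts, so the numbers below are hypothetical, not the study's data.
from scipy.stats import binomtest

n_trials = 100   # hypothetical number of real-vs-synthetic judgements
hits = 41        # fake faces correctly identified (41% accuracy)

# Two-sided exact binomial test against the 50% chance level
result = binomtest(hits, n_trials, p=0.5, alternative="two-sided")
print(f"Observed accuracy: {hits / n_trials:.0%}, p = {result.pvalue:.3f}")
```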

Five-minute training improves accuracy

A separate group of participants received a five-minute training session that explained common rendering errors made by generative AI systems, such as unrealistic hair textures, irregular tooth counts, and asymmetric facial features. Following this brief intervention, performance improved significantly: super-recognisers reached 64% accuracy, and typical participants improved to 51%.
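Face-detection studies of this kind commonly separate sensitivity from response bias using signal detection measures such as d′ rather than raw accuracy alone. The snippet below is a generic illustration of that calculation with made-up hit and false-alarm rates; it is not the analysis or data reported in the paper.

```python
# Generic signal-detection sketch with hypothetical rates; not the paper's data.
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: z(hit rate) minus z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical observer: flags 64% of synthetic faces as fake (hits)
# but also mislabels 30% of real faces as fake (false alarms).
print(f"d' = {d_prime(0.64, 0.30):.2f}")
```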

Implications for digital security

“Computer-generated faces pose genuine security risks,” said Dr. Katie Gray, lead researcher at the University of Reading. “They have been used to create fake social media profiles, bypass identity verification systems, and forge documents. The latest generation of AI faces are so realistic that people often judge them as more lifelike than real faces.”

Dr. Gray added that this simple, easy-to-implement training could be combined with the natural abilities of super-recognisers to help address real-world identity verification and online fraud challenges.

Understanding how people detect fake faces

The study found that training produced a similar-sized improvement in super-recognisers and typical observers, yet super-recognisers remained the more accurate group (64% versus 51% after training). This pattern suggests that the two groups may rely on different visual cues when identifying synthetic faces, rather than differing only in overall perceptual accuracy.

Challenges from advanced AI systems

The research, published in Royal Society Open Science, used faces generated by StyleGAN3, the most advanced generative model available at the time of the study. The realism of these images posed a considerable challenge: participants' untrained accuracy was lower than in previous studies that used older generative models. Future work will explore how long the training effect persists and how human expertise, particularly that of super-recognisers, can complement AI-driven deepfake detection systems.
