Researchers have shown that a short, five-minute training session can significantly enhance people’s ability to identify AI-generated faces, providing a fast and practical tool to combat deepfake fraud and digital identity threats.

Image: Example stimuli. Female synthetic faces (a), male synthetic faces (b), female real faces (c) and male real faces (d). The final row (e) contains some of the synthetic images used in the training task, each of which has rendering artefacts that were highlighted to participants; for example, the first image has poorly rendered hair, and the second has three, rather than four, incisors.
Just five minutes of targeted training can significantly improve people’s ability to identify fake faces created by artificial intelligence, according to new research by scientists from the University of Reading, University of Greenwich, University of Leeds, and University of Lincoln.
Study design and participant performance
The study tested 664 participants on their ability to distinguish genuine human faces from AI-generated ones produced by the software StyleGAN3. Without prior training, “super-recognisers” (individuals with exceptionally strong face recognition skills) correctly identified fake faces only 41% of the time, while participants with typical recognition abilities achieved 31% accuracy. Since random guessing would yield approximately 50% accuracy, untrained participants performed worse than chance.
Five-minute training improves accuracy
A separate group of participants received a five-minute training session that explained common rendering errors made by generative AI systems, such as unrealistic hair textures, irregular tooth counts, and asymmetric facial features. Following this brief intervention, performance improved significantly: super-recognisers reached 64% accuracy, and typical participants improved to 51%.
Implications for digital security
“Computer-generated faces pose genuine security risks,” said Dr. Katie Gray, lead researcher at the University of Reading. “They have been used to create fake social media profiles, bypass identity verification systems, and forge documents. The latest generation of AI faces is so realistic that people often judge them as more lifelike than real faces.”
Dr. Gray added that this simple, easy-to-implement training could be combined with the natural abilities of super-recognisers to help address real-world identity verification and online fraud challenges.
Understanding how people detect fake faces
The study found that training improved super-recognisers and typical observers to a similar degree, meaning the super-recognisers’ advantage persisted after training. This suggests that the two groups may rely on partly different visual cues when identifying synthetic faces, rather than differing only in overall perceptual accuracy.
Challenges from advanced AI systems
The research, published in Royal Society Open Science, used faces generated by StyleGAN3, the most advanced generative model available at the time of the study. The system’s realism posed a considerable challenge: participants’ untrained accuracy was lower than in earlier studies that used older AI models. Future work will examine how long the training effects persist and how human expertise, particularly that of super-recognisers, can complement AI-driven deepfake detection systems.