AI Creates Ultra-Realistic Fake Photos That Fool Even Familiar Eyes

Researchers from Swansea, Lincoln, and Ariel universities reveal that AI-generated images of real people are now so convincing that even familiar viewers can’t tell them apart from genuine photos, raising urgent calls for detection safeguards.

Example stimuli depicting matched image pairs from Experiments 2–4. Images are real (top row) and synthetic (bottom row). Image attributions (top row left to right): Jay Dixit (cropped); Red Carpet Report on Mingle Media TV (cropped); Dominick D (cropped); Toglenn (cropped). Photographs are from Wikimedia Commons (2025) (https://commons.wikimedia.org/)


A new study has revealed that artificial intelligence can now generate images of real people that are virtually indistinguishable from genuine photographs.

Researchers use ChatGPT and DALL·E to create faces

Using the AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln, and Ariel University in Israel created highly realistic images of both fictional and famous faces, including celebrities.

Participants struggle to spot fake photos

They found that participants were unable to reliably distinguish the synthetic images from authentic photos, even when they were familiar with the person's appearance.

Across four separate experiments, the researchers found that providing comparison photos, or participants' prior familiarity with the faces, offered only limited help.

Study published in Cognitive Research: Principles and Implications

The research has just been published in the journal Cognitive Research: Principles and Implications. The team say their findings highlight a new level of "deepfake realism", showing that AI can now produce convincing fake images of real people, which could erode trust in visual media.

Professor Jeremy Tree warns of misinformation risks

Professor Jeremy Tree, from the School of Psychology, said: "Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.

"The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also highlights the need for reliable detection methods."

Experiments test recognition of real versus synthetic faces

One of the experiments, which involved participants from the US, Canada, the UK, Australia, and New Zealand, showed subjects a series of facial images, some real and some AI-generated, and asked them to judge which was which. The team say the fact that participants mistook the AI-generated novel faces for real photos indicates just how plausible they were.

Celebrity test highlights realism of AI imagery

Another experiment asked participants to distinguish between genuine pictures of Hollywood stars, such as Paul Rudd and Olivia Wilde, and computer-generated versions. Again, the study's results showed just how difficult it can be for individuals to spot the authentic version.

Deepfake realism raises ethical and social challenges

The researchers say that AI's ability to produce novel or synthetic images of real people opens up several avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a specific product or political stance, which could influence public opinion of both the celebrity and the brand or organization they are portrayed as supporting.

Call for improved AI image detection tools

Professor Tree added, "This study shows that AI can create synthetic images of both new and known faces that most people can't tell apart from real photos. Familiarity with a face or having reference images didn't help much in spotting the fakes; that is why we urgently need to find new ways to detect them.

"While automated systems may eventually outperform humans at this task, for now, it's up to viewers to judge what's real."
