Ground-breaking study reveals public’s struggle to detect deepfakes
A recent study conducted by iProov, a leading provider of science-based biometric identity verification solutions, has unveiled a staggering statistic: 99.9% of people cannot reliably distinguish AI-generated deepfakes from genuine content.
The research highlights a growing threat to banks, governments, and businesses that rely on remote identity verification. As deepfake technology becomes more sophisticated, organisations can no longer depend solely on human judgment to verify identities.
Only 0.1% of respondents were able to correctly identify all deepfake and real stimuli, including images and videos.
Participants were 36% less effective at detecting deepfake videos than deepfake images, raising serious concerns about video-based identity verification and remote onboarding.
Despite growing media coverage, awareness of deepfakes remains limited: approximately 22% of participants had never heard of the technology before the study, indicating a need for more education.
Over 60% of participants were confident in their ability to detect deepfakes, despite performing poorly in the test.
Nearly 30% of respondents took no action when suspecting a deepfake, with 48% unsure how to report one.
Deepfake scams are already costing organisations millions. In one recent case, criminals used deepfake videos to impersonate executives at engineering firm Arup, defrauding the company of £20 million.
Professor Edgar Whitley, a digital identity expert at the London School of Economics and Political Science, emphasised the urgency of the threat: “Security experts have been warning of the dangers posed by deepfakes for individuals and organizations alike for some time. This study shows that organizations can no longer rely on human judgment to spot deepfakes and must look to alternative means of authenticating users.”
Andrew Bud, founder and CEO of iProov, echoed this sentiment: “Just 0.1% of people could accurately identify the deepfakes, underlining how vulnerable both organizations and consumers are to the threat of identity fraud in the age of deepfakes. Even when people do suspect a deepfake, our research tells us that the vast majority take no action at all. Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting our personal information and financial security at risk. It’s down to technology companies to protect their customers by implementing robust security measures. Using facial biometrics with liveness provides a trustworthy authentication factor and prioritizes both security and individual control, ensuring that organizations and users can keep pace and remain protected from these evolving threats.”
The study tested 2,000 UK and US consumers, exposing them to a mix of real and deepfake images and videos, and its findings underscore the urgent need for enhanced security measures in the fight against deepfake fraud.