Collaboration to advance the current understanding and capabilities of deepfake detection technologies lies at the centre of addressing this “urgent national priority”.
Deepfakes have spiked over the last few years as innovations in artificial intelligence accelerate. The latest AI model, DeepSeek, which only launched in January, has rattled global AI competitors and put the US on the back foot defending its own famous creation, ChatGPT.
A projected 8 million deepfakes will be shared in 2025, up from 500,000 in 2023, with many criminal and exploitative repercussions. The harmful activities AI-generated deepfakes can enable range from online child sexual abuse and identity theft to election fraud.
As a result, the development of deepfake detection tools must accelerate. A major initiative in this space is led by a cohort including the Home Office, the Department for Science, Innovation and Technology, ACE and the renowned Alan Turing Institute. The Deepfake Detection Challenge aims to source and develop solutions capable of detecting fake media. 150 people attended, proposing challenge statements on how current tools could be pushed further.
Across the 17 submissions, approximately two million products were created using balanced training datasets made up of real and synthetic data.
Even within the time limits of the challenge, it was clear that curated data representative of operational scenarios both brought these communities together and advanced deepfake detection technology.
Data was a top priority in the commissions to ACE. ACE leveraged its expertise from the Deepfake Detection Challenge to create a reusable ‘gold standard’ dataset designed to “effectively” test vendors’ detection tools.