A Financial Times journalist has revealed he is the target of a widespread deepfake scam on Facebook and Instagram, raising fresh concerns about Meta’s ability to police fraudulent content on its platforms.
The journalist discovered a manipulated “avatar” — a digital likeness resembling him — being used in advertisements to promote a fake investment group. Despite repeated efforts by him and his Financial Times colleagues to have the fraudulent material removed, new versions continue to surface.
After being alerted to the scam on March 11, the journalist found that Meta, the parent company of Facebook and Instagram, had been profiting from the advertisements promoting the fraudulent scheme. Although Meta removed the original ads following complaints, further investigation revealed that the problem was far more extensive than initially thought.
Analysis of Meta’s Ad Library found that at least three different deepfake videos and numerous Photoshopped images had been used across more than 1,700 adverts, reaching over 970,000 users in the EU alone. Experts believe the actual reach could be significantly higher, particularly in the UK.
The scam operated through at least ten fake accounts, with new accounts appearing to replace those banned.
Meta insists it is working to combat fraud, employing AI tools and facial recognition technology. A Meta spokesperson said it is against the company’s policies to “impersonate public figures” and confirmed the “removal of reported ads and accounts”. Meta cited the persistence and evolving tactics of scammers as a major challenge.
Despite these assurances, the journalist expressed scepticism about Meta’s efforts, questioning why a company with such vast resources cannot prevent known scams from resurfacing. UK government officials pointed to obligations under the Online Safety Act, as well as Meta’s own ad policies prohibiting misleading or deceptive promotions.