Deepfake pornography has all the markings of revenge pornography: real images are obtained without consent and manipulated to purport to show real people. The synthetic results are devastating for victims and extend the risks of online spaces so far that policy changes are needed now from the largest web companies, such as Google.

A deepfake can be viewed millions of times almost instantly, and once uploaded it becomes ingrained in the World Wide Web, making it difficult to remove. Google has now simplified the removal process for victims in the aftermath of this crime. The change makes it less convoluted to request that images or videos be taken down in one go, rather than having to reference each web address individually. Other searches that include a person’s name will no longer show explicit results, and the search ranking of offending sites will be downgraded if a high volume of removal requests is logged.

Automated systems will also be able to search for and remove duplicates of the offending images. Viral celebrity deepfakes have spread across the internet, and private individuals have been targeted as well. Google, Meta and Twitter have all been grappling with how to stop these sinister AI creations proliferating on their platforms. Meanwhile, the UK’s now-finalised Online Safety Act bans the dissemination of explicit deepfaked content.
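Duplicate detection of this kind is commonly built on perceptual hashing, which fingerprints an image so that near-copies can be matched even after re-encoding or minor edits. A minimal sketch in Python of one such technique, the "average hash" (the toy image data and function names here are illustrative, not a description of Google's actual system):

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' for a grayscale image,
    given as a 2D list of brightness values (0-255). Similar images
    produce hashes with a small Hamming distance."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's mean.
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; near-duplicates score close to zero."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "image" and a slightly brightened copy of it.
original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 35, 215],
            [18, 198, 28, 225]]
near_copy = [[p + 5 for p in row] for row in original]

h1 = average_hash(original)
h2 = average_hash(near_copy)
print(hamming_distance(h1, h2))  # prints 0: the copy matches the original
```

Real systems operate at far larger scale (resizing images to a fixed grid first, then comparing hashes against an index of known abusive content), but the principle is the same: a re-uploaded duplicate lands within a small Hamming distance of a previously flagged image.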

Emma Higham, a Google product manager, said: “We are in the middle of a technology shift.”

“As we monitor our own systems, we’ve seen that there is a rise in removal requests for this kind of content.”

Some have criticised Google’s slow response, arguing that it exacerbated image-based sexual abuse and potentially allowed child sexual abuse material to spread.