Taylor Swift is Not Alone: The Growing Nightmare of AI Deepfake Porn
By: Akos Balogh
Megastar Taylor Swift is never far from the headlines.
She’s currently riding the wave of the final, Australian leg of her Eras Tour, drawing sold-out stadiums. But in late January, Tay Tay made headlines for another, disturbing reason. Social media site X (formerly known as Twitter) lit up with AI-generated, sexually explicit ‘deepfakes’ of Swift. These images looked just like Swift, thanks to the carbon-copy quality of the AI software that generated them, but they were made without her consent or knowledge.
One image shared by a user on X was viewed 47 million times before the account was suspended.
X suspended several accounts that posted the faked images of Swift, but the images were shared on other social media platforms and continued to spread despite those companies’ efforts to remove them. After outrage and pushback from hundreds of Swifties, X went further, blocking searches for ‘Taylor Swift’ altogether.
While these deepfake images made the news thanks to Swift’s celebrity status, it turns out that AI-generated deepfakes are rife across the dark underbelly of the internet, targeting mainly women and children. Taylor Swift is not alone when it comes to being harassed by deepfake pornography.
According to NBC News in America:
The FBI said it is difficult to calculate the number of minors who are sexually exploited [by AI-generated sexual deepfakes]. But the agency said it has seen a rise in the number of open cases involving crimes against children. There were more than 4,800 cases in 2022, which grew from more than 4,100 the year before.
It’s a growing problem, and here’s why:
1) Thanks to recent advances in AI technology, nearly anyone can generate a sexually explicit image or video of someone else.
While in the past it was difficult to generate sexually explicit images and videos, generative AI technology now makes it easy. In June last year, the FBI issued a public statement saying the technology used to create nonconsensual pornographic deepfake photos and videos was improving and being used for harassment and sextortion.[1]
2) If you share any normal images or videos of yourself online, they could be manipulated and used against you.
The disturbing phenomenon of ‘revenge porn’ made headlines in the mid-2010s, when vengeful exes would share pornographic content of their former partners online to punish and shame them. However, such ‘revenge porn’ used real video, as opposed to artificially generated video: one needed actual footage of a person in order to share it.
But as legal scholar and Georgetown University Professor Dr. Mary Anne Franks (who works on the issue of deepfake online content) pointed out in a recent interview:
‘Now it’s possible for anyone to use very easily accessible technology to make it seem as though you are naked or engaged in some kind of sexually explicit activity. All they really need is a few photos or videos of your face, things that they can get from innocuous places like social media sites. The next thing you know, a person can produce an image or a video of someone that makes it really look as though it’s an intimate depiction, when in fact it never took place.’
Considering nearly everyone has photos or videos of themselves on social media, we all have the potential to be victims.
But we don’t even have to post anything ourselves: photos or videos taken of us (with or without our consent) could be used against us. New Jersey high school student Francesca Mani was targeted in this way by her peers. According to Franks, who represented Mani:
‘We know it was one or more boys at the same high school, who took innocuous photos of their peers, including Mani, who was at least 14 at the time that this happened. She’s now 15. They created imagery that depicts them nude or in sexually explicit positions, and have distributed them in ways that Francesca’s not even entirely sure what the scope of this is, because no one’s talking about it, and the school hasn’t been particularly forthcoming about exactly what this imagery is like.’
3) Deepfakes don’t just reflect the ugliness of human nature: they also shape (some) people to become predators and abusers.
Yes, generating deepfakes is a sinful action taken by sinful people: technology will always reflect humanity.
And yet, like all technology, deepfake AI shapes us and shapes society. It creates impulses and rewards behaviours that might not have existed otherwise. It bombards individuals, especially young people, with ideas and possibilities for abuse that they might not have conceived on their own, effectively turning some technology users into potential predators. [2]
4) The paradigm shift we all need to make: seeing is no longer believing in a deepfake-infested online world.
Thanks to deepfake technology, we now need to be sceptical of the images and videos we see online unless they come from a verified source. Seeing is no longer believing in a deepfake world.
While some of the Big Tech companies are working together to watermark AI-generated content, to let users know it’s not real, this is too little, too late: watermarks can be removed, and there’s already so much content out there. According to Paul Roetzer, a thought leader in the AI technology space:
‘The average citizen has no idea that AI is capable of doing these things… The only true way to address this is through AI literacy that makes people aware that this [content] exists and it’s possible to develop very realistic videos, images, and audio.’
5) What should Christians do if they’re deepfaked? Remember your ultimate identity.
Getting deepfaked is incredibly distressing, and can lead to all sorts of psychological and reputational harms.
But one of the biggest harms is feeling ashamed. Shame is the feeling of being unpresentable to those around us [3], and let’s face it: few things would make us feel as unpresentable (and thus ashamed) as having degrading naked pictures of ourselves spread across the internet. To say we would want the earth to swallow us up in that situation would be an understatement.
As Christians, however, the gospel gives us enormous resources to combat this shame.
No matter how unpresentable AI deepfakes might make us feel before the watching world, we have been made presentable before our Heavenly Father through the washing of the precious blood of Jesus, which cleanses us from all sin (1 John 1:7). Being made presentable – being made righteous – is a God-given reality that can help us combat the feelings of shame that might threaten to overwhelm us.
Yes, even as Christians, being deepfaked will distress us and hurt us. But as those washed clean by the blood of Jesus, being deepfaked doesn’t have to destroy us.
And so if you are deepfaked, be sure to take the necessary practical steps, such as contacting the social media platform where your photos or videos are found and consulting a lawyer. But then, as you walk through this challenging valley, remember that it’s not your fault: being deepfaked is something done to you, illegally and immorally; you’re not responsible for it.
And amid your distress, remember who you are in Christ: washed clean, holy, presentable before the Only Person whose view of you ultimately matters.
Article supplied with thanks to Akos Balogh.
About the Author: Akos is the Executive Director of The Gospel Coalition Australia. He has a Master’s in Theology and is a trained combat and aerospace engineer.
Feature image: Photo by Rosa Rafael on Unsplash