The circulation of explicit and pornographic pictures of megastar Taylor Swift this week shined a light on artificial intelligence's ability to create images that are convincingly real, damaging, and entirely fake.
But the concept is far from new: People have weaponized this type of technology against women and girls for years. And with the rise of and increased access to AI tools, experts say it's about to get a whole lot worse, for everyone from school-age children to adults.
Already, high school students around the world have reported that their faces were manipulated by AI and shared online by classmates. Meanwhile, a well-known young female Twitch streamer discovered her likeness was being used in a fake, explicit pornographic video that spread quickly throughout the gaming community.
"It's not just celebrities [targeted]," said Danielle Citron, a professor at the University of Virginia School of Law. "It's everyday people. It's nurses, art and law students, teachers and journalists. We've seen stories about how this impacts high school students and people in the military. It affects everybody."
But while the practice isn't new, Swift being targeted could bring more attention to the growing issues around AI-generated imagery. Her enormous contingent of loyal "Swifties" expressed their outrage on social media this week, bringing the issue to the forefront. In 2022, a ticketing fiasco ahead of her Eras Tour sparked rage online, leading to several legislative efforts to crack down on consumer-unfriendly ticketing policies.
"This is an interesting moment because Taylor Swift is so beloved," Citron said. "People may be paying attention more because it's someone generally admired who has a cultural force. … It's a reckoning moment."
"Nefarious reasons without enough guardrails"
The fake images of Taylor Swift predominantly spread on social media site X, previously known as Twitter. The photos, which show the singer in sexually suggestive and explicit positions, were viewed tens of millions of times before being removed from social platforms. But nothing on the internet is truly gone forever, and they will undoubtedly continue to be shared on other, less regulated channels.
Although stark warnings have circulated about how misleading AI-generated images and videos could be used to derail presidential elections, there's been less public discourse on how women's faces have been manipulated, without their consent, into often aggressive pornographic videos and photographs.
The growing trend is the AI equivalent of a practice known as "revenge porn." And it's becoming increasingly hard to determine if the photos and videos are authentic.
What's different this time, however, is that Swift's loyal fan base banded together to use the reporting tools to effectively take the posts down. "So many people engaged in that effort, but most victims only have themselves," Citron said.
Although it took 17 hours for X to take down the photos, many manipulated images remain posted on social media sites. According to Ben Decker, who runs Memetica, a digital investigations agency, social media companies "don't really have effective plans in place to necessarily monitor the content."
Like most major social media platforms, X's policies prohibit the sharing of "synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm." But at the same time, X has largely cut its content moderation team and relies on automated systems and user reporting. (In the EU, X is currently being investigated over its content moderation practices.)
The company did not respond to CNNâs request for comment.
Other social media companies have also reduced their content moderation teams. Meta, for example, has made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN, raising concerns ahead of the 2024 elections in the US and around the world.
Decker said what happened to Swift is a "prime example of the ways in which AI is being unleashed for a lot of nefarious reasons without enough guardrails in place to protect the public square."
When asked about the images on Friday, White House press secretary Karine Jean-Pierre said: "It is alarming. We are alarmed by the reports of the circulation of images that you just laid out, false images, to be more exact, and it is alarming."
A growing trend
Although this technology has been available for a while, it is getting renewed attention because of the offending photos of Swift.
Last year, a New Jersey high school student launched a campaign for federal legislation to address nonconsensual AI-generated pornography after she said photos of her and 30 other female classmates were manipulated and possibly shared online.
Francesca Mani, a student at Westfield High School, spoke out over the lack of legal recourse to protect victims of AI-generated pornography. Her mother told CNN it appeared "a boy or some boys" in the community created the images without the girls' consent.
"All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere," Westfield Superintendent Dr. Raymond González told CNN in a statement at the time.
The issue also drew attention in February 2023, when a high-profile male video game streamer on the popular platform Twitch was caught viewing deepfake videos of some of his female Twitch streaming colleagues. The Twitch streamer "Sweet Anita" later told CNN it is "very, very surreal to watch yourself do something you've never done."
The rise of, and increased access to, AI tools has made it easier for anyone to create these types of images and videos. There also exists a much wider world of unmoderated, not-safe-for-work AI models on open-source platforms, according to Decker.
Cracking down on this remains tough. Nine US states currently have laws against the creation or sharing of non-consensual deepfake photography, synthetic images created to mimic one's likeness, but none exist on the federal level. Many point to Section 230 of the Communications Decency Act, which protects online platforms from being liable over user-generated content.
"You can't punish it under child pornography laws … and it's different in the sense that no child sexual abuse is happening," Citron said. "But the humiliation and the feeling of being turned into an object, having other people see you as a sex object and how you internalize that feeling … is just so awfully disruptive to your social esteem."
How to protect your images
People can take a few small steps to help protect themselves from their likeness being used in non-consensual imagery.
Computer security expert David Jones, from an IT services company, advises that people consider keeping profiles private and sharing photos only with trusted people because "you never know who could be looking at your profile."
Still, many people who participate in "revenge porn" personally know their targets, so limiting what is shared in general is the safest route.
In addition, the tools used to create explicit images also require a lot of raw data and images that show faces from different angles, so the less material someone has to work with, the better. Jones warned, however, that because AI systems are becoming more efficient, it's possible that in the future only one photo will be needed to create a deepfake version of another person.
Hackers can also seek to exploit their victims by gaining access to their photos. "If hackers are determined, they may try to break your passwords so they can access your photos and videos that you share on your accounts," he said. "Never use an easy-to-guess password, and never write it down."
CNN's Betsy Kline contributed to this report.