In early 2024, disturbing AI-generated pornographic images of pop icon Taylor Swift—labeled “Taylor Swift AI”—went viral on X (formerly Twitter). Within just 17 hours, one post alone racked up more than 45 million views before the account behind it was suspended. Fans—Swifties—quickly rallied, flooding the platform with hashtags like #ProtectTaylorSwift in an attempt to drown out the harmful images.
But the damage had already been done.
This wasn’t just about one celebrity’s privacy being violated. It was a flashing red warning sign about how unregulated and dangerous deepfake technology has become.
And here’s the chilling part: Taylor Swift is not alone. Women and girls, including teenagers, are increasingly being targeted with AI-generated intimate images, often with little to no legal protection.
In this post, we’ll look at what this scandal reveals about the darker side of AI, where the law currently falls short, and what needs to change if we want to build a safer digital world.
Deepfakes are AI-generated images or videos that manipulate reality. Originally, creating one required serious computing power, technical skill, and hours of source footage of the target. Today? All it takes is a written prompt and a tool anyone can download.
That’s reportedly how the Taylor Swift deepfakes were created. Malicious users on forums like 4chan typed a few prompts into a text-to-image generator, producing explicit images that looked terrifyingly real.
The danger isn’t just in the content itself; it’s in its believability. Victims don’t need to have done anything compromising for someone to fabricate convincing evidence that they did. And because the fakes look so real, the burden of proving an image is fabricated often falls on the victim.
That creates devastating consequences: reputational damage, psychological trauma, harassment, and images that can resurface online for years.
The numbers are staggering. According to research from Sensity AI, up to 95% of deepfakes on the internet are sexually explicit, and the overwhelming majority of victims are women. And it’s not just adults—high school students like Francesca Mani and her classmates in New Jersey were targeted with explicit AI-generated images.
If celebrities with massive fan bases and legal teams are vulnerable, imagine how defenseless the average person is.
Here’s the uncomfortable truth: right now, there’s no federal law in the U.S. that directly bans nonconsensual, AI-generated pornography.
That means victims, whether they’re celebrities or everyday people, have very little legal recourse. Over the years, lawmakers have introduced bills to tackle the problem, but none have made it through Congress. Notable attempts include the DEEPFAKES Accountability Act, which would require AI-generated content to carry disclosures; the Preventing Deepfakes of Intimate Images Act, which would penalize sharing nonconsensual, digitally altered intimate images; and the DEFIANCE Act, introduced shortly after the Taylor Swift incident to give victims a federal right to sue.
Each of these bills shows that lawmakers are aware of the threat. But political gridlock, coupled with the challenges of defining what counts as a “deepfake,” has left victims without consistent protection.
So where do people turn? State laws.
Some states have tried to step in where Congress hasn’t. California, Florida, New York, Indiana, and Washington are among those that have passed laws targeting deepfake pornography. But the protections vary widely.
For instance, California gives victims the right to sue people who create or knowingly share sexually explicit deepfakes of them, while New York makes distributing such images a crime and allows victims to bring civil claims.
That sounds promising, but here’s the problem: many state laws still don’t cover all scenarios. Some only apply if the victim’s real body—not just their face—was used. Others require proof of intent to harass, which is extremely difficult to establish.
And because laws vary state by state, a victim in one part of the country may have legal protection, while someone in another state may have none at all.
It’s like playing legal roulette.
When criminal laws fail, victims often turn to civil lawsuits. These can include claims of defamation, invasion of privacy, or intentional infliction of emotional distress. But these cases are often uphill battles.
Here’s why: the people who create these images are often anonymous and hard to identify, litigation is expensive, platforms are largely shielded from liability under Section 230, and the images can spread far faster than any court can act.
Even the FBI has warned about the growing threat. In a public alert, the agency noted that deepfake porn can spread across social media, dating apps, and even professional networking sites before victims know it exists.
This isn’t just a problem of private harm—it’s a public safety issue.
The Taylor Swift incident proved one thing loud and clear: anyone can be a target. If one of the most famous women in the world can be victimized, so can your friend, your coworker—or you.
So what do experts and advocates suggest? A clear federal ban on nonconsensual deepfake pornography, stronger takedown obligations for the platforms where these images spread, and safeguards like watermarking and content provenance built into AI image tools themselves.
It’s not about slowing down innovation. It’s about making sure technology develops responsibly.
Taylor Swift’s deepfake scandal wasn’t just a celebrity headline—it was a wake-up call.
Deepfake porn isn’t a niche issue or a futuristic concern. It’s happening right now, targeting women and girls across the country. For every Taylor Swift, there are thousands of victims without fan bases, without money, and without anyone to defend them.
👉 Want to stay informed on how AI is shaping our legal and digital future? Follow our blog for updates, insights, and resources.