The Taylor Swift Deepfakes Were Awful. How Do We Stop the Next One?
When fake, sexually explicit images of Taylor Swift flooded social media last week, it shocked the world. But legal experts weren’t exactly surprised, saying it’s just a glaring example of a growing problem — and one that’ll keep getting worse without changes to the law and tech industry norms.
The images, some of which were reportedly viewed millions of times on X before they were pulled down, were so-called deepfakes — computer-generated depictions of real people doing fake things. Their spread on Thursday quickly prompted outrage from Swifties, who mass-flagged the images for removal and demanded to know how something like that was allowed to happen to the beloved pop star.
But for legal experts who have been tracking the growing phenomenon of non-consensual deepfake pornography, the episode was sadly nothing new.
“This is just the highest profile instance of something that has been victimizing many people, mostly women, for quite some time now,” said Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and technology law.
Experts warned Billboard that the Swift incident could be a sign of things to come — not just for artists and other celebrities, but for ordinary people with fewer resources to fight back. The explosive growth of artificial intelligence tools over the past year has made deepfakes far easier to create, and some web platforms have become less aggressive in their approach to content moderation in recent years.
“What we’re seeing now is a particularly toxic cocktail,” Hartzog said. “It’s an existing problem, mixed with these new generative AI tools and a broader backslide in industry commitments to trust and safety.”
To some extent, images like the ones that cropped up last week are already illegal. Though no federal law squarely bans them, 10 states around the country have enacted statutes criminalizing non-consensual deepfake pornography. Victims like Swift can also theoretically turn to more traditional existing legal remedies to fight back, including copyright law, likeness rights, and torts like invasion of privacy and intentional infliction of emotional distress.
Such images also clearly violate the rules on all major platforms, including X. In a statement last week, the company said it was “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” as well as “closely monitoring the situation to ensure that any further violations are immediately addressed.” From Sunday to Tuesday, the site also disabled searches for “Taylor Swift” out of “an abundance of caution as we prioritize safety on this issue.”
But for the victims of such images, legal remedies and platform policies often don’t mean much in practice. Even if an image is illegal, it is difficult and prohibitively expensive to try to sue the anonymous people who posted it; even if you flag an image for breaking the rules, it’s sometimes hard to convince a platform to pull it down; even if you get one pulled down, others crop up just as quickly.
“No matter her status, or the number of resources Swift devotes to the removal of these images, she won’t be completely successful in that effort,” said Rebecca A. Delfino, a professor and associate dean at Loyola Law School who has written extensively about harm caused by pornographic deepfakes.
That takedown process is extremely difficult, and it’s almost always reactive, beginning only after some level of damage is already done. Think about it this way: Even for a celebrity with every legal resource in the world, the images still flooded the web. “That Swift, currently one of the most powerful and known women in the world, could not avoid being victimized shows the exploitive power of pornographic deepfakes,” Delfino said.
There’s currently no federal statute that squarely targets the problem. A bill called the Preventing Deepfakes of Intimate Images Act, introduced last year, would allow deepfake victims to file civil lawsuits and would criminalize such images when they’re sexually explicit. Another, called the Deepfake Accountability Act, would require all deepfakes to be disclaimed as such and impose criminal penalties for those that aren’t. And earlier this month, lawmakers introduced the No AI FRAUD Act, which would create a federal right for individuals to sue if their voice or any other part of their likeness is used without permission.
Could last week’s incident spur lawmakers to take action? Don’t forget: Ticketmaster’s messy 2022 rollout of tickets for Swift’s Eras Tour sparked congressional hearings, investigations by state attorneys general, new legislation proposals and calls by some lawmakers to break up Live Nation under federal antitrust laws.
Experts like Delfino are hopeful that such influence on the national discourse — call it the Taylor effect, maybe — could spark a similar conversation about the problem of deepfake pornography. And they might have reason for optimism: Polling conducted over the weekend by the AI think tank Artificial Intelligence Policy Institute showed that more than 80% of voters supported legislation making non-consensual deepfake porn illegal, and that 84% of them said the Swift incident had increased their concerns about AI.
“Her status as a worldwide celebrity shed a huge spotlight on the need for both criminal and civil remedies to address this problem, which today has victimized hundreds of thousands of others, primarily women,” Delfino said.
But even after last week’s debacle, new laws targeting deepfakes are no guarantee. Some civil liberties activists and lawmakers worry that such legislation could violate the First Amendment by imposing overly broad restrictions on free speech, including criminalizing innocent images and empowering money-making troll lawsuits. Any new law would eventually need to pass muster at the U.S. Supreme Court, which has signaled in recent years that it is highly skeptical of efforts to restrict speech.
In the absence of new legislation, concrete solutions will likely require stronger action by the social media platforms themselves, which have created vast, lucrative networks for the spread of such material and are in the best position to police it.
But Jacob Noti-Victor, a professor at Cardozo School of Law who researches how the law impacts innovation and the deployment of new technologies, says it’s not as simple as it might seem. After all, the images of Swift last week were already clearly in violation of X’s rules, yet they spread widely on the site.
“X and other platforms certainly need to do more to tackle this problem and that requires large, dedicated content moderation teams,” Noti-Victor said. “That said, it’s not an easy task. Content detection tools have not been very good at detecting deepfakes so far, which limits the tools that platforms can use proactively to detect this kind of material as it’s being posted.”
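To make that limitation concrete, here is a minimal, hypothetical sketch of the hash-matching approach moderation teams commonly lean on: it catches re-uploads of images that have already been flagged, but a freshly generated deepfake has no prior hash to match, which is exactly the gap Noti-Victor describes. The file names and threshold are illustrative, and the sketch assumes the third-party Pillow and ImageHash Python packages.

```python
# Illustrative sketch only, not any platform's actual system.
# Assumes: pip install pillow imagehash
# Perceptual hashes survive resizing and re-compression, so images that
# moderators have already removed can be blocked on re-upload. A newly
# generated deepfake has no prior hash on file, which is why this kind of
# proactive tooling struggles against fresh AI-generated material.
from PIL import Image
import imagehash

# Hypothetical blocklist: hashes of images already flagged and removed.
BLOCKLIST = {imagehash.phash(Image.open(p)) for p in ["flagged_example.png"]}

def is_known_violation(upload_path: str, max_distance: int = 8) -> bool:
    """Return True if an upload is a near-duplicate of a blocklisted image."""
    candidate = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects yields their Hamming distance;
    # a small distance means the images are visually near-identical.
    return any(candidate - known <= max_distance for known in BLOCKLIST)
```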
And even if it were easy for platforms to find and stop harmful deepfakes, tech companies have hardly been beefing up their content moderation efforts in recent years.
Since Elon Musk acquired X (then named Twitter) in 2022, the company has loosened restrictions on offensive content and fired thousands of employees, including many on the “trust and safety” teams that handle content moderation. Mark Zuckerberg’s Meta, which owns Facebook and Instagram, laid off more than 20,000 employees last year, reportedly also including hundreds of moderators. Google, Microsoft and Amazon have all reportedly made similar cuts.
Amid a broader wave of tech layoffs, why were those employees some of the first to go? Because at the end of the day, there’s no real legal requirement for platforms to police offensive content. Section 230 of the Communications Decency Act, a much-debated provision of federal law, largely shields internet platforms from legal liability for material posted by their users. That means Swift could try to sue the anonymous X users who posted her image, but she would have a much harder time suing X itself for failing to stop them.
In the absence of regulation and legal liability, the only real incentives for platforms to do a better job of combating deepfakes are “market incentives,” said Hartzog, the BU professor — meaning the fear of negative publicity that scares away advertisers or alienates users.
On that front, maybe the Taylor fiasco is already having an impact. On Friday, X announced that it would build a “Trust and Safety center of excellence” in Austin, Texas, including hiring 100 new employees to handle content moderation.
“These platforms have an incentive to attract as many people as possible and suck out as much data as possible, with no obligation to create meaningful tools to help victims,” Hartzog said. “Hopefully, this Taylor Swift incident advances the conversation in productive ways that result in meaningful changes to better protect victims of this kind of behavior.”
Link to the source article – https://www.billboard.com/business/legal/taylor-swift-deepfakes-illegal-stopped-1235593162/