Fans outraged after Taylor Swift falls victim to AI deepfake porn

  • Deepfake nude images of Taylor Swift were posted online this week
  • This has become an increasing problem with the growing popularity of AI
  • Fans of the singer and lawmakers are calling for more protections against AI deepfakes
FILE – Taylor Swift performs during “The Eras Tour” in Nashville, Tenn., May 5, 2023. According to Spotify Wrapped, Swift was 2023’s most-streamed artist globally. (AP Photo/George Walker IV, File)

(NewsNation) — Taylor Swift fans are calling for lawmakers to take action after fake, nude and sexually explicit images of the singer, reportedly made with artificial intelligence, surfaced online this week.

The Verge reports that one of the most prominent examples of these images on X, the social media site formerly known as Twitter, received more than 45 million views, 24,000 reposts and “hundreds of thousands” of likes and bookmarks. The verified user who shared the images was suspended from X, as were some others, though the images continued to circulate Friday morning.

This isn’t the first time Swift has been targeted by AI: someone previously used deepfake technology to create an advertisement promoting a fake giveaway between her and the popular cookware brand Le Creuset to scam people.

NewsNation has reached out to a Swift spokesperson for comment.

Taylor Swift deepfake images circulate

The fake images of Swift began appearing online earlier this week.

The pictures, according to a report from 404 Media, came from a Telegram group “dedicated to abusive images of women.” The New York Times reported that Reality Defender, a cybersecurity company, said it determined with 90% confidence that the images were created using a diffusion model, an AI-driven technology accessible through more than 100,000 apps and publicly available models.

Users took to X to decry the spread of the photos, with calls to “Protect Taylor Swift,” as well as other people victimized by deepfake porn. Some fans uploaded multiple pictures of Swift singing or performing at her concerts to combat the flood of explicit content.

“Creating an AI of her naked body and engaging in sexual harassment is not acceptable, regardless of her financial status,” one person wrote. “Such behavior is repulsive and should be deemed illegal.”

X’s policy bans sharing what it calls misleading media, or “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” Content must meet certain criteria before it is removed, though.

According to X’s website, the image, video, audio or GIF must be:

  • significantly or deceptively altered, manipulated or fabricated
  • shared in a “deceptive manner” or with “false context”
  • likely to result in widespread “confusion on public issues, impact public safety, or cause serious harm”

X Safety said on social media that its teams are actively removing all identified images of Swift and are “taking appropriate actions against the accounts responsible for posting them.”

“We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed,” the post said.

The rise of AI, and its misuse

Artificial intelligence is a growing field, and with that comes growing concern. AI systems like ChatGPT and Bard can create writing, art and computer code, with many excited over the possibilities.

Microsoft CEO Satya Nadella said it would “boost” global technology, and his company is rolling out AI in its products.

Nadella said he’s “very optimistic about AI being that general-purpose technology that drives economic growth.” Business leaders say it can automate mundane work tasks and assist people with advanced jobs. At the same time, the new technology can also threaten jobs.

School officials sounded the alarm on AI at the beginning of 2023, with many blocking ChatGPT over concerns students were using it to cheat.

AI can even have political implications. A robocall made using AI impersonated President Joe Biden ahead of the New Hampshire primary election and urged people not to vote for him. Everyday citizens are being scammed by fake calls as well.

And, of course, some use AI to make fake images and videos of real people: an analysis by independent researcher Genevieve Oh, shared with The Associated Press, showed more than 143,000 new deepfake videos were posted online in 2023, more than every previous year combined.

Nonconsensual deepfake pornography has become an issue, primarily for women, with the problem only expected to get worse with the development of new generative AI tools.

“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

This happened last summer at Westfield High School in New Jersey — though the school itself wasn’t made aware until October. One or more boys at the school were accused of using artificial intelligence to generate pornographic pictures of female students and sharing them on Snapchat. This sparked anger among parents, triggering a police investigation.

Legal implications of AI misuse

So what can people do about this?

Evan Nierman, the CEO of Red Banyan, wrote in December that hoaxes and faux images highlight the “pressing need for a new area of legal practice.” Questions abound, Nierman said, such as who is ultimately responsible for AI content: the company behind the tool, or the person creating the content?

“Celebrities will be at risk of their own brands being infringed upon by the misuse of their likenesses and voices,” he said in The Daily Business Review. “Regular citizens without the financial means of the rich and famous are also likely to find themselves in dire straits if they are targeted by AI fakery that embroils them in legal disputes centered upon videos or audio that appears to be real, but is computer generated.”

Paula Brillson, managing attorney for Digital Law Group PLLC, writes that unauthorized intrusion, false light and appropriation of your identity are all potential privacy violations when it comes to AI-generated images.

“If your image has been stolen or used inappropriately you may be entitled to damages (including lost profits) for invasion of privacy, violation of right of publicity or defamation,” she said, adding that the first step for those who’ve been victimized is to contact an attorney.

“If you simply report the fake site(s) to the platform operators (Facebook, Instagram, etc) you may unwittingly become engaged in a whack-a-mole game as infringers will most often proceed to set up alternate accounts,” Brillson said. For those who make creative works, like photos, paintings or songs, Brillson suggests registering copyrights or using a digital fingerprint or watermark on images, to avoid plagiarism.

Scarlett Johansson is one celebrity who took legal action against an image-generating app called Lisa AI: 90s Yearbook & Avatar that used her name and likeness for an online ad. Her representatives told Variety that her attorney handled the situation in a legal capacity. Big-name authors made headlines after suing OpenAI for copyright infringement, saying the company used their works to train its ChatGPT AI model without their permission.

Fighting back against AI deepfakes

Could a case like this change AI regulations? Swifties — and lawmakers — certainly hope so.

“What’s happened to Taylor Swift is nothing new. For yrs, women have been targets of deepfakes w/o their consent,” Rep. Yvette Clarke, D-N.Y., said on X. “And w/ advancements in AI, creating deepfakes is easier & cheaper. This is an issue both sides of the aisle & even Swifties should be able to come together to solve.”

USA TODAY writes that it was able to identify only 10 states that have passed laws banning nonconsensual deepfake pornography. No federal law currently regulates the practice.

Rep. Joe Morelle, D-N.Y., said the spread of the images of Swift is “appalling” and happening to “women everywhere, every day.”

“It’s sexual exploitation, and I’m fighting to make it a federal crime with my legislation: the Preventing Deepfakes of Intimate Images Act,” he said in a statement on social media. The bill would make it illegal to share deepfake porn without people’s consent, and open up additional legal courses of action for those affected.

Democratic state Rep. Jason Powell of Nashville filed a bill in Tennessee, NewsNation local affiliate WKRN reported, that would classify images “created or modified” by AI or other digital editing tools to show someone else’s intimate parts as an offense of unlawful exposure.

One of the students at the aforementioned Westfield High School, 14-year-old Francesca Mani, is helping lawmakers push for AI legislation. She met with Rep. Tom Kean Jr. of New Jersey and Morelle over the latter’s HR 3106 and Kean’s HR 6466, the AI Labeling Act of 2023, which would require disclosures for content generated by AI.

“Try to imagine the horror of receiving intimate images looking exactly like you — or your daughter, or your wife, or your sister — and you can’t prove it’s not,” Morelle said at the news conference. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

The Associated Press contributed to this report.

Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
