Fake sexually explicit images of Taylor Swift, likely generated by artificial intelligence, spread rapidly across social media platforms this week, disturbing fans who saw them and reigniting calls from lawmakers to protect women and crack down on the platforms and technologies that disseminate such images.
An image shared by a user on X, formerly Twitter, was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that posted fake images of Ms. Swift, but the images were shared on other social media platforms and continued to spread despite those companies’ efforts to remove them.
While X said it was working to remove the images, fans of the pop superstar flooded the platform in protest. They posted related keywords, along with the phrase “Protect Taylor Swift,” in an effort to drown out the explicit images and make them harder to find.
Reality Defender, a cybersecurity company focused on AI detection, determined with 90 percent confidence that the images were created using diffusion models, an AI-based technology accessible through more than 100,000 publicly available apps and templates, said Ben Colman, the company’s co-founder and chief executive.
As the AI industry boomed, companies rushed to release tools that let users create images, videos, text, and audio recordings with simple prompts. AI tools are wildly popular, but have made it easier and cheaper than ever to create so-called deepfakes, which depict people doing or saying things they never did.
Researchers now fear that deepfakes could become a powerful force of disinformation, allowing internet users to create non-consensual nude images or embarrassing portraits of political candidates. Artificial intelligence was used to create fake robocalls from President Biden during the New Hampshire primary, and Ms. Swift was featured this month in fake ads selling kitchenware.
“It’s always been a dark undercurrent on the Internet, nonconsensual pornography of all kinds,” said Oren Etzioni, a computer science professor at the University of Washington who works on detecting deepfakes. “This is now a particularly harmful new strain.”
“We are going to see a tsunami of explicit images generated by AI. The people who generated this see it as a success,” Mr. Etzioni said.
X said it has a zero-tolerance policy toward such content. “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for posting them,” a representative said in a statement. “We’re closely monitoring the situation to ensure that any further violations are immediately addressed and the content is removed.”
X has seen an increase in problematic content, including harassment, disinformation and hate speech, since Elon Musk bought the service in 2022. He loosened the site’s content rules, and fired, laid off or accepted the resignations of staff members who worked to remove such content. The platform also reinstated accounts that had previously been banned for rule violations.
Although many companies that produce generative AI tools prohibit their users from creating explicit images, people find ways to break the rules. “It’s an arms race, and it seems like every time someone comes up with a guardrail, someone else figures out how to jailbreak it,” Mr. Etzioni said.
The images originated in a channel on the messaging app Telegram dedicated to producing such images, according to 404 Media, a technology news site. But the deepfakes gained widespread attention after being posted on X and other social media services, where they spread rapidly.
Some states have restricted pornographic and political deepfakes. But the restrictions have had little impact, and there are no federal regulations governing such deepfakes, Mr. Colman said. Platforms have tried to combat deepfakes by asking users to report them, but that approach has not worked, he added: by the time the images are reported, millions of users have already seen them.
“The toothpaste is already out of the tube,” he said.
Ms. Swift’s publicist, Tree Paine, did not immediately respond to requests for comment Thursday evening.
The deepfakes of Ms. Swift have prompted new calls for action from lawmakers. Representative Joe Morelle, a New York Democrat who introduced a bill last year that would make sharing such images a federal crime, wrote on X that the spread of the images was “appalling,” adding: “It’s happening to women everywhere, every day.”
“I have repeatedly warned that AI could be used to generate non-consensual intimate imagery,” Senator Mark Warner, Democrat of Virginia and chairman of the Senate Intelligence Committee, wrote on X of the images. “This is a deplorable situation.”
Rep. Yvette D. Clarke, a New York Democrat, said advances in artificial intelligence have made creating deepfakes easier and less expensive.
“What happened to Taylor Swift is nothing new,” she said.