Pornographic deepfakes of Taylor Swift went viral on X (formerly Twitter) this week, highlighting the dangers of AI-generated imagery online.
Synthetic or manipulated media that may deceive people isn’t allowed on X, according to its policy, and the platform’s safety team posted on Friday that it’s “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”
By Saturday, users noticed that X had attempted to curb the problem by blocking searches for “Taylor Swift” entirely, though certain related terms remained searchable, The Verge reported.
Making Swift’s name unsearchable suggests that X doesn’t know how to handle the array of deepfake imagery and video on its platform.
Mashable was also able to reproduce the search error page with the terms “Taylor Swift AI” and “Taylor AI.” The terms “Swift AI,” “Taylor AI Swift,” and “Taylor Swift deepfake” remained searchable on the platform, though, with manipulated images still displayed on the “Media” tab.