Deepfakes depicting Taylor Swift being attacked in the stands during an NFL game demand a debate on the regulation of artificial intelligence, writes Patsy Stevenson
The recent controversy surrounding hyperrealistic AI-generated images depicting American singer-songwriter Taylor Swift being sexually assaulted has sparked another important conversation about the misuse of artificial intelligence, misogyny, and the ethical considerations linked to technological progress.
Not only does this highlight the vulnerability of public figures to the misuse of deepfakes – images, videos or audio recordings created using an algorithm to replace the person in the original with someone else – but it also reflects growing concern about misogyny perpetuated through the abuse of AI.
The AI-generated images of Taylor Swift are vulgar and distressing, and were apparently created out of fans’ annoyance at her appearances at NFL games, where she is shown on screen watching her partner, Travis Kelce, play. Fans booed her, and some took to X (formerly Twitter) to post comments such as: “Is there anything more annoying than Taylor Swift at a football game?”
This misogyny has now taken a darker turn with the AI-generated images – some of which show Swift being assaulted in the stands during an NFL game.
One of the images was viewed 45 million times before it was removed. Such misuse of AI not only violates the victim’s privacy, but also perpetuates a culture of harm and objectification.
The incident calls for further examination of the ethical considerations surrounding the use of AI technology and whether legislation should be implemented to prevent this from happening again.
US lawmakers have called for new legislation to criminalize the creation of deepfake images, but it is a controversial measure: government officials often lack the technical knowledge to regulate the technology effectively without hindering its beneficial development.
Tech experts recently created a tool to detect deepfakes, but researchers found it had a racial bias. The datasets used to train the detection model underrepresented people of color, which meant the tool was more likely to misclassify deepfakes as legitimate when used on the face of a person of color.
The dark side of AI misuse is not limited to works of fiction: as the technology advances, its potential to become a tool of exploitation and objectification, particularly targeting women and people of color, also increases.
The creation of this type of explicit AI-generated content raises questions not only about the limits of the technology, but also about the ethical compass guiding its development. There is an urgent need for prevention tools and ethical boundaries in the AI landscape.
From deepfakes to assaults in the Metaverse, the misuse of technology as a whole is becoming a worrying trend that demands collective action from policymakers, the tech industry, and society at large. However, the question is: how can we regulate technology and AI considering the potential involvement of government agencies?
Although regulation is necessary to prevent misuse of the technology, many AI developers agree that government intervention must be approached with caution. Government agencies often lack the expertise needed to keep up with the rapid evolution of AI.
Finding the right balance between protecting against abuse and promoting innovation requires a nuanced approach to legislation. We don’t want to risk stifling the innovation that drives technological advancement, but leaving AI unregulated risks allowing its worst uses to go unchecked.
Even if legislation is implemented, the technology is evolving at a speed that could outpace lawmakers’ ability to respond. There must be ongoing, informed dialogue between technology experts and legal experts to ensure that regulation remains relevant and effective.