The Durham Law Review is a student-run society commenting on contemporary legal and commercial issues. It publishes feature articles alongside regular commercial and legal updates.

The Dangers of AI and the Need for Effective Regulations

It is no secret that artificial intelligence (AI) is becoming more mainstream in our society, and while AI can be beneficial, its dangers cannot be ignored. 

Recently, pornographic AI-generated images of Taylor Swift went viral on X (formerly known as Twitter) and caused public outrage. These images, which had tens of millions of views, depicted Swift in sexualised and exaggerated poses at football games.[1] This is a classic example of how AI can be misused when in the wrong hands. Deepfakes – AI-made images and videos of real people, such as these images of Taylor Swift – have become increasingly common on the internet: a 2023 study found a 550% increase in the creation of such false images since 2019, intensified by the emergence of AI. Deepfakes are often abused to portray individuals in compromising situations that can damage their reputations. This “very real” harm caused to victims by the distribution of sexually explicit deepfakes was acknowledged by Dick Durbin, US Senator for Illinois.[2]

In response to the widespread reach of the deepfake images of Taylor Swift on the platform, Elon Musk blocked searches for Taylor Swift to curb the further spread of the images. Searches for the terms “Taylor Swift” and “Taylor AI” resulted in an error message on users’ screens, which meant that even legitimate content about the popstar was harder to view on the platform.[3] In a statement from X, the company highlighted their “zero-tolerance policy towards such content”, and reassured the press that their teams were “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them”. Joe Benarroch, the head of business operations for X, also said that this was a “temporary action” done cautiously to prioritise safety.[4]

But is this “temporary action” sufficient to protect individuals from this growing problem? A group of US senators did not think so, as this incident prompted them to introduce a bill targeting the spread of non-consensual, sexualised images produced by AI. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) bill would allow victims of nude or sexually explicit “digital forgeries” to file a civil action against those who “produced or possessed the forgery with intent to distribute it”, or against anyone who received the material knowing it was not made with consent.[5] Josh Hawley, one of the senators behind the bill, emphasised that no one should ever be put in this position, and that innocent people have “a right to defend their reputations and hold perpetrators accountable in court”.[6]

In the US, only ten states currently have criminal laws against this type of manipulated media. If the DEFIANCE bill passes, it would become the first federal law protecting victims of deepfakes across the US.[7] In the UK, the Online Safety Act 2023 has been in effect since 26 October 2023, and it addresses a range of criminal offences, including sharing intimate images and intimate deepfakes without consent. Additionally, the 2023 Act places responsibility on tech companies to prevent and rapidly remove illegal content, such as terrorism and revenge pornography. Children are also given greater protection than adults, with companies required to prevent children from seeing harmful material such as bullying, pornography, and content promoting self-harm and eating disorders. Failure to comply with these rules would result in companies facing significant fines of up to £18 million or 10% of their global revenue, whichever is greater.[8] Social media platforms understood the severity of the Act, evident from the swift actions they took even before it came into force – TikTok implemented stronger age verification measures, while Snapchat began removing the accounts of many underage users.

Images and videos of famous landmarks such as the Eiffel Tower and Big Ben burning have also been circulating on the internet in recent weeks. TikTok influencers have reacted to these videos, expressing their horror at learning that these landmarks had “burned down”. In a world where many younger people keep up with the news through social media platforms like TikTok and Instagram, it is concerning just how easily media can be manipulated to spread misinformation and cause widespread panic. Governments must therefore ensure the implementation of legislation regulating the use of AI and take steps to criminalise perpetrators who abuse the technology to harm others.


[1] https://www.theguardian.com/technology/2024/jan/30/taylor-swift-ai-deepfake-nonconsensual-sexual-images-bill

[2] Ibid.

[3] https://www.ft.com/content/0636eb58-eaa3-4d2c-ba22-e1a24c85da3f

[4] https://www.bbc.co.uk/news/world-us-canada-68123671

[5] https://www.theguardian.com/technology/2024/jan/30/taylor-swift-ai-deepfake-nonconsensual-sexual-images-bill

[6] Ibid.

[7] https://time.com/6590711/deepfake-protection-federal-bill/

[8] https://www.gov.uk/government/news/uk-children-and-adults-to-be-safer-online-as-world-leading-bill-becomes-law

India's AI Drive – “The Perfect Storm for India”

International Trade Setbacks – the India-Middle East-Europe Economic Corridor
