The Regulatory Gap in Deepfake Technology

19/11/2024

On 28 October 2024 Hugh Nelson was sentenced to 18 years' imprisonment after pleading guilty to a total of 16 charges relating to child sexual abuse offences. A number of those offences involved Nelson using artificial intelligence (AI) to create child abuse images from photographs of real children.

The recent advancement of AI and deepfake technology has ignited a critical conversation about its ethical use, particularly in the realm of sexually explicit content. As the first prosecution of its kind in the UK, Nelson's case is a landmark in establishing that computer-generated images will still be treated as indecent images. However, the case has also underscored the urgent need for a regulatory and legislative framework that addresses the misuse of this technology and criminalises the creation of sexually explicit images in the first place.

Understanding deepfakes

Deepfakes are synthetic media (often videos or images) that are generated or manipulated using AI. They leverage AI to swap faces or manipulate audio, creating realistic yet entirely fabricated content.

The rise of AI has brought many advancements across industries, but it has also introduced new challenges, particularly when it comes to the privacy and consent of individuals. One of the most troubling issues to emerge from the misuse of AI is the creation of deepfake pornography, where AI is used to superimpose somebody's face onto sexually explicit material without their consent. 

Deepfake pornography primarily targets women and does not discriminate between celebrities, public figures and private individuals. A survey quoted in a Government press release on 16 April 2024 revealed that 91% of participants agreed that deepfake technology poses a threat to the safety of women, and, as deepfake generation tools become more accessible, that threat is likely only to escalate.

Legal frameworks

Historically, laws addressing deepfake pornography have been sparse and have varied from country to country, often leaving victims with limited recourse. In many jurisdictions, existing laws on defamation, harassment and invasion of privacy have been used to address deepfakes, but these laws have often fallen short because they were not designed to handle the complexities and nuances of AI-generated content.

The UK has sought to address these shortcomings with the enactment of the Online Safety Act 2023. Under section 188 of the Act (which inserted section 66B into the Sexual Offences Act 2003) it is a criminal offence to share, or threaten to share, an intimate photograph or film without consent. This includes any photograph or film which has been made or altered by computer graphics. A person found guilty of this offence can face a custodial sentence of up to two years and/or an unlimited fine.

Although this is a step in the right direction, current UK legislation fails to adequately address the actual creation of the content. Nor does it go so far as to hold technology companies and social media platforms to account for facilitating its creation and distribution, especially given that, once harmful content is published online, it often spreads rapidly before it can be removed or otherwise addressed.

To plug the gap, on 16 April 2024 the previous Conservative government proposed an amendment to the Criminal Justice Bill criminalising the creation or design of intimate images of another person, using computer graphics or any other digital technology, for the purpose of causing that person alarm, distress or humiliation.

The amendment was not passed by Parliament before the Labour government came into power, meaning its progress is currently on hold. However, I query how effective this new offence would have been in practice, and whether the offence ought to be based on the absence of consent rather than the intention of the offender. For example, to what extent would the subject of an explicit deepfake be able to prove the purpose for which it was created?

Regulatory reform

Despite growing awareness of the dangers posed by deepfakes, regulatory measures remain inadequate. Current law is lagging behind technological advancements, and the lack of protection for individuals is only exacerbated by the absence of industry-wide ethical standards for AI and digital content creation. 

As much as we need forward-looking laws that criminalise non-consensual deepfake pornography, we also need to hold platforms and technology companies to a higher standard of responsibility for preventing the circulation of such content online. Whilst we wait for regulatory bodies to catch up, it would be a significant positive step to see those platforms and companies lead by example and take proactive action to protect their users, which could include:

  • Social media platforms implementing AI detection tools that can identify and flag sexually explicit deepfakes before they are uploaded.

  • Platforms creating better pathways for quicker content removal (with liability eventually to be placed on platforms that fail to remove non-consensual explicit content swiftly upon notification).

  • Technology companies introducing "digital watermarks" or other traceability features within their software, making it easier to trace the origin of manipulated content.

The Labour Party committed in its 2024 manifesto to banning the creation of sexually explicit deepfakes and introducing "binding regulation" on the companies developing the most powerful AI models, so I look forward to seeing where this goes in the near future.
