Legal Cheek: AI and the rise of ‘music laundering’

LPC student Frederick Gummer analyses the legal implications of artificial intelligence on the music industry

In April 2023, a track claiming to feature Drake and The Weeknd, titled Heart on My Sleeve, spread rapidly across TikTok and Spotify. In fact, it was not a collaboration between the two artists but an AI-generated song by a TikTok user who had trained an AI model on their musical styles. The incident, fuelled by the rapid dissemination that platforms like TikTok and Spotify enable, highlights an emerging challenge for copyright law: the practice known as ‘music laundering’.

Music laundering is the practice of presenting AI-generated songs as authentic collaborations between human artists, without proper disclosure. As AI increasingly infiltrates creative processes, the UK music industry faces new complexities in protecting artists’ rights without stifling innovation.

Copyright infringement

In the Heart on My Sleeve saga, Universal Music Group successfully requested the song’s removal from platforms, reportedly relying on the unauthorised inclusion of producer Metro Boomin’s tag in the track, which provided a definitive basis for its takedown. However, the episode underscores the complexities and uncertainties of copyright law as it applies to AI-generated content. Specifically, it raises pressing questions: without a straightforward, copyright-protected element like a producer’s tag, what recourse will artists in the UK have against such imitation tracks, and how might existing copyright protections adapt to address these challenges?

Copyright infringement hinges, in US law, on the creation of works that are “substantially similar” to the original and, in UK law, on copying the “whole or substantial part” of a copyrighted work. In the context of AI, this test becomes particularly complex. AI tools, designed to emulate the general sound and style of existing music without directly copying melodies or lyrics, walk a fine line to avoid infringement claims. To succeed, artists must demonstrate infringement by reference either to the AI’s input or to its output. The input question asks whether training AI on copyrighted music without explicit consent infringes copyright or falls within fair dealing exceptions (although the application of fair dealing in this context remains uncertain). The output question asks whether AI-created works, being potentially derivative, infringe the original copyright holders’ exclusive right to create works based on their prior works.

The UK’s legislative stance

The UK’s current legislative stance on AI and copyright prohibits the use of copyrighted material for AI training, a position that has seen notable shifts and challenges. The government initially considered an exception permitting AI training on copyrighted works but retracted it in the face of strong opposition, highlighting the tension between innovation and copyright protection. This indecision reflects broader disputes, including failed attempts to establish a fair licensing framework and legal battles such as Getty Images’ claim against Stability AI. Given the swirling currents of regulatory change and the prevailing lack of clarity, coupled with the anticipated difficulty of compelling tech companies operating generative AI models to comply with any forthcoming transparency regulations, more AI-generated copycat tracks are all but certain.
