MBW Views is a series of exclusive opinion pieces written by leading figures in the music industry… with something to say.
In the following editorial, Jonathan Coote (pictured inset), an associate and music lawyer at Bray & Krais Solicitors, examines the potential impact of Getty’s lawsuit against Stability AI on the music industry, and argues that there is an “urgent need to clarify the law relating to AI training in the UK”.
From allegations that AI music platform Suno may have been trained on copyrighted recordings, to artists calling on technology platforms to stop infringing and devaluing their rights, the industry remains deeply concerned that copyrighted music is being used, without licence or compensation, to train AI tools that could reproduce (and ultimately replace) human-created music.
But what is the legal framework to prevent such training?
In the UK, current law appears to prohibit training AI tools on copyrighted works for commercial purposes.
The government initially declined to adopt the EU’s more lenient “opt-out” approach during the Brexit transition period, but then sought to radically transform the law in 2022 by attempting to introduce an extremely permissive text and data mining exception.
This proposal was quickly abandoned after rights holders described it as “music laundering”. The government’s next move was to convene a working group to develop a system through which AI companies could obtain a “fair licence”.
Unsurprisingly, given the diametrically opposed interests of technology companies and rights holders, these negotiations failed.
Unlike in the US, where “fair use” defences can take into account the wider impact on the public, copyright defences in the UK are narrow and technical.
This means that the current law (notably drafted before large language models and generative AI) may prove inadequate to address the economic, creative and social consequences of AI. Without further legislation, the courts could set unhelpful precedents.
Although the restriction on training under UK law is clear on its face, the outcomes of actual cases are not, as they can be highly fact-specific. One major case could determine the direction of UK law on these issues: Getty Images has sued Stability AI in both the US and England for training its models on Getty’s image libraries. In the English courts, Stability AI failed to have the case struck out, but its defence highlights some of the many difficulties in bringing a successful infringement action.
To begin with, arguments that an individual output infringes are inherently difficult to prove, because the output must reproduce a “substantial part” of a particular work, and AI tools are generally designed with safeguards to prevent this. Additionally, Getty must demonstrate that Stability AI committed an act of infringement in the UK. In its defence, Stability AI denies this, saying the relevant developers were located overseas and any alleged copying would have taken place on foreign cloud servers. Getty also argues that the model itself, Stable Diffusion, is an imported “infringing copy”, a concept traditionally applied to physical goods such as pirated CDs. The effectiveness of these arguments remains to be seen.
There are also a number of highly fact-specific elements of the Getty case that many in the music industry would not be able to rely on. First, Getty was able to show distorted watermarks reproduced in the output images, demonstrating that its images had clearly been used in training (a difficult task given the “black box” nature of most AI tools) and thereby also grounding a claim for trade mark infringement. Second, because Getty has built a library of images, it can also claim infringement of its “database right”, a claim that will not be available to most rights holders. Given its technical and legal complexity, if the case ultimately reaches judgment, it is unlikely to create a universal and definitive precedent.
With AI-generated music already being distributed across streaming services, music companies and musicians are understandably concerned about its proliferation. Yet while the music industry loudly denounces AI companies for “music laundering”, with one notable exception it has hesitated to initiate proceedings.
Additionally, the music industry is perhaps more concerned than other sectors about the impact of deepfakes, given recent astonishing developments in voice cloning technology. While many instinctively assume this must be illegal, there are technically no specific image or likeness rights in the UK (despite such rights being widely licensed in practice) that would clearly restrict the outputs of such tools. The most effective way to bring an action might therefore be to argue that the training process itself was infringing. The question of training is thus absolutely vital for the industry.
The regulatory landscape is evolving rapidly. The United States is considering legislation to create a new “digital replica” right to combat deepfakes and, across the Channel, the EU AI Act is set to increase the transparency required of AI companies operating in the EU regarding their training data, and could also extend the EU’s “opt-out” restrictions beyond its territory.
A number of positive initiatives are on the horizon in the UK: an all-party parliamentary group is studying the impact of AI on the music industry (you can find the author’s thoughts on this subject here), and a bill in the House of Lords would bring real regulation before Parliament, including crucial provisions on transparency.
However, given the potential inadequacies of the current law, the UK Government must urgently capitalize on this work to provide much-needed clarity.
Music Business Worldwide