AI and copyright: protecting creativity
Intellectual property & technology



In recent years, the term “artificial intelligence” (or “AI”) has most commonly been used in public discourse to refer to a wide range of technologies with everyday uses: the smooth-talking virtual assistants on our phones, the smart home systems in our houses, and even the self-driving capabilities that are becoming increasingly commonplace in top-of-the-range vehicles. These innovations have introduced a pleasing degree of convenience and comfort to many of our lives – a far cry, then, from the despotic HAL 9000s and Terminators envisaged by the science fiction writers of the twentieth century.

Discussion of AI has more recently come to centre on the somewhat more compelling – and, if some are to be believed, frightening – capabilities of programmes like ChatGPT, a chatbot developed by the research laboratory OpenAI, which Elon Musk co-founded. ChatGPT has captured the attention of interested observers and industry experts alike thanks to an impressive range of skills encompassing the simulation of human conversation, the writing of code, and much more. It is just one of many freely available AI applications found online. Given the immense potential of such technologies and their rapid rate of development, we can expect even greater investment from businesses around the globe in the coming years, alongside increasingly impressive iterations of AI.

But as with any ground-breaking technology in its nascent phase, AI threatens to test the limits of current legal regimes, not least within the domain of copyright. AI’s reliance on and extensive use of (often copyright-protected) data is fundamental to its machine learning capabilities, and as such represents a sizable obstacle for developers and policymakers to overcome in their quest to balance the potential benefits of AI against the harm its unchecked use could cause content creators.

The problem

AI developers aspire to the creation of systems capable of thinking and acting rationally, as opposed to those that simply think and act like humans. As far-fetched as this prospect may initially seem, recent developments in the field suggest that the realisation of such an ambitious aim could well be on the horizon. ChatGPT has proven itself capable of performing tasks previously beyond AI’s reach, including the drafting of A-grade essay answers to university-level assignments and the writing of basic code, both in a matter of seconds.

Moreover, the simplicity of its interface is such that almost anyone can use it – which could be a problem. Education providers, for example, have already voiced concerns about the impact that the widespread availability of such technologies could have on the integrity of education delivery in the future. A degree of disquiet has also emerged amongst the coding community due to the widely discussed prospect of future iterations of AI rendering human developers effectively redundant.

Copyright experts have joined these groups in expressing concern about what the future may hold if AI continues to improve at its current rate. This unease is for the most part attributable to the methods deployed by AI programmes rather than their outputs per se. Take, for example, the process by which the popular text-to-image system DALL-E 2 (another OpenAI creation) generates images. Put somewhat simplistically, a user inputs a prompt, which may pertain to something as specific as the style of a particular painter. The AI then identifies a set of common characteristics in the relationships between different colours of pixels across the vast range of relevant images “scraped” from the internet, together with the text that accompanies them.
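For readers curious about the mechanics, the statistical idea at the heart of that process can be sketched in a deliberately toy form: pair scraped images with their captions, learn which visual features tend to accompany which words, then synthesise an output for a new prompt. The code below is an illustration only – the data, names and “image” representation (a single average colour per image) are invented for this example, and real systems like DALL-E 2 use vastly more sophisticated models.

```python
# Toy illustration of learning word-to-image associations from
# scraped caption/image pairs. Each "image" is reduced to a single
# average RGB colour; all data here is invented for illustration.
from collections import defaultdict

# Hypothetical "scraped" training data: (caption, average RGB colour)
scraped_pairs = [
    ("stormy turner seascape", (70, 80, 90)),
    ("golden turner sunset", (230, 180, 90)),
    ("turner sunset over water", (220, 170, 100)),
]

def train(pairs):
    """Associate each caption word with the mean colour of the images it labels."""
    totals = defaultdict(lambda: [0, 0, 0, 0])  # running r, g, b sums and a count
    for caption, (r, g, b) in pairs:
        for word in caption.split():
            t = totals[word]
            t[0] += r; t[1] += g; t[2] += b; t[3] += 1
    return {w: (t[0] // t[3], t[1] // t[3], t[2] // t[3]) for w, t in totals.items()}

def generate(model, prompt):
    """'Generate' an image (a single colour) by averaging the learned
    colours of those prompt words the model has seen before."""
    known = [model[w] for w in prompt.split() if w in model]
    if not known:
        return None  # nothing learned about any word in the prompt
    n = len(known)
    return tuple(sum(c[i] for c in known) // n for i in range(3))

model = train(scraped_pairs)
print(generate(model, "golden sunset"))  # a warm colour blended from the training data
```

The crucial legal point survives even this simplification: the output is derived entirely from statistical patterns extracted from other people’s works, which is precisely why the “scraping” stage attracts copyright scrutiny.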

The AI-driven writing assistant Jasper has provided a helpful summary of how this technique is used by text-based applications to produce responses to user-inputted prompts.

The result of this process for DALL-E 2 is an often impressively similar (albeit imperfect) imitation of both the subject and the painting style requested.

(The result of DALL-E 2 input: “A JMW Turner-style painting of the University of Glasgow”)

However, as impressive as the results produced by text-to-image programmes like DALL-E 2 may be, the “scraping” process through which they are achieved is legally problematic.

The first, and perhaps most obvious, copyright issue presenting itself here is the question of authorship. Section 9(3) of the Copyright, Designs and Patents Act (CDPA) 1988 provides that the author of a computer-generated work shall be the person who made the necessary arrangements for the creation of the work. So, who is the author of an AI-generated work? Could it be the person entering the input into the AI, the developers responsible for the creation of the AI, or the AI itself? Advocates for the authorship of the user may argue that the input alone constitutes the “necessary arrangements” required by section 9(3), though this is arguably negated by the fact that the user makes no direct contribution to the results. In addition, owing to the image-creation process these systems use, the same input entered by multiple users can generate an endless range of different but similar results.

The authorship of the developer is similarly questionable. Although the system they have created provides the technological foundations for the process of image generation, it relies upon the works of others to then create the work requested by the user. As tempting as it may therefore be to bestow the title of author on the AI that trawls through copious amounts of data in putting together an output for the user, this is simply not permitted under UK copyright law. Consequently, we are left with no definitive answer to the question of authorship.

The second issue arises from AI’s use of countless copyright-protected works in the creation of its output. While not recreating them exactly, the system does use them to create a derivative work, generally without permission from the original creators.

It is also necessary for the AI to create a copy of the protected image for this process to be possible. Under section 28A of the CDPA, copies that are merely “transient” or “incidental” – as the copies used by AI might be argued to be – are permissible so long as the use is:

“an integral and essential part of a technological process and the sole purpose of which is to enable:

(a) a transmission of the work in a network between third parties by an intermediary; or

(b) a lawful use of the work;

and which has no independent economic significance.”

However, as the results of the AI-driven process are likely to have economic significance, and the use, where unauthorised, is unlikely to be purely related to transmission, it is very difficult to see how this argument would work in practice.

The doctrine of fair dealing could also provide a possible defence for these activities, but fair dealing exceptions are only available for a limited selection of uses, such as private study and the reporting of current events. As such, providers of AI services like DALL-E 2 and ChatGPT would have to argue that their use of the works falls into one of the prescribed categories, the most suitable of which would likely be the exception for research under section 29 of the CDPA.

New developments on the horizon?

The ambiguity surrounding the interplay between copyright law and the use of AI discussed above now looks set to be clarified to some extent following Getty Images’ statement of 17 January 2023, in which the stock image supplier announced its intention to commence legal proceedings against Stability AI, the developer of Stable Diffusion (a text-to-image programme using the same image-generation process as DALL-E 2). In its statement, Getty Images claimed that millions of copyright-protected images had been “unlawfully copied and processed” by Stability AI, and that both “viable licensing options and long-standing legal protections” were ignored in pursuit of the developer’s commercial interests.

While it is uncertain whether Getty Images’ legal arguments will relate to the machine learning process inherent to the likes of ChatGPT and DALL-E 2 or to their results, this case will nonetheless bring some much-needed illumination to an as yet uncharted area of copyright law. Of particular interest will be the question of whether Stability AI will be able to rely upon the defences detailed above in its use of vast swathes of copyright-protected works. Further to this, the case may also expose a need to adopt a more liberal approach to commercial text and data mining in the UK, in line with the European Union’s approach introduced by the Copyright in the Digital Single Market Directive. Despite a 2022 IP policy paper announcing the planned introduction of such a copyright exception in the UK, there remains a considerable degree of opposition from a range of stakeholders, which will no doubt prolong deliberation in this area.

Authorised uses

Of course, none of this is to say that significant benefits cannot be drawn from the use of AI where the underlying materials are either authorised for use or made available on an open-source basis. Take, for example, the work of Ray Purdy at Oxford University: his team at Air & Space Evidence Ltd has established the world’s first space detective agency, built on AI technologies. They have devised detection models that use Earth observation data (compiled by third parties) to gather information and provide evidence for environmental crime investigations. Such uses demonstrate the positive impact of lawful data reuse, as well as of an open and collaborative approach to data exchange.


This is not the first time that IP policymakers have had to come to grips with unprecedented technological developments that push the boundaries of our traditional understanding of fundamental legal concepts. As with the advent of software and the internet, the legal challenges found on this latest frontier of computer science reveal once again how difficult it is to foresee future technologies and to resolve the questions they raise. The proliferation of AI therefore goes hand in hand with the arduous task of creating a suitably robust copyright regime, one which safeguards the interests of stakeholders globally whilst also preventing a fragmented approach to the issue.

Thankfully it would appear that developers of AI applications are in agreement with this assessment, as shown by ChatGPT’s comments on the topic below:

“It is important to ensure that the regulation of AI and copyright strikes a balance between protecting the rights of creators and copyright holders, while also promoting innovation and development in the field of AI. Ultimately, the regulation of AI and copyright in the UK will likely evolve over time as the technology continues to advance and as society grapples with the implications of AI-generated works. It will be important for policymakers to stay informed about these developments and to continue to review and update the regulatory framework as needed to ensure that it remains effective and relevant.”

Of course, one may then naturally wish to explore an AI’s views on the legality of its own methods. To find out, we asked Jasper whether it thinks it might be infringing copyright by responding to our prompts. This is what it had to say for itself:

While it would be unfair to assume that Jasper speaks on behalf of all AI, applications like it (or rather their developers) appear steadfast in their position that their data-reliant methods do not infringe copyright, even if they may be operating on the borders of legality. As unsatisfactory as it may be, only time will tell whether AI as we know it will survive the inevitable reappraisal of intellectual property law for the modern world.

Irrespective of what we may think of its opinions on the copyright ramifications of AI, it would appear that Jasper certainly does get some things right.






Get in touch

Call us for free on 0330 912 0294 or complete our online form below for legal advice or to arrange a call back.
