In April this year, the US Court of Appeals for the Ninth Circuit ruled in the so-called “monkey selfie copyright dispute” that animals cannot own copyright to their works. The ruling settled ongoing controversy over whether selfies taken by an Indonesian crested macaque in 2011 are the intellectual property of:
a) the owner of the equipment (as he claimed);
b) the monkey (as PETA claimed); or
c) nobody at all (as Wikimedia claimed).
It was an odd case that set a crucial precedent in US case law. But what does it have to do with artificial intelligence and copyright law in the UK?
Well, potentially more than it seems.
An increasing number of artists and content creators are coming to rely on AI—non-human “authors” as some would have it—to edit images, music, text, and other pieces of work. For the most part these algorithms are merely tools; nobody seriously questions the sole authorship of photographers who use automated features in Photoshop to edit their shots. However, as these tools become more autonomous—integrating sophisticated systems of machine learning and neural networks that allow them to create their own works—we find ourselves faced with a question:
Can artificial intelligence own the copyright to its work?
According to Section 1(1)(a) of the Copyright, Designs and Patents Act 1988 (CDPA), for copyright to subsist in a work, it has to be “original.” And for a work to be “original,” it has to be the “author’s own intellectual creation”—a test the CDPA itself spells out for databases in Section 3A, and one that EU case law has applied more broadly. Many have interpreted this to mean there has to be a human involved; AI isn’t capable of intentional “intellectual creation,” they say, because AI isn’t conscious.
But is this approach forward-thinking enough? What if you’ve invested millions into the development of artificial neural networks hoping to capitalise on the work they create? Can we not frame authorship a little differently to position the developers as the “authors” of the AI that “authored” the work? In other words:
If AI can’t own the copyright, can we?
Unlike the law of many jurisdictions, UK copyright law is pretty clear on this point. Section 9(3) of the CDPA states that: “In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” The CDPA is also clear about what a “computer-generated” work actually is, i.e. one “generated by computer in circumstances such that there is no human author of the work” (Section 178).
The CDPA is less clear, however, about who specifically is meant by “the person by whom the arrangements necessary for the creation of the work are undertaken.” Is it the programmer? The user? Perhaps a case could even be made for an investor.
The most intuitive interpretation—at least in cases where AI creates with little to no human involvement—would be to grant the copyright to the programmer. But what if the programmer is no longer around (or their code is open-source, as with Google’s DeepDream)? What if the AI creates its own programs? What if it makes its own decisions?
The answer is actually pretty simple. Where no human agency exists, there is no “author” and hence no copyright subsists in the work—at least until such time as we’re forced to ask whether AI should be accorded the same rights as humans. After all, the purpose of copyright law should be to promote, not to stifle, the publication of creative works. And we do this by assuring creators a fair return on their time. AI “creators” have no such concerns.
For now, we’re better off disentangling our concept of AI from sci-fi notions of personhood and framing it more along the lines of any other automated process. Having said that, we should also be thinking about and pre-empting some of the obvious philosophical challenges to this distinction—like whether all creativity, even in human beings, is inherently algorithmic.