There is nothing new about the concept and creation of ‘artificial intelligence art’ or ‘generative art’; however, discussion of its legal, ethical and societal implications (both intended and unintended) hit the headlines last week.
Boris Eldagsen refused his Sony World Photography Award 2023 prize in the creative open category on the basis that his entry was the product of artificial intelligence. Mr Eldagsen himself has sparked the latest debate by claiming that “AI is not photography” and that the rationale for entering the Awards with the work in question was “… to find out if the competitions are prepared for AI images to enter. They are not”.
The reaction of the World Photography Organisation (running the Sony Awards) has been to acknowledge the need for an element of human involvement, which is the crux of the debate: “While elements of AI practices are relevant in artistic contexts of image-making, the Awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in this medium”.
Going back to basics, ‘generative art’ or ‘artificial intelligence art’ refers to art that has been created, in whole or in part, with the use of an autonomous non-human system, such as DALL-E 2, which is capable of independently determining artwork features that would otherwise require a decision made directly by the artist. It should not be confused with digital art: whilst AI art and digital art both of course employ technology, the differentiating element of the former is that it can autonomously produce art absent direct input from a human artist.
The conventional (and long assumed) approach has been to recognise the importance of the human hand in an artwork. The question, then, is to what extent the human creator or inputter is the ‘artist’ as opposed to the generative system, or whether the system is merely giving expression to the human creator or inputter’s artistic idea. Flowing from that question is what this might mean for the ownership and value of such works. The debate looks set to continue in this particular context of imagery creation and reproduction, coinciding with the increasing availability and use of consumer-grade AI image generation programmes and the natural inclination of artists to continue to create.
The debate is being mirrored in other, more text-based industries through the widely publicised use of, amongst others, ChatGPT. These systems ultimately rely on being trained, which leads to examination of what training, supervision and testing is in place, or should be, to ensure output quality, and ultimately where, and with whom, liability for that automated output sits.
Watch this space!
How we can help
Rosenblatt has a wealth of dispute resolution experience in the arts and cultural assets sector. For enquiries, please contact Dispute Resolution Legal Director, Elizabeth Weeks at elizabeth.weeks@rosenblatt.co.uk.
Disclaimer: We at Rosenblatt (and our parent company RBG Holdings plc) support and encourage free/independent thinking in relation to issues which are sometimes considered to be controversial subject matters. However, the views and opinions of the authors do not necessarily reflect the opinions, views, practices and policies of either Rosenblatt or RBG Holdings plc.