Artificial Intelligence (“AI”) just got real: the extraordinary potential of its capabilities, the serious threats it poses and the wide-ranging societal implications of its existence are now hot topics of debate and legal challenge. Perhaps initially seen in isolation as a ‘tech issue’, AI now impacts us all, as individuals and organisations increasingly use it in their everyday lives and in the ordinary course of business. Here, we explore the current and evolving regulatory landscape, review issues arising from the case law to date, and consider how these developments may shape potential future disputes in this area.
Government Regulation of AI technologies
The UK Government was expected to publish a white paper on the risks, harms and regulatory solutions involving AI technologies in early 2022. An interim policy paper was published on 18 July 2022 (the “Paper”), but with turbulence in Westminster, the white paper was delayed and remains outstanding (although it is expected to be published in advance of the next King’s Speech). For its part, the Paper steers clear of seeking to regulate specific AI technologies (and deliberately avoids providing a universal definition of AI), instead proposing to regulate the systems and circumstances in which AI is involved.
Other AI-related consultations have also taken place, notably involving the UK Intellectual Property Office (“UKIPO”), in which the Government has proposed to expand the text and data mining exception in the Copyright, Designs and Patents Act 1988 to cover all purposes. The implication is that anyone, anywhere, could train AI systems to mine protected works without paying, or seeking permission from, the owner of those works.
AI in the courts
On that point, a dispute about AI mining of protected works is currently proceeding through the High Court, brought by Getty Images (“Getty”) against Stability AI (“Stability”). The claim was issued in January 2023, followed by a parallel suit filed in the US District Court for the District of Delaware in February 2023. Getty claims that Stability “unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty” without a licence, to benefit Stability’s “commercial interests, and to the detriment of the content creators”. Stability has suggested that Getty’s allegations “represent a misunderstanding of how generative AI technology works and the law surrounding copyright”. The case centres on an AI model called Stable Diffusion, which generates artwork having been trained on billions of web-based images. The fundamental legal (and even philosophical) question in the proceedings is whether the use of those images should, or should not, fall within the exceptions to copyright law. The starting point is that even where the eventual image output is significantly different from the original, the original is still human-created and therefore human-owned.
The UKIPO has also featured in a series of recent AI cases, Thaler v Comptroller-General of Patents, Designs and Trade Marks, the latest of which was heard before the Supreme Court on 2 March 2023. In 2018, Dr Stephen Thaler submitted patent applications to the UKIPO for inventions said to have been created by his AI system, the ‘Device for the Autonomous Bootstrapping of Unified Sentience’ (“DABUS”). The applications were submitted in Thaler’s own name (Thaler being the owner of DABUS), but DABUS was stated as being the inventor. The UKIPO refused the applications on the basis that DABUS did not satisfy the ‘natural person’ requirement of section 13 of the Patents Act 1977. Both the High Court and the Court of Appeal agreed with the UKIPO that DABUS lacked the required personhood, and was also incapable of transferring the patent right to someone else (as a “natural person” could do). Interestingly, though, Birss LJ’s dissenting view in the Court of Appeal judgment was that if an applicant genuinely believed that the named inventor was the true inventor, this could satisfy the requirement. Here, the Supreme Court has a genuine opportunity to steer the debate and make the law. The outcome is awaited with interest.
These cases follow on the back of a now ancient, in AI terms, series of 2021 cases in the Amsterdam District Court, brought by the Worker Info Exchange (“WIE”) and the App Drivers and Couriers Union (“ADCU”) against Uber and Ola Cabs, concerning the data protection rights of drivers, including the transparency of the algorithmic management practices used by both companies. There were four cases in total: two brought by drivers seeking transparency over the data collected on them; one on ‘robo-firings’, where drivers were deactivated from the relevant app by an automated decision; and another on the use of facial recognition software in platform decision-making, the last of which the drivers won by default. In the ‘robo-firing’ case, drivers claimed to have been automatically deactivated by the system, receiving standard messages with vague reasoning for their deactivation. However, the Court agreed with Uber that the deactivation decisions did involve meaningful human input (such as human review of automated data), and so did not satisfy the “automated decision-making” criteria of Article 22 of the GDPR (which affords important legal protections to individuals subject to automated individual decision-making and profiling). The two other cases involved “fraud detection” systems and also centred on the automated decision-making criteria under the GDPR. Uber claimed that its system did involve meaningful human interference (comprising human review of the alleged fraudulent activities flagged by the AI, as opposed to the continued sole use of AI technology). The Court accepted Uber’s argument that there was human interference, and that automated decision-making could not be established.
However, in the case of one Ola driver, the Court decided that deductions from the driver’s earnings using an algorithm amounted to an automated decision, justifying the increased protections of Article 22 of the GDPR. The “robo-firing” and “fraud detection” cases were appealed by the WIE and ADCU to the Amsterdam Court of Appeal on 18 May 2022, and the outcome is keenly awaited.
The future is here?
The newest kid on the AI block, perhaps trumpeting the phenomenon’s true arrival, is ChatGPT. On the one hand, OpenAI’s language processing tool has immense potential for positive impacts on global knowledge sharing and business growth. It has already been courted by many in the elite, with Rishi Sunak and Bill Gates recently interviewing each other based solely on AI-generated questions (various video clips are available online). However, there may be trouble ahead for ChatGPT in relation to: ownership of material and infringement, data protection, liability for damages and, as well documented recently, accuracy and bias. On the last point, Oxford and Cambridge Universities have now banned the use of ChatGPT to produce academic work at their institutions. It remains a fascinating technology to watch and will no doubt generate disputes before too long.
Ultimately, AI is at some point, and in some way, created, developed, managed and used by humans. If things go awry during any of these stages and disputes arise, a wronged party must have legal routes available (contractual and/or tortious) to seek redress. That said, identifying who is at fault, and thus to whom legal obligations such as a duty of care attach, may not be that simple. The regulations and case law of the coming months and years may clarify the situation or complicate it further. What is sure is that as these exciting technologies continue to develop, politicians and judges, as well as legal advisers, should be aware of the opportunities available and the strategies to navigate the associated risks. Companies and individuals will want to have confidence that their professional advisers are equipped to advise when things go wrong. For now, we await the publication of the white paper and the outcome of the developing AI case law with great interest.
Rosenblatt has a wealth of dispute resolution experience and is well-equipped to support and advise companies and individuals. For enquiries, please contact Dispute Resolution Legal Director Elizabeth Weeks (elizabeth.weeks@rosenblatt.co.uk) or Dispute Resolution Solicitor Jacques Domican-Bird (Jacques.Domican-Bird@rosenblatt.co.uk).