Bletchley Park was the global centre of Allied codebreaking during the Second World War and, building on that technological provenance, will set the scene for the UK Government’s AI Safety Summit (“Bletchley”) on 1 and 2 November 2023. In this article (written by humans, not an AI programme), we explore legal and regulatory AI developments since our March 2023 “AI Just Got Real” analysis and consider what is next for AI at Bletchley and beyond.
No rush to regulate
In a speech ahead of Bletchley delivered at the Royal Society on 26 October, UK Prime Minister Rishi Sunak announced the publication of a Government discussion paper on the need for further research into AI risks, whilst claiming that, as “we cannot write laws for something we don’t yet fully understand”, the UK was in “no rush to regulate”. Instead, in a bid to promote the UK as a future AI powerhouse, the Prime Minister launched a £100m taskforce, instructed to understand and evaluate the safety of AI models and to develop AI opportunities where possible.
The UK Government’s slow and steady approach to AI regulation currently seems to be mirrored at the highest levels of the judiciary. The UK’s Supreme Court has still not, after eight months, handed down its eagerly anticipated judgment in the DABUS case, pursued by Dr Stephen Thaler, as explored in our previous article on the subject. DABUS raised the question of whether an AI platform was capable of transferring a patent right to someone else, and so satisfying the ‘natural person’ requirement of section 13 of the Patents Act 1977. The Claimant may, though, be waiting nervously, should this judgment result in the same outcome as the Claimant’s attempts in other jurisdictions. Whilst the five-member bench in the Australian case of Commissioner of Patents v Thaler [2022] FCAFC 62 did not consider that an inventor necessarily had to be a human, it held that the grant of a patent for an invention “must arise from the mind of a natural person or persons”, which, it concluded, AI was not.
Indeed, of the cases brought in the European Patent Office, the US Patent and Trademark Office, the German Patent Office and the South African Companies and Intellectual Property Commission, only the last has so far allowed the listing of DABUS as an inventor – and even then, only on a technicality that could still be open to challenge by a third party. We hope the Supreme Court’s answer will soon set out the UK’s position.
Global developments in AI
Away from the UK, the EU is putting pen to paper on an AI Act built upon a June 2023 negotiating position amongst its members, possibly including generative AI and foundation models within its ambit and targeting implementation before 2024. It is in the US, though, likely due to its abundance of tech company headquarters, that some of the most interesting AI legal and regulatory developments are taking place. On trademark disputes, since we covered the Getty case in our March article, Stability AI has moved to dismiss the complaint against it, with its motion still pending. Elsewhere, there has been a rise in so-called ‘right of publicity’ cases, in which users digitally ‘swap’ faces on photographs of celebrities and public figures, raising data privacy issues in the process. The defences put forward in recent US cases are also interesting to observe from this side of the Pond.
In July 2023’s Dinerstein v Google data privacy case, an Illinois court dismissed the Claimant’s claim alleging breach of contractual privacy arrangements in connection with an AI model, for lack of standing and failure to establish the damages associated with the claim. The deployment of ‘fair use’ as an argument (a defence to claims of infringement when copyrighted material is used in a ‘transformative’ way) is also gaining popularity amongst those defending AI cases. Further, in hiQ Labs, Inc. v LinkedIn, the Ninth Circuit Court of Appeals rejected claims that the practice of “scraping” publicly available data constitutes an invasion of privacy, distinguishing publicly available data from data marked as ‘private’. This could prove useful for defendants facing AI-related legal claims going forward.
Beyond Bletchley: What comes next?
Whilst relatively balanced, the UK Government’s approach to AI perhaps elevates innovation above intervention. Some, though, are so worried by AI’s potentially limitless capabilities that calls for a complete moratorium on the further development of artificial general intelligence are now commonplace – even from some of its own inventors. Indeed, star Bletchley attendee Elon Musk was just one of more than a thousand expert signatories to a March 2023 open letter calling for just such a pause. At the least, the ‘honeymoon period’ ChatGPT enjoyed 6 to 12 months ago now seems to be over, with defamation and privacy lawsuits piling up around the globe, various institutional bans and boycotts, and widening and deepening government investigations and restrictions.
Bletchley could be the moment when the international community comes together, seeks to make sense of this technological tidal wave and considers, albeit with inevitable national and regional nuance, how to both manage and make the best of AI. Time will tell.
Rosenblatt has a wealth of dispute resolution experience and is well-equipped to support and advise companies and individuals. For enquiries, please contact Dispute Resolution Legal Director Elizabeth Weeks (elizabeth.weeks@rosenblatt.co.uk) or Dispute Resolution Solicitor Jacques Domican-Bird (Jacques.Domican-Bird@rosenblatt.co.uk).