Catch me if you can – the impact of ChatGPT and how UK AI Regulation might deal with generative AI

The UK government’s 2022 Policy Paper on AI Regulation made no specific reference to generative AI models such as ChatGPT, the chatbot that has been taking the world by storm.

Considering the speed at which such AI models are developing, however, and the interest they are attracting, the government may look to regulate them more explicitly. If so, it is unclear whether it will stick to the principles set out under the previous administration. If it does, it may find the “light touch” approach advocated in the Policy Paper increasingly at odds with the power and influence of generative AI models.

On 23 January 2023, Microsoft announced a reported $10 billion investment in OpenAI, the creator and operator of ChatGPT, the AI text generator based on OpenAI’s GPT-3.5 series of models that has been causing a global stir since its launch in November 2022. Although not the first generative AI, the depth of ChatGPT’s resources (it was trained on datasets comprising some 300 billion words) and the authenticity of its responses (refined through ‘reinforcement learning’ in which humans assess its output) place it significantly ahead of its predecessors and – if you believe the hype – make it a genuine challenger to knowledge-based professions.

Some commentators have considered how ChatGPT and other generative AI models might be treated under forthcoming AI legislation such as the EU’s AI Act. In July 2022, the UK set out its own proposed approach to AI regulation in its policy paper “Establishing a pro-innovation approach to regulating AI” (the ‘UK Policy Paper’). However, the UK Policy Paper made no specific mention of generative AI systems such as ChatGPT, and the UK government’s White Paper on AI, said to be “forthcoming” in the UK Policy Paper, has yet to be released, apparently delayed by changes in the political landscape.

We can, however, still assume that the UK government will look to take an active role in regulating AI. In his October 2021 Conservative Party Conference speech, Rishi Sunak, then Chancellor and now Prime Minister, described the UK as a “global leader” in artificial intelligence[1]. On the basis that the government will probably continue to take the UK Policy Paper as its starting point, we have considered ChatGPT in the context of that paper and reflected on how UK regulators may look to treat it.

The UK Policy Paper sets out two core characteristics by which regulators could assess the risks posed by AI: (i) adaptiveness, the ability of an AI system to learn for itself, and (ii) autonomy, the ability of an AI system to operate and react in situations in ways that humans might struggle to understand or control.

In these two respects, UK regulators might conceivably view generative AI models such as ChatGPT as high-risk AI systems. Generative models are trained through “self-supervised learning”, in which the AI hides certain words and data from itself and then fills in the blanks based on the surrounding information. This type of self-supervised learning could potentially lead to an AI developing at an exponential rate, perhaps even in a way that could not be supervised by humans. In a 2022 article in The Economist[2], Connor Leahy, co-founder of EleutherAI (an open-source AI project), imagined a scenario where “someone… builds an AI that can build better AI’s, and then that better AI builds an even better AI… it can go really quickly.” This potential for rapid – and unchecked – development, and the associated lack of transparency and control, may see UK regulators taking a circumspect view of generative AI models. It is worth tempering this by noting that ChatGPT is not currently connected to the internet and has limited knowledge of events after 2021 (when OpenAI completed its training). The potential for rapid, unchecked self-development might, in the case of ChatGPT at least, be overstated for now.
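To make the “fill in the blanks” idea concrete, the short sketch below shows how a self-supervised training example can be constructed with no human labelling at all. It is a minimal illustration only: the function name and toy corpus are ours, and real systems train neural networks on vast datasets rather than printing examples.

```python
# Minimal sketch of self-supervised "fill in the blanks" training data.
# The helper name and the toy corpus are invented for illustration.
import random

def make_training_example(sentence: str, mask_token: str = "<MASK>"):
    """Hide one word of a sentence; the hidden word becomes the target."""
    words = sentence.split()
    position = random.randrange(len(words))
    target = words[position]
    masked = words.copy()
    masked[position] = mask_token
    return " ".join(masked), target

corpus = [
    "the court found the contract unenforceable",
    "the regulator published new guidance on AI",
]

for sentence in corpus:
    masked_input, target = make_training_example(sentence)
    # A real system trains a neural network to predict `target` from
    # `masked_input`; because the labels come from the text itself,
    # no human annotation is needed, hence "self-supervised".
    print(f"input: {masked_input!r} -> target: {target!r}")
```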

As well as the two core characteristics, the UK Policy Paper proposes six cross-sectoral principles to guide regulators. Taking each in turn below, we consider how regulators may apply the principles to generative AI models such as ChatGPT.

Principle: Ensure that AI is used safely

The UK Policy Paper highlights healthcare as a sector where AI could have a significant impact on safety. The UK Medicines and Healthcare products Regulatory Agency (MHRA) has already published its “Software and AI as a Medical Device Change Programme – Roadmap”, which aims to ensure that medical device regulation is “fit for purpose” for software, including AI. However, it remains unclear whether a chatbot such as ChatGPT providing pseudo-medical advice would qualify as medical device software under the Programme.

Regardless, the MHRA will likely need to consider ChatGPT at some point. There are signs that people are turning to generative AI for basic medical advice, particularly regarding mental health[3]. Indeed, researchers have suggested[4] that ChatGPT appears capable of passing the United States Medical Licensing Examination.

The benefits of using generative AI for medical advice are obvious – it is available around the clock with no need to schedule an appointment, it costs less than a human consultation, and it avoids any potential embarrassment associated with personal medical consultations. However, although ChatGPT delivers its answers confidently, with a veneer of authority and credibility lent by its usually excellent grammar, the detail may be erroneous or misleading. Sam Altman, CEO of OpenAI, has said ChatGPT is “good enough at some things to create a misleading impression of greatness”. In other words, AI answers to medical queries may sound convincing but (for the moment at least) are not always right.

In light of the severe potential for harm if things were to go wrong in the context of medical advice, it seems likely the MHRA will at some point consider regulating ChatGPT and other generative AI programmes.

Principle: Make sure that AI is appropriately transparent and explainable

The UK Policy Paper notes that ensuring AI systems can be explained at a technical level is an “important… development challenge”. The ChatGPT chatbot itself acknowledges that “it’s important for people to understand how large language models work, and what their limitations are…[as this] can help avoid misunderstandings or misuses of the technology”[5]. OpenAI seems to have done a good job of being transparent about ChatGPT’s development and functions so far, including the way OpenAI uses its own moderation platform to filter queries in line with its content policy.
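By way of illustration, the sketch below shows how a content-policy filter can sit in front of a chatbot using OpenAI’s publicly documented moderation endpoint. The endpoint itself is real, but the surrounding wiring is our own assumption about how such a filter might be deployed, not a description of ChatGPT’s internal pipeline.

```python
# Hedged sketch: screening user prompts with OpenAI's public moderation
# endpoint before they reach a chatbot. Requires an API key in the
# OPENAI_API_KEY environment variable; the function name is ours.
import os
import requests

def is_allowed(user_prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": user_prompt},
        timeout=30,
    )
    response.raise_for_status()
    # The endpoint returns one result per input, with a boolean "flagged".
    return not response.json()["results"][0]["flagged"]

if is_allowed("How do I appeal a parking fine?"):
    print("Prompt passed moderation; forward it to the model.")
else:
    print("Prompt blocked under the content policy.")
```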

However, there are concerns about algorithmic bias in the model, particularly where ChatGPT considers prompts that include descriptors of people[6], and it seems the safeguards OpenAI has implemented to prevent offensive content can be bypassed in certain circumstances[7]. UK regulators may look to require greater transparency around generative AI processes.

Principle: Embed considerations of fairness into AI

There has already been significant debate about how to ensure AI models are used fairly and equitably. What would happen, for example, if ChatGPT were to produce misleading content about a company that damages its reputation, or if its output were to discriminate on the grounds of race or gender because a user had managed to bypass the safeguards OpenAI has put in place?

UK regulators might consider requiring OpenAI to provide greater transparency on how it is embedding fairness into ChatGPT, and how it would be able to prevent the AI from furthering extremist views, particularly if OpenAI looks to license the technology. The EU’s draft AI Act already places obligations on providers to examine training, validation and testing data in view of possible biases. Eliminating embedded biases may be challenging, especially where the bias results from discrimination buried within the training datasets the AI has been fed, but the effect such biases could have if left unchecked is perhaps one of the strongest arguments in favour of regulating AI.
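To give a flavour of what examining data “in view of possible biases” can mean in practice, the toy audit below compares positive-outcome rates across a protected attribute in a training set. The records are invented purely for illustration; a real audit would cover many attributes and far larger datasets.

```python
# Toy dataset bias audit: compare positive-label rates across a protected
# attribute. The records below are invented purely for illustration.
from collections import defaultdict

training_data = [
    {"gender": "female", "label": 1},
    {"gender": "female", "label": 0},
    {"gender": "female", "label": 0},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 0},
]

totals = defaultdict(int)
positives = defaultdict(int)
for record in training_data:
    totals[record["gender"]] += 1
    positives[record["gender"]] += record["label"]

for group, count in totals.items():
    rate = positives[group] / count
    print(f"{group}: positive-label rate = {rate:.0%}")
# A large gap between groups signals a possible embedded bias that a
# provider would need to investigate before training on the data.
```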

Principle: Define legal persons’ responsibility for AI governance

The UK Policy Paper provides that legal liability for outcomes from AI must rest with an identifiable legal person (which can include companies). It remains unclear who would be responsible for ChatGPT’s output under any future UK guidance or legislation. The draft EU AI Act places various obligations on providers, distributors, importers and even “users” of AI systems, but the UK has so far left things vague, simply stating that “we expect all actors in the AI lifecycle to appropriately manage risks to safety and to provide for strong accountability”.

Principle: Clarify routes to redress or contestability

“The use of AI,” states the UK Policy Paper, “should not remove an affected individual or group’s ability to contest an outcome”. Regulators are expected to implement “proportionate measures” to ensure that end users can contest outcomes that result from the use of AI in regulated situations. Clearly, this will depend on how (if at all) generative AIs are ultimately regulated, and it is unclear at present how a user would contest an outcome from ChatGPT. When the possibility of legal issues is raised with the chatbot itself, it glibly responds that users should “consult with a legal professional”.

Principle: Ensure that AI is technically secure and functions as designed

The UK’s approach to AI regulation so far aims to ensure that consumers “have confidence in the proper functioning of systems… AI systems should…reliably do what they intend and claim to do”. Since ChatGPT has been trained using “Reinforcement Learning from Human Feedback” (RLHF), in which human trainers review and rank its responses, it should in principle produce the type of output intended by its designers. However, there is a risk of users deploying clever prompts and techniques to bypass the AI’s planned functionality and get it to expound on dangerous subjects.
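At the heart of RLHF is a “reward model” trained on human rankings of the chatbot’s candidate responses. The sketch below shows the standard pairwise preference loss such reward models commonly minimise; the numeric scores are invented for illustration, and in real systems they come from a neural network scoring whole responses, after which a reinforcement-learning step fine-tunes the chatbot against the learned reward.

```python
# Pairwise preference loss commonly used to train RLHF reward models:
# -log(sigmoid(r_preferred - r_rejected)). The scores here are invented.
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Small when the human-preferred answer already scores higher."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A human trainer preferred answer A over answer B for the same prompt.
print(preference_loss(score_preferred=2.1, score_rejected=0.3))  # low loss
print(preference_loss(score_preferred=0.3, score_rejected=2.1))  # high loss
```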

Certain figures have criticised the apparent lack of restriction on ChatGPT. Paul Kedrosky, an economist and MIT fellow, tweeted: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society…[it] should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions”[8]. UK regulators may need to grapple with striking a balance between the “light-touch” regulatory framework the government has said it wants to foster and ensuring generative AI systems are sufficiently restricted to avoid misuse.

Conclusion

Our analysis above gives a flavour of how UK regulators may look to treat ChatGPT and similar AI models in line with the principles outlined in the UK Policy Paper, but much remains uncertain about the future direction of UK AI regulation. What will the approach of the next government be, for instance? The Labour Party talks of “dynamic alignment” with the EU – were it to win the next election, would this mean greater alignment with the proposed EU AI regime?

Much is uncertain, too, around the development of generative AI systems themselves. Given how rapidly the technology is progressing, by the time the UK government has enacted clear guidance or legislation, the AI landscape may have shifted considerably. Regulators will need to remain flexible as the technology develops, and to collaborate with industry players to ensure regulations fully capture the scope of generative AI and effectively protect users’ rights. While Sam Altman has tweeted that “ChatGPT is incredibly limited”[9], GPT-4, the next iteration of the underlying model rumoured to launch sometime in 2023, is widely expected to be a game changer. The drafters of the UK White Paper and any follow-up guidance or legislation, as well as the regulators seeking to implement such provisions, may have their work cut out.

Resources

  1. Help build a better future | CPC21 Speeches (conservatives.com)
  2. Article in The Economist (2022) (economist.com)
  3. People Are Eagerly Consulting Generative AI ChatGPT For Mental Health Advice, Stressing Out AI Ethics And AI Law (forbes.com)
  4. Researchers Tested ChatGPT on the Same Test Questions As Aspiring Doctors (businessinsider.com)
  5. ChatGPT Says We Should Prepare for the Impact of AI | Time
  6. https://time.com/6238781/chatbot-chatgpt-ai-interview/
  7. This is how OpenAI's ChatGPT can be used to launch cyberattacks (techmonitor.ai)
  8. Is ChatGPT a ‘virus that has been released into the wild’? | TechCrunch
  9. Sam Altman on Twitter: "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness." / Twitter

