Newsletter – November 2024

Spotlight on… legal costs and AI

In a recent talk at the ACL Annual Conference, I discussed the utility of Artificial Intelligence (AI) in the law and legal costs, and provided a brief demonstration of how even an ‘out of the box’ version of Copilot could generate a good first draft of a legal document. This is especially useful for those, like me, afflicted with blank page paralysis – the condition of staring at a blank page when starting a piece of work, not knowing where to begin!

With the above in mind, and given that this article will appear in the CLSB newsletter, I will focus here not on the utility of AI but on some of the challenges around AI ethics, governance and regulation.

As with most technologies, a de facto standard of regulation and/or governance is yet to be established globally for AI. That does not diminish the need to strive to establish such a standard.

Who would have thought that social media – apps innocently created to help people connect – could end up playing a part in democratic disquiet, the spreading of disinformation and a proliferation of teenage mental health issues? Likewise, quite apart from the readily apparent risks associated with AI – such as AI capable of developing new drugs to treat disease being used to create new bioweapons, or AI capable of producing a faithful version of a voice being used to defraud someone with that voice – there is a high risk of attempts to use AI for negative purposes not currently envisaged.

Distilling those somewhat existential issues down to the practice of law, what do they mean for those currently creating AI tools, or thinking of using AI tools, in our field? I’ll touch on some data-related issues to consider.

Firstly, most legal businesses are awash with data, in various forms, that could potentially be useful in training AI. That said, I would wager that almost none of those businesses has the right to use that data for such a purpose, and that the data is not sufficiently ‘clean’ to produce accurate, unbiased (see more below) outputs.

Secondly, consider bias. Datasets often contain biases, more often than not of human origin. If such data is then used to train AI, those biases are imported into the model and therefore into the model’s outputs. Whilst weighting for some biases might be intentional in certain use cases of AI, beware of biases hidden in data that could skew the output, potentially in very undesirable ways.

Thirdly, to borrow a phrase from a recent presentation on AI, Large Language Models (LLMs) – the type of AI currently grabbing all the headlines – spoil like milk!

A quick technical interlude here. Unlike traditional software, which is programmed to operate in a certain way, AI models are not programmed; they learn, and they use that learning to provide outputs (answers). Because of the architecture of LLMs, it is not possible to trace how a model has arrived at the output it provides – i.e. it is not possible to understand how it has ‘learned’ – in a somewhat similar way that it is not possible to fully understand how a human has learned so as to arrive at a particular thought or answer. So, as a model ages and its training data becomes older, and since one cannot discern exactly how it produces its outputs, good governance around model maintenance is key to making sure your results remain accurate.

I have intentionally focused on AI-related data governance issues above, as data is the cornerstone of AI. Even so, I have barely scratched the surface of the data considerations of using AI, let alone wider concerns, but I hope at least to have given food for thought and encouraged some of you to do more research.

In conclusion, whilst there are myriad challenges to ensuring responsible use of AI, if used mindfully and with the appropriate guardrails it promises a boom in legal productivity, extending access to legal services to a number of currently underserved markets. It has the potential to make getting legal advice cheaper, better and quicker. As a result, the move to AI-enabled law is in my opinion irresistible, and so we need to prepare the ground to make sure that governance in our sector is ready for this paradigmatic shift.

Simon Murray
Head of Insurance Business Services, DWF Law LLP
