Created Date: 18 February 2025

Artificial counter-intelligence

Joanna Caen TEP and Cara Fitzpatrick comment on the challenges and impact of AI in the modern-day private client firm. 

An original version of this article was first published by STEP Journal, February 2025. 

What is the issue?

Artificial intelligence (AI) is taking the world by storm with its fast integration into both personal and
work life.

What does it mean for me? 

Firms worldwide are intrigued by AI's capabilities, but at present there is not yet a full
understanding of AI in general or, more specifically, of the abilities of particular AI models.

What can I take away?

How generative, algorithmic and predictive AI can be used within a law firm, and the factors to
weigh when using AI in a professional setting.

Although artificial intelligence (AI) began in the 1950s with Alan Turing publishing 'Computing
Machinery and Intelligence',[1] it has since become far more commonly used, with adoption
increasing exponentially over the past year or two. 'Generative' AI is commonly used to assist in
creating text, images, video and other types of documentation at a fast pace. By contrast, algorithmic or predictive AI can be used to predict outcomes from variables using statistics.

Generative AI

Generative AI models specifically have captured most of the attention and have seen the largest
growth in their use, especially with the likes of ChatGPT offering free versions online. However, the
constraints of free AI software must be considered. One such consideration is when the
underlying data was last updated, to ensure that the model reflects the most recent precedents. In
areas of the law where precedents change quickly, it is crucial that the output of AI software does
not follow out-of-date practices. Relying on such output without the proper level of scrutiny could
expose the professional who uses it to claims of negligence. Additionally, the use of free AI models
in most cases requires the user to agree to long sets of terms and conditions and, in some cases,
these terms provide that the AI model may learn from the information given to it. They will often
include provision for the AI model to reuse the information provided to it in an open-source fashion,
making it free for use in the public domain. This means that data put into the AI system can be
taken by the software developers and (if they wish) made public. Although it may be free to use,
the buyer must beware: this will not suit private clients, for whom the emphasis is firmly on
'private'.

As the use and acceptance of generative AI have grown exponentially in recent years, so has the
fear of AI replacing many occupations. For some private client advisors, the worry is that
generative AI will replace the need to go to a firm to request assistance. However, this is not
everyone's prediction for how the industry's relationship with AI will develop. At the time of writing,
AI is seen as a useful way to increase productivity within firms due to the speed at which it can
undertake certain tasks. By way of example, using AI as an assistive tool for tasks such as
document comparison can radically decrease the time the task would take if done manually.
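
By way of illustration only, the short Python sketch below shows the shape of such a workflow: the standard library isolates the passages that changed between two drafts, and only those passages would be handed to a generative model for a plain-English summary. The summarise_with_ai function is hypothetical and stubbed out here; a real deployment would call a vetted, privacy-compliant AI service rather than a free public one.

```python
import difflib

def changed_passages(old_text: str, new_text: str) -> list[str]:
    """Return only the changed lines from a unified diff of two drafts."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile="draft_v1", tofile="draft_v2", lineterm="",
    )
    # Keep additions and deletions; drop the '---'/'+++' file headers.
    return [
        line for line in diff
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]

def summarise_with_ai(passages: list[str]) -> str:
    """Hypothetical stand-in for a call to a private generative AI
    service; stubbed so the sketch runs with no external dependency."""
    return f"{len(passages)} changed lines flagged for human review."

old = "The trustee may distribute income.\nNotice period: 30 days."
new = "The trustee may distribute income and capital.\nNotice period: 60 days."
print(summarise_with_ai(changed_passages(old, new)))
```

Narrowing the AI's input to the changed passages alone also limits how much client material ever leaves the firm, which matters given the confidentiality concerns discussed above.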

Maintaining standards

The STEP Code of Professional Conduct requires members to perform competent work, bringing
the necessary knowledge, skill, thoroughness and preparation to the task, and to carry out that
work diligently and conscientiously. Whether clients and STEP members find that 'outsourcing'
much of a private client practice to AI meets these requirements remains to be seen. This is
particularly so if the extensive use of AI is not disclosed to, or agreed by, the clients in advance.

In the legal field, there are anecdotal examples of legal professionals using AI and it not working out
as envisaged. Sometimes, text generated by AI software includes information that is simply not
correct, a phenomenon that has been labelled 'AI hallucination'. The phrase has been criticised
for ascribing human-like qualities to the technology, and 'AI fabrication' is seen as a more accurate
description of what the software is actually doing. Generative AI's tendency to conflate
information, or to mislead by contradicting itself, is prevalent, especially as models can pick up
contradictory data and information while mining for new data.

AI in court 

This was seen in the US case of Mata v Avianca,[2] where Mr Mata's lawyers had used ChatGPT
to formulate their arguments. Unfortunately for Mata, the AI fabricated case law to suit his lawyers'
arguments. Unfortunately for Mata's lawyers, they did not check the results before presenting
them. Although this was a US case, the dangers of using AI as a practising lawyer transcend
jurisdiction. An over-reliance on AI (or any technology, for that matter) and a failure to proof work
will always give rise to issues.

People's main concerns with AI are its accuracy, its bias and the potential for the data given to it to
be made public. Large language models (LLMs) are a type of AI used to analyse and summarise
content. Although it may seem that the flaw lies with the AI models that use LLMs, it could equally
be argued that the issue lies with the information they draw from varying online resources. If the
underlying data is inherently biased, the LLM is likely to use that data and learn those biases. At
the time of writing, there are still issues with LLMs relating to data that may have been stolen from
private entities and taken from potentially biased sources.

However, researchers from the University of Oxford have recently made significant progress
towards making information generated by AI models more reliable (the Study). The main method
uses 'semantic entropy'[3] to measure how much the meaning of a model's answers varies when
the same question is posed repeatedly.[4] This add-on for AI models could decrease the likelihood
of AI hallucination and could also improve the accuracy of the generative responses output by the
software. The Study aims to show that AI models can become more accurate by using specific
formulae to group semantically similar answers and flag those on which the model is uncertain.
Although this may not replace professionals in the near future, using AI models with semantic
entropy could assist practitioners and allow for higher productivity with less chance of error.
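
As a rough illustration of the idea, the Python sketch below estimates semantic entropy over a handful of sampled answers to the same question. One simplifying assumption is flagged in the code: the Study clusters answers by meaning using a second language model to test bidirectional entailment, whereas this sketch approximates semantic equivalence with simple text normalisation, purely to show how the entropy calculation flags inconsistent answers.

```python
import math
from collections import Counter

def semantic_entropy(sampled_answers: list[str]) -> float:
    """Entropy over clusters of answers that share a meaning.

    Simplification: Farquhar et al. (2024) cluster answers using
    bidirectional entailment judged by a language model; here,
    normalised exact match stands in for semantic equivalence.
    """
    clusters = Counter(a.strip(" .").lower() for a in sampled_answers)
    total = sum(clusters.values())
    # Low entropy: the model answers consistently. High entropy: the
    # answers disagree in meaning, a signal of likely fabrication.
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

consistent = ["Paris", "paris", "Paris.", "Paris"]         # hypothetical samples
inconsistent = ["Paris", "Lyon", "Marseille", "Toulouse"]  # hypothetical samples
print(semantic_entropy(consistent))    # 0.0: one meaning cluster
print(semantic_entropy(inconsistent))  # ~1.39: four clusters, maximal disagreement
```

In practice, an answer whose samples scatter across many meaning clusters would be flagged for human checking rather than relied upon, which is how the approach could sit alongside, rather than replace, professional review.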

Algorithmic and predictive AI 

Two areas of AI worth considering in the future world of AI-supported private client practice are
algorithmic and predictive AI. Both have been used in various jurisdictions worldwide, with
differing levels of integration. The aim of using AI as an assistive tool is to remove personal biases
by adding an extra layer of consistency through technology, but it is of course early days and more
time is needed to ascertain whether these tools are on the right path.

There are concerns that, given the rapid rise of AI in recent years, there are not enough protections
in place for people. Although most jurisdictions have no AI-specific legislation, individual rights
can be protected and AI can be governed through legislative measures that are already in force,
such as the General Data Protection Regulation and the UK Human Rights Act 1998.

Regulation (EU) 2024/1689 (the Regulation), commonly known as the EU AI Act, was approved by
the Council of the European Union in May 2024, with most provisions due to apply from August
2026. As the first AI-specific regulation, it provides a foundation for the understanding of AI in the
law. Until it is fully in force, the regulation of AI in the EU has no specific legal standing, nor are the
legal definitions of AI terminology collected in one place. Although the Regulation will only be
enforceable in EU Member States, there have been developments in jurisdictions worldwide, based
on both the Regulation and the work of other legislatures.

In the UK, the Office for Artificial Intelligence has been part of the Department for Science,
Innovation and Technology since February 2024. The Office has provided some guidance and
regulatory measures for use within the UK. Although the Office is relatively new to the UK
government, discussions of AI have been published on the government website since 2019.

The hope for the future is that AI can be implemented safely into the private client toolbox to
support greater levels of efficiency, but in a way that removes both the human and the
technological biases learnt from previous precedents. If the integration of AI happens in any
capacity, whether as its own entity or as an assistive tool for practitioners, it is important that it is
done in a way that ensures clients' needs are put first and protected.

[1] Tableau, ‘What Is the History of Artificial Intelligence (AI)?’

[2] Mata v Avianca, 22-cv-1461 (PKC) (SDNY 2023)

[3] Being a measure of uncertainty or unpredictability in the meaning of a piece of information,
leading to assumptions on the part of the AI model.

[4] Sebastian Farquhar and others, ‘Detecting Hallucinations in Large Language Models Using
Semantic Entropy’ (2024) 630 Nature 625, accessed 26 July 2024.