Law Society Loses The Plot Over AI and Ethics

A new ethical guidance report by the Law Society of England & Wales has stated that lawyers ‘should discuss the decision to use AI in providing legal services with the client’ and ‘obtain informed consent’. This is so out of touch with reality it’s hard to know where to start.

Luckily the Law Society has no rule-making powers – those belong to the Solicitors Regulation Authority – but, even so, this kind of narrative is ill-informed and undermines the use of legal tech tools just when they are most needed by an industry going through profound change.

Why This Is So Wrong

Let’s start with some baseline reality checking. When we say ‘AI’ in relation to the legal world we are primarily talking about using natural language processing (NLP) software to perform a sort of ‘word search on steroids’ to find information contained in contracts and court records.

It’s called ‘AI’ or ‘legal AI’ because the NLP is trained using machine learning – whether by a lawyer while using the tool, by the software vendor beforehand, or, in many cases, both.
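To make that concrete, here is a deliberately minimal sketch of the idea – in Python with scikit-learn, and emphatically not how Kira, Luminance or any other vendor actually builds their products. A statistical model is trained on a handful of labelled example clauses (all invented for illustration) and then scores unseen text for how likely it is to be, say, a change of control clause:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled training set: 1 = change of control clause, 0 = anything else.
# In a real tool the vendor (and/or the reviewing lawyer) supplies thousands
# of such examples; these four are invented for illustration.
clauses = [
    "If the Company undergoes a change of control, either party may terminate this Agreement.",
    "Any merger, acquisition or transfer of a majority of voting shares shall constitute a change of control.",
    "The Supplier shall deliver the Goods within 30 days of the order date.",
    "This Agreement shall be governed by the laws of England and Wales.",
]
labels = [1, 1, 0, 0]

# 'Word search on steroids': weight words and two-word phrases statistically,
# then learn which patterns signal the clause type we care about.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(clauses, labels)

# Score an unseen clause pulled from a contract under review.
candidate = ["Upon any change in control of the Licensee, the Licensor may revoke this licence."]
print(model.predict_proba(candidate)[0][1])  # probability it is a change of control clause
```

Real tools are vastly more sophisticated, but the principle is the same: statistical pattern-matching over text, trained on examples, with a lawyer reviewing every hit. Nothing in that pipeline is making a legal decision.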

These tools are used in the commercial legal sector for things such as:

  • Due diligence review, such as hunting for change of control clauses.
  • eDiscovery, to improve document analysis.
  • Legal research, to go beyond simple word search to more complex, phrase-based queries across case law.
  • Knowledge management, to give firms better insight into their document management systems (DMS).
  • Review and red-lining of contracts during the negotiation phase.

If a law firm had to inform a client every time it used a tool with NLP and machine learning built in, it would constantly be on the phone: the largest legal research platforms already use these approaches, and a large law firm might use them dozens of times a day for preliminary research on a case.

Due diligence tools are also now widely used by larger commercial law firms. The output is not some sort of grand ‘AI decision’, with a finished work product spat out at the end with no lawyers involved, then digested by the client, again without any lawyers involved. That is a science fiction view of how lawyers and ‘AI’ work.

In reality, lawyers are involved at every stage of a review process. The NLP tools are there to assist with the heavy lifting – the process work of searching through thousands of pages of text. But, as mentioned, this is just ‘word search on steroids’.

Moreover, if the Law Society has a problem with machine learning, then lawyers will also have to tell their clients every time they use Google to search for information, as this too is an NLP system that uses machine learning. Google doesn’t bother to call it an ‘AI’ system, but you might as well if you’re going to go down that road.

And what about lawyers who use spell check and grammar check in Word? Will they need to inform clients about this as well? After all, this is a very basic NLP system that responds to human-led machine learning.

As you can see, the position that you need to obtain consent and discuss these tools with the client on ethical grounds is absurd. Clients should certainly talk to law firms about how NLP tools, especially for transactional work, can help improve efficiency. But to frame this as a matter of ethics? Why? There is no ethical component here, unless you believe technology in any form needs to be rigorously policed.

How This Red Herring Got Started

Part of the problem is that fears about this technology grew out of the ‘AI is biased’ meme, which got started some years ago with challenges to the use of software to sift CVs – and which, unsurprisingly, led to accusations of bias. And of course those systems were biased – anything that makes a decision embodies some form of bias. Whether those biases were acceptable, or lawful, was another discussion. And sometimes they were not.

Then came the whole COMPAS fiasco: a not very well-made piece of software used by judges in the US to help assess, via an algorithm, the risk of a defendant reoffending when making bail and sentencing decisions. It was clunky and left itself open to claims of bias. And in this case the critics look to be right – it really was an awful piece of decision-making software.

However, no law firms use software like that to conduct work for their clients. It’s also a million miles away from using Luminance, for example, to find a change of control clause.

It’s also interesting that the Law Society ethics paper seems to be based on an American Bar Association report about the need to ‘explain AI’, which, on the face of it, appears to have fallen into the same traps as the Law Society has.

Does This Faulty Logic Matter?

This all matters – but not because large law firms will suddenly stop using Westlaw or LexisNexis, or cancel their licences for Kira and Luminance, on the grounds that explaining how NLP works to clients every time they use such tools would be a huge time-wasting exercise. Those firms and their sophisticated clients will likely ignore this, just as they will ignore another recent Law Society paper suggesting that AI will eventually replace most lawyers (yawn – an old cliché with no evidence behind it at all).

The real risk is that small-to-medium-sized law firms, which are already grappling with the challenge of improving efficiency, may be put off from engaging with legal tech – engagement that may be essential to reaching that goal.

This matters on a national economic level. Less productive law firms mean less productive clients. Unnecessarily slow legal work slows down the economy. This has a real-world impact on many people.

Frankly, this report by the Law Society is at odds with the UK Government’s aim of embracing innovation and making this country a leader in legal technology.

Ethical risks are very important. But the guidance’s actual position – to quote the full text: ‘Solicitors should discuss the decision to use AI in providing legal services with the client. This discussion should include obtaining informed consent from the client, and informing them of the risks and limitations of the AI’ – is absurd, and will likely lead to confusion and undermine efforts to improve the capabilities of the UK’s legal services sector.

P.S. The Law Society has gone through many changes in its history. In the late 1990s and early 2000s it was mostly ignored by larger commercial law firms. Then we saw improvements and a real effort to engage with the City, and great work has been done on the international stage. More recently there was a push to promote legal tech, which in part led to the creation of the LawtechUK group, which is also doing great work. So, it really gives this site no pleasure at all to have to focus on this misjudged report by the Law Society. But, it needs to be addressed before such tentative positions become part of the new normal.
