
Litigants in person are turning to ChatGPT for help

Practice Notes

By Megan Shirley, Senior Lecturer, Nottingham Law School

That's because some guidance is better than none, says Megan Shirley

The recent reports of litigants relying upon case law fabricated by generative AI programs are surely only the tip of an approaching iceberg. The court service needs to address the issue by providing clearer and more accessible support on the use of AI and on the litigation process in general.

It is widely acknowledged that the last ten years have seen a significant increase in the number of litigants in person in our civil court system. A government briefing as early as 2016 noted not only the increased numbers, but also the shift towards people becoming litigants in person out of necessity rather than choice.

While support services exist, many people do not know how to find or access them, and the services cannot fill the lacuna of legal advice and representation that would previously have been available in civil and family disputes. Many people therefore face the prospect of representing themselves in legal proceedings alone. They will be sent paperwork that they don’t understand, telling them to prepare documents that they have never seen before and to attend hearings at which they don’t know what to say.

A Handbook for Litigants in Person was written by six senior judges and published in 2013 to explain the various stages of a civil case. Putting aside the fact that many people would be daunted by the prospect of reading a document that is 170 pages long, the preface acknowledges that, even at that length, the Handbook is “not comprehensive” and “cannot possibly be”. The section on statements of case, for example, explains what they are and some of the basic rules around them, but does not give an example of what one looks like, or deal with what you might need to include for specific types of case.

How then can we be surprised that litigants in person are asking generative AI software such as ChatGPT for help? The program will draft a statement of case or a written submission to the court within minutes. The text it produces will appear well written and structured; it will use legal terminology, refer to statute and case law, and will probably make a persuasive argument. A litigant in person will no doubt believe that they have hit the jackpot when they compare it with what they could have produced themselves.

Many lawyers will by now be aware that AI software can suffer from 'hallucinations', which arise because the software predicts the next word in a sentence from large amounts of data available online. It can therefore create a statute or case, complete with citation, which is entirely fictional. Litigants in person will not have seen the coverage of this in the legal press. Judges are now grappling with the fact that it is hard to criticise litigants in person for relying upon this fictional law when they do not have the legal training or knowledge to identify it as such.

In the December 2023 Harber Tax Tribunal decision, the judge was not critical of Ms Harber for relying upon nine invented cases which appeared to support her appeal, but did express concern about the time and costs that had been wasted in searching for cases that did not exist. She noted that this wasted time would reduce “the resources available to progress the cases of other court users who are waiting for their appeals to be determined”. Usually, wasted court time and legal costs would be punished and deterred by costs orders, but these are unlikely to be fair or effective against litigants in person who do not appreciate the risk or its implications.

Another concern, voiced by Judge Castel in the US case of Mata v Avianca, where two lawyers had relied upon hallucinated cases, was the uncertainty about, and “cynicism” towards, legal precedent that false decisions could create.

Neither judge expressly referred to the risk of such hallucinated decisions affecting the outcome of a case, but this is certainly possible. A persuasive argument which appears to be supported by case law could cause the opponent in a case to settle on worse terms than they would otherwise have accepted. There is also a small risk of an invented case being inadvertently followed by a court or tribunal, especially in a case with litigants in person on both sides, as they are unlikely to notice and raise the error with the court.

The other, less potent but probably more widespread, problem that can arise from using generative AI to prepare court documents and submissions is that the text can appear convincing to the untrained eye while saying very little of substance, or simply being wrong on facts and procedure. The AI software will not know and understand all of the facts of the case or how a judge would interpret them. It will deal in generalities and will not necessarily appreciate which factors will be most persuasive to a judge. ChatGPT has also, in my experience, confused UK and US law and terminology, and confused civil and criminal procedure; any of which could lead to mistakes and delays in the litigation process. While some of these issues may be corrected in the more recent subscription options, many litigants in person will not be aware of the need to use them.

When I asked ChatGPT whether litigants in person should use it to prepare submissions to court, it recognised several of these risks and advised that best practice would be to verify information using legal sources and to obtain legal advice on the case. This is an example of how the software can say something that is technically correct, but entirely miss the point; if the litigant in person knew how to conduct legal research or could pay for legal advice, they would not be a litigant in person at all.

Legal regulators have issued guidance to lawyers about the use of generative AI, but there is nothing yet available to litigants in person. The court service needs to issue clear and accessible guidance to litigants in person about the use of generative AI in court proceedings. This should arguably form part of a wider project to update the advice available to litigants in person, making it easier to find and to use. A system that uses targeted questions to guide a person to the relevant information and advice would be much more user friendly than one very long PDF document, and is quite certainly a job that AI could assist with. Court staff should also be aware of the issue, as they are on the front line of dealing with queries from litigants in person.

The advice to litigants in person should not be to avoid generative AI altogether. It has great scope to support them in some of the work they will need to do in preparing a case for court. It can help them with the structure of a document or with how to phrase a particular argument. It may even be a starting point for identifying relevant law and procedure, but people also need to be aware of its limitations.

If litigants in person cannot avail themselves of any of the free support and advice that is available, they need to appreciate that generative AI is not a quick fix and should be used with caution. As they always did, litigants in person will have to do their best to understand their case and the relevant law and to make sure that they present this to court in a way that is factually and legally correct. In turn, judges, court staff and opposing lawyers should continue to acknowledge these challenges and to support access to justice in whatever way they can.