
The next phase of civil justice: can artificial intelligence assist with mediation’s integration into the justice system?

By Mark Tsagas and Catherine Hobby

Mark Tsagas and Catherine Hobby, both Senior Lecturers, Researchers and Mediators at the University of East London, discuss the recent developments to promote mediation within civil justice and the potential benefits and drawbacks to integrating AI into the mediation process

The role of mediation in civil litigation changed significantly on 22 May 2024 with the implementation of a new Practice Direction. Parties to small claims valued under £10,000 are now automatically referred to mediation within HM Courts & Tribunals Service (HMCTS). With this reform, mediation has become an integrated feature of the civil justice system. In expanding the role of mediation as a form of alternative dispute resolution (ADR), the Ministry of Justice stated that it was guided by the ‘overarching principle’ of bringing the benefits of mediation to as many people as possible. With claims for a specified sum of money forming 80% of small claims, this change is practically significant, as well as a symbolic acceptance of the role of mediation in dispute resolution within the justice system.

This expansion of mediation in civil justice is now further supported by the Civil Procedure (Amendment No 3) Rules 2024. This statutory instrument, which came into force on 1 October 2024, amends the Overriding Objective in Part 1 of the Civil Procedure Rules 1998 (CPR) to include the use and promotion of ADR. The revision gives effect to the recent Court of Appeal judgment in Churchill v Merthyr Tydfil County Borough Council, and the court’s active case management duty now includes a discretion to order or encourage parties to use an ADR procedure where appropriate. Thus, the amendments to the CPR, combined with the small claims reform, are likely to have a ‘dramatic effect’ on the position and significance of mediation in the realm of civil justice.

The benefits and criticisms

There are clear benefits to mediation as an alternative to civil litigation. Mediation is a voluntary and confidential process that brings together ‘parties in conflict’ to seek resolution. A neutral mediator facilitates the process, enabling participants to find a mutually acceptable solution. Mediation can provide an effective and swift means of reaching agreement without incurring the costs of going to court. With the government’s expressed aim that the small claims reform achieve a more ‘efficient, effective and sustainable justice system’, there is the possibility of increasing access to justice and self-determination for the parties.

The reform has integrated mediation as an ‘essential part’ of the process for lower-value claims. The Small Claims Mediation Service (SCMS) within HMCTS provides mediation free of charge in the form of a one-hour telephone appointment. Despite arguments that the changes will expand avenues of redress, concerns have been raised as to whether such time-limited appointments can ensure effective mediation. The mandatory nature of the new practice rules also arguably conflicts with the voluntary nature of the mediation process. The Law Society has further expressed the view that the changes may prevent some parties from accessing justice by creating a two-tier system, with some parties to a dispute able to access justice and others accessing only the ‘means to end a dispute’.

Despite these reservations, there has been discussion about whether to extend mandatory mediation to claims under £100,000. Currently, the SCMS manages the provision of mediation in-house, but if this proposal is adopted, it may struggle to meet the level of demand. If a principal aim of expanding mediation is to increase accessibility, then virtual mediation platforms and the use of artificial intelligence (AI) may assist any further expansion of mediation.

The possibilities presented by AI

As the government argued in its response to the consultation in 2022, the reforms reflect the way the resolution of disputes is ‘evolving for the modern age’, and AI has the potential to play a role in this. The fusion of mediation and AI could be a transformative approach to this method of dispute resolution, reflecting the new digital age. In their recent article, Dr Renu Raj and Adamya Raj advance arguments for integrating AI into the mediation process and identify possible advantages of its use, including increased efficiency through the automation of administrative tasks, reduced costs through a streamlined process, and increased accessibility. They also suggest that the use of AI would ensure consistency in the mediation process by helping to safeguard against bias, thus increasing the fairness and credibility of the process.

Yet, while the reasoning for implementing AI in such instances may seem to hold water in theory, in practice it is an endeavour fraught with pitfalls concerning the technology itself, ethics and even the law. Before exploring these underlying issues, however, it is first necessary to offer an explanatory distinction in relation to the term ‘artificial intelligence’. The term has been used excessively in recent times, primarily since the advent of OpenAI’s popular tool ChatGPT, to describe all manner of technological advancements within the field of machine learning. Consequently, its meaning has been diluted, and it is ill-equipped to operate as an appropriate descriptor without qualifying terms. As such, the arguments levelled against the implementation of the technology in relation to mediation, here and henceforth, are primarily a critique of generative artificial intelligence (GenAI), as opposed to simpler variants such as ‘rules-based chatbots’.
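To make that distinction concrete, the short sketch below shows, in Python, what a rules-based chatbot amounts to: a deterministic lookup in which every possible answer has been written and vetted by a human in advance. The rules and wording are invented purely for illustration and do not describe any real service.

```python
# A minimal, hypothetical sketch of a rules-based chatbot: recognised
# inputs map to fixed, human-authored responses. The rules and wording
# below are invented for illustration and reflect no real system.

RULES = {
    "appointment": "SCMS mediation takes place by telephone appointment.",
    "cost": "Small claims mediation through the SCMS is free of charge.",
}

def rules_based_reply(user_message: str) -> str:
    """Deterministic lookup: every possible answer was written in advance."""
    for keyword, canned_response in RULES.items():
        if keyword in user_message.lower():
            return canned_response
    return "Sorry, I can only answer questions about appointments and cost."

print(rules_based_reply("What does mediation cost?"))
# -> "Small claims mediation through the SCMS is free of charge."
```

Every output of such a system can be audited line by line; the same cannot be said of a generative model, which composes its answers token by token with no pre-approved script.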

The concept of ‘removing bias’ from the process is an interesting premise, but one that hinges almost entirely on the perceived neutrality of the technology. That perception wavers with the realisation that large language models (LLMs) are trained on data sets that may not be appropriately vetted in terms of their content. Furthermore, the process by which GenAI produces its outputs is typically not based on ‘logical reasoning’, at least not in any explainable sense. Rather, an LLM operates by predicting the token most statistically likely to appear next in the sequence, based on the prompts it is given. The possibility of flawed data sets, coupled with this opaque reasoning, may detract from the value of mediation and relegate it to an undesirable process. This echoes the Law Society’s expressed concern about a two-tier system: participants who can access human respondents may be in a far better position than those who must rely exclusively on information generated by any AI system implemented in the future. These negative effects are compounded by continuing research showing that GenAI still exhibits bias, making it necessary for practitioners and clients to heed these early warning signs and not opt for a seemingly easy solution.
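The statistical character of that process can be illustrated in a few lines. In the toy sketch below, the vocabulary and probabilities are invented for illustration; a real LLM derives them from billions of learned parameters rather than a hand-written table.

```python
import random

# Toy illustration of next-token prediction. Given some context, the model
# assigns each candidate token a probability and emits one accordingly.
# These tokens and probabilities are invented purely for illustration.

next_token_probs = {
    "settlement": 0.55,  # the most statistically probable continuation
    "agreement": 0.30,
    "stalemate": 0.15,
}

def predict_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its assigned probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

context = "The parties reached a"
print(context, predict_next_token(next_token_probs))
```

Nothing in this loop weighs the merits of the dispute or consults a rulebook; it only asks which word usually comes next. That is precisely why such outputs resist the kind of reasoned explanation that parties to a mediation might legitimately expect.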

Hallucination, or rather generated misinformation, is a well-documented issue that persists despite GenAI’s continued evolution. In effect, GenAI has been shown, at times, to disseminate misinformation across multiple professional fields, including but not limited to healthcare, law and aviation. This raises the question of liability: who should be held responsible should such an occurrence take place during the mediation process? The simplest solution would be to follow the example of the World Health Organization (WHO) and its GenAI health assistant S.A.R.A.H. by placing a lengthy disclaimer on the website:

‘WHO Sarah is a prototype using Generative AI to deliver health messages based on available information. However, the answers may not always be accurate because they are based on patterns and probabilities in the available data. The digital health promoter is not designed to give medical advice. WHO takes no responsibility for any conversation content created by Generative AI. Furthermore, the conversation content created by Generative AI in no way represents or comprises the views or beliefs of WHO, and WHO does not warrant or guarantee the accuracy of any conversation content. Please check the WHO website for the most accurate information. By using WHO Sarah, you understand and agree that you should not rely on the answers generated as the sole source of truth or factual information, or as a substitute for professional advice.’

However, considering the voluntary nature of mediation as a form of ADR, such an approach may act as a deterrent, undermining the effectiveness of the process by making it seem unreliable when paired with the technology in question. Consequently, and following the Canadian tribunal’s decision in Moffatt v Air Canada, should GenAI be implemented, ‘reasonable care’ will have to be taken to ensure that any outputs are accurate. Typically, this may involve human oversight. Yet if human oversight is the solution, why not opt for human mediators in the first instance?
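If human oversight is indeed the safeguard, its minimal form is a review gate: nothing generated reaches the parties until a mediator has approved it. The sketch below assumes a hypothetical generate_draft function standing in for whatever GenAI tool might be deployed; it illustrates the oversight pattern, not any existing system.

```python
# A minimal human-in-the-loop gate. generate_draft is a hypothetical
# stand-in for a GenAI call; no draft reaches the parties until a human
# mediator has reviewed and explicitly approved it.

def generate_draft(prompt: str) -> str:
    # Placeholder for a real GenAI call, assumed here for illustration only.
    return f"[AI-generated draft responding to: {prompt}]"

def release_to_parties(prompt: str) -> str | None:
    draft = generate_draft(prompt)
    print("DRAFT FOR MEDIATOR REVIEW:\n", draft)
    verdict = input("Approve this draft for release? (y/n): ")
    if verdict.strip().lower() == "y":
        return draft   # only approved text may be sent to the parties
    return None        # rejected drafts never leave the mediator's desk

release_to_parties("Summarise the points of agreement reached so far.")
```

The pattern makes the closing question above concrete: the safeguard reintroduces exactly the human judgement that the technology was supposed to economise on.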

Conclusion

The critics of GenAI are many, varied and, in most instances, quite accurate. Despite this, the technology is set to have a wider positive impact in the future. However, considering the problems that may arise, its implementation may currently not be in the best interests of mediation, or indeed of most professions that rely on human interaction to provide effective solutions. As such, should AI be implemented in this context, it should almost exclusively be used as a supportive tool for mediators. And should the SCMS extend its current provision to higher-value claims, it should seek to engage more human mediators rather than relying on variants of AI whose flaws are documented and clearly evidenced by real cases.