
Catherine Hobby

Senior Lecturer in Law, University of East London

Mark Tsagas

Senior Lecturer, University of East London

Artificial intelligence hallucinations: present and future impact of generated misinformation

Feature

By Mark Tsagas and Catherine Hobby

Following the Bar Council’s issuance of new guidance for barristers on generative artificial intelligence large language model systems, Mark Tsagas and Catherine Hobby discuss hallucinations and their potential impact

Artificial intelligence (AI), once dismissed as the stuff of science fiction, has for a number of years proven to be an impactful tool across a variety of industries and disciplines. Yet as it has entered the public consciousness over the last year, owing in part to the prevalence of, and ease of access to, mainstream generative AI (GenAI) tools, some glaring issues have come to light alongside its potential strengths: namely, bias and hallucinations.

Hallucinations

Focusing on the latter, instances of generated misinformation that have come to be known as ‘hallucinations’ are a serious cause for concern. The term itself has recently come to be seen as somewhat controversial. Conceptually, it is an easy-to-understand metaphor, likening aspects of the GenAI process to the human mind’s attempts to fill gaps in memory. However, the term also serves to obscure the impact such false information can have, in terms of both scale and liability. It is important to note that hallucinations can result from a variety of causes. The misinformation generated may in effect be a mere by-product of the GenAI’s computing process, devoid of any element of malice, as case studies have attested. Yet that recognition offers little comfort to those who may be harmed when such misinformation is relied upon, whatever the intent, or lack of it, behind the output.

In terms of the legal profession, there has already been a recorded instance of attorneys submitting a legal brief containing multiple case references that did not exist, fabricated by an AI chatbot. While the attorney in question protested his innocence, stating that he was “unaware that its content could be false”, misinformation was nonetheless relied upon and appropriate due diligence was not conducted. Even though the fabrications were unearthed in time, the implication that the court could have been misled, culminating in a miscarriage of justice, is unambiguous. In recognition of this issue, judicial guidance was issued to English judges in December 2023. The document discusses the key risks and issues arising from GenAI, and offers direction on the appropriate use of AI in light of the ‘Judiciary’s overarching obligation to protect the integrity of the administration of justice’.

In the healthcare sector, the highly publicised case involving the chatbot named Tessa further highlights the dangers of AI hallucinations and the pressing need for oversight of potential outputs. In this instance, the National Eating Disorders Association (NEDA), an organisation with the express purpose of helping vulnerable individuals suffering from eating disorders, disbanded its helpline (comprising salaried employees and volunteers) before announcing its replacement by the Tessa chatbot. What makes this case of particular interest is that Tessa was originally and painstakingly designed as a rules-based chatbot, devoid of generative elements and thus unable to deviate from standardised pre-written responses. Dr Ellen Fitzsimmons-Craft, one of the researchers involved in Tessa’s creation, explained that “by design, it couldn’t go off the rails”, suggesting that the rules-based design reflected the team’s awareness that generative AI is not suitable for such a vulnerable demographic. Tessa was subsequently taken offline after a very brief period of operation, following reports that it offered problematic advice that could have exacerbated users’ eating disorder symptoms. The ability to produce new responses, deviating from the pre-programmed answers, was said to be the result of the host company adding GenAI to the chatbot as part of a ‘systems upgrade’. Given that the subject matter relates to a person’s physical and mental wellbeing, this case study serves as a particularly concerning and poignant reminder that AI hallucinations can have tangible consequences, especially if the recipient is ill-prepared to challenge the assertions made.

While the case studies above are particularly prominent, there is in effect no shortage of examples where hallucinations have caused distress, ranging from defamation of character, as in the case of Mr Hood, an Australian mayor, when an AI chatbot falsely asserted that he had been imprisoned for bribery, to claims of academic misconduct where detection software reportedly, and incorrectly, suggested that students had used AI in their work. In response to this issue, certain industries have been able to adapt quickly and implement a variety of precautions to safeguard integrity, as evidenced above.

Delusions

It is clear that this technology is already having a genuine effect on people’s lives, through means that go beyond ‘unintended’ hallucinations. Consequently, a question can be posed: since community leaders are already aware of the present issues, would future refinement and regulation of the technology not solve this defect in its entirety? It is a fair question to ponder, but before too much faith is placed in any technological system, however advanced, one need look no further than recent events to glean the potential for undesirable outcomes.

The airing of the TV drama Mr Bates vs The Post Office in January 2024 brought public attention to the Post Office Horizon scandal, which involved wrongful convictions on the basis of a faulty digital accounting system. The Post Office’s private prosecution of innocent subpostmasters using defective computer evidence is regarded as the “most widespread miscarriage of justice” in British legal history. The Horizon system was piloted in 1999 and rolled out to Post Office branches in 2000. The initial roll-out was delayed by technical issues, and from the start subpostmasters were reporting discrepancies and shortfalls caused by faults. It was established in group litigation brought by 555 subpostmasters in 2019 that Horizon had numerous ‘bugs, faults and defects’, and that the Post Office knew it generated false accounting shortfalls. Despite this, the Post Office prosecuted subpostmasters for offences of theft, fraud and false accounting, and over 736 were convicted for these shortfalls. They received court sanctions, including immediate imprisonment for some, and suffered damage to their good character, loss of income, bankruptcy in many cases and social disgrace. This was acknowledged by the Court of Appeal in 2021 in quashing the convictions of 39 subpostmasters.

The Post Office Horizon IT Inquiry was established in 2020 to investigate the implementation and failings of the Horizon system. The title of the inquiry is misleading, as it is the conduct of the Post Office itself, and not the faulty Horizon system, that lies at the heart of this scandal. Humans, and not IT, caused the “affront to justice” found by the Court of Appeal in allowing the subpostmasters’ appeals.

Evidence to the inquiry demonstrates a complete lack of curiosity by the Post Office towards its own computer system, particularly in regard to the actions of its own investigators. In conducting audits and investigations into shortfalls, they appear to have accepted the reliability of the Horizon data without question, regarding it as infallible even when contradictory accounts were provided by subpostmasters. This conduct can be compared to the US attorney’s unquestioning use of the AI chatbot discussed above. In their approach, the Post Office investigators also failed to comply with their legal obligations. A report to the inquiry by the criminal prosecutions expert witness, Duncan Atkinson KC, revealed that the Post Office’s policies on the investigation and prosecution of subpostmasters failed to comply with the Police and Criminal Evidence Act 1984 (PACE) and the Criminal Procedure and Investigations Act 1996 (CPIA), and the codes of practice issued under each Act. There was a failure to comply with the duty under the CPIA Code to pursue all reasonable lines of inquiry, including those pointing away from the suspect, and to consider whether the accounting shortfalls might lie with the computer system. This duty is of central importance to securing the human right to a fair trial, not least through achieving fair and adequate disclosure.

The inquiry has also revealed that the Post Office was alerted to faults in the system by the subpostmasters, and by others within the organisation, from its installation. At one point the Horizon Helpline was receiving between 12,000 and 15,000 calls a month from subpostmasters complaining about irregularities in the IT system. None of these complaints led to action, and concerns continued to be raised. An email from a member of the Post Office security team to the then head of Post Office private prosecutions in 2010, disclosed to the inquiry, warned of discrepancies being detected with the Horizon IT system at 40 branches, but this did not stop the prosecutions.

Public concerns were also expressed as early as 2009, with the publication of Computer Weekly’s investigation into the Horizon system, but the Post Office sought to sustain an image of the system’s robustness in order to protect its brand. There was a culture of denial and cover-up among the senior management of the Post Office. The organisation even sacked the forensic investigation firm, Second Sight, which it had contracted to investigate possible computer errors, when the firm confirmed there were issues. The Post Office then spent millions defending the group litigation brought by the subpostmasters and made a failed attempt to have Mr Justice Fraser recuse himself when he found in favour of the litigants in 2019. This was all part of the continued corporate projection of the falsehood now being examined by the Post Office Horizon IT Inquiry.

Unprecedented primary legislation has now been introduced to exonerate innocent subpostmasters and to redress the injustice caused by the Post Office’s actions. The company knew that the consequences of using defective computer evidence in the criminal trials were severe, but persisted in its actions knowing there were serious issues with the reliability of Horizon. This scandal is an example of corporate delusion in the use of IT. The Post Office maintained the utter falsehood that the Horizon system was robust, despite knowing that it was flawed. A lack of oversight has ultimately damaged the reputation of the Post Office, possibly beyond repair, amid talk of a transfer of its ownership to its operators. More significantly, it has wrecked the lives of many subpostmasters.

Conclusion

Overall, the advancement of AI is rightly regarded as a significant scientific breakthrough, one with a wide-reaching ripple effect that has not yet been fully realised. Although the most recent AI safety summit is expected to have a positive outcome with regard to AI regulation, it should still be considered a mere initial step. Reservations in this regard stem from the frequent use of terms such as ‘guardrails’ and ‘declaration’, both of which suggest reliance on voluntary commitments rather than a binding agreement. The UK’s current approach is thus not as direct as the European Union’s Artificial Intelligence Act, which has been in development for some time, and is more akin to self-regulation. To this end, it is crucial that developers, regulators and future overseers take heed of present and past cases in their attempts to create, implement and regulate AI, generating prudent industry standards, lest we be condemned to repeat history: a retelling of recent events in which, in the absence of oversight, the catalyst is not technical issues, bugs, faults and defects, but ‘hallucinations’.