Artificial Intelligence: the beginning of the end of employment lawyers or the rise of a new world?

Feature
By Kaajal Nathwani

Kaajal Nathwani looks at the current state of play regarding the potential impact of artificial intelligence (AI) on employment law.

‘AI is definitely shaking things up in the employment landscape. On one hand, it’s creating new opportunities and jobs in the field of AI development, maintenance and oversight. On the other hand, it’s raising concerns about job displacement due to automation.’ [Source: ChatGPT].

‘Shaking things up’ is one way to describe the soon-to-be gargantuan impact that developing AI has started to have, and will continue to have, on the legal sector, in both practice and application.

This article focuses (albeit in brief) on the impact on employment law and on the numerous practitioners in this specialised discipline who fear that their specialism will soon be redundant. Or will it? How will something that has been around (in some form) for decades wipe out an entire discipline as we know it today? While the answer would no doubt be eloquently articulated by our friend ChatGPT, can we consider it as trusted and reliable as one would want in such a friend? Or are we gearing ourselves up to defend against what will become a foe, given the bias that such ‘unintelligent’ tools are likely to display?

First and foremost, what is AI?

The AI and Employment Law Research Briefing (‘the Briefing’), published on 11 August 2023, defines AI as ‘technologies that enable computers to simulate elements of human intelligence, such as perception, learning and reasoning’.

AI involves a specific form of technology, in which a machine or piece of software ‘learns’ from the data it analyses or the tasks it performs. The machine or software then adapts its ‘behaviour’ based on what it learns, improving its performance on certain tasks over time. It is technology that mimics human intelligence to perform tasks ordinarily performed by humans (a short illustrative sketch follows the list below). In simple terms, AI is computer software programmed to execute algorithms over a data set to, among other things:

  • recognise patterns;
  • reach conclusions;
  • make informed judgements;
  • optimise practices;
  • predict future behaviour; and
  • automate repetitive functions.
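
For readers who want to see what that ‘learning’ loop looks like in practice, here is a minimal sketch in Python. It is illustrative only: the data, the threshold rule and all variable names are invented for this article and bear no relation to any real workplace tool.

    # A minimal, illustrative sketch of machine 'learning' (hypothetical data).
    # The program is never told the rule; it derives one from past examples.

    # Past examples: (years_of_experience, passed_probation) pairs.
    training_data = [(1, False), (2, False), (3, True), (5, True), (7, True)]

    def learn_threshold(examples):
        """Try each observed cut-off and keep the one that best separates
        past successes from past failures (the 'pattern recognition' step)."""
        best_threshold, fewest_errors = None, len(examples) + 1
        for threshold, _ in examples:
            errors = sum(
                (years >= threshold) != outcome  # prediction disagrees with history
                for years, outcome in examples
            )
            if errors < fewest_errors:
                best_threshold, fewest_errors = threshold, errors
        return best_threshold

    # 'Learning': the rule is extracted from the data, not hand-coded.
    threshold = learn_threshold(training_data)
    print(f"Learned rule: predict success if experience >= {threshold} years")

    # 'Prediction': the learned rule is applied to a new, unseen case.
    print("Prediction for a candidate with 4 years:", 4 >= threshold)

Real AI systems apply the same idea at vastly greater scale, with millions of data points and far more complex rules, which is precisely why their internal workings are so much harder to inspect and explain.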

AI-powered applications include ChatGPT, chatbots, text analysis tools, virtual assistants, and image and voice recognition.

AI comprises two key elements:

  • data (including but not limited to text, audio, images and video); and
  • algorithms (sets of code with instructions to perform specific tasks).

UK employment laws and AI: where do we stand?

There are currently no UK laws that specifically govern the use of AI at work, but some existing legislation may provide protection for employees. While businesses start to embrace AI for efficiency, concerns remain that companies may be exposed to an increased risk of employment tribunal claims, given that AI remains uncharted territory.

There needs to be comprehensive consideration of the legal risks associated with relying on AI tools when making decisions about employees and workplace issues. Similar issues are covered in the Briefing, which provides an introduction to AI, considers the various uses of AI at work, identifies the employment law implications and sets out the current proposals for regulatory reform.

The Briefing identifies the following existing areas of law that bear on the use of AI at work:

  • The common law, in particular the duty of mutual trust and confidence and the requirement for employers to be able to explain decisions, which may be more difficult where an employer has placed heavy reliance on an AI system.
  • The Equality Act 2010, due to the potential for bias and discrimination when using AI tools.
  • The Employment Rights Act 1996, where AI has been involved in a dismissal decision. The workings of algorithms may make explaining such decisions difficult, a problem that is exacerbated by intellectual property concerns, which discourage AI developers from sharing commercially sensitive information about algorithms with other entities.
  • The right to privacy under Article 8 of the European Convention on Human Rights, in particular where workers are monitored.
  • The UK General Data Protection Regulation and the Data Protection Act 2018, in particular the restrictions on data processing, the right to object and the limitations on solely automated decision making.

The UK government’s March 2023 white paper, ‘A pro-innovation approach to AI regulation’, laid out the framework for the current plans to regulate AI. The proposals take a non-statutory approach, relying on existing regulators to oversee the use of AI in their areas while following five broad principles: safety, transparency, fairness, accountability and contestability. The proposals have already drawn criticism from the Labour Party for their ‘laissez-faire’ approach, and it remains to be seen whether they will be enough.

The modern-day workplace

Modern and progressive workplaces (there will always be a fair few lagging behind for some time to come) are receptive to tools powered by AI to perform certain functions and tasks, including automation and smart robots. The concern is that enshrined employment laws will not be able to cope with these rapid developments, and that regulatory and legal reform will struggle to keep up. We need to consider what will happen if unfair practices result from AI-driven decision making, and to what extent current legislation offers protection to employees.

Common uses of AI in the workplace

There is already a range of AI-assisted tools available for use in the workplace, which can help employers to perform many workplace functions, several of which are set out below. Figures published by the Department for Digital, Culture, Media and Sport (DCMS) in January 2022 confirmed that 68 per cent of large companies, 34 per cent of medium-sized companies and 15 per cent of small companies had adopted at least one AI technology tool. AI has a role throughout the whole of the employee lifecycle.

Common examples of the use of AI include:

  • identifying the skills, qualifications and experience needed for a role;
  • creating compelling job descriptions to source more suitable candidates;
  • using virtual assistants or chatbots to ask or answer questions about preliminary job qualifications, salary ranges and the hiring process;
  • sourcing and screening candidates, including predictive hiring that identifies a company’s performance drivers to improve the quality of hires;
  • automated background checking; and
  • once a new starter is onboarded, answering their questions and directing them to the employer’s appropriate resources.

Once staff are in post, employers can rely on AI to manage performance and productivity by allocating tasks and scheduling shifts. AI can also be used to determine the profiles of successful employees, measure performance and select candidates for promotion.

Generative AI at work

When we hear the term ‘AI’ bandied about, most people automatically think of tools such as ChatGPT, following recent media coverage. While seemingly the go-to for everything from planning holiday itineraries to serving as a fountain of knowledge and educational know-how in academia, we know that it has the capacity to carry out various functions in the workplace, replacing Google’s place in our lives. However, there are inherent limitations to its use, including the high possibility of inaccuracies or ‘hallucinations’. There is a common misconception that generative AI always produces factually accurate answers and that those answers are unbiased.

Only at the weekend, an employer told me that he had got rid of two of his administrative staff, his reasoning being: why pay two salaries when ‘ChatGPT knows everything and can write all my letters for me [while I am on the toilet]’? We are right to be concerned by this assumption of generative AI’s all-prevailing usefulness. Another employer (in pharmaceuticals) routinely uses AI to draft the narrative of regulatory reports, with the official regulators seeming to prefer the conciseness it produces. With this particular form of AI unregulated, are we digging a hole that will take years to climb out of? Surely the risks need to be considered and managed against what seems to be a short-term gain (in time, energy, money and resources)?

Juxtaposed with the need to implement policies governing the use of generative AI at work is the need to understand exactly which factors have been taken into account, and with what weight, when the decision-making process has been outsourced to an AI tool. Without this transparency, the tools cannot be considered fair and reliable for widespread use and, consequently, employers surely cannot be considered reasonable in permitting or implementing their use.

AI and discrimination

Increased reliance on AI will inevitably have the effect of eroding the personal nature of the relationship between employer and employee (and the implied term of mutual trust and confidence). At present, the iteration of AI tools being used (this may develop over time) is said to lack common sense and to be unable to reach the often nuanced decisions that humans can make. The effect of this is an erosion of the relationship between individuals in the workplace, which could in turn lead to management issues (for example, where an employee is reluctant to come forward and explain the underlying reasons for any performance issues).

A prevalent concern with AI tools is the lack of transparency in how they work. Employers relying on AI to make decisions, for example on recruitment or employee management, run the risk of implementing AI-made decisions that are tainted by prejudices (including the possible duplication and proliferation of past discriminatory practices) and by biases hidden in the software’s algorithms, leading to discriminatory outcomes. The reason for this is that AI tools are only as good as the information provided to them.

Examples of concerns include the use of facial recognition, on which the Trades Union Congress (TUC) is pushing to place restrictions. Research has indicated that facial recognition, for example in interviews or monitoring, does not work equally well for all skin colours, increasing the risk of discriminatory outcomes.

Turning to privacy, there is also a danger that AI could enable employers to discover information about employees and then use it as the basis for unfavourable treatment, with the added layer of complexity that proving such treatment, which could in law be held unlawful, will be difficult.

Difficult issues are likely to arise in relation to direct discrimination claims based on AI tools. If the employer relies on the AI tool’s recommendation and has no reason to think that it has been fed discriminatory data, it will be difficult to establish that the candidate’s protected characteristic was a material influence (conscious or subconscious) on the decision maker. Yet protected characteristics might be relevant factors in the decision even if this information is not explicitly included in the data provided to the software.

Other personal information, such as sickness absence, education, employment history, length of service and seniority, could serve as a substitute for a protected characteristic. For example, an algorithm might conclude that a certain combination of data points (as set out above) tends to identify a given candidate as a woman, determine that women have been less likely than men to be high performers in a particular role, and rely on that as a reason to reject a particular candidate. This discriminatory chain of reasoning could occur inside the algorithm without being explicitly shown in the output.
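
To make that chain of reasoning concrete, here is a deliberately simplified sketch in Python. The data, the ‘career break’ proxy and all names are invented for this article; it is not drawn from any real recruitment tool.

    # Illustrative only: how a seemingly neutral data point can act as a
    # proxy for a protected characteristic. All data here is invented.

    # Historical records: gender has been removed, but career-break history
    # (correlated with gender in this fictional data set) remains.
    past_candidates = [
        {"career_break": True,  "was_hired": False},
        {"career_break": True,  "was_hired": False},
        {"career_break": False, "was_hired": True},
        {"career_break": False, "was_hired": True},
    ]

    def hire_rate(records, took_break):
        """'Learn' the historical hire rate for each group."""
        group = [r for r in records if r["career_break"] == took_break]
        return sum(r["was_hired"] for r in group) / len(group)

    # 'Training': the algorithm absorbs the pattern in past decisions.
    rate_with_break = hire_rate(past_candidates, True)       # 0.0
    rate_without_break = hire_rate(past_candidates, False)   # 1.0

    # 'Prediction': a new candidate with a career break is scored down,
    # reproducing the historical bias although gender was never an input.
    new_candidate = {"career_break": True}
    score = rate_with_break if new_candidate["career_break"] else rate_without_break
    print(f"Recommended-to-hire score: {score}")  # 0.0: bias resurfaces via the proxy

Nothing in the code mentions gender, yet if career breaks correlate with gender in the underlying population, the output discriminates all the same. This is exactly the kind of hidden reasoning at which the transparency concerns described above are aimed.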

In indirect discrimination claims, an employer can rely on the ‘objective justification’ defence by demonstrating that its actions were a proportionate means of achieving a legitimate aim. Where the decision or action in question was based on an AI tool, however, there will be a transparency ‘void’: employers will not be able to ‘justify’ or ‘explain’ how a particular algorithm worked, all the more so if the tool was provided by a third party. Who was the mind behind the automation?

Overthinking the possible risks

Are we just overthinking it all? After all, AI is not altogether new: the first programs date back to the 1950s. Since its inception, governments and regulators have been working behind the scenes to investigate and advise on how to maximise the benefits of data-driven technology. They have concluded (unhelpfully for us employment lawyers) that it is unclear whether algorithmic decision-making tools carry more or less risk of bias than human decision-making processes, and that there are even reasons to think that better use of data can have a role in making decisions fairer. The thinking behind this thought-provoking conclusion is that, where society has developed a range of standards, common practices and legal frameworks for managing bias in human decision making, the challenge is simply to make sure these can be translated to the algorithmic world, ensuring a consistent level of fairness regardless of whether decisions are made by humans, algorithms or a combination of the two. But what if AI takes on a mind of its own (as some suspect)?

In 2021, research from New York University found that an AI model was able to determine a job applicant’s gender even where CVs had been stripped of all explicit gender indicators. The researchers concluded that the findings validated ‘concerns about models learning to discriminate gender and propagate bias’.

For now

If anything, our trusted AI friend concurs that, for now at least, the impact of AI on employment law is still evolving and, while it can ‘assist with certain aspects of employment law, it cannot fully replace an employment lawyer’ (cue a hooray from the 6,000-strong body of employment lawyers in England and Wales).

Why? Because while AI is useful for a whole raft of tasks, including legal research, document analysis and often generic information on employment laws (replacing conventional sources of information), ‘the nuanced and context-specific nature of legal issues often requires human expertise. Employment Lawyers bring a deep understanding of legal principles, interpretation, and can provide tailored advice based on individual circumstances, which AI may not fully grasp. AI can be a valuable tool in the legal field, but the human touch and legal expertise remain crucial.’ [Source: ChatGPT].

Kaajal Nathwani is a partner at Curwens LLP
https://www.curwens.co.uk/