Exploring workplace discrimination in the age of AI

Article · 29.08.2025 · 7 min read

Key takeaways

AI use in employment carries legal risks

Without proper oversight, AI tools can unintentionally lead to discrimination or unfair treatment.

Bias in algorithms can breach equality laws

AI systems may disadvantage individuals based on protected characteristics even when not intended.

Employers must stay accountable and transparent

Clear policies, training, and human oversight are essential to manage AI responsibly and lawfully.

AI has existed for many years, but its usage has surged since the Covid-19 pandemic. As of February 2025, ChatGPT’s weekly user base was reported to be approximately 400 million, a sharp increase from 100 million users in November 2023.

This means rising numbers of employees are using AI within the workplace, sometimes without the employer’s knowledge or consent. Tools like Microsoft Copilot, adopted to improve efficiency, productivity, and decision-making, are now increasingly commonplace in some workplaces.

Employers can mitigate the risks associated with this hidden or permitted AI use by implementing AI policies and training. However, at the time of writing, there is no employment-specific legislation in England and Wales which directly governs the use of AI in employment; instead, existing discrimination and data protection laws apply.

Typical uses of AI in the workplace  

Recruiting, hiring and onboarding 

To screen CVs, rank candidates against job criteria, arrange interviews, answer candidates’ questions, and conduct initial video interviews; chatbots may also ask or answer questions about preliminary job qualifications or salary ranges.

Performance management, conduct and productivity 

To monitor employee performance through data analytics, allocate tasks and schedule shifts, track productivity levels, and identify potential conduct issues. It can also predict trends and assist in strategic planning.  

Managing remote workers 

To monitor attendance, engagement, and output (increasingly so since the pandemic), for example by monitoring keystrokes.

Supporting disabled employees 

To support with making reasonable adjustments via AI-powered accessibility tools (such as speech-to-text technology and screen readers). 

AI in the workplace: understanding the risks 

Whilst there are many clear advantages to using AI, if not used carefully, it can lead to unfair treatment and subsequent claims or risk exposure – especially with respect to dismissals, discrimination and data privacy.  

Discrimination  

The Equality Act 2010 protects people from being treated unfairly because of characteristics such as age, race, sex, or disability. We have already seen several reported cases where AI systems have, or could, unintentionally breach these protections, for example:

  • Direct Discrimination (i.e. less favourable treatment because of a protected characteristic, even if caused by biased AI)

    • A combination of personal information, such as sickness absence, education, employment history, length of service and seniority, could serve as a proxy for a protected characteristic. For example, an algorithm could conclude that a certain combination of data points could identify a given candidate as a woman, determine that women are less likely to be high performers in a particular role than men, and then reject a particular candidate. Or video interview analysis tools that assess facial expressions and tone may disadvantage neurodiverse candidates or those with speech impairments. 

  • Indirect Discrimination (i.e. seemingly neutral requirements applied to all individuals but which place those with a protected characteristic at a group disadvantage, where the requirement is not objectively justified).

    • For example, some tools which use facial recognition struggle to identify those with darker skin tones. In a 2024 case supported by the EHRC, an Uber Eats driver received a financial settlement after claiming that facial recognition checks required to access his work app were racially discriminatory, leaving him unable to access the app and secure work.

    • AI tools which analyse data on the previous availability and productivity of workers can also lead to reduced shifts being offered to an employee who has low availability or productivity due to a disability. 

    • Whilst the reason ‘why’ the neutral requirement causes the disadvantage may sometimes be obvious, there is no requirement to identify or explain the reason why; the mere fact it causes a group disadvantage is enough, unless it is objectively justified.  

  • Discrimination Arising from Disability (i.e. unfavourable treatment because of something arising because of a disability that cannot be objectively justified).  

    • For instance, if an employer disciplines an employee for high levels of absence based on data produced by an AI management tool, but the absences are linked to a disability that the AI system has failed to recognise. Or if an automated interview screens out an autistic candidate because they avoid eye contact.

  • Reasonable Adjustments (i.e. employers have a duty to make reasonable adjustments for disabled employees). 

    • Employers must adapt processes and make reasonable adjustments, including to AI tools, to avoid disadvantaging disabled individuals. For example, someone with a speech impediment or neurodiversity may be at a disadvantage in an automated video interview so should be offered an alternative.  

Unfair dismissal 

When AI tools are relied on, there is a risk of irrational or unfair decision-making. This is further complicated by the fact that many AI tools are designed in the US without regard to UK discrimination laws. Managers may not understand how algorithms work, or how to interpret and use any resulting data. This can be problematic when AI forms the basis for important decisions, such as dismissing an employee for conduct or capability. There is also the risk that AI gathers evidence on employees which employers would then have to demonstrate was not the reason for dismissal – for example, evidence about whether employees intend to unionise – giving rise to the risk of an unfair dismissal claim. This risk will be exacerbated further when the proposed changes to unfair dismissal rights under the Employment Rights Bill come into force.

Who is liable? 

A potential complication in AI-assisted recruitment and employee management decisions is “who is liable?” 

Discrimination or facts giving rise to an unfair or constructive dismissal may arise at any step in the process: when the programmers created the software; when the training data was collected; when the algorithm was honed using the training data; when an individual candidate’s application was processed by the software; when the employer made the final decision. 

Ultimately, the tests and considerations which apply will vary depending on the type of claim pursued. However, there is likely to be an investigation into how far a manager who relied on AI data or outputs understood what the tool considered, and whether its output was rational and reasonable.

Data protection concerns 

AI systems often handle large volumes of personal data. Using personal data without a lawful basis, or failing to inform employees how their personal data is being used, may breach the UK GDPR, particularly with regard to transparency and consent. It is also paramount to ensure that data policies and privacy notices account for any use of AI in the business – again, to aid transparency and regulatory compliance.

If you have any concerns regarding data protection, then please do not hesitate to contact one of our specialist data protection lawyers.

Final thoughts on managing risk  

AI can be a powerful tool, but it must be used with care. It is something which is here to stay and so employers will need to learn how to embrace its uses in a safe and measured way. Here are some suggested points to consider: 

  • Carry out a risk assessment to identify and mitigate potential areas of concern with each AI tool used.  

  • Carry out appropriate due diligence when procuring an AI system that will be used in the workplace. This will help identify and assess related operational and legal risks.  

  • Consider whether consultation is required or anticipated. Consultation with a trade union, works council or other staff association may be required. 

  • Be transparent about the use of AI tools. Provide full information to candidates and employees about profiling, automated decision-making and monitoring practices.

  • Carry out a data protection impact assessment (DPIA) to assess the necessity and proportionality of data processing that uses AI. 

  • Monitor protected characteristics to identify bias and discrimination risk. 

  • Make reasonable adjustments for candidates and employees with disabilities. 

  • Allow for flexibility and put alternative measures in place where a candidate or employee has a protected characteristic which means the AI tool may not function as intended.

  • Train HR and managers on how to understand algorithms and interpret any resulting data, including checking the accuracy of the data relied on. 

  • Ensure a human manager has final responsibility for any decisions, particularly where there is the potential for dismissal. 

  • Be able, as far as possible, to explain and justify the basis for decisions made using AI tools if they were to be challenged. 

  • Collect only the minimum possible information to achieve the purpose of the relevant AI tool and ensure that this information is only processed for that limited purpose and is not stored, shared or reprocessed for any alternative purpose. 
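Several of the points above – monitoring protected characteristics and checking the accuracy of data relied on – come down to comparing outcomes across groups. As a minimal, hypothetical sketch of what such a check might look like, the snippet below applies the "four-fifths rule", a threshold that originates in US regulatory guidance but is often used as a rough first screen for adverse impact. The group labels and figures are invented for illustration; a low ratio is a prompt for closer investigation, not proof of unlawful discrimination under the Equality Act 2010.

```python
# Hypothetical illustration: a simple "four-fifths rule" screen for
# adverse impact in selection outcomes. All figures are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants in a group who were selected."""
    return selected / applicants

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted([rate_a, rate_b])
    return low / high

# Invented screening outcomes for two groups of candidates.
rate_group_a = selection_rate(selected=48, applicants=120)  # 0.40
rate_group_b = selection_rate(selected=18, applicants=100)  # 0.18

ratio = impact_ratio(rate_group_a, rate_group_b)  # 0.45

# A ratio below 0.8 is a common flag that the tool's outcomes warrant
# investigation; it is not, by itself, evidence of unlawful discrimination.
flagged = ratio < 0.8
print(f"impact ratio: {ratio:.2f}, flagged: {flagged}")
```

In practice this kind of check would be run regularly across each protected characteristic for which data is lawfully held, with results feeding into the risk assessment and DPIA described above.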

Ultimately, this is an area which continues to develop as more AI tools and products are launched and developed. We will continue to keep a watchful eye on case law progress, changes to legislation and provide continued guidance. 

This article was co-authored by trainee Samantha Hambridge.
