Written by Matthew Howse and Jessica Rogers of Morgan Lewis & Bockius LLP on June 12, 2025
This chapter was first published in April 2023. Be advised that some of the content below may no longer apply.
The past decade has seen artificial intelligence (AI) innovations across almost every sector. In the employment context, AI has the potential to significantly impact all stages of the employment relationship. The corresponding disruption to human resources practices gives rise to both benefits and legal risks for employers.
Impact of AI on employment relationships
There is no single, recognised definition for the term ‘artificial intelligence’. The UK Information Commissioner’s Office defines it as ‘an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking’.
For employers, the adoption of AI-based tools can increase the speed and scale of previously manual processes. In the modern, hybrid-working world brought about by the covid-19 pandemic, AI tools are an attractive way for employers to reduce resource time and costs.
Recruitment is a key area in which AI tools offer significant benefits to employers. In the current competitive labour market, employers are under pressure to process applications faster and to attract more diverse and better-qualified workers. AI tools can use natural-language processing to help hiring managers write job descriptions by linking the language used to a data set of outcomes, allowing the hiring manager to craft job descriptions that maximise the likelihood of attracting the best candidates. Algorithms are also commonly used to recommend the purchase of advertising on different recruitment platforms based on matching candidates' data to job postings.
AI tools can also be used to review and filter job applications on a larger scale than would be feasible for most in-house talent acquisition teams. AI can screen applicants and exclude those who do not meet minimum requirements or are likely to be a poor fit, and even assess interview performance by using natural-language processing and interview analytics to determine a candidate’s suitability considering their soft skills and personality traits. AI-driven chatbots powered by natural-language processing can be tasked with responding to candidate questions during the recruitment process. In turn, this reduces the amount of time that needs to be spent on these tasks by talent-sourcing specialists and human resources departments, allowing them to focus on other valuable work. Human input is required only in the latter stages of the recruitment process (eg, final interviews, negotiations and offers).
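To make the screening step described above concrete, the sketch below shows, in Python, the simplest form such an exclusion rule could take. Real screening products use trained models rather than hand-written rules, and the field names and criteria here (`years_experience`, `skills`, a minimum of three years) are entirely hypothetical.

```python
# Hypothetical, minimal sketch of an automated minimum-requirements filter.
# Commercial AI screening tools use trained models; this rule-based version
# only illustrates the kind of hard-cutoff exclusion described in the text.

def meets_minimum_requirements(candidate, min_years=3, required_skills=("python",)):
    """Return True if the candidate clears the (hypothetical) hard requirements."""
    if candidate.get("years_experience", 0) < min_years:
        return False
    skills = {s.lower() for s in candidate.get("skills", [])}
    return all(skill in skills for skill in required_skills)

applicants = [
    {"name": "A", "years_experience": 5, "skills": ["Python", "SQL"]},
    {"name": "B", "years_experience": 1, "skills": ["Python"]},
]
shortlist = [a["name"] for a in applicants if meets_minimum_requirements(a)]
print(shortlist)  # ['A']
```

Even in this toy form, the legal risk discussed later is visible: a cutoff that correlates with a protected characteristic would exclude candidates with no human ever seeing the application.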
Despite the obvious efficiencies, there are some key risks and associated safeguards that employers should consider before implementing AI technology in their employment cycle.
Regulation
Most countries currently have no specific regulatory framework governing the use of AI. However, it is likely to be partially regulated under existing employment and data privacy laws, and the use of AI is a growing area of focus for regulators around the world.
Discrimination
In the United Kingdom, it is unlawful for an employer to discriminate against candidates or employees based on protected characteristics (namely, age, disability, gender reassignment, marriage or civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation). The use of AI can result in indirect discrimination claims where someone with a protected characteristic suffers a disadvantage because of an algorithm's output.
An algorithm that demonstrates a preference for particular traits that, in practice, results in men being favoured over women in recruitment selection is an example of indirect discrimination, on the basis that the algorithm places women at a particular disadvantage because of their sex. To defend such a claim in the United Kingdom, the employer would need to show that the use of AI was a proportionate means of achieving a legitimate aim. While the use of technology to streamline the recruitment process may be a legitimate aim, it is difficult to see how such a tool, which can have significant implications for a candidate, could be a proportionate means of achieving that aim without any human oversight.
The use of AI may also create other legal risks for employers. Disabled people may face particular disadvantages in undertaking automated processes or interviews. For example, some systems read or assess a candidate’s facial expression or response, the level of eye contact, tone of voice and language, which could disadvantage candidates with visual or hearing impairments, candidates who are on the autism spectrum or candidates who have a facial disfigurement. Given the obligation under many anti-discrimination laws to make reasonable adjustments to remove disadvantages for disabled people, an employer could potentially find itself in breach of discrimination laws when using AI software as a blanket approach.
Language and tone-of-voice assessments can also disadvantage candidates whose first language is not the language assessed by the AI tool, increasing the risk of negative racial or nationality bias and of unlawful discrimination claims based on race.
Data protection
It is likely that the use of AI during the employment life cycle will involve the processing of candidate and employee personal data. Employers should therefore be mindful of their obligations under data privacy regulation, with particular regard paid to three key principles: lawfulness, fairness and transparency.
The use of AI technology to make employment decisions without human scrutiny will be considered a solely automated decision. The EU General Data Protection Regulation and its equivalent laws elsewhere in the world restrict an employer from making solely automated decisions that have a significant impact on data subjects unless such automation:
- is authorised by law;
- is necessary for a contract; or
- occurs after explicit consent has been given.
In the latter two cases, specific safeguards should be in place, such as a mechanism for the individual to challenge the decision and obtain human intervention with respect to the decision. Essentially, this is a human appeal process.
The processing of special category personal data such as health or biometric data is further restricted unless processed on specific and lawful grounds.
Any use of AI is likely to require a data protection impact assessment. If high risks to the rights of individuals cannot be mitigated, prior consultation with a relevant supervisory authority is required, and the AI technology cannot be deployed without the consent of such an authority.
Future trends
AI is likely to continue to be a hot topic throughout 2023 and beyond. Several jurisdictions have recently introduced AI-focused legislation. As of 1 January 2023, New York City prohibits the use of AI tools in employment decisions unless such use has been subject to a bias audit and is disclosed to candidates, alongside the option to request an alternative process. In 2021, the European Commission proposed a new legal framework to address the risks of AI use, which would set out requirements and obligations regarding the use of AI and its high-risk applications, together with an enforcement and governance structure. In Germany, employers already have an obligation to consult the works council (a consultative body that represents workers) when introducing AI into the workplace.
In the United Kingdom, the government published a policy paper on AI regulation in 2022, and a White Paper is expected to provide further guidance on the topic in the coming months.
Next steps for employers
The use of AI technology is developing rapidly, and there are several steps that employers can take to introduce innovative technology while minimising legal risk.
It is essential to ensure that suitably qualified, skilled and experienced individuals are responsible for the development and use of AI to minimise the risks of bias and discrimination. The provider of the technology should be able to demonstrate that the data and algorithms have been stress-tested for bias and discrimination against candidates because of, for example, their gender, race or age, and disparate impact assessments should be conducted on a regular and ongoing basis.
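To illustrate what a regular disparate impact assessment might measure, one widely used heuristic (the "four-fifths rule" from US employment guidance) compares selection rates between groups. The Python sketch below assumes simple selected/total counts and hypothetical group labels; an actual audit would be more sophisticated and jurisdiction-specific.

```python
# Hypothetical sketch of a disparate impact check using the "four-fifths rule":
# a group whose selection rate falls below 80% of the highest group's rate is
# commonly treated as showing adverse impact and warranting further review.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total). Returns rate per group."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below `threshold` of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit data: 40 of 100 men selected versus 20 of 100 women.
flags = adverse_impact({"men": (40, 100), "women": (20, 100)})
print(flags)  # {'men': False, 'women': True}
```

A check of this kind is a monitoring tool, not a defence in itself: under UK law the question would remain whether any disparity can be justified as a proportionate means of achieving a legitimate aim.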
Prior to implementing AI technology, employers should consider whether a data protection impact assessment is required. Employers will also need to consider and identify the lawful basis for processing personal data before proceeding with any automated profiling or decision-making. Employers should proactively update candidate and employee privacy notices to highlight the use of AI technology in the processing of personal data.
Many human resources practitioners have had limited experience with AI to date. As a priority, employers should ensure that staff are adequately trained in relation to the use of AI and possible pitfalls so that they can readily identify any potential areas of concern. More broadly, employers should have in place clear and transparent policies and practices around the use of AI in recruitment and employment decisions.
Employers can introduce further practical safeguards to mitigate the risks inherent in the use of AI (eg, by identifying appropriate personnel to actively weigh up and interpret recommendations and decisions made by AI in the recruitment process before they are applied to any individual). It is important that meaningful human reviews are carried out, as data privacy restrictions cannot be avoided by simply rubber-stamping automated decisions. With this in mind, AI should be used as only one element of assistance in recruitment decisions. Employers should also ensure that their processes allow for human intervention; if a candidate needs adjustments because of a disability, employers should make it clear with whom and how the candidate should make contact to discuss what might be required.