Employment law and autonomous decision making: reflections on the AI White Paper and Article 22 UK GDPR

Posted on 09 May, 2023
By: Andrew Burns KC | John Platts-Mills

The AI White Paper

‘Artificial intelligence’ and the White Paper’s definition

 

  1. As noted in October 2022 by Carnegie’s Matt O’Shaughnessy, one of the biggest problems in regulating AI is agreeing on a definition. The recent government White Paper (A pro-innovation approach to AI regulation), published 29.03.23, proposes defining it by reference to two characteristics: its adaptivity and its autonomy.

 

  1. The ‘adaptivity’ of AI can make it difficult to explain the intent or logic of the system’s outcomes. AI systems are ‘trained’ – once or continually – and operate by inferring patterns and connections in data which are often not easily discernible to humans. Through such training, AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.
  2. The ‘autonomy’ of AI can make it difficult to assign responsibility for outcomes. Some AI systems can make decisions without the express intent or ongoing control of a human.

 

  2. The combination of adaptivity and autonomy can make it difficult to explain, predict, or control outputs, or the underlying logic by which they are generated. It can also be challenging to allocate responsibility for a system’s operation and outputs. The definition, as noted in the White Paper, is intended to be broad enough to cover foundation models, including large language models – which have the power to write software, generate stories through film and virtual reality, and more.

 

 

The proposed framework

  3. The Government’s proposal is to construct a framework for regulators to interpret and apply to AI within their remit. The framework will be underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy:

 

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

 

  4. However, in so doing, the Government has opted against both statutory intervention and the creation of a dedicated regulator. Instead, the Government anticipates introducing a statutory duty requiring existing regulators to have due regard to its principles. The need for central support functions to ensure monitoring and evaluation is recognised.

 

  5. The Government has described its approach as ‘proportionate, adaptable, and context-sensitive to strike the right balance between responding to risks and maximising opportunities’. However, the Government repeatedly makes clear its intention to establish the UK as ‘the best place to research AI and to create and build innovative AI companies’. It is difficult to escape the conclusion that the Government is looking to facilitate a more accommodating, pro-business framework. This would be ‘light touch’ regulation compared to the AI Act being considered by the European Union, on which the European Parliament reached a provisional agreement on 27 April 2023.

 

  6. The approach is a far cry from the Accountability for Algorithms Act advocated by the Institute for the Future of Work in its 2020 report ‘Mind the Gap: How to fill the equality and AI accountability gap in an automated world’. In that report, the Institute for the Future of Work observed that ‘the natural state of data-driven technologies is to replicate past patterns of structural inequality that are encoded in data, and project them into the future’. In its view, this necessitates active, deliberate steps to ensure algorithms ‘promote equality rather than entrench inequality’. The White Paper envisages this equality risk being addressed by EHRC/ICO guidance rather than legislation.

 

Some considerations for employers and autonomous decision making

  7. Perhaps unsurprisingly, the Government has set out what it hopes will be a flexible and responsive scheme – capable of keeping pace with technological advancement – and the ambition is to be welcomed. In the meantime, employers will have to give careful consideration to the risks resulting from the adoption of ML systems, whether in hiring decisions, job advertising or performance management, in light of the law as it stands.

 

  8. A key feature of the current framework in the employment context is Article 22 UK GDPR – which gives people the right not to be subject to solely automated decisions, including profiling, which have a legal or similarly significant effect on them:

 

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision:

(a). is necessary for entering into, or performance of, a contract between the data subject and a data controller;

(b). is [required or authorised by domestic law] which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or

(c). is based on the data subject's explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

3A. Section 14 of the 2018 Act, and regulations under that section, make provision to safeguard data subjects' rights, freedoms and legitimate interests in cases that fall within point (b) of paragraph 2 (but not within point (a) or (c) of that paragraph).

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject's rights and freedoms and legitimate interests are in place.
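
To make the structure of the provision easier to follow, the sketch below encodes the questions it poses as a minimal, purely illustrative Python fragment (the class, field names and example are our own hypothetical constructs, not a compliance tool): is the decision solely automated, does it produce the requisite effect, and is one of the Article 22(2) exceptions available?

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical record of an employment-related decision."""
    solely_automated: bool        # no meaningful human involvement?
    significant_effect: bool      # legal or similarly significant effect?
    necessary_for_contract: bool  # Art. 22(2)(a)
    authorised_by_law: bool       # Art. 22(2)(b)
    explicit_consent: bool        # Art. 22(2)(c)

def article_22_engaged(d: Decision) -> bool:
    """Art. 22(1): the right applies only where the decision is both
    solely automated and productive of legal/similarly significant effects."""
    return d.solely_automated and d.significant_effect

def exception_available(d: Decision) -> bool:
    """Art. 22(2): the right does not apply where one of the three
    exceptions is made out (each subject to further safeguards)."""
    return d.necessary_for_contract or d.authorised_by_law or d.explicit_consent

# Example: an automated CV sift that rejects candidates outright.
sift = Decision(solely_automated=True, significant_effect=True,
                necessary_for_contract=False, authorised_by_law=False,
                explicit_consent=False)
if article_22_engaged(sift) and not exception_available(sift):
    print("Article 22(1) engaged and no exception available - rework the process")
```

Even where an exception applies, Article 22(3) requires suitable safeguards, including the right to obtain human intervention, so the flags above are a starting point rather than the end of the analysis.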

 

  9. ‘Profiling’ is defined in Article 4(4) – broadly speaking, it covers any form of automated processing of personal data to evaluate certain things about an individual, for instance, tracking their performance at work.

 

  10. The provision is likely to give rise to a number of difficult questions in an employment context, including:

 

  1. is a decision ‘based solely on automated processing’;
  2. is it a decision which ‘produces legal effects concerning him or her or similarly significantly affects him or her’;
  3. can the ‘explicit consent’ exclusion be relied upon in an employment context; and
  4. what needs to be done if a worker submits an Article 15(1)(h) request for information in relation to autonomous decision making?

 

  11. Below we have sought to identify some potential guidance and sources of insight into how these issues might be approached.

 

Guidance from the ICO

  12. The ICO’s ‘Automated decision-making and profiling’ guidance (the ‘ADM Guidance’) cites ‘A recruitment aptitude test which uses pre-programmed algorithms and criteria’ as an example of potential automated decision making. As to the scope of ‘solely’, it identifies the linking of a factory worker’s pay to productivity, which is monitored automatically, as falling on the wrong side of the line.

 

  13. Pursuant to the ADM Guidance, a process will not be considered ‘solely’ automated if someone weighs up and interprets the result of an automated decision before applying it; however, ‘human involvement has to be active and not just a token gesture’. If warnings are linked to automatically monitored attendance records, a key issue will be the extent of HR or management involvement in the decision to issue a warning. Before the advent of the Disability Discrimination Act, employers often enforced inflexible attendance thresholds, but the duty to make reasonable adjustments made individual discretion and consideration the norm. It remains to be seen whether AI can take account of varying equality factors or whether it will be saddled with the inherent bias of institutions or programmers.

 

  14. The ICO’s ‘Guidance on AI and data protection’, updated 15.03.23 (the ‘AI Guidance’), comments that human review must be ‘meaningful’ and that ‘in most cases, for human review to be meaningful, human involvement should come after the automated decision has taken place and it must relate to the actual outcome’.

 

  15. It may well be difficult to ensure (and establish on the evidence) that such a review is, in fact, taking place and is more than tokenistic. There are already concerns about AI ‘paternalism’, and a study of oncologists published in 2020 observed instances of clinicians accepting the view of AI even when it contradicted their initial diagnosis. This is reflected in the AI Guidance, which provides: ‘To mitigate this risk, you should ensure that people assigned to provide human oversight remain engaged, critical and able to challenge the system’s outputs wherever appropriate’.
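
How such engagement might be evidenced will depend on context, but one illustrative approach is a contemporaneous audit record of each human review. The Python sketch below is a minimal, hypothetical schema of our own devising (the function and field names are assumptions, not drawn from the ICO guidance): it records that a named reviewer considered the actual outcome after the automated decision was produced, gave substantive reasons, and had the power to overturn it.

```python
import datetime

def record_human_review(case_id: str, automated_outcome: str,
                        reviewer: str, reasons: str, overturned: bool) -> dict:
    """Capture evidence that a human reviewed the actual outcome of an
    automated decision, after it was produced, with substantive reasons -
    the kind of audit trail that may help show the involvement was active
    rather than a token gesture."""
    if not reasons.strip():
        raise ValueError("A reasoned assessment is required; a bare sign-off "
                         "risks being treated as purely symbolic.")
    return {
        "case_id": case_id,
        "automated_outcome": automated_outcome,
        "reviewer": reviewer,
        "reviewed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reasons": reasons,
        "outcome_overturned": overturned,
    }

# Example: a manager reviews an attendance-triggered warning before it issues.
entry = record_human_review(
    case_id="HR-2023-042",
    automated_outcome="warning_recommended",
    reviewer="line.manager@example.com",
    reasons="Absences relate to a disclosed disability; warning not issued.",
    overturned=True,
)
print(entry["reviewed_at"], entry["outcome_overturned"])
```

A log of this kind is no answer in itself – reviewers who rubber-stamp every recommendation will still struggle to show meaningful involvement – but it at least creates the evidence base on which the question can be tested.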

 

  16. As to the requirement of a ‘significant effect’, the ADM Guidance provides the following example: as part of their recruitment process, an organisation decides to interview certain people based entirely on the results achieved in an online aptitude test. There will be no shortage of scenarios where there is the requisite significant effect in an employment context.

 

  17. As to consent: consent generally under the UK GDPR must be a freely given, specific, informed and unambiguous affirmative indication of the individual’s wishes. The ADM Guidance notes that explicit consent means that the individual should expressly confirm their consent, for example by a written statement, filling in an electronic form or sending an email. As to the issues that may arise in an employment context, the Institute for the Future of Work has suggested, not unreasonably, that it is ‘unlikely that employees will be able to give their consent freely due to the inherent imbalance of power between employer and employee’.

 

Guidance from Europe

  18. For present purposes, the wording of Article 22 UK GDPR tracks that of the GDPR – which was recently considered by the Amsterdam Court of Appeal in Drivers v Uber and Ola. Whilst the decisions raise issues specific to Dutch law, including the Dutch General Data Protection Regulation Implementation Act, they also afford an insight into the sorts of issues that may arise in the employment context in consequence of the adoption of ML systems, and how they might be approached – as a matter of EU law and potentially under the UK GDPR.

 

  19. The Amsterdam Court of Appeal determined that decision-making by means of the ‘batched matching system’ (the automated system by which Uber links drivers to passengers), the ‘up-front pricing system’ (whether or not in combination with the use of dynamic tariffs) and the determination of average ratings ‘significantly’ affected the drivers within the meaning of Article 22(1) GDPR – in light of their direct (or, in the case of average ratings, indirect) impact on the drivers’ income.

 

  20. Uber did not dispute that the decisions were based solely on automated processing, such that they fell within the scope of Article 22(1) GDPR, and therefore within the scope of the Article 15(1)(h) obligation to provide information in relation to such matters.

 

  21. As to how an Article 15(1)(h) request should be acted upon: Article 15(1)(h) GDPR gives the data subject the right to information about the existence of automated decision-making, including profiling, and, if so, ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject’ (wording replicated in the UK GDPR). In determining the scope of the obligation, the Dutch court relied upon guidelines adopted by the European Data Protection Board (‘EDPB’) in 2017, which included the following: ‘The controller must provide the data subject with general information (in particular on factors taken into account in the decision-making process, and their respective "weighting" at an aggregated level) that is also useful to him or her to challenge the decision’. In the court’s view: ‘The information provided must be complete enough for the data subject to be able to understand the reasons for the decision’. EDPB guidelines are no longer directly relevant to, or binding under, the UK regime; however, as the ICO has noted, they may still provide helpful guidance on certain issues.
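
What ‘meaningful information about the logic involved’, and ‘weighting’ at an aggregated level, might look like will vary from system to system. For a simple linear model, one illustrative possibility – a sketch assuming scikit-learn is available, with feature names and data invented purely for illustration – is to report the model’s learned weightings in aggregate, rather than per-candidate reasons:

```python
# Illustrative only: extracting aggregate feature weightings from a
# hypothetical shortlisting model to help answer an Art. 15(1)(h) request.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["aptitude_score", "years_experience", "assessment_grade"]
X = np.array([[72, 3, 2], [55, 1, 1], [88, 6, 3], [61, 2, 1],
              [79, 4, 3], [50, 0, 1], [90, 8, 3], [66, 2, 2]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = shortlisted for interview

model = LogisticRegression(max_iter=1000).fit(X, y)

# Report weightings at an aggregated level (not individual explanations).
for name, coef in sorted(zip(feature_names, model.coef_[0]),
                         key=lambda t: abs(t[1]), reverse=True):
    print(f"{name}: {coef:+.3f}")
```

For more opaque models, the same aggregate framing might be approximated with model-agnostic techniques (for example, permutation importance), but the underlying point holds: the information must be complete enough for the data subject to understand, and challenge, the reasons for the decision.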

 

  22. In the second Uber case, drivers were notified that fraudulent activity had been identified on their accounts and that, in consequence, their accounts would be deactivated. The court concluded that the circumstances constituted a ‘decision based solely on automated processing’ within the meaning of Article 22(1). Uber’s position was that members of its Risk team were sufficiently involved in the decision, albeit they used ERAF software capable of detecting multiple fraudulent activities, such as when a driver has been repeatedly involved in cancelled trips in a short period of time. The court was not satisfied that the limited documentary evidence provided showed more than a purely symbolic act, or that those employees had included all relevant data in their analysis, as required by the EDPB guidelines.

 

Concluding remarks

  23. Whether a decision is ‘based solely on automated processing’ is likely to be a key point of dispute in an employment context. As to the English courts’ likely approach: there is a degree of consistency between the ICO and the Dutch courts – human involvement must be more than purely symbolic, not a mere token gesture. It is likely to require the weighing up and interpretation of the result of an automated decision before applying it. Employers should consider what steps to take to ensure that managers or decision makers assigned to provide human oversight remain engaged, critical and able to challenge the system’s outputs.

 

  24. Given the realities of the employment relationship, there would appear to be a risk in employers relying solely upon the ‘explicit consent’ exclusion. Consideration should be given to how the exception relating to ‘necessity for entering into/performing a contract’ – which will, of course, be the contract of employment – can be relied upon.

 

  25. The other substantial question is that of enforcement. Employees and workers are generally given employment protection by virtue of statutory obligations placed on employers, which can be enforced by claims for compensation in Employment Tribunals. There is a question whether employers will accord the same respect to non-statutory guidance, particularly in spheres of industry or commerce where there is no regulator.

 

  26. Finally, as to the extent of the obligation to provide information in relation to autonomous decision-making pursuant to Article 15: how ‘meaningful information about the logic involved’ can be collated should be given careful consideration, sooner rather than later, despite the likely pushback from commercial teams looking to accelerate adoption.

 

