Impact of the CJEU's Schufa judgment on the use of AI in HR

Germany

This article examines the extent to which the CJEU's Schufa judgment is an obstacle to the use of artificial intelligence (AI) in the HR sector.

More and more companies are using AI systems in HR. A key data protection provision in this context is Article 22 (1) GDPR. Under this provision, every person has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. On 7 December 2023, the CJEU caused a stir in the media with its Schufa judgment (CJEU, judgment of 7 December 2023 – C-634/21) (see also our blog post from 22 February 2024), in which the Luxembourg-based supreme judicial body of the European Union ruled on the Schufa credit-scoring system. This article explores what the judgment means for the use of AI in HR.

What was the Schufa judgment about?

The claimant had been denied a loan by her bank due to an unsatisfactory Schufa score. After she was refused full access to the data underlying that score, the matter came before the Wiesbaden Administrative Court, which referred it to the CJEU for a preliminary ruling. The CJEU had to settle the question of whether Schufa's scoring procedure constitutes "automated individual decision-making" within the meaning of Article 22 (1) GDPR, assuming that the loan was in practice denied solely on the basis of the credit score. The legal classification of the Schufa score was in doubt because it is not Schufa itself that makes the final decision on whether, for example, to grant a loan. Schufa merely provides third parties with a probability value intended to indicate how well or poorly a person will fulfil their financial obligations.

CJEU: Human versus machine – a broad understanding of "decision"

The CJEU made clear that the prohibition on decisions based solely on automated processing under Article 22 (1) GDPR can already be triggered before the final decision (for example whether to establish, perform or terminate a contract) is actually taken. It held that the term "decision" must be interpreted broadly in light of the provision's protective purpose and must not be undermined by a three-party relationship. It is therefore sufficient that a probability value such as the Schufa score is determined and made available to a third party, provided that this value "significantly" influences the decision ultimately taken. However, the CJEU left open what counts as a "significant" influence, i.e. how much latitude the final human decision-maker must retain.

Automated decision-making in HR

The CJEU has thus established that, contrary to the previous assumptions of notable voices in legal doctrine, an automated decision can already exist where an automated assessment prepared by a third party significantly influences a subsequent legal decision (e.g. on whether to enter into a contract). The question therefore inevitably arises as to whether and to what extent the Luxembourg court's reasoning applies to the HR sector.

Like the Schufa credit agency, many AI applications create analyses that precede the "actual" decision-making process, which is why it will now be necessary to check in each of these cases whether and to what extent automated decision-making can be assumed. Insofar as such automated analyses in practice reflect the result of the AI's independent assessment, based on criteria and algorithms that are often difficult to comprehend, they are readily comparable with the criticised methodology behind the Schufa score. Accordingly, a meaningful connection to a human assessment is also required here: the person making the final decision must have the necessary expertise and, in particular, must be given sufficient time to adequately review the purely automated preliminary assessment and reach their own expert decision.

Check three requirements

Nevertheless, the conclusions of the Schufa judgment cannot be applied directly to every AI application. Instead, users must carefully review in three steps whether the specific AI application falls within the scope of Article 22 (1) GDPR and is therefore subject to the CJEU's strict requirements: (1) is there a decision, (2) is it based solely on automated processing, and (3) does it produce legal effects vis-à-vis the data subject or similarly significantly affect them? Microsoft Purview can be used as an example to illustrate this:

This is a platform for data, governance and compliance management aimed at helping companies manage data across different cloud and on-premises environments. The integrated AI is intended to provide more efficient data classification and corresponding monitoring. It is designed to automatically identify and classify certain data in order to ultimately promote well-founded decision-making.

A closer look at the mechanisms reveals that employers do not use Purview to make decisions based solely on automated processing, nor do the statements generated by Purview have any legal effect on employees or significantly affect them. Unlike the external assessments in connection with the Schufa score, the internal risk analyses generated by Purview aim to protect employers from insider risks. Individual members of staff are not evaluated as such; instead, the assessment focuses on the data exfiltration actions actually carried out. Moreover, the actual evaluation of the detected risks, and thus the final decision, is by necessity the responsibility of a human being and not of a machine. If, in an individual case, a process detected by Purview actually leads to consequences under employment law, such as a formal warning or dismissal, such measures must be preceded by careful clarification based on human judgement – this at least ensures that the automated analysis has no "significant" influence within the meaning of the judgment.
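The cumulative three-step check described above can be sketched in a purely illustrative code fragment (all names are hypothetical; this is a didactic aid reflecting the structure of the test, not a compliance tool or legal advice):

```python
from dataclasses import dataclass

@dataclass
class AiAssessment:
    """Hypothetical model of an AI-generated assessment for the Art. 22 (1) check."""
    is_decision: bool              # Step 1: does the output amount to a "decision"?
    solely_automated: bool         # Step 2: is it based solely on automated processing,
                                   #         i.e. without meaningful human review?
    legal_or_similar_effect: bool  # Step 3: legal effect, or similarly significant effect?

def falls_under_article_22(a: AiAssessment) -> bool:
    """The three criteria are cumulative: only if all are met does the
    application fall within the scope of Article 22 (1) GDPR."""
    return a.is_decision and a.solely_automated and a.legal_or_similar_effect

# An analysis reviewed by a human with sufficient expertise and time is not
# "solely automated", so the prohibition in principle does not apply.
print(falls_under_article_22(AiAssessment(True, False, True)))  # → False
```

The cumulative structure is the key design point: removing any one element (for instance by inserting a genuine human review step) takes the application outside Article 22 (1) GDPR.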

Exceptions to the prohibition on automated decisions

By issuing this judgment on Schufa scores, the CJEU has, whether deliberately or not, extended the previously incomplete GDPR legal framework on AI to many new areas. Whether the use of a given AI application meets the criteria of Article 22 (1) GDPR, as the Schufa process does, must be decided on a case-by-case basis. Even where it does, use of the results may still be permissible: automated individual decision-making that produces a legal effect or similarly significantly affects the data subject is admissible if it is necessary for entering into or performing a contract between the data subject and the controller, if it is authorised by law, or if it is based on the data subject's explicit consent (Article 22 (2) GDPR). Whether a German provision – in particular Section 31 of the German Federal Data Protection Act (BDSG) – can serve as such a statutory authorisation was left open by the CJEU in the Schufa case, although it expressed strong doubts.

For more information, contact your CMS client partner or these CMS experts: Dr Inka Knappertsbusch and Markus Theißen.