"Automatically disadvantaged?" – Discrimination in the use of AI in the workplace

Germany

Despite the enormous potential of artificial intelligence (AI) in the workplace, evidence shows that AI harbours the risk of perpetuating discriminatory decision-making patterns. There are increasing calls for regulatory measures to be introduced to counteract this.

AI is making ever greater inroads into our working environment, and it will soon be hard to imagine working life without it. However, the increasing integration of AI systems in the workplace also presents employers with new challenges. It is not uncommon for the design of an algorithm to reinforce disadvantages, in hiring or promotion decisions for example, that are already latently present in the training data. This raises the question of how algorithmic discrimination by autonomously acting AI systems can be tackled legally.

The German General Act on Equal Treatment (AGG) offers protection with limited enforceability

The German General Act on Equal Treatment (AGG) is the core body of rules against discrimination in the employment context. Employees may not be discriminated against, either directly or indirectly, on the grounds of race or ethnic origin, gender, religion or belief, disability, age or sexual identity. A clear example would be an AI system that, when a tech position is being filled, uses male gender as a positive evaluation factor because it compares applicants against the employer's own male-dominated workforce. Insofar as human involvement is required, liability can be based on the input of discriminatory data or on a (qualified) failure to comply with legal monitoring obligations, such as those arising from the AI Act; employers need not themselves be aware that discrimination on one of the grounds listed in section 1 AGG is taking place.
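How such a pattern can arise technically is illustrated by the following simplified Python sketch. The data and feature names are purely hypothetical and do not reflect any real employer's system; the point is merely that a model trained on a male-dominated hiring history learns gender as a positive factor even though it is irrelevant to the job:

```python
# Simplified sketch with hypothetical, synthetic data: a classifier trained
# on a male-dominated hiring history learns gender as a positive signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)        # 1 = male, 0 = female
skill = rng.normal(0, 1, n)           # genuinely job-relevant feature

# Historical decisions favoured male applicants regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The learned weight on 'gender' is large and positive: the system
# reproduces the bias contained in its training data.
print(dict(zip(["skill", "gender"], model.coef_[0])))
```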

According to the provisional agreement reached by the Council and the European Parliament on the AI Act (Draft AI Act), an employer operating a high-risk AI system will be required, for example, to appoint a person to oversee the AI used and to ensure that the input data is sufficiently representative in view of the intended purpose of the system. High-risk systems include, in any case, those intended to be used for recruiting and selecting applicants, for decisions on the promotion or dismissal of employees, and for monitoring their performance and behaviour (Article 6 (2) Draft AI Act in conjunction with Annex III).

In the event of discrimination, the disadvantaged party is entitled to claims under section 15 AGG. The fact that AI decision-making processes are covered under established law reflects the technology-neutral wording of the AGG. However, when attempting to assert these claims effectively, disadvantaged parties often struggle to prove the causal link between the disadvantage and one of the grounds listed in section 1 AGG. This is because the autonomous functioning and decision-making of an AI system is difficult to trace (the so-called "black box effect").

Section 22 AGG eases the burden of proof: the disadvantaged party only has to present circumstantial evidence which, viewed objectively as a whole, indicates a sufficient probability of discrimination based on one of the grounds listed in section 1 AGG; the other party must then prove that no such discrimination took place. Particularly in cases of indirect discrimination, for example where an applicant's accent counts against them when a language-analysis AI creates a personality profile, even this may be difficult, as the disadvantaged party often simply cannot know which attributes the system has correlated. Although this is a general problem in discrimination cases, human decision-making, unlike many AI decision-making processes, does not usually rest on evidence (such as training data sets) that remains hidden from the disadvantaged party.
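Why such correlations are so hard for the disadvantaged party to identify can be shown with another short sketch, again using purely synthetic data and invented feature names: even if the protected attribute itself is withheld from the system, a correlated proxy such as an accent score can carry the same information, so the disadvantage persists without the protected attribute ever appearing in the model's inputs:

```python
# Sketch of indirect (proxy) discrimination with hypothetical data:
# the protected attribute is excluded, but a correlated feature leaks it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
native = rng.integers(0, 2, n)            # protected: native speaker (1) or not (0)
accent = native + rng.normal(0, 0.3, n)   # proxy feature, strongly correlated
skill = rng.normal(0, 1, n)

# Biased historical outcomes favoured native speakers:
hired = (skill + 1.0 * native + rng.normal(0, 1, n)) > 0.5

# 'native' is deliberately NOT given to the model, only the proxy:
X = np.column_stack([skill, accent])
model = LogisticRegression().fit(X, hired)

# The predicted hiring rates still diverge sharply by protected group,
# because the model reconstructs 'native' from the accent proxy.
preds = model.predict(X)
for grp, label in [(1, "native speakers"), (0, "non-native speakers")]:
    print(f"predicted hiring rate for {label}: {preds[native == grp].mean():.0%}")
```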

German Federal Anti-Discrimination Agency aims to extend protection against digital discrimination

In light of these difficulties in enforcing AGG claims where AI is involved, the German Federal Anti-Discrimination Agency (ADS), in its report "Automatisch benachteiligt – Das Allgemeine Gleichbehandlungsgesetz und der Schutz vor Diskriminierung durch algorithmische Entscheidungssysteme" ("Automatically disadvantaged – The General Act on Equal Treatment and protection against discrimination by algorithmic decision-making systems"), calls for measures including a revision of the AGG with regard to the role of AI, comprehensive information and disclosure obligations that allow insights into a system's specific functioning and data, and an adjustment of the reversal of the burden of proof in cases of discrimination by AI that goes beyond section 22 AGG.

In parts, the ADS's report thus resembles the European Commission's proposal for an AI Liability Directive (Draft AI Liability Directive). Although the planned directive is only intended to apply to non-contractual fault-based claims for damages under civil law, it relies on the same instruments the ADS proposes for the AGG: Article 3 Draft AI Liability Directive deals with the disclosure of evidence, while Article 4 establishes a rebuttable presumption of causality.

To counter the problem of opacity, a court-enforceable disclosure claim could therefore arise where the disadvantaged party has unsuccessfully asked the opposing party to disclose the relevant evidence available to it on an AI system suspected of having caused discrimination based on one of the grounds listed in section 1 AGG. If, for example, the language-analysis AI mentioned above was trained exclusively on data sets of German native speakers, disclosure of the system's data basis or functioning could help the disadvantaged person to enforce a claim under section 15 AGG effectively. When weighing the parties' interests, however, it must be ensured that sensitive data and confidential business information are not made public in the course of disclosure.

The idea of transparency in the ADS proposal is also indirectly reflected in data protection law. Article 15 of the General Data Protection Regulation (GDPR) grants data subjects the right to obtain information from the controller as to what data relating to them are stored or processed. They can also obtain further details from the controller, such as the purposes of processing, the origin of the data (if it does not come directly from them) and the recipients to whom the data are transmitted.

Also new is the proposal to give the ADS a right of collective action to pursue systemic violations of the prohibition of discrimination, and to give anti-discrimination associations a right to conduct litigation, in order to better support those affected by discrimination in enforcing their rights and to reduce structural imbalances.

Be careful when selecting training data sets

While the AI Act is expected to come into force this year following the provisional agreement, the road to an AI Liability Directive is considerably longer. At national level, proposals for adapting the AGG to AI currently come only from the ADS, although an evaluation of the AGG is also envisaged in the 2021 coalition agreement. Against this background, employers should already give high priority to transparency in the use of AI and to training their employees in its use. Internal company guidelines can also help ensure the safe use of AI in the workplace. To avoid discrimination, particular attention should be paid to ensuring that the training data sets of the AI systems used do not contain any indirectly discriminatory attributes.
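One practical starting point, sketched below with purely hypothetical feature names and synthetic data, is a simple audit that flags training-data columns correlating strongly with a protected attribute and therefore potentially acting as proxies. The threshold value is illustrative only; a real audit requires legal and domain judgement:

```python
# Illustrative audit sketch (hypothetical feature names, synthetic data):
# flag columns that correlate with a protected attribute and may act as proxies.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
protected = rng.integers(0, 2, n).astype(float)  # e.g. gender, known for the audit

features = {
    "years_experience": rng.normal(5, 2, n),                 # unrelated
    "postal_code_score": protected + rng.normal(0, 0.5, n),  # hidden proxy
    "typing_speed": rng.normal(60, 10, n),                   # unrelated
}

THRESHOLD = 0.3  # illustrative cut-off only
for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    flag = "  <-- review as possible proxy" if abs(r) > THRESHOLD else ""
    print(f"{name}: correlation with protected attribute = {r:+.2f}{flag}")
```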