EU Commission's White Paper on Artificial Intelligence stresses excellence and trust


Artificial intelligence (AI) has now reached almost all areas of life: mobility, trade and health, to name but a few. As a result, the EU Commission has summarised its vision for the future of AI and its legal framework in a comprehensive White Paper.

"Artificial intelligence must serve the people, follow their rights", said Commission President Ursula von der Leyen. The long-term goal, according to the White Paper, is to create a regulatory framework that takes the specific characteristics of AI into account, which in the future will have a considerable impact on companies operating in this field.

Benefits and risks of AI

With the White Paper, the European Commission is presenting an approach that aims to promote the use of AI on the one hand and to reduce the risks associated with the technology on the other.

The White Paper is a pillar of the Commission's future digital strategy. According to the Commission, building confidence in the possibilities of AI is essential, and a regulatory framework adapted to the specific characteristics of AI is an important element in achieving this. The guiding principles of the White Paper are the creation of an ecosystem of excellence and an ecosystem of trust.

An ecosystem of excellence

With the ecosystem of excellence, the Commission defines a policy framework for the promotion of AI. The Commission wishes to mobilise resources and in particular support small and medium-sized enterprises.

At the international level, cooperation is to be expanded and networks between research centres and universities extended. Efforts should focus on areas where Europe already has considerable potential today, such as the healthcare system. The Commission wants to start introducing AI-based products and services in the public sector as soon as possible. To this end, it is planning a programme for the implementation of AI – with priority given to the health sector. Since AI cannot be developed without data, the Commission recognises that access to data and responsible data management must be promoted.

An ecosystem of trust

By creating an ecosystem of trust, the Commission aims to address citizens' concerns about the risks of AI, since such concerns could slow down the uptake of the technology. From the Commission's perspective, a European regulatory framework for AI will go a long way towards building this trust.

According to the Commission, the main risks associated with the use of AI relate to the protection of fundamental rights (e.g. data protection, privacy, non-discrimination) as well as to safety and liability issues, since citizens are increasingly affected by the actions and decisions of AI systems. Some of these systems are autonomous, opaque and complex, which makes it difficult to enforce existing EU legislation protecting fundamental rights.

Moreover, AI could also be used for monitoring and analysis purposes, which would endanger privacy. There is also a risk of discrimination by AI, since the social control mechanisms that govern human behaviour and help to prevent biased decisions are missing. AI technologies also bring new safety risks and liability issues. All of this must be countered with clear regulations, the Commission concludes.

Adapting the regulatory framework: risk-based approach

The European Commission believes that existing European regulations should be adapted to AI and its effects. Among the core challenges, the Commission identifies issues such as the changing functionality of AI systems over their life cycle, uncertainty about the allocation of responsibilities within the supply chain and changes to the concept of safety.

As a first step, the Commission intends to continue applying existing legislation in various sectors, such as health (including medical devices), product liability and data protection. In a second step, the Commission intends to introduce further rules that reflect the new challenges posed by AI. However, any new legal framework must remain proportionate and not lead to overregulation. To achieve this balance, the Commission favours a risk-based approach: new, extended requirements would primarily apply to high-risk AI.

The Commission considers that the classification of a high-risk AI application should take into account what is at stake. It should be examined whether the sector and the intended use pose significant risks, particularly from the point of view of safety, consumer rights and fundamental rights.

The White Paper cites health, transport, energy and parts of the public sector as examples of high-risk sectors. In a second step, the question is whether the AI application is used in that sector in such a way that significant risks are to be expected. This second criterion reflects the fact that the use of AI in the selected sectors does not always involve significant risks.

For example, although the healthcare system in general may indeed be a relevant sector, an error in a hospital's appointment-scheduling system will not normally entail risks significant enough to justify legislative intervention. Conversely, there could also be exceptional cases in which, due to the inherent risks, the use of AI applications for certain purposes could in principle (i.e. independently of the sector concerned) be classified as high-risk, such as the use of AI for recruitment, which entails risks of discrimination, or for remote biometric identification (e.g. facial recognition).
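The two-step test can be read as a simple decision rule. The following minimal sketch (in Python, purely illustrative and not part of the White Paper) uses hypothetical sector and use labels and only encodes the logic described above: both cumulative criteria must be met, unless the application falls within one of the exceptional purposes that are high-risk regardless of sector.

    # Illustrative sketch of the White Paper's proposed two-step test.
    # Sector and use labels are assumptions chosen for this example,
    # not an official taxonomy.

    HIGH_RISK_SECTORS = {"health", "transport", "energy", "public_sector"}

    # Purposes treated as high-risk irrespective of the sector.
    EXCEPTIONAL_USES = {"recruitment", "remote_biometric_identification"}

    def is_high_risk(sector: str, use: str, use_entails_significant_risks: bool) -> bool:
        """Simplified classification under the White Paper's risk-based approach."""
        # Exceptional purposes are high-risk independently of the sector.
        if use in EXCEPTIONAL_USES:
            return True
        # Otherwise both criteria must be met: a high-risk sector AND a manner
        # of use from which significant risks are to be expected.
        return sector in HIGH_RISK_SECTORS and use_entails_significant_risks

    # Examples mirroring the White Paper's discussion:
    print(is_high_risk("health", "appointment_scheduling", False))   # False
    print(is_high_risk("retail", "recruitment", False))              # True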

Higher requirements: conformity assessment

In designing the future legal framework for AI, it will be necessary to decide what types of binding legal requirements should be imposed on the relevant actors. According to the Commission proposal, requirements for high-risk AI applications could relate to the following:

  • training data;
  • data and record-keeping;
  • providing information;
  • robustness and accuracy;
  • human oversight; and
  • specific requirements for certain AI applications, such as those used for purposes of remote biometric identification.

For high-risk AI applications, an objective conformity assessment must be carried out in advance, including procedures for testing, inspection or certification, for example to verify the algorithms or the data sets used in the development phase. According to the Commission, this assessment could build on the conformity assessment mechanisms that already exist for a large number of products placed on the EU internal market, such as medical devices. Due to the learning capacity of AI systems, assessments may also have to be repeated.

In addition, for both high-risk and other AI applications, effective remedies should be available to parties affected by the negative effects of AI systems. The White Paper does not elaborate further on the liability framework, which is dealt with in an accompanying Commission report on the safety and liability framework that is also worth reading. That report contains a detailed analysis of the current legal framework, identifies a number of gaps and makes quite specific proposals for adapting it.

Conclusion: White Paper as a step towards a European AI approach

With the White Paper "On Artificial Intelligence - A European Concept for Excellence and Trust" and the accompanying report on the security and liability framework, the Commission is launching a broad consultation process.

In essence, this represents nothing less than the development of a European approach to AI. In addition to policy measures, it above all includes proposals for the core elements of a future legal framework. Even if these proposals still appear vague and of little practical consequence at first glance, the rapid spread of AI and its steadily growing importance make it likely that calls for effective regulation will soon become louder.

Commissioner Margrethe Vestager puts it this way: "We expect there will be a call for regulation of the risky aspects of these technologies."

Only time will tell which direction the discussion will take and which concrete proposals will follow. For more information on the future of the AI regulatory environment in the EU and how it may affect your business, contact your regular CMS advisor or local CMS expert Dr. Roland Wiring.