Complying with future AI regulations in Europe: capAI


Experts at the Universities of Oxford and Bologna have recently launched capAI, a procedure for assessing AI systems in line with the EU’s proposed Artificial Intelligence Act (the “AI Act”). Given both the potential of AI across businesses and wider society, and some of the known risks when using it, the developers behind capAI argue that ‘proactively assessing AI systems can prevent harm by avoiding, for example, privacy violations, discrimination, and liability issues, and in turn, prevent reputational and financial harm from organisations that operate AI systems.’

The EU Artificial Intelligence Act

The EU unveiled its proposed AI Act back in April 2021 and its final form is currently the subject of discussions in both the European Parliament and the Council. In summary, the AI Act adopts a risk-based approach, differentiating between AI uses that create:

  • unacceptable risk;
  • high risk;
  • low risk; and
  • minimal risk.

Those uses of AI that present an unacceptable risk (by contravening EU values) will be banned, while those that create a high risk will be subject to a range of mandatory requirements, including a conformity assessment which is what capAI is seeking to address. Low risk uses will be subject to limited obligations (for example on transparency), while minimal risk uses will have no additional obligations other than those in existing EU legislation.

The AI Act proposes to adopt a very broad definition of AI as ‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’

The AI Act also proposes transparency obligations for certain AI systems, including those that interact with humans, those that use an emotion recognition or biometric categorisation system and adapt their operation accordingly, and those that generate or manipulate content (such as ‘deep fakes’).

Other key elements of the AI Act include:

  • the establishment of a European Artificial Intelligence Board;
  • providing for the designation of national competent authorities;
  • creation of an EU database for stand-alone high risk AI systems; and
  • a requirement for providers of AI systems to report and investigate AI-related incidents and malfunctions.

The AI Act proposes three levels of sanctions:

  • up to EUR 30 million, or 6% of total worldwide annual turnover, whichever is higher;
  • up to EUR 20 million, or 4% of total worldwide annual turnover, whichever is higher; and
  • up to EUR 10 million, or 2% of total worldwide annual turnover, whichever is higher.

The level of sanction will be determined by, amongst other factors, the severity of the infringement and the size and market share of the operator committing it.
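Assuming, as under the proposal, that the applicable cap is the higher of the fixed amount and the turnover percentage, the effect of these tiers can be illustrated with a short calculation (the turnover figure below is hypothetical):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and pct of worldwide annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Top-tier infringement for a hypothetical operator with EUR 1bn turnover:
print(max_fine(1_000_000_000, 30_000_000, 0.06))  # 60000000.0, i.e. EUR 60 million
```

For large operators it is therefore the turnover-based percentage, rather than the fixed amount, that is likely to set the ceiling.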

The capAI Procedure

capAI has been designed as a simple and ethical way of approaching the conformity assessment referenced above. It consists of three components:

  1. an internal review protocol (IRP), which provides organisations with a tool for quality assurance and risk management and should follow the key stages of the AI system lifecycle (see below);
  2. a summary datasheet (SDS) to be submitted to the EU’s future public database on high-risk AI systems in operation (an illustrative sketch of what such a datasheet might contain follows the source note below); and
  3. an external scorecard (ESC), which can optionally be made available to customers and other stakeholders of the AI system. The ESC is effectively a ‘health check’ to show the application of good practice and conscious management of ethical issues across the AI life cycle.

Source: Floridi, Luciano and Holweg, Matthias and Taddeo, Mariarosaria and Amaya Silva, Javier and Mökander, Jakob and Wen, Yuni, capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act (March 23, 2022). Available at SSRN: https://ssrn.com/abstract=4064091 or http://dx.doi.org/10.2139/ssrn.4064091
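As a purely illustrative sketch, an SDS-style record might capture information along the following lines. The field names and values are assumptions for illustration only; they are not capAI’s actual SDS template, nor the AI Act’s registration requirements.

```python
# Hypothetical example of the kind of information a summary datasheet (SDS)
# might record about a high-risk AI system; all fields and values are
# illustrative assumptions, not capAI's actual template.
summary_datasheet = {
    "provider": "Example Ltd",
    "system_name": "ExampleScore v2.1",
    "intended_purpose": "Credit risk scoring for consumer loan applications",
    "risk_category": "high",
    "training_data": "Internal loan-application records, 2015-2021",
    "performance": {"metric": "AUC", "value": 0.87},
    "human_oversight": "Loan officers review all automated refusals",
    "contact": "ai-compliance@example.com",
}
```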

capAI lays out some traps to be avoided in the process:

  • Design: when things do not go to plan, data scientists and managers may be tempted to rationalise the AI system’s performance after the fact, when in practice the likely root cause is that not enough focus went into the design phase.
  • Development: the ‘garbage in, garbage out’ problem which troubles many IT projects carries much higher risks with AI because of the scale, scope and autonomy of AI decision-making once implemented. The capAI model addresses this risk through sequential ‘prepare’ and ‘train’ steps:
    • The prepare step concerns collecting the ‘right’ or ‘good quality’ data and transforming it with appropriate methods to ensure quality and compliance.
    • The training step concerns all the tasks for ensuring the model produces reliable predictions. It includes tasks such as selecting features, training, validating and tuning the model. Tuning ensures that the algorithm is trained to perform at its best; it uses all the available information to reduce uncertainty in the outcomes. This is an iterative process, and model versioning is suggested to explain differences in model performance and to compare models (e.g., through A/B testing); see the first sketch after this list.
  • Evaluation: the capAI model identifies this stage as involving the most significant difference from traditional software development. This is because traditional software should perform on objective facts as designed and coded (bugs excepted), whereas AI is an “inference machine”. The model provides for two steps, test and deploy:
    • The test step aims to assess how the AI system performs on unseen data across a set of dimensions, such as technical robustness, and adherence to ethical norms and values.
    • The deploy step ultimately concerns deploying a tested model into the production environment.
  • Operation: the capAI model identifies this phase as the most significant gap in most business compliance processes governing AI. Again, the operational phase is less of a risk with conventional IT systems because, once developed, they are fixed in their characteristics. Machine learning outcomes, however, result from statistical inference rather than ‘ground truth’, and programmers are likely to spend less time monitoring, tracking changes and updating the model, as their efforts go into automated, reproducible pipelines that take care of most updates when new data becomes available. The capAI model builds two steps into the operational phase, ‘sustain’ and ‘maintain’:
    • Sustain refers to all activities that keep the system working, such as monitoring its performance and establishing feedback collection mechanisms. As users interact with the AI system, they might use it in ways that were unforeseen by the developers, producing errors that need to be resolved; see the monitoring sketch after this list.
    • Maintain refers to providing updates to keep the system running in good condition or to improve it. This step involves defining regular update cycles and establishing problem-to-resolution processes.
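To make the ‘prepare’ and ‘train’ steps more concrete, the following is a minimal sketch, in Python with scikit-learn, of a train/validate/tune loop that keeps a versioned record of each candidate model, so that differences in performance can be explained and models compared. The dataset, hyperparameters and registry format are illustrative assumptions, not part of capAI itself.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Prepare: split the data so a held-out test set remains unseen until evaluation.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

# Train/tune: try candidate hyperparameters, versioning each model and its score.
registry = []  # each entry: (version, hyperparameters, validation accuracy)
for version, n_estimators in enumerate([50, 100, 200], start=1):
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(X_tr, y_tr)
    val_acc = accuracy_score(y_val, model.predict(X_val))
    registry.append((f"v{version}", {"n_estimators": n_estimators}, val_acc))

best = max(registry, key=lambda entry: entry[2])
print("model registry:", registry)

# Test (the evaluation phase): assess only the selected model on unseen data.
final = RandomForestClassifier(**best[1], random_state=0).fit(X_train, y_train)
print("selected:", best[0], "test accuracy:", accuracy_score(y_test, final.predict(X_test)))
```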
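For the ‘sustain’ step, the sketch below shows one common way of monitoring a deployed model: tracking accuracy over a sliding window of recent predictions for which ground truth (e.g. user feedback) becomes available, and raising an alert when performance drifts below a threshold. The window size and threshold are illustrative assumptions, not values prescribed by capAI.

```python
from collections import deque

class PerformanceMonitor:
    """Track the rolling accuracy of a deployed model and flag degradation."""

    def __init__(self, window: int = 500, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def healthy(self) -> bool:
        # Only judge once the window holds enough feedback to be meaningful.
        if len(self.outcomes) < self.outcomes.maxlen:
            return True
        return sum(self.outcomes) / len(self.outcomes) >= self.threshold

monitor = PerformanceMonitor(window=100, threshold=0.9)
# In operation, record feedback as it arrives: monitor.record(prediction, actual)
if not monitor.healthy():
    print("Alert: rolling accuracy below threshold -- trigger the maintain step.")
```

An alert of this kind would then feed the ‘maintain’ step’s problem-to-resolution process.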

Fundamentally, the procedure aims to allow organisations and other users of AI systems to satisfy the requirements of, and demonstrate their compliance with, the AI Act.

The procedure is ethics-based, with a clear focus on increasing the trustworthiness of AI. This is due in large part to the AI Act, but also to capAI’s developers’ belief that ‘adherence to ethical norms and values is considered to provide the highest standards’ and helps to tackle key issues around bias, privacy and ‘explainability’. Ethics-based auditing (EBA), it is argued, allows users of AI systems to validate claims made about their systems and to put their ethical commitments into practice.

In terms of the key stakeholders involved at different stages of the AI life cycle, capAI envisages that these will include:

  • the ‘top manager responsible for AI’ who is responsible for the system both internally and externally;
  • the ‘product owner’ who is responsible for the AI system’s performance;
  • the ‘project manager’ who leads either the internal development or external procurement of the system; and
  • the ‘data scientist’ who leads the ‘technical implementation of the AI system’[1].

Why should organisations use capAI?

Organisations using capAI would generate the IRP, SDS and ESC detailed above, each of which could be used to achieve different aims. The IRP, for example, could assist an organisation with its risk management, while the ESC could be published online to demonstrate a system’s compliance and ethical standards, and the SDS would be submitted to the EU’s proposed database on high-risk AI systems.

Benefits and risks of an ethical auditing approach

The capAI model appears to be derived from the broader ESG world of EBA. There is no doubt that there is a distinct human benefit to anticipating the negative consequences of a technology before they eventuate. However (and this is acknowledged in capAI), EBA carries its own risks, which are more likely to eventuate if too much pressure is placed on companies to implement procedures that they do not have the resources or capacity to support. Such risks include:

  • Ethics blue-washing – making unsubstantiated claims about the ethical behaviour of an organisation;
  • Ethics lobbying – exploiting ‘self-governance’ to delay or avoid necessary legislation; and
  • Ethics shopping – cherry-picking ethics principles to justify pre-existing behaviours.

Some unanswered questions

There are clear advantages to the approach adopted by capAI. However, the limitations should also be considered. For instance, a process such as capAI is likely to require additional resources, something which companies (particularly SMEs) may not be able to provide. There are also likely to be costs (which could be significant) imposed upon such businesses in implementing capAI. If capAI is to be successfully implemented, some of the following questions will need to be addressed:

  • How can capAI be supported in a company?
  • How can SMEs implement capAI?
  • What types and level of external support can be provided and what would that look like?
  • Can relevant third parties (such as research or industry bodies) develop tools to minimise the need for companies themselves to build and implement capAI on a standalone basis?

Next Steps

capAI is a world-first approach, aimed at helping organisations comply with future European AI regulation. Its developers acknowledge that, as AI continues to develop, capAI itself may need to evolve too. The same is true of the AI Act, which is still being negotiated and finalised. As such, time will tell both how useful capAI will be in practice, and how it will need to develop going forwards.

The authors would like to thank Jake Sargent, Trainee Solicitor at CMS, for his assistance in writing this article.


[1] ibid