Explaining AI decisions: The UK ICO publishes new guidance

United Kingdom

On 20 May 2020, the Information Commissioner’s Office (“ICO”) published new guidance, Explaining decisions made with AI.  This follows the draft guidance published in December 2019 and the subsequent consultation.  The guidance was created by the ICO in conjunction with The Alan Turing Institute, and the ICO says that its aim is to help organisations explain their processes, services and decisions delivered or assisted by AI to those who are affected by them.  The explainability of AI systems has been the subject of Project ExplAIn, a collaboration between the ICO and The Alan Turing Institute.  It should be noted that the guidance is not a statutory code of practice under the UK Data Protection Act 2018, and the ICO points out that it is not intended as comprehensive guidance on data protection compliance.  Rather, the ICO views this as practical guidance, setting out what it considers to be good practice for explaining decisions that have been made using AI systems that process personal data.

In this article we summarise some of the key aspects of the new guidance.

Guidance in three parts

The guidance is not short (c.130 pages in total) and it is divided into three Parts:

1. The basics of explaining AI

2. Explaining AI in practice

3. What explaining AI means for your organisation

The basics of explaining AI

Part 1 (The basics of explaining AI) covers some of the basic concepts (e.g. What is AI? What is an output or an AI-assisted decision? How is an AI-assisted decision different to one made only by a human?) and provides an overview of the legal framework relevant to the concept of explainability.  The overview focusses on data protection laws (e.g. the General Data Protection Regulation (“GDPR”) and the UK Data Protection Act 2018) but also explains the relevance of, for example, the Equality Act 2010 (in relation to decisions that may be discriminatory), judicial review (in relation to government decisions), and sector-specific laws that may also require some explainability of decisions made or assisted by AI (for example, financial services legislation, which may require customers to be provided with information about decisions concerning applications for products such as loans or credit).

Part 1 of the guidance sets out six ‘main’ types of explanation that the ICO/The Alan Turing Institute have identified for explaining AI decisions.  These are: rationale explanation, responsibility explanation, data explanation, fairness explanation, safety and performance explanation, and impact explanation.  The guidance sets out the types of information to be included in each type of explanation.  It also draws a distinction between what it calls process-based vs outcome-based explanations (which apply across all six of the explanation types identified in the guidance).  Process-based explanations of AI systems explain the good governance processes and practices followed throughout the design and use of the AI system.  Outcome-based explanations clarify the results of a decision, for example, the reason why a certain decision was reached by the AI system, using plain, easily understandable and everyday language.

The guidance also sets out five contextual factors that it says may apply when constructing an explanation for an individual.  These contextual factors were the results of research carried out by the ICO/The Alan Turing Institute.  The guidance says that these factors can be used to help decide what type of explanation someone may find most useful.  The factors are: (1) domain factor (i.e. the domain or sector in which the AI system is deployed); (2) impact factor (i.e. the effect an AI decision has on an individual or society); (3) data factor (i.e. the type of data used by an AI model may impact an individual’s willingness to accept or contest a decision); (4) urgency factor (i.e. the importance of receiving an explanation quickly); and (5) audience factor (i.e. who the individuals or groups of individuals are that decisions are made about, which may help to determine the type of explanation that is chosen).

Part 1 also sets out four key principles that organisations should think about when developing AI systems in order to ensure that AI decisions are explainable: (1) Be transparent; (2) Be accountable; (3) Consider the context in which the AI will operate; and (4) Reflect on impacts of the AI system on individuals and society.

Explaining AI in practice

Part 2 (Explaining AI in practice) is practical and more technical in nature.  It sets out six ‘tasks’ that can be followed to assist with the design and deployment of appropriately explainable AI systems.  The guidance provides an example of how these tasks could be applied in a particular case in the health sector.  The tasks include: collecting and pre-processing data in an ‘explanation-aware’ manner, building your AI system in a way that allows relevant information to be extracted, and translating the logic of the AI system’s results into easy-to-understand reasons.

What explaining AI means for your organisation

Part 3 (What explaining AI means for your organisation) focusses on the various roles, policies, procedures and documentation that organisations should consider implementing to ensure that they are in a position to provide meaningful explanations about their AI systems.

This part of the guidance covers the roles of the product manager (i.e. the person who defines the requirements of the AI system and determines how it should be managed, including the explanation requirements), the ‘AI development team’ (which includes the people involved with collecting and analysing data that will be inputted into the AI system, with building, training and optimising the models that will be deployed in the AI system, and with testing the AI system), the compliance team (which includes the Data Protection Officer, if one is designated), and senior management and other key decision makers within an organisation.  The guidance suggests that senior management should get assurances from the product manager that an AI system being deployed by an organisation provides the appropriate level of explanation to individuals affected by AI-based decisions.

Regulators focus on explainability and transparency

As the use and development of AI continues to expand, the ICO has shown that it will be proactive in making sure that usage of the technology aligns with existing privacy legislation and other protections for individuals.  In addition to this new guidance, the ICO recently consulted on new draft Guidance on the AI auditing framework.  That guidance provides advice on how to understand data protection law in relation to AI and gives recommendations for technical and organisational measures that can be implemented to mitigate the risks that the use of AI may pose to individuals.

The ICO is not the only regulator that sees the importance of transparency and explainability to AI systems.  In February 2020, the Financial Conduct Authority (“FCA”) announced a year-long collaboration with The Alan Turing Institute that will focus on AI transparency in the context of financial services.  The FCA acknowledges that, along with all of the potential positives that come from the use of AI in financial services, the deployment of AI raises some important ethical and regulatory questions.  It considers that transparency is a key tool for reflecting on those questions and thinking about strategies to address them.

Along with announcing its collaboration, the FCA also set out a high-level framework for thinking about AI transparency in financial markets which operates around four guiding questions: (1) Why is transparency important? (2) What types of information are relevant? (3) Who should have access to these types of information? (4) When does it matter?

More information about the ICO’s Guidance on the AI auditing framework and on the FCA’s transparency initiatives is available here.

The number of regulatory announcements and publications that have already taken place or are expected in 2020 shows the level of scrutiny that regulators and lawmakers are giving AI, and the seriousness with which they regard both its benefits and the issues that may arise from its use.  It also indicates the speed at which this technology is being deployed, and the pace at which regulators are working to keep up with it.

Article co-authored by Kiran Jassal.