UK regulators continue to scrutinise AI: The FCA and the ICO announce new AI initiatives


The FCA and the ICO announced new AI-related initiatives on 19 February 2020. The FCA announced a year-long collaboration with The Alan Turing Institute that will focus on AI transparency in the context of financial services. The ICO announced a new consultation on its draft Guidance on the AI auditing framework.

These initiatives follow a number of other regulatory developments relating to AI that have already taken place in 2020, and they came on the same day that the European Commission published its White Paper on Artificial Intelligence: A European approach to excellence and trust, which lays out options for a European Union-wide regulatory framework on AI. This shows the level of scrutiny that regulators and lawmakers are currently giving to AI and the seriousness with which they regard both its potential benefits and the issues that may arise from its use.

FCA collaboration on AI transparency

The FCA's collaboration with The Alan Turing Institute will focus on AI transparency in financial services. The FCA acknowledges that, along with all of the potential positives that come from the use of AI in financial services, the deployment of AI raises some important ethical and regulatory questions. It considers that transparency is a key tool for reflecting on those questions and thinking about strategies to address them.

Along with announcing this initiative, the FCA has set out a high-level framework for thinking about AI transparency in financial markets.

The framework operates around four guiding questions:

  1. Why is transparency important?
  2. What types of information are relevant?
  3. Who should have access to these types of information?
  4. When does it matter?

Because the opportunities and risks associated with the use of AI may vary, the FCA does not think that a 'one-size-fits-all' approach to AI transparency can be followed. Instead, the FCA suggests that decision-makers develop a 'transparency matrix' which can be used to map different types of information to different types of relevant stakeholders and help structure a systematic assessment of transparency interests.

The FCA's collaboration with The Alan Turing Institute follows a similar link-up between the ICO and The Alan Turing Institute called Project ExplAIn, which aimed to provide guidance about explaining AI decisions to the individuals affected by them. One output from this was the publication in December 2019 of the ICO's new draft guidance Explaining decisions made with AI, which was created in conjunction with The Alan Turing Institute. You can read more about the draft guidance here. The ICO's consultation on that closed in January 2020 and the final guidance is expected later this year.

ICO consultation on AI auditing framework guidance

The ICO's new draft Guidance on the AI auditing framework explains how data protection law applies to AI and recommends technical and organisational measures that can be implemented to mitigate the risks that the use of AI may pose to individuals.

It deals with:

  1. Accountability and governance;
  2. Lawfulness, fairness and transparency in AI systems;
  3. Security and data minimisation in AI; and
  4. Enabling individual rights in AI systems (e.g. rights of information, access, rectification, erasure, and rights in relation to solely automated decisions).

Although the ICO is focusing on a risk-based approach to AI, it makes clear that this does not mean data protection legal requirements can be ignored where the risks are low. Rather, if the risks that an AI system may pose to the rights and freedoms of individuals cannot be sufficiently mitigated through appropriate technical and organisational measures, it may be necessary to stop the AI project altogether.

The ICO says that it is eager to hear views on the draft guidance from people who have a compliance role (e.g. DPOs, general counsel, risk managers) as well as technologists (e.g. ML experts, data scientists, software engineers, IT risk managers).

The consultation closes on Wednesday 1 April 2020.

AI keeping regulators busy

It is a busy time for regulatory initiatives relating to AI and these recent announcements follow a number of other developments in the first two months of 2020.

In January 2020 the European Banking Authority published a new Report on big data and advanced analytics, in which it identified some key risks associated with the deployment of AI and ML technologies - see further here.

Building on their joint survey on Machine learning in UK financial services, the FCA and the Bank of England announced in January 2020 that they are establishing a forum to further dialogue with the public and private sectors to better understand the use and impact of AI and machine learning within financial services – see further here.

On the same day that the FCA and the ICO made their AI-related announcements, the European Commission published a White Paper that lays out options for a specific regulatory framework on AI.

The proposed regulatory framework would focus on high-risk AI applications. The White Paper defines an AI application as 'high risk' where:

  1. It is deployed in a sector where, given the nature of the activities typically undertaken, significant risks can be expected to occur; and
  2. It is, in addition, used in that sector in such a manner that significant risks are likely to arise.

The EC proposes that the regulatory requirements might cover:

  • Training data;
  • Data and record-keeping;
  • Information to be provided;
  • Robustness and accuracy;
  • Human oversight; and
  • Specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification.

More detailed analysis of the proposals in the White Paper will follow soon. The consultation on the White Paper closes on 19 May 2020.

The number of regulatory announcements and publications already made in 2020 shows the level of scrutiny that regulators and lawmakers are giving AI and the seriousness with which they regard both its benefits and the issues that may arise from its use. It also gives a sense of the pace at which this technology is being deployed, and at which regulators and lawmakers are working to keep up with it.