“AI in the UK: No Room for Complacency” and no room for a separate AI regulation


A week before Christmas, the House of Lords’ Liaison Committee (the “Liaison Committee”) published a report, “AI in the UK: No Room for Complacency” (the “2020 Report”), a follow-up to the 2018 report by the House of Lords’ Select Committee (the “2018 Report”). The message of the 2020 Report appears to be that, unlike in the EU, a separate AI regulation is not currently an option in the UK. In this article we explore some of the topics discussed in the 2020 Report and, in particular, what it says about: (i) the progress the Government has made on regulatory issues since the 2018 Report was published; (ii) the future of AI regulation in the UK; (iii) the role of the Government in the AI sector going forward; and (iv) the ethical framework in the context of AI.

What is the role of the House of Lords’ Select Committee and the Liaison Committee?

The Select Committee on AI was appointed by the House of Lords, the second chamber of the UK Parliament, in June 2017 to consider the economic, ethical and social implications of advances in AI. In its report published in April 2018, the Select Committee made a large number of recommendations, mainly directed at the Government. The Government responded to these recommendations, and the report and response were then debated in the House later in 2018. The Select Committee is a special inquiry committee and ceased to exist once its report had been agreed. There is therefore no specialist committee to monitor, on an ongoing basis, the Government’s implementation of the recommendations made by the House of Lords. It was accordingly one of the tasks of the Liaison Committee to review the work of the Select Committee and consider the progress made by the Government in respect of the recommendations. In the past, the Liaison Committee has carried out this task by correspondence with ministers. In this case, the Liaison Committee received a reply to its questions from the Minister of State for Universities, Science, Research and Innovation. In addition, for the purposes of the 2020 Report, the Liaison Committee also heard oral evidence from nine witnesses from various ministries as well as academia.

What does the 2020 Report say about the progress since the 2018 Report?

The 2020 Report pointed to the following recommendations made in the 2018 Report:

  • Blanket AI-specific regulation is not appropriate and existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed.
  • The GDPR appears to address many of the concerns over the handling of personal data, which is key to the development of AI.
  • The Government Office for AI, with the Centre for Data Ethics and Innovation (the “CDEI”), needs to identify the gaps, if any, where existing regulation may not be adequate.
  • The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.

In light of the above recommendations, the 2020 Report notes the feedback it received from the Government. In particular, the regulator-led approach remains the current Government position: the sectors themselves are best placed to identify the regulation needed in their area, particularly in financial services. Many regulators, including the Information Commissioner’s Office (the “ICO”), have taken an active role in explaining the regulations in place and providing relevant, practical guidance for their sector. However, the 2020 Report notes that other regulators need to upskill in the context of AI.

Based on the oral evidence received by the Liaison Committee, there appears to be a consensus that, given the existing regulatory framework, there is no desire to rush the implementation of any AI-specific legislation: the regulatory framework itself is broadly applicable to the challenges the UK is facing with AI. At the same time, however, the 2020 Report does raise concerns about gaps in regulation, including deficiencies in the existing legal framework for the use of AI by social media companies or in facial recognition technology, and concludes that it is important to understand how to make sense of the existing laws, regulations and ethical standards.

The 2020 Report notes that since the publication of the 2018 Report the CDEI has been established, and that its terms of reference include identifying gaps in the regulatory framework. In June 2020, the CDEI published its AI Barometer, which looked at five key sectors (criminal justice; health and social care; financial services; energy and utilities; and digital and social media) and identified the opportunities, risks, barriers and potential regulatory gaps in each. The Liaison Committee considers that the AI Barometer could be used to better inform policymakers of the risks posed and of any need for regulation.

According to the 2020 Report, beyond its specific purpose, regulation could also play a role in establishing public trust in AI. The 2018 Report recommended that: “Industry should take the lead in establishing voluntary mechanism(s) for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers. The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms”. All witnesses from the Department for Digital, Culture, Media and Sport and the Department for Business, Energy and Industrial Strategy (“BEIS”) agreed that transparency is essential in building public trust. Notably, elsewhere in the 2020 Report there is a reference to a 2020 survey by BEIS which found that 44 per cent of people said they were neither positive nor negative about AI, with a further eight per cent saying they did not know; only 28 per cent of people said they were positive about AI, while 20 per cent felt negative about it. The Minister for Digital and Culture confirmed that the public feel deeply suspicious of some parts of AI and highlighted the work of the AI Council “because it has a specific working group dedicated to getting the narrative about AI right”. The 2020 Report notes that the establishment of the AI Council was recommended back in 2017 in the Hall-Pesenti report (commissioned as part of the Government’s Industrial Strategy). While the Government announced later in 2017 that it was taking forward this recommendation, the membership of the AI Council was only announced in May 2019. The 2020 Report notes that it is unclear why there was such a delay in getting the Council appointed.

Recommendations in the 2020 Report on the future of AI regulation in the UK

The 2020 Report makes the following recommendations based on its discussion of the progress made since the 2018 Report:

  • The challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting legislation. Interestingly, as discussed above, it is not entirely clear from the 2020 Report on what basis, and on what specific feedback, this conclusion was reached, as there is no separate discussion of the challenges associated with the development and deployment of AI.
  • Users’ and policymakers’ understanding of AI needs to be developed through better knowledge of risk and of how it can be assessed and mitigated.
  • In line with the 2018 Report, the 2020 Report notes that sector-specific regulators are better placed to identify gaps in regulation, and to learn about AI and apply it to their sectors. The CDEI and the Office for AI, along with the ICO, can play a cross-cutting role in providing that understanding of risk and the necessary training and upskilling for sector-specific regulators.
  • By July 2021, the ICO, with input from the CDEI, the Office for AI and the Alan Turing Institute, must develop and roll out a training course for use by regulators to ensure that their staff have a grounding in the ethical and appropriate use of public data and AI systems, and in their opportunities and risks. It will be essential for sector-specific regulators to be in a position to evaluate those risks, to assess ethical compliance, and to advise their sectors accordingly.

The 2020 Report and the EU-wide AI regulatory developments

While the 2020 Report briefly mentions the risk-based approach to AI addressed in the White Paper published by the European Commission in February 2020 (which we have discussed in our previous article), there is no mention of the most recent developments in the EU (please refer to a separate article on this topic). In October 2020 (before the 2020 Report was published), the European Parliament adopted proposals on the regulation of AI prepared by its Committee on Legal Affairs. The proposed regulation on AI is now with the European Commission for review. The 2020 Report only briefly mentions the work of the Council of Europe’s Ad Hoc Committee on AI, which is producing a feasibility study on the regulation of AI, and notes that the UK is participating in the drafting of this study.

The role of Government in the AI sector going forward

The 2020 Report states that many Government departments are active in the development of AI, in its use and in training for its use, and commends the Government for establishing a range of bodies to advise on AI over the long term. There are also many other organisations outside the framework of Government which are involved in an advisory role: for example, the AI Council, the CDEI, the Ada Lovelace Institute and the Alan Turing Institute. According to the 2020 Report, the Government now seems to be aware of the need for coordination between this wide variety of bodies. In relation to the CDEI in particular, the 2020 Report notes that the CDEI needs to be able to work effectively and independently. Overall, the 2020 Report concludes that more needs to be done and that coordination needs to be raised to a higher and more influential level, such as ministerial level.

The 2020 Report recommends:

  • to establish a Cabinet Committee whose terms of reference include the strategic direction of Government AI policy and the use of data and technology by national and local government;
  • for the Cabinet Committee to approve a five-year strategy for AI, with such strategy to consider whether the existing bodies and their remits are sufficient, and the work required to take advantage of AI. Of interest, on 6 January 2021 the Office for AI published the UK’s AI Roadmap. In line with the 2020 Report, the Roadmap suggests that a National AI Strategy is needed to prioritise and set a timeframe that will position the UK for success. The Roadmap sets out recommendations to help the Government develop a National AI Strategy across three pillars: research, development and innovation; skills and diversity; and data, infrastructure and public trust. It also addresses some specific measures to support adoption, as well as the key areas of health, climate and defence.
  • to appoint a Government Chief Data Officer. Such an appointment was listed by the Government in 2017 among its priorities for the period to 2020; however, no steps have been taken to recruit a suitable candidate. The Chief Data Officer must act as a champion for the opportunities presented by AI in the public service and ensure that the safe and principled use of public data is embedded across the public service. This particular recommendation does not seem to align with the recommendation made in the report by the Committee on Standards in Public Life published in February 2020 (the “Standards in Public Life Report”). The Standards in Public Life Report concludes that the UK does not need a new AI regulator, but that all regulators must adapt to the challenges that AI poses to their sectors, and that the CDEI should advise regulators on how to adapt to new technologies and be set on an independent statutory footing.

National standards for the ethical development and deployment of AI

The 2020 Report notes that an ethical framework for the development and use of AI was a key focus of the 2018 Report. In particular, the Select Committee had recommended that the CDEI, with input from the AI Council and the Alan Turing Institute, develop “with a degree of urgency” a cross-sector ethical code of conduct, or “AI code”, that could be implemented across public and private sector organisations.

The 2020 Report notes that:

  • although in the last few years a large number of companies and organisations have produced their own ethical AI codes of conduct, a solely self-regulatory approach to ethical standards risks a lack of uniformity and enforceability;
  • the guidance on AI and ethics published by the Government Digital Service and the Office for AI in partnership with The Alan Turing Institute, though applicable to the public sector, is not a foundation for a countrywide ethical framework which developers could apply, the public could understand and the country could offer as a template for global use; and
  • the UK’s various memberships (such as the Global Partnership on Artificial Intelligence established in June 2020 with the UK being a founding member) demonstrate the UK’s commitment to collaborate on the development and use of ethical AI, but the UK is yet to take on a leading role.

According to the 2020 Report, the Government must lead the way on ethical AI with the help of the CDEI. The CDEI should establish and publish international standards consisting of two frameworks: one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses.

Where to from here?

It remains to be seen how the Government will respond to the recommendations made in the 2020 Report. The overall message of the 2020 Report is that there is no room for complacency and that urgent action is required in a number of areas, including the creation of a UK strategy on AI, the ethical framework for AI and the use of AI in the public service. The 2020 Report discusses at length the need for public trust and acknowledges that there is a long way to go to create it. Yet, unlike the EU, which intends to help create such trust via specific AI regulation, the UK does not appear to find this approach appealing. Instead, each regulator is advised to continue considering the impact of AI on its sector. It is acknowledged that while some regulators, like the ICO, have taken an active role in explaining the regulations in place, other regulators require urgent upskilling in relation to AI.