Planning for the AI landscape of the future: the latest from the UK Government

United Kingdom

Following the publication by the House of Lords’ Select Committee (the “Committee”) of its Report “AI in the UK: No Room for Complacency” (the “Report”) – a summary of which can be found here – the Government has provided its response, setting out its reflections on the Report, its progress to date on addressing the social, ethical, legal and technological concerns raised by the technology, and how it intends to take the Report’s recommendations further on board. The response suggests that the Government’s immediate strategic priority is to work with the AI Council on the recommendations in its recently published AI Roadmap.

Living with AI – public understanding and data

The Report’s findings highlight the increased use of and reliance on AI by businesses as a result of the pandemic, and how this has heightened the need to improve the public’s understanding of AI and the use of data. The Committee suggests it is for the Government to lead the way on this by leveraging the recommendations, advice and research of various bodies and experts.

The Government suggests a two-pronged approach to building trust in AI: (i) creating the appropriate legal and regulatory framework; and (ii) ensuring the public are informed and “able to take active decisions regarding their relationship to AI technologies”.

Ethics

According to the Report, the Government should lead on “the operationalisation of ethical AI”. The Committee suggests that the Government work with the Centre for Data Ethics and Innovation (“CDEI”) to challenge unethical AI use, and that the CDEI should also develop a national standard so that there is clarity for “companies developing AI, the businesses applying AI, and the consumers using AI”. The Government has responded by pointing to the existing guidance on the use of AI in the public sector, and by noting that more work is to be undertaken by the Government Digital Service (“GDS”) on transparency mechanisms for the use of AI in the public sector. The Government has not directly addressed the Committee’s recommendation in respect of the national AI standard, and mentions that it is considering what the CDEI’s future functions should be. Finally, in its response to ethical concerns, the Government cites a CDEI report which makes recommendations for Government, regulators and industry on reducing algorithmic bias. The Government intends to provide feedback to the CDEI on those recommendations but has not committed to any specific timeframe.

Jobs

A primary concern noted in the Report is whether people can be adequately prepared for the future of employment, given the impact of AI on jobs and, now, the rapid and potentially permanent change brought about by the pandemic. The Committee’s view is that the AI Council should take steps to identify skills gaps and assist with setting up a dedicated training scheme. In response, the Government provides statistics to back up its claim that AI and automation will have a net positive impact on job numbers in the UK. It has also announced various schemes, both specific to AI and addressing the future of the labour market more generally in the wake of the COVID-19 crisis, to address changes to jobs, employers’ requirements and access to education and training. These include, amongst others, the AI apprenticeships announced last year, the launch of the free Skills Toolkit scheme providing online IT and numeracy skills training, the Lifetime Skills Guarantee allowing adults to take free college courses to enhance technical skills, and changes to make part-time and flexible study easier. The Government points out that the AI Council’s Roadmap also makes ‘Skills and Diversity’ a central pillar of its recommendations, and that it is considering the AI Council’s recommendations.

Public trust and regulation

The Committee asserts that training on public data and AI systems should be developed for the staff of sector regulators, with input from the CDEI, the Office for AI and the Alan Turing Institute. The Government reiterates that it is considering what the future function of the CDEI should be. The Government also states that the Office for AI, the CDEI, the ICO and other regulators sit on a working group comprising 32 regulators and other organisations, the purpose of which is to analyse and take forward the recommendations in the Report. This may indeed lead to the creation of a training course by the ICO as recommended in the Report, but only following consideration of, and consultation on, regulators’ needs in this area.

The Government points out that, in regulatory spaces where misuse of AI is a concern, it plans to implement a new “online harms regulatory framework” to ensure safety online. A key point is the creation of a new duty of care to users, which will apply to services, such as social media sites, that host user-generated content or enable user interaction. There is also mention in the white paper of an “online media literacy strategy”, which it is hoped will complement existing support for schools in teaching digital literacy, and foster critical thinking regarding misinformation, catfishing, harmful content, privacy settings and users’ ‘online footprint’.

Leading on AI

The Report suggests a “Cabinet Committee” be set up to co-ordinate the Government’s AI policy. It recommends that the primary task should be to create a five-year strategy which considers “the existing bodies and [whether] their remits are sufficient, and the work required to prepare society to take advantage of AI rather than be taken advantage of by it”.

In summary, the Government does not dismiss the idea of a separate Cabinet Committee. It explains that responsibility for AI policy and its impact on the economy is managed by the Department for Digital, Culture, Media & Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS), while responsibility for uptake across Government lies with the GDS. On strategy, the Government states that it is considering the AI Council’s Roadmap in creating a national AI strategy, which will include considerations of governance, including at Government department and Cabinet committee level. It is not clear whether it will be a five-year strategy.

Chief Data Officer

The Report strongly advocated for the appointment of a Chief Data Officer. The Government will not be appointing a Chief Data Officer as recommended, but comments that three senior Digital, Data and Technology leaders were appointed in January 2021: “Paul Willmott will Chair a new Central Digital and Data Office (CDDO) for the Government; Joanna Davinson has been appointed the Executive Director of CDDO and Tom Read has been appointed as CEO of the Government Digital Service.”

Autonomy Development Centre

This aspect of the Report addressed concerns around the definitions used in this area and the need to ensure alignment with NATO and other allies. The Government notes that, with regards to the definition of Lethal Autonomous Weapons Systems (LAWS), there are challenges to consistency owing to the complexity of the technology, how it works and how it is understood technically. However, with regards to “responsible AI for Defence”, the Government states that the UK is a “prominent voice” in international discussions on the adoption of AI for defence, and that the MOD will soon be publishing a Defence AI Strategy, part of which is to review definitions in this area. There will be an AI centre within the MOD to accelerate the research, development, testing, integration and deployment of military AI.

The UK as a world leader

The Report acknowledges the development of the Global Partnership on Artificial Intelligence and the UK’s role as a founding member. However, it warns that the UK becoming a less welcoming environment for students, researchers and businesses could harm the AI industry – an important reminder given the direction the UK may or may not pursue post-Brexit, particularly with regards to immigration. The Report states that “changes to the immigration rules must promote rather than obstruct the study, research and development of AI”.

The Government responds by mentioning the Global Talent “fast-track” visa route, which replaces the Tier 1 (Exceptional Talent) route, in the hope of “benefiting higher education institutions, research institutes and eligible public sector research establishments”.

Secondly, on the UK’s role and influence, the UK and USA have signed the ‘Declaration of the United States of America and the United Kingdom Cooperation in Artificial Intelligence’, which, as the name suggests, provides a basis for collaboration between the two countries and will comprise initiatives such as student exchanges.

By way of investment, the Government announced that £20 million will be used to deliver Turing AI Acceleration Fellowships, providing resources to top AI innovators to drive their research in key industry areas. Finally, the Government quotes an encouraging statistic from a publication by Oxford’s Future of Humanity Institute: that the UK is considered the “second most likely destination for AI researchers to work in, over the next three years, with 35% choosing the UK”.

The feedback from the Government is encouraging. While the national AI standard for use by businesses and consumers does not appear to be the immediate priority, the national AI strategy is. The Global AI Index ranks the UK as the number one country for operating environment and third for research, but only seventh for government strategy. 2021 may therefore be the year in which the UK adopts a brand new AI strategy based on the AI Roadmap – and, as a result, earns a new ranking in the Global AI Index.

Article co-authored by Aadam Sattar.