Artificial Intelligence at Work and “People First” AI Regulation


In November 2021, the All-Party Parliamentary Group (“APPG”) on the Future of Work (“Future of Work”) published its report titled “The New Frontier: Artificial Intelligence at Work” (the “Report”). The Report follows the National AI Strategy (the “Strategy”) released by the government in September 2021 and sets out to identify and resolve challenges posed by artificial intelligence (“AI”) in the workplace through the development of a new regulatory framework. Whilst the proposed framework addresses AI in the workplace, we consider that some of its principles could be applied across all sectors. The recommendations made by the Future of Work inform the wider debate about AI governance and regulation as part of the Strategy.

What are APPGs?

APPGs are informal cross-party groups that have no official status in Parliament but are run by and for Members of the Commons and Lords, bringing together parliamentarians, industry and civil society. There is an Artificial Intelligence APPG, but the author of the Report is the Future of Work, an APPG which aims to “foster understanding of the challenges and opportunities of technology and the future of work”. Its members work together to develop practical solutions to issues facing the workforce.

Much of the Future of Work’s recent attention has been on the impact of AI technologies in the workplace, from the use of algorithmic surveillance to monitoring technologies and management features. Whilst the Future of Work note the opportunities that AI creates, from the creation of new jobs to a boost in the quality of work, the Report also references significant negative impacts on the conditions and quality of work across the country where AI technologies are used. Against the backdrop of the growing adoption of AI, and a number of reports which focus on recent digital transformations in the workforce, the Future of Work launched an All-Party inquiry (the “Inquiry”) to address growing public concern about AI and surveillance.

The Inquiry found that the pace at which AI is transforming our ways of living has vastly outrun the UK’s existing regulatory regimes. The Report references adverse impacts across access to work, fair pay, equality, dignity, autonomy, participation and learning.

How does the Report address the National AI Strategy?

The Future of Work’s concerns focus largely on the governance (or lack thereof) of AI. The issue of governance is not just on the Future of Work’s agenda: the government prioritised governance as one of the three core pillars of the Strategy, a summary of which can be found in our article here.

Whilst the Report does recognise that the Strategy sets out plans to build a world-leading governance system, the Future of Work caution that “there is an urgent need to bring forward robust proposals to protect people and safeguard our fundamental values” and that the existing regulatory framework is inadequate to promote innovation and fundamental rights alongside each other. Against the backdrop of the use of AI in the workplace, the Report warns that the wider objectives of the Strategy will not be fulfilled without properly understanding and addressing the harms presented by AI. A key issue noted in the Strategy was the way in which low trust in AI operates as a barrier to innovation. The Report reinforces this notion, observing that workers neither fully understand nor experience sufficient agency around the automated decisions that determine fundamental aspects of their work. Both the Report and the Strategy highlight that low levels of trust in AI hinder wider adoption.

How could the Report supplement the Strategy?

The Report outlines an overarching aim to ensure that the UK’s AI eco-system is human-centred, principles-driven and accountable. On that basis, the Report highlights an opportunity for policymakers to reset the trajectory by using a “robust regulatory response”.

To that end, the Future of Work recommend the introduction of an Accountability for Algorithms Act (the “Algorithms Act”). The Algorithms Act is intended to update existing regimes and facilitate the development of additional cross-sector, principles-based rules. Whilst the Strategy set out various proposals to regulate AI, the Report aims to provide a concrete solution.

Algorithms Act: What will it look like?

The Future of Work recommend that the Algorithms Act serve as a new, cross-sector, principles-driven regulatory framework to promote strong governance and innovation together. The Report provides an outline of the Algorithms Act centred on five key elements.

1. Accountability

The Algorithms Act would establish a duty to conduct a prior assessment, together with a responsibility to disclose and take appropriate action. The assessment would take the form of a pre-emptive Algorithmic Impact Assessment (an “AIA”), and the duty would extend from design through to deployment.

The Report notes that organisations are not currently required to produce any assessment of how the AI or other algorithmic systems they adopt could or do impact their workforce. The Future of Work therefore highlight that adverse impacts are generally only dealt with once the damage is done. An AIA could shift the regulatory emphasis to active, anticipatory intervention rather than retrospective remedy. This could also give businesses and individuals a clearer direction on how AI should be operated.

The Report proposes that the AIA duty would be subject to a risk-based, contextual threshold. An AIA model would have four limbs:

  1. identifying, through multi-stakeholder engagement, individuals and communities who might be particularly vulnerable to or impacted by algorithmic decisions;
  2. undertaking a risk analysis aimed at outlining potential pre-emptive actions, applying the precautionary principle to prevent adverse impacts from occurring;
  3. taking appropriate action in response to the analysis and making appropriate modifications to address harms; and
  4. conducting ongoing impact assessments.

2. Transparency

The Report suggests that employees subject to AI decisions should have the right to access a full explanation of the purpose, outcomes and significant impacts of algorithmic systems at work. These rights would be set out in a dedicated schedule to the Algorithms Act and would seek to increase trust and transparency in the use of AI.

The right would enable employees to find out the purpose for which AI technologies are used and the metrics that operate within them, not unlike a data subject access request under the EU General Data Protection Regulation. The Report does note the need to introduce a layered approach to this right, so that disclosing the way in which these technologies work does not give rise to potential intellectual property infringements.

3. Representation and collaboration

The Algorithms Act would recognise the collective dimension of data processing and would provide rights for unions and specialist third-sector organisations to enforce the new duties on AI subjects’ behalf. On that basis, the APPG propose creating an AI Partnership Fund to allow the Trades Union Congress (the “TUC”) to be trained in how to engage with, comprehend and challenge the use of AI. The Report notes that this could complement the Strategy’s aim to develop training in AI, suggesting that the TUC could collaborate with independent organisations and charities such as The Alan Turing Institute.

4. Enforcement

The Algorithms Act would equip the Digital Regulation Cooperation Forum (the “DRCF”) with new powers to create certification schemes and issue new guidance to supplement the work of individual regulators and sector-specific standards. The DRCF was formed in July 2020, originally made up of the Competition and Markets Authority (the “CMA”), the Information Commissioner’s Office (the “ICO”) and the Office of Communications (“Ofcom”), and later joined by the Financial Conduct Authority (the “FCA”). The DRCF aims to encourage greater co-operation between the regulatory bodies that grapple with the unique challenges posed by the regulation of online platforms.

The Report notes that there is a very mixed picture of responsibility and accountability in AI, and that a lack of co-operation between regulatory bodies can create confusion. The Future of Work therefore propose that, to make the UK a world leader in governance as well as innovation, new mechanisms need to be introduced to build common capacity and enforcement powers. To that end, the Report recommends that the DRCF be supported by an interdisciplinary team working horizontally across the statutory guidance that already exists.

5. Fundamental values

The Algorithms Act would codify existing fundamental values to guide the development and application of a human-centred AI strategy. Whilst the Future of Work recognise the “Principles of Good Work” as relevant in this context, the wider application of this proposal could draw on other context-specific values. This proposal is an interesting development of the Strategy, which does purport to protect the public and fundamental values but does not articulate what those values are or how they would fit into the governance framework. Additionally, the Report clearly places the interests of the human first, which may differ from wider UK consultations where, in some cases, the human element could be removed to promote innovation; the potential removal of human review from the operation of AI within the government’s “Data: a new direction” reforms comes to mind.

What are some of the risks organisations face if the Algorithms Act is adopted?

We consider that transparency as to how AI makes decisions could, at a high level, be seen as “good practice”, particularly when it affects recruitment and promotions. However, such transparency is not without risk. For example, employers might inadvertently disclose information which should remain confidential, or employees might not fully understand the information and could misinterpret it. The Algorithms Act could also result in more whistleblowing claims by employees who believe that the organisation has committed breaches and that there is a public interest to be protected.

What will happen next?

Whilst the Strategy presented the priorities and aims of AI governance, it did not set out a clear direction at this stage for how we can expect AI governance to look. The Report therefore proposes a suggested approach to regulation which the Future of Work consider would maximise innovation and, more importantly, address the challenges posed by the rapid development of AI. The Algorithms Act might have been developed against the backdrop of the workplace, but it could be applied across all sectors. The Future of Work do in fact acknowledge that “our focus is the frontier of changes to work, but our recommendations inform the wider debate about AI governance and regulation as part of the UK’s AI Strategy”.

The Strategy, and much of the government’s recent appetite for AI development, has appeared to put innovation first. In contrast, the Report and the proposed Algorithms Act aim to establish a framework of AI governance that a) puts people first, b) introduces mechanisms to protect human agency and only then c) references driving innovation. However, the Strategy and the Report do meet in the middle where co-operation is concerned, both advocating for existing sector-specific regulations to be updated, with overarching cross-sector rules, likely in the form of principles, filling the gaps.

The Report notes that it is the role of the law to “shape innovation and organisational behaviours in ways which serve the public interest. And it is the role of legislators to regulate for real accountability and real AI innovation”. The opportunity is therefore open for the UK to follow through from the Strategy and develop a gold-standard governance system for AI. Whether such governance will be shaped by the proposals made in the Report remains to be seen: a White Paper setting out phase 2 of the Strategy is expected from the government in early 2022.

The authors would like to thank Jessica Wilkinson, trainee solicitor, for her assistance in writing this article.