Is the UK’s “pro-innovation” approach to AI regulation sustainable? TUC continues its call for legislation with publication of AI Bill

United Kingdom

The potential impact of artificial intelligence (AI) on the world of work is enormous. The UK AI industry is estimated to be worth over $1 trillion by 2035, and a recent research paper anticipates that up to 59% of job tasks will be impacted by generative AI. However, given the risks involved in using AI, a balance clearly needs to be struck between capitalising on AI's huge economic potential and putting in place appropriate safeguards to protect the workforce.

There is currently a global divergence in AI regulation. The UK government has so far chosen to take a light-touch, or 'pro-innovation', approach to regulating AI in order to keep pace with the rapid developments in AI technology. Rather than legislating, the government has tasked industry-specific regulators such as the ICO, FCA and CMA with creating bespoke measures as they see fit.

Many other jurisdictions, including the US, have adopted a similar approach to the UK. By contrast, in March 2024 the EU approved the Artificial Intelligence Act (EU AI Act), the world's first comprehensive set of AI laws, which will regulate the use of AI systems in the EU. Although there has been no indication of an about-turn by the UK government, the government anticipates that it will ultimately be necessary to take legislative action. In the meantime, calls from relevant stakeholders for AI legislation persist, in particular from the Trades Union Congress (TUC), which published the Artificial Intelligence (Regulation and Employment Rights) Bill yesterday.

The TUC’s AI Bill aims to regulate the use of AI systems in the workplace in order to protect the rights and interests of employees. It also provides for trade union rights in relation to the use of AI systems in the workplace (including consultation requirements) and, perhaps slightly incongruously, for a “right to disconnect”.

Whether the TUC's AI Bill will be passed (either as published or in a watered-down form which seeks a balance of employer and employee interests) remains to be seen, and immediate legislative priorities may lie elsewhere, particularly with a general election on the horizon. Either way, AI is not going away, nor is the need for employers to grapple with it and the workplace risks it poses. Below we take a closer look at the key concepts and approach to AI regulation proposed under the TUC's AI Bill.

Key concepts and approach

The UK government has set out five core principles for regulating AI, as follows:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

The TUC’s AI Bill adopts these principles and adds a sixth: equality, diversity and equality of opportunity. Given mounting evidence and concerns about the vulnerability of AI to bias and associated discrimination risks, this addition perhaps does not come as a surprise.

The TUC’s AI Bill follows the EU AI Act in taking a risk-based approach to regulating AI, but adopts a simplified approach of setting out only ‘high-risk’ or prohibited uses of AI in employment, and regulating them accordingly.

‘High-risk’ decision-making

Most rights and obligations under the TUC’s AI Bill are triggered when an employer uses AI to make ‘high-risk’ decisions in relation to employees, workers or jobseekers. ‘High-risk’ is broadly defined, meaning a decision with the capacity or potential for legal effects or “other similarly significant” effects. The Bill goes on to list a wide range of activities that are presumed ‘high-risk’, including steps taken in relation to disciplinary matters, the termination of employment, capability assessments, trade union membership, protected characteristics, and many more.

Under the TUC’s AI Bill, employers’ obligations when using AI to make ‘high-risk’ decisions would include:

  1. Carrying out a workplace AI risk assessment beforehand. The assessment would need to cover, amongst other matters, risks around health and safety, data protection, equality and human rights, and would require direct consultation with workers and employees;
  2. Maintaining a record of information about AI systems used for making these decisions; and
  3. Consulting with trade unions or other employee representatives on such ‘high-risk’ decision making.

In addition, where 'high-risk' decisions are made through AI, employees, workers and jobseekers would have the right to human reconsideration of those decisions. Where a decision is (or might reasonably be expected to be) detrimental to them, there would also be a right, on request, to a personalised explanation of that decision.

There may therefore be a substantial (and human) administrative burden associated with using AI to make employment decisions.

Prohibitions

In addition to regulating ‘high-risk’ scenarios, the TUC’s AI Bill includes an outright ban on the use of emotion recognition technology when making any high-risk decision that could be detrimental to employees, workers or jobseekers. So-called “empathic AI” has already been used by some employers for recruitment purposes and to assess employees’ satisfaction, stress levels and performance. However, the science behind whether “empathic AI” can accurately recognise emotions is hotly contested, as is the susceptibility of this technology to bias. In the face of this, it is perhaps unsurprising that (like the EU AI Act) the Bill simply bans the use of empathic AI where it is detrimental in employment contexts.

The TUC's AI Bill also prohibits discrimination against employees, workers or jobseekers through the use of AI, and amends the Equality Act 2010 so that employers will be held liable for decisions made by AI. The burden of proof will be on the employer to show that no discrimination occurred, whether by the system or by any human involved in its operation. Employers concerned about being on the hook for discriminatory AI systems, or about being unable to discharge this burden of proof, may find some relief in the "audit defence" contained in the Bill. Under this defence, employers will not be liable for the discriminatory consequences of AI systems they use if they can show that (a) they did not create or modify those systems; and (b) they carried out a thorough audit and introduced procedural safeguards before deploying them.

Statutory right to disconnect 

The TUC’s AI Bill also introduces a right to disconnect and protection from dismissal or detrimental treatment for exercising that right. There are limited caveats to this: the right will not apply if there is an emergency threatening the “fair running” of the employer outside of normal working hours, or where a different arrangement has been agreed through a collective or workforce agreement.

While on first reading the inclusion of this right in the Bill seems incongruous, the TUC’s 2020 report, Technology managing people, highlighted concerns raised by workers about the increasing use of technology to monitor their working time and productivity. With technology and hybrid working capabilities eroding the boundaries between work and personal time, perhaps this Bill is well-positioned to introduce a right to switch off. Interestingly, a right to switch off is something that the Labour party has previously indicated would be included in its general election manifesto, and so that aspect of the TUC’s AI Bill may well be progressed depending on the outcome of the upcoming general election.

Conclusion

So far, the EU is the outlier amongst major economies in passing legislation to regulate the use of AI. It remains to be seen whether being the frontrunner will pay off. Will the EU AI Act successfully safeguard fundamental rights without sacrificing the financial benefits promised by AI? Or does legislating for the use of AI in its (relative) infancy jump the gun, establishing a restrictive framework before the capabilities and full impact of AI can be fully known? Only time will tell, but the publication of the TUC's AI Bill makes clear that AI will remain high on the agenda of relevant stakeholders regardless. While we wait to see whether the Bill will gain any traction in Parliament, employers keen to learn about the latest legal developments in the world of AI can read the recent CMS publications, AI Act: Council and Parliament reach political deal and UK government publishes delayed update on AI policy, and should keep abreast of regulatory developments relevant to their sector in the meantime.