Online harms regulation gains momentum and shape

At the end of last year, the Government published its long-awaited response to the consultation on online harms. The response (the “Response”) (almost) puts to bed nearly two years of speculation as to what the regulatory framework for online harms will look like in the UK. Whilst we can expect further embellishments before the Online Safety Bill is published (which is anticipated in the coming months), and further changes again on the Bill’s passage to becoming the Online Safety Act, we now have a good idea of the framework that is likely to apply to in-scope services, which will have a duty of care towards their users.

The proposed online harms legislation is part of a broader move towards greater regulation of platforms - not just in the UK, but globally. The Response was published on the same day as the EU Commission published its draft Digital Services Act regulation (the “DSA”); a week after the US Government launched an antitrust lawsuit against ‘big tech’ (which some commentators say foreshadows a shift in policy under the new US administration towards tech regulation); and a week before the Australian Government announced its current consultation on a draft Online Safety Bill.

When comparing the UK position on online harms with the “online harms” elements of the EU’s DSA, many will understandably note first the threat of significant fines under both regimes (in the UK, fines of up to 10% of global annual turnover; in the EU, up to 6% of global annual turnover). What both regimes also have in common is that they represent a significant shift from the position on intermediary liability established by the E-Commerce Directive some 20 years ago. Under the E-Commerce Directive, platforms are not liable for unlawful content they host, provided they do not control or have knowledge of it and, once made aware of it, act expeditiously to remove it. The new laws will require platforms to take much more responsibility for the content on their services, with the focus on procedures and accountability.

What is arguably most significant, however, is the key difference between the two regimes: the DSA will apply only to unlawful content, whereas the UK online harms legislation will (at least for the largest platforms) also apply to legal but harmful content. This is complex and divides opinion – it raises questions about how platforms can implement the new requirements in practice, as well as more fundamental questions about the implications for freedom of speech.

But now to what the Online Safety Bill – and Act – is likely to look like.

1. What companies will be in scope?

The legislation will ostensibly apply to companies whose services:

  1. host user-generated content which can be accessed by users in the UK;
  2. facilitate public/private online interaction between service users, at least one of whom is in the UK; and/or
  3. are a search engine.

Examples of in-scope services include: social media platforms, consumer cloud storage sites, video sharing platforms, online instant messaging services, video games allowing interaction with other players, and online marketplaces. Despite this apparently broad remit, the Government estimates that only 3% of UK businesses will be in scope. This is partly due to the number of exemptions, which include B2B services, ISPs, and email and telephony providers. However, given that official Government figures from the beginning of last year estimated there to be just under 6 million UK businesses, that is still around 180,000 UK businesses potentially within scope. And that does not factor in non-UK businesses that offer services to UK users, which will also be in scope (as explained further below).
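As a rough sanity check on those figures (a purely illustrative back-of-the-envelope calculation – the business count and the 3% estimate are simply the approximate Government figures quoted above, not precise inputs):

```python
# Illustrative back-of-the-envelope check on the in-scope estimate.
# The inputs are the approximate figures quoted above, not precise official data.
uk_businesses = 6_000_000   # just under 6 million UK businesses (official figures, early 2020)
in_scope_share = 0.03       # Government estimate: roughly 3% of UK businesses in scope

print(f"Estimated UK businesses in scope: {uk_businesses * in_scope_share:,.0f}")  # ~180,000
```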

Other services or content that will be out of scope are as follows:

  • Low-risk services with limited functionality – this includes businesses that allow user reviews and comments on their own websites, where those reviews and comments relate directly to the business.
  • Content published by a news publisher on its own site – this also includes user comments on such content, as well as “journalistic content” shared on in-scope services (although it remains to be seen how such content will be identified: for example, does it need to come from a well-known publication, or could it come from a separate, freestanding website or blog?).
  • Policy or political arguments – voters are considered capable of determining the veracity of the political discourse (which raises the question of what constitutes a “political argument” and whether it needs to be made by a political actor or a political campaign – indeed, given recent examples of political argument from across the pond, it may be argued that this exemption should be drawn narrowly).

2. What harmful content or activity is caught and what do in-scope services need to do?

The Response proposes a two-tiered approach to services, with in-scope companies to be divided into two groups: those that provide “Category 1” services and those that provide “Category 2” services.

What harmful content does it apply to?

The harmful content or activity that a company will have to address will depend on whether it is a Category 1 service or a Category 2 service, and is divided broadly into three types:

  1. Illegal content and activity, which all in-scope companies will have to address.
  2. Content and activity that is harmful to children (such as pornography or violent content): all in-scope companies will be expected to assess whether children are likely to access their services and, if so, take measures to protect children on their services, including reasonable steps to prevent them from accessing age-inappropriate and harmful content (this is likely to have some similarities with the ICO’s age-appropriate design code).
  3. Content that is legal but harmful (such as information regarding eating disorders, and disinformation and misinformation), which only Category 1 services will have to address.

Secondary legislation will set out priority categories of legal but harmful material, but the material must nonetheless meet the definition of harmful content and activity, being that which “gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. While it is unclear from the Response what “adverse psychological impact” means, it is anticipated that the bar will be set lower than psychological injury requiring professional intervention, but higher than mere distress at someone being nasty online.

The legislation will not deal with harms that are covered by other regimes, such as breaches of intellectual property rights, breaches of data protection legislation, defamation, fraud, breaches of consumer protection law, cyber breaches or hacking.

What is a Category 1 service?

Category 1 services are those that are “high risk and high reach”, meaning those with a relatively large audience and functionality that allows information to be spread rapidly. Though the thresholds for these factors have not yet been defined, we do know that Ofcom will need to assess services against the thresholds (once determined) and publish a register of Category 1 services, and that services can be added to and removed from this register (and indeed appeal their inclusion on it).

What are the obligations that apply to in-scope services?

All companies, no matter in which category their services are placed, will owe a duty of care to their users and be required to:

  1. remove illegal content expeditiously and provide mechanisms to allow users to report illegal content or activity;
  2. assess the likelihood of children accessing their services and, if children are likely to do so, ensure they are not exposed to harmful content or activity such as cyber bullying and age-inappropriate content; and
  3. have effective user reporting and redress mechanisms.

The Government has sought to make clear that the focus will be on ensuring that companies have compliant systems and processes in place, rather than on their liability for specific pieces of content.

What are the obligations that apply only to Category 1 services?

The additional obligations applying to Category 1 services include the requirements to:

  1. address content and activity which is legal but harmful to adults, including disinformation and misinformation;
  2. provide transparency reports setting out what the company is doing to address online harms (with DCMS having the power to extend this requirement beyond Category 1 services if necessary); and
  3. set out in their terms and conditions how the company will handle harmful content and enforce those terms consistently and transparently.

On disinformation and misinformation, there is a greater emphasis in the Response than in the Government’s initial response, which was published prior to the Covid-19 pandemic (in February 2020). No doubt partly as a result of the Covid-19-related disinformation seen over the course of the last year, the regulator (which will be Ofcom) will be required to establish an expert working group on disinformation and misinformation, which will include rights groups, academics and companies and will aim to build consensus and technical knowledge on how to tackle disinformation and misinformation.

The requirement to apply companies’ terms and conditions relating to harmful content transparently and consistently is intended both to empower adult users to keep themselves safe online and to protect freedom of expression by preventing companies from arbitrarily removing content. Handling harmful content is not just about taking it down: it could also include, for example, deprioritising certain content or attaching a warning to it, signposting users towards support, or alerting users to discourage certain behaviour.

3. What can we expect of the regulator?

Who is the regulator?

The Government announced in its initial response in February 2020 that Ofcom was likely to be the regulator, and that has now been confirmed. As part of its role, Ofcom will produce codes of practice to enable the industry to comply with its new duty of care (which has attracted some criticism, as it leaves the regulator, rather than Parliament, to flesh out the legal duties). As it stands, interim codes of practice addressing terrorism and child sexual exploitation and abuse online have been produced, which companies are encouraged to follow in the meantime, although there will not be a code for each and every harm. As such, some of the precise obligations required to fulfil the duty of care remain unknown.

What are the potential fines and sanctions?

Ofcom will have the power to issue fines that are even higher than those that can be levied by the ICO for data protection breaches, of up to £18 million or 10% of annual global turnover, whichever is higher. In circumstances of repeated or flagrant non-compliance, Ofcom will be able to take measures to disrupt a company’s business activities in the UK, including blocking access to those services in the most serious circumstances. The Government is also reserving its right to introduce criminal sanctions against senior management if they do not respond to information requests. How readily and easily Ofcom will be willing to resort to these measures, however, is another matter. The process of levying such hefty fines, and enforcing against companies domiciled across the globe, will not be straightforward in practice, and blocking access to services is likely to be reserved as a very last resort.
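By way of illustration only (a hypothetical calculation – the max_fine helper and the turnover figures below are illustrative assumptions, not drawn from the Response), the 10% turnover cap overtakes the fixed £18 million figure once global annual turnover exceeds £180 million:

```python
# Hypothetical illustration of the maximum fine available to Ofcom under the proposals:
# the greater of GBP 18 million or 10% of global annual turnover.
def max_fine(global_annual_turnover_gbp: float) -> float:
    """Return the maximum possible fine (GBP) for a given global annual turnover (GBP)."""
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# Purely illustrative turnover figures.
for turnover in (50_000_000, 180_000_000, 2_000_000_000):
    print(f"Turnover £{turnover:,.0f} -> maximum fine £{max_fine(turnover):,.0f}")
```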

Super complaints

Where a company is not meeting its statutory duty of care, Ofcom will accept super-complaints that demonstrate substantial evidence of a systemic issue which is causing harm, or risks causing harm, to large numbers of users or to specific groups of users. Such a complaint cannot be based on specific pieces of harmful content; rather, it must focus on the systems and processes that companies have in place.

Legal action against Ofcom

Any party with sufficient interest in the matter will be able to challenge Ofcom’s decisions, either by judicial review in the High Court or before a statutory tribunal applying judicial review principles. This means the tribunal will not be expected to re-examine the facts of the case; instead, it will consider whether Ofcom has exercised its powers lawfully and fairly.

Funding of the regulator

To fund its new functions, Ofcom will charge fees to companies above a certain threshold based on global annual revenue. Only companies above that threshold will be required to notify the regulator and pay an annual fee. While it is likely that the companies required to notify the regulator and pay the fee will also be those providing Category 1 services, the notification and funding requirement has, interestingly, not been drawn along Category 1 / Category 2 lines.

Overall, this is a very significant expansion of Ofcom’s existing role. And it won’t be operating alone in regulating big tech. Not only will it be working closely with its international counterparts, given the global reach of the proposed bill, but on 1 July 2020 the CMA, ICO and Ofcom announced a new forum, the Digital Regulation Cooperation Forum, to help ensure online services work well for consumers and businesses in the UK. By working together, they seek to harness their collective regulatory expertise, in what is clearly an acknowledgment of the scale of the task they collectively face in regulating online interactions.

Comment

As mentioned above, the Response is one piece of a broader global patchwork of tech regulation that is slowly, but increasingly in earnest, being put in place, particularly in relation to the largest platforms. Whilst the Government has made clear what the framework should look like, some of the devil is likely to be in the detail and as yet we have not seen the proposed text of the legislation. It is likely that the Online Safety Bill when published in the coming months will include some further variations as well as, hopefully, some clarifications, including in relation to the following points:

  • First, while the Response is clear that in-scope companies will have a duty of care towards their users, the precise requirements for fulfilling that duty of care are yet to be set out (largely in Ofcom’s codes of practice).
  • Secondly, as discussed above, while harmful content and activity is now defined, it isn’t clear how such a subjective definition will be workable in practice.
  • Thirdly, the exemptions to the proposals require further clarification as discussed above, especially in relation to the journalistic and political speech exemptions.
  • Fourthly, given that the legislation applies to all in-scope companies offering services to UK users regardless of where they are located, it is unclear how fines against and enforcement in respect of non-UK companies will work in practice.
  • Lastly, it is clearly a Government concern that freedom of expression is protected (freedom of expression is mentioned almost fifty times in the Response), but it’s not clear how this will be safeguarded in practice. Whilst the Response speaks of “robust protections” for freedom of speech, and seeks to protect media freedoms by exempting journalistic content and individual freedoms by requiring user redress for arbitrary takedowns, there appears to be little to prevent companies deciding that, for their platform, certain content is legal but harmful and therefore needs to be addressed.

A separate point to note is that, while the Response confirms that the proposals will not introduce any new causes of action for individuals to sue companies, any regulatory decisions that Ofcom makes against companies, and any fines it levies, could be used as evidence (for example, of a breach of a company’s terms of use) in any legal action that is pursued, much in the same way that ICO decisions are increasingly being used to bolster data protection claims.

One of the biggest questions is what effect these proposals, which go further in relation to online harms than any other territory to date, will have on the UK tech sector. The Government appears keen to work closely with the tech industry to legislate in an inclusive fashion, and will no doubt want to avoid the kind of face-off with big tech recently seen in Australia following the publication of its news media code – but can the UK implement a regulatory regime which protects individuals from legal but harmful content without driving away big tech, while at the same time protecting freedom of speech online? Let the debate begin.