UK's Online Safety Bill published

The draft Online Safety Bill was published on 12 May 2021, two years after the Government first published its White Paper on Online Harms. The Bill sets out the proposed framework for the first regulatory regime specifically for online tech firms in the UK, and was published a day after a Queen’s Speech in which Her Majesty announced that “my Government will lead the way in ensuring internet safety for all, especially for children, whilst harnessing the benefits of a free, open and secure internet.” While the Bill is ostensibly about controlling the content available online, its scope extends to imposing somewhat nebulous duties on certain service providers to protect freedom of expression and even to protect democracy. The scope of the Bill has therefore been expanded, potentially significantly, beyond the Government’s White Paper response (which we covered here): that response referred only to an exemption from the controls on content for political speech and campaigning, not to a positive duty regarding free speech and democracy.

This update summarises the key features of the Bill. It is by no means exhaustive (the Bill runs to 133 pages and is accompanied by 123 pages of explanatory notes), but it flags some of the issues that should be considered by any business that may fall within the regulation, as well as some of the remaining points of uncertainty around the proposed regulatory framework.

Services and content in scope

The services and content within scope are broadly as expected following the White Paper response, but while some of the questions around the meaning of “harm” have been answered, there are still some significant unknowns.

  • Regulated services: As proposed in the White Paper response, services within scope will include “user-to-user” services and “search” services which:
  1. have a significant number of UK users (although query what “significant” might mean);
  2. have the UK as one of their target markets; or
  3. can be accessed from the UK and “there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK…”. This is one of many instances where the Bill relies on the concepts of “material risk” and “significant harm” without further clarification as to how they are to be interpreted. The Bill makes clear that service providers from outside the UK are intended to fall within its scope, and it seeks to address the resulting extra-territorial enforcement issues by granting Ofcom the power to apply to court for service restriction and access restriction orders (in effect, injunctions against ISPs to prevent UK users from accessing internet services that do not comply with the legislation).
  • Categories of content and services: As proposed in the White Paper response, the Bill sets out three categories of content that it covers, across at least three categories of service providers. All regulated services will be required to address both “illegal content” and “content that is harmful to children”, but it is only Category 1 regulated services that must also address “content that is harmful to adults”:
  • Illegal content: The definition of “illegal content” covers (a) terrorism offences, (b) child sexual exploitation and abuse offences, (c) other “priority illegal content” to be determined in secondary legislation, and (d) other offences directed at an individual as the victim. The Bill expressly states that “[f]or the purposes of determining whether content amounts to an offence…no account is to be taken of whether or not anything done in relation to the content takes place in any part of the United Kingdom”. As such, conduct that could not be the subject of criminal proceedings in the UK because of where it took place may nonetheless form the basis of “illegal content” for the purposes of the Bill.
  • Harmful content: The real complexity arises in the context of the definitions of “content that is harmful to children” and “content that is harmful to adults”. There appear to be three ways in which “regulated content” can be harmful:
  1. If, following a review by Ofcom, it is designated in secondary legislation as “primary priority content” that is harmful to children or “priority content” that is harmful to children or adults.
  2. If the service provider has “reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact” on a child or adult of “ordinary sensibilities”.
  3. If the service provider has “reasonable grounds to believe that there is a material risk” of the dissemination of the content “having a significant adverse physical or psychological impact” on a child or adult of “ordinary sensibilities”. This takes into account (in particular) the number of users of a service and the speed and scope with which content can be disseminated on the service.

The distinction and relationship between these categories is unclear. It would seem that designation as “priority content” is intended to be additive to the general tests in (2) and (3), but given the uncertainties those tests are likely to generate, there is scope for such designation to serve a clarificatory function instead.

There is no definition of, or guidance on, how content might “indirectly” cause harm, or how to determine the “ordinary sensibilities” of an adult or child in circumstances where the content may have been accessed by a large number of people but could only be expected to have an adverse physical or psychological impact on a small minority of them. The concept of “psychological harm” is elaborated on in the explanatory notes to the Bill, which suggest that “serious anxiety and fear” and “depression and stress” are included alongside “medically recognised mental illnesses”.

  • Consumer protection against online fraud: Following pressure from the FCA, backbench MPs, UK Finance and other bodies, the Government’s press release suggests that the Bill will force online companies to address romance scams and fake investment opportunities posted by their users. However, there is no express reference to protecting consumers from online fraud or scams within the Bill. The extent to which regulated services will have to address such content will therefore depend on the content fitting within the general scope of being “harmful”, as set out above. In respect of fraudulent activity, it is notable that, in assessing the risk that content will cause a significant adverse physical or psychological impact, any impact flowing from the content’s potential financial impact is to be excluded. Emails and SMS / MMS messages are excluded content, so the legislation as drafted would not catch scams conducted by these means.

Duties of services within scope

The Bill includes several so-called “duties of care” for regulated services (as set out below). For lawyers, a “duty of care” is a concept most readily associated with the first building block of civil liability for negligence, and so at a superficial level the Bill raises the spectre of additional civil liability for regulated services. However, the duties laid out in the Bill are not really akin to a “duty of care” in that traditional sense. The Bill expressly excludes liability of service providers for breaches of any Ofcom codes made under the Bill, but it does not expressly exclude any right to claim damages arising from breaches by service providers of the statutory duties imposed by the Bill. That said, it also does not expressly provide for rights of compensation for individuals, and the White Paper response (which also referred to duties of care) stated that “the regulatory framework will not establish new avenues for individuals to sue companies” (the idea being that Ofcom decisions could be used as supportive evidence in civil claims, but the Bill would not in and of itself give rise to any causes of action).

The “duties of care” provided for in the Bill are in summary as follows:

  • Duties of regulated services: The Bill sets out the duties imposed on regulated user-to-user services and regulated search services. There are differences between the two sets of obligations, but they broadly include risk assessment duties, safety duties (in terms of the mitigation and minimisation of harmful content), reporting and redress duties, and record-keeping and review duties. While some of these duties reflect processes that service providers will already have in place (e.g. reporting and redress), service providers will also owe duties to protect freedom of expression and privacy, imposing on regulated services a difficult balancing exercise between these competing duties.
  • Duty to protect democratic content (Category 1 only): A further addition to the Bill is the duty imposed on Category 1 regulated services to protect “content of democratic importance”, by having processes in place to ensure that the importance of free expression of this type of content is taken into account in the context of, for example, considering whether to uphold a complaint. While this express duty did not appear in the White Paper response, it appears to have replaced the previous proposal to exclude from regulated content material “relating to political opinions or campaigning, shared by domestic actors within the law”. A service provider will have the difficult task of determining what content will “contribute to democratic political debate in the United Kingdom” and of ensuring that protection is applied equally across the political spectrum.

Exemptions

The Bill addresses both exempt content on regulated services, and exempt services, and there is a crossover between the two that will require close scrutiny by businesses who may fall within the scope of the regulation:

  • Exempt content: “Regulated content” is defined as user-generated content except emails, SMS messages, MMS messages, comments and reviews on provider content, one-to-one live aural communications, paid-for advertisements and news publisher content. This exemption therefore covers comments on articles published on news websites (given that those are comments on “provider content”) as well as articles that are shared on social media services (i.e. content from “recognised news publisher” sites), but does not appear to cover comments made on social media services on the shared versions of those articles. This could produce the anomalous position where a user comment on a news article falls outside the Bill’s scope if it is published on the original news publisher’s website, but is caught by the Bill if it is made on a social media site under the same article shared there. Notably, “journalistic content” must also be afforded express protection by Category 1 regulated services (similarly to democratic content, the service must consider the importance of journalism in the context of any decision relating to the moderation of content). The definition of this content casts a wide net for the service provider to consider, incorporating everything “generated for the purpose of journalism”, which arguably extends, for example, to commentary on an individual’s blog.
  • Exempt services: Services on which the only user-generated content consists of emails, SMS or MMS messages or one-to-one live aural communications are not caught, nor are workplace intranets, “limited functionality services” (a definition which again covers news websites) or services provided by public bodies.

Regulatory aspects

  • Register of services: Once the Bill comes into force, Ofcom must publish and maintain a register of categories of regulated services. These categories will be based on the number of users, service functionalities and other factors to be set by the Secretary of State. Category 1 user-to-user services will face the highest level of regulation, while Category 2 services face fewer restrictions and are split into Category 2A for regulated search services and Category 2B for user-to-user services. DCMS had previously intimated that not all regulated services would be listed exhaustively on the register, and presumably that remains the case (given the likely impracticality of including every regulated service on a register).
  • Fees: Regulated service providers may need to pay a fee to Ofcom (regardless of their Category status), depending on their worldwide revenue (presumably only revenue in respect of the regulated services, although this is not made clear in the Bill), with the threshold figure to be determined by Ofcom. Providers are also under a duty to notify Ofcom of the regulated services they are providing and of details of their revenue.
  • Overview of Ofcom powers: As anticipated, Ofcom will be the online harms regulator, and will have a range of powers, including to (a) gather information from regulated service providers and (b) bring enforcement action against regulated service providers for non-compliance, including the power to issue a penalty of up to the greater of £18 million or 10% of the person’s ‘qualifying worldwide revenue’. However, such enforcement action can only be taken where a service provider has continued to fail to comply with a duty or requirement following a “technology warning notice” or “provisional notice of enforcement action”. The Bill also sets out criminal offences for individuals and senior managers of service providers in relation to information notices served by Ofcom. As for appeals against Ofcom decisions, the Bill only provides for appeals to the Upper Tribunal by regulated service providers in relation to (a) their inclusion on Ofcom’s register of categories of regulated services or (b) an issued “technology warning notice”. This leaves open the potential for judicial review challenges to decisions that are not covered by the statutory appeals process.
  • Individual rights: The Bill does not include any express right for individuals to complain to Ofcom to have their specific cases adjudicated. That is consistent with the White Paper response, which indicated that Ofcom would not investigate or arbitrate individual cases. The Bill does provide for “super complaints”, which must be made by eligible entities (the criteria for eligibility being another point left to secondary legislation).

Comment

The Bill brings a broad range of businesses within the scope of “user-to-user services”: not only the social media ‘tech giants’, but potentially also businesses such as smaller review websites, independent forums and online marketplaces. The categorisation thresholds – which rest with the Secretary of State and Ofcom – will be key to understanding the breadth of the Bill’s scope, and businesses will need to consider the extent to which the regulation will apply to them, given the assessment and other duties imposed by the Bill alongside the significant regulatory and enforcement powers granted to Ofcom. Indeed, much of the detail of the new regime has been left to secondary legislation and to Ofcom codes and guidance, including key questions around what constitutes illegal and/or harmful content. While it is anticipated that the designations will be wide enough to cover hate crimes, scams and other material that is often the subject of social media take-down requests, it remains to be seen how such content will be designated.

The draft Bill will now be subject to pre-legislative scrutiny by a joint committee of MPs before it is formally introduced in Parliament. Such scrutiny can take months, and the draft Bill may be subject to further amendment in the process, prolonging enactment even further. The Bill may therefore not be formally introduced until a subsequent Parliamentary session, which could be as late as next year or beyond.