This morning, the Government published its initial response (the “Response”) to last year’s Online Harms White Paper consultation. The Response leaves many issues concerning the regulatory and legislative structure yet to be decided (such as funding and enforcement powers), with the final policy to be published by the Government in the spring. However, before the regulatory framework is completely operational, the Government expects to produce “voluntary” codes in the coming months to tackle the more serious harms “where there is a risk to national security or to the safety of children”.
The “harms” that will be covered
When considering the online harms which will be prohibited, the Response differentiates between illegal content and content that is not illegal but has “the potential to cause harm”, with examples given of “online bullying, intimidation in public life, or self-harm and suicide imagery”. While regulated companies will need to remove the former “expeditiously” and put in place “effective systems” to minimise the risk of illegal content appearing in the first place (although these terms are yet to be defined or explained in any detail), the regime will give regulated companies some freedom to put in place their own policies and processes relating to legal but potentially harmful content (with a higher level of protection to be afforded to children).
Those processes will nonetheless be overseen by the regulator, which will require companies to have “effective and proportionate user redress mechanisms which will enable users to report harmful content and to challenge content takedown where necessary” (recognising concerns raised by consultation respondents about the potential risk that regulation poses to freedom of expression), but which will not investigate or adjudicate on individual complaints relating to individual pieces of content.
There is no mention in the Response of “online disinformation”, which featured in the White Paper. However, there is an indication that the regulation of ‘fake news’ is only a little further around the corner, with the Response stating that “[t]he government is undertaking an ambitious programme of wider work on how we govern digital technologies to unlock the huge opportunities presented by digital technologies whilst minimising the risks. Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”
In this regard, it is also notable that the Response actually starts with a section on “freedom of expression”, recognising its “critical importance”, which is something of a change of tone from the consultation document.
It is still not clear which companies will and will not be in scope of the regulation. While some respondents to the White Paper suggested that companies should be allowed to self-assess whether their services are in scope, the Response did not adopt this suggestion, and instead stated that the regulations will “only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions, for example through comments, forums or video sharing”, and only where the service is provided directly (rather than being hosted on a social media platform). The Response also states that this will cover “only a very small proportion of UK businesses (estimated to account to less than 5%)”, but implicitly acknowledges that this is not always a straightforward question by stating that “guidance will be provided to give clarity on whether or not the services they provide or functionality on their website would fall into the scope.”
The White Paper did not properly address jurisdictional issues, and nor does the Response. While there is a reference to “UK businesses” in the Response, it does not clarify whether only businesses registered in the UK will fall within the regulator’s scope for the purposes of the Online Harms regulations, or whether businesses elsewhere that offer services to individuals in the UK will also be caught (and if the latter, this raises complex jurisdictional issues, which become all the more complex in a post-Brexit world where several tech companies have their European headquarters in the Republic of Ireland).
The regulator and its powers
One issue that appears settled by this Response is the appointment of Ofcom as the regulator for Online Harms, which is not stated in the Response to be an interim regulator pending the appointment of a new, permanent regulator (contrary to some of this morning’s media reports). This decision to extend the remit of an existing regulator has been made despite the majority of consultation responses expressing the view that a new body should be appointed.
In its Joint Ministerial Foreword, the Response did, however, implicitly acknowledge that there is an overlap between the intended remit of the Online Harms regulator and that of the ICO, with reference to the latter’s recently published final Age-Appropriate Design Code. While the Government acknowledges the need for the “right regulatory regime and legislation” to be in place, the Response does not shed any light on how the regimes of the ICO and Ofcom would fit together when considering online content that is aimed at children. Given that the Response states that under the Online Harms regulation the Government “expect[s] companies to use a proportionate range of tools including age assurance, and age verification technologies to prevent children from accessing age-inappropriate content and to protect them from other harms”, there appears to be a real risk of overlap with the ICO’s remit, and potential for companies to find themselves with liability to two regulators for the same issues.
This might be a particular problem when it comes to penalties. The Response does not shed any light on the future penalties regime of Ofcom with its “Online Harms” hat on; in particular, there is no indication of whether Ofcom will be given similar fining powers to the ICO (up to the higher of 4% of annual global turnover or €20 million). Although Ofcom’s current penalties structure is quite complex and varies depending on the particular contravention, its maximum fine of £42m to date is much smaller than the recent fines levied by the ICO in the hundreds of millions of pounds. The broader Online Harms enforcement regime is clearly still being thought through by the Government, with nothing yet concrete in the Response regarding personal liability (criminal or otherwise) of directors and other senior management; this will be revealed in the final policy to be published in the spring.