The CMA’s focus on algorithms: computing for compliance success?  

United Kingdom

National lockdowns and closed workplaces in recent months have only accelerated the trend of an ever-greater proportion of our lives being conducted in the digital world. The increasing importance of the digital space is reflected in the signalling by the Competition and Markets Authority (“CMA”) this month that it intends to step up significantly its monitoring and enforcement activity with respect to algorithms.

This week the CMA assembled a panel of experts for a webinar; the panellists were consistent in their view that more investigations and more regulation will be needed. The panel highlighted a number of ways in which, in their view, algorithms can be problematic, including:

  • Harm to consumers – for example, via personalisation that could result in discrimination.

  • Competition law harms – such as excluding competitors from market access via self-preferencing, or using algorithms to make collusion easier and more efficient.

This expanded on a research paper and accompanying call for information on algorithmic harms published by the CMA on 19 January (available here). It is clear that the CMA intends to use all powers at its disposal to take an increasingly tough line on suspected harms – including, in due course, the new ex ante powers that it hopes to obtain on the establishment of its Digital Markets Unit (“DMU”).

While acknowledging that the behaviour of machine learning algorithms in particular can be hard to predict, the CMA is clear that it considers firms to be responsible for effective oversight of algorithmic systems, which should include robust governance, holistic impact assessments, monitoring and evaluation.

The CMA’s message is that any business that deploys algorithms to offer its services needs to understand the impact those algorithms are having on the effective operation of markets. Where algorithms are potentially causing harm, the CMA will expect businesses either to consider whether their practices need to be adjusted, or to bear the risk of a future investigation.

However, algorithms are already ubiquitous and often hugely beneficial. While the CMA may be concerned, to successfully challenge any particular algorithmic system it will need to meet a high standard of proof in evidencing its novel theories of competition or consumer harm.

Why does the CMA want to scrutinise algorithms more closely?

Our previous alerts (see “Artificial Intelligence – Data as the new measure of competition”) have commented that data – and the algorithms used to process it – have steadily moved up the enforcement agenda in recent years, as increasingly large segments of the economy move online, new technology is developed and new business models evolve.

In recognition of this, the CMA established its Data, Technology and Analytics (“DaTA”) unit in 2018. This unit aims to use sophisticated data engineering, machine learning and artificial intelligence techniques to inform the CMA’s enforcement work. The team claims to be the largest of any competition regulator in the world and includes data scientists, data engineers, technologists and behavioural scientists.

The DaTA unit is now at the centre of a new CMA programme to analyse algorithms, developing its knowledge to better identify and address harms. The first output from this new campaign is a paper that draws together a summary of many different types of alleged algorithmic harm.

What potential harms to competition and consumers has the CMA now identified?

The CMA focuses particular attention on possible harms brought about by ‘personalisation’. While there is little evidence to date of personalised pricing taking root, the CMA is concerned that rapidly growing volumes of customer data, coupled with increasingly powerful algorithms, could cause harm. The CMA says this could include manipulation of choice architecture, customer journeys, search rankings and design in ways that benefit the platform (and/or, in certain cases, the platform’s customers) but not necessarily the consumer. The CMA says this could also lead to discrimination against protected groups or vulnerable consumers.
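By way of illustration only, a business concerned about this kind of risk might compare the outcomes its pricing algorithm produces across customer groups. The Python sketch below is a minimal, hypothetical check – the group labels, prices and threshold are invented, and nothing here is a method prescribed by the CMA.

```python
from statistics import mean

def personalisation_disparity(offers, threshold=0.05):
    """Flag whether average personalised prices differ across
    customer groups by more than `threshold` (relative).

    `offers` maps a group label to the list of prices that the
    pricing algorithm quoted to customers in that group.
    """
    group_means = {group: mean(prices) for group, prices in offers.items()}
    lowest = min(group_means.values())
    # Relative gap between the best- and worst-treated groups.
    gap = (max(group_means.values()) - lowest) / lowest
    return gap, gap > threshold

# Hypothetical data: prices quoted to two customer segments.
offers = {
    "segment_a": [9.99, 10.49, 10.25],
    "segment_b": [11.99, 12.25, 11.75],
}
gap, flagged = personalisation_disparity(offers)
print(f"relative price gap: {gap:.1%}, flagged: {flagged}")
```

A check of this kind would only ever be a starting point: a genuine impact assessment would also need to control for legitimate cost differences between customer groups.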

The CMA next focuses on the use of algorithms in exclusionary practices, which it is concerned dominant firms can use to deter competitors from challenging their market position. Examples include:

  • Self-preferencing – the CMA emphasises online marketplaces and search engines that have strong market positions and represent a gateway for businesses to access customers. The CMA is considering search rankings, choice architecture and the possible effect of information asymmetries or conflicts of interest on the part of the platform (a toy sketch of self-preferencing in a ranking algorithm appears after this list).

  • Manipulating ranking algorithms to exclude competitors – the CMA cites the risk of search and ranking algorithms placing insufficient weight on maintaining competition, which could reinforce market power in other markets.

  • Changing algorithmic systems in gateway services in a way that harms businesses which rely on them – the CMA says that gateway platforms that harm rivals, or businesses that rely on the platform, by changing their algorithms (even unintentionally) could breach competition law.

  • Predatory pricing – the CMA says that, in theory, an incumbent firm could use the same data, algorithms and techniques used for personalised pricing to identify and selectively target the customers most at risk of switching, or who are otherwise crucial to a new competitor achieving critical mass. This could make it easier to predate, foreclose or marginalise competitors.
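To make the self-preferencing concern above concrete, the sketch below shows a toy ranking function in which the platform’s own listings receive a score boost, together with a simple internal audit that compares rankings with and without the boost. All names, fields and weights are invented for illustration; real ranking systems are vastly more complex.

```python
def rank(listings, own_brand_boost=0.0):
    """Rank listings by relevance score, optionally boosting the
    platform's own products. Fields and weights are hypothetical."""
    def score(item):
        boost = own_brand_boost if item["platform_owned"] else 0.0
        return item["relevance"] + boost
    return sorted(listings, key=score, reverse=True)

listings = [
    {"name": "rival_widget", "relevance": 0.90, "platform_owned": False},
    {"name": "own_widget",   "relevance": 0.85, "platform_owned": True},
]

# A simple internal audit: does removing the ownership signal change
# the ranking? If so, the system is self-preferencing.
neutral = [i["name"] for i in rank(listings)]
boosted = [i["name"] for i in rank(listings, own_brand_boost=0.1)]
print("neutral ranking:", neutral)
print("boosted ranking:", boosted)
print("self-preferencing detected:", neutral != boosted)
```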

The CMA is also giving thought to the question of potential collusion by pricing algorithms. It cites three broad areas of concern in its paper:

  • More data makes it easier to collude – greater volumes of pricing data and automated pricing systems can make it easier to sustain explicit coordination between competitors, since deviations from price-fixing agreements are easier to detect and respond to.

  • Hub-and-spoke – if competing firms choose the same third-party software developer for their algorithmic systems, or delegate pricing decisions to the same intermediary, could this create a ‘hub-and-spoke’ structure in which prices all move together, and facilitate information exchange?

  • Autonomous tacit collusion – could pricing algorithms learn to collude without any information sharing, pre-existing coordination or human input? (See the toy simulation after this list.)
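The toy simulation below illustrates the third concern under stated, highly simplified assumptions: two firms each run a rule that merely matches the rival’s last observed public price, yet a supra-competitive starting price is sustained indefinitely without any communication or agreement. The numbers are invented, and real-world pricing agents (often reinforcement learners) are far more sophisticated.

```python
HIGH, COMPETITIVE = 10.0, 6.0  # hypothetical price levels

def reactive_price(rival_last_price):
    """A deliberately simple pricing rule: match the rival's last
    public price, bounded between the competitive floor and the
    high starting price. No data passes between the firms - each
    observes only publicly visible prices."""
    return max(min(rival_last_price, HIGH), COMPETITIVE)

# Both firms start high; each round, each reacts to the other's last price.
price_a, price_b = HIGH, HIGH
for round_no in range(5):
    price_a, price_b = reactive_price(price_b), reactive_price(price_a)
    print(f"round {round_no}: a={price_a:.2f}, b={price_b:.2f}")

# Prices stay at HIGH in every round: a supra-competitive outcome
# sustained by two independent, non-communicating rules. A firm that
# undercut would simply be matched, making deviation unprofitable.
```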

The CMA is also keen to investigate further the possibility that platforms claim to deploy algorithms to address certain online harms, but that those algorithms are ineffective – and that third parties cannot understand or detect this ineffectiveness in order to hold the platforms accountable. The CMA cites, by way of example, systems to detect and delete fake online reviews.

What might the CMA do with the information it is gathering?

The CMA’s research paper indicates that it is determined to get to grips with specific market issues that it suspects are present in the tech space. One webinar panellist this week evoked an ‘arms race’ between algorithm developers and the regulatory authorities. If that is so, the CMA is taking a big step forward in seeking to catch up.

The CMA has made clear in both its paper and webinar that it wants to gather data on algorithmic harms, with a view to identifying potential cases. These would set precedents and increase deterrence. The CMA could investigate broadly under its existing powers as follows:

  • Market studies/market investigations – this tool is very powerful, allowing a holistic review of an entire market rather than of individual participants, and can (after a multi-year process) allow the CMA to order a wide range of remedies. It is especially targeted at markets where the CMA considers that systemic concerns may have arisen.

  • Competition law enforcement – penalties can be severe, at up to 10% of global turnover, but investigations (and ultimately appeals) can be extremely long. While it is a powerful deterrent, this approach is becoming increasingly unattractive to regulators seeking to address suspected harms in fast-moving digital markets.

  • Consumer law enforcement – the CMA has been particularly active in using these powers over the last two to three years. It currently has no ability to levy fines of its own and must instead take action through the courts, but these powers are likely to be boosted following an ongoing review by John Penrose MP.

Many of the examples of potential harm that the CMA has cited appear to relate to platforms, or gatekeeper businesses, which will eventually be subject to the new ex ante regulatory regime for digital markets to be administered by the CMA’s new Digital Markets Unit (see our previous Law-Now here). The CMA says the DMU will ‘go live’ in shadow form on 1 April, just a few days after the close of its consultation. It therefore appears that much of the information gathered in response to the CMA’s call for information will be funnelled into that unit as it starts to orient itself and set enforcement priorities.

What practical steps should digital businesses take now?

Businesses operating in the UK will need to review their processes carefully against the CMA’s report, which essentially sets out a taxonomy of the algorithmic harms that the CMA is currently aware of and suspects could be problematic. They will need to weigh their exposure in light of the CMA’s views – and the risk of a future investigation – against any damage to their businesses from adjusting those processes.

The compliance burden on companies that deploy algorithms is also set to increase. In its report, the CMA comments on the transparency measures that it considers businesses will need to put in place to self-assess compliance, and to be able to demonstrate to the CMA in future that their algorithms are not causing harm. These include a range of different algorithm-auditing techniques, potentially involving third parties. The CMA says that it expects companies to:

  • Keep records explaining their algorithmic systems, including ensuring that more complex algorithms are explainable (a hypothetical sketch of such a record follows this list).

  • Be ready to be held responsible for the output of their algorithms, especially if they lead to anti-competitive, illegal or unethical outcomes.

  • Keep design documents that record system goals, assumptions about sociotechnical context, and other important considerations before development of an algorithm commences. 
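As a hypothetical sketch of what such record-keeping might look like in practice – the field names and log format below are our assumptions, not a structure specified by the CMA – a firm might log one auditable record per algorithmic decision:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One auditable record per algorithmic decision.
    The fields are illustrative, not a CMA-prescribed format."""
    timestamp: str
    model_version: str
    inputs: dict       # the features the algorithm actually used
    output: str        # the decision or score produced
    explanation: str   # human-readable reason for the output

def log_decision(record: DecisionRecord, path="decision_log.jsonl"):
    # Append-only JSON-lines log, so past decisions can later be
    # replayed and explained to a regulator on request.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="ranker-2.3.1",
    inputs={"query": "widgets", "user_segment": "new_customer"},
    output="listing_42 ranked first",
    explanation="highest relevance score (0.91); no ownership boost applied",
))
```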

The CMA also considers that algorithmic systems may need to be monitored on an ongoing basis. One suggestion is ‘regulatory sandboxes’: ideally, a safe space in which firms could test algorithms under live conditions without being subject to the usual regulatory consequences. However, how closely such a sandbox could emulate the real world remains an open question.
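On ongoing monitoring, a minimal sketch – assuming a simple numeric output such as a quoted price, with invented data and thresholds – might flag when a live system’s behaviour drifts away from the behaviour that was assessed at sign-off:

```python
from statistics import mean, pstdev

def drift_alert(baseline, recent, z_threshold=3.0):
    """Flag when an algorithm's recent outputs drift away from an
    approved baseline. A z-score on the mean is a crude but simple
    check; real monitoring would use richer statistics."""
    mu, sigma = mean(baseline), pstdev(baseline)
    z = abs(mean(recent) - mu) / sigma
    return z, z > z_threshold

# Hypothetical: prices the system quoted at sign-off vs last week.
baseline = [9.8, 10.1, 10.0, 9.9, 10.2]
recent = [11.9, 12.3, 12.1]
z, alert = drift_alert(baseline, recent)
print(f"z-score: {z:.1f}, alert: {alert}")
```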

Comment

The digital regulatory environment for algorithms looks likely only to toughen in the UK over the next few years, and the CMA’s enhanced scrutiny is likely to be replicated in other jurisdictions. To the extent that businesses are required to engage with the CMA’s concerns, this thinking is therefore unlikely to be wasted effort, as approaches honed in one jurisdiction can be redeployed elsewhere.

That said, it should not be forgotten that algorithms are already ubiquitous and often create significant benefits for consumers and society at large. The challenge for the CMA will be to avoid eroding those benefits in the pursuit of the algorithms it considers problematic. In particular, post-Brexit it will be especially important not to chill UK-based innovation in products and services that are inherently global and could be developed elsewhere.

While the CMA is clearly concerned about a number of potential issues, in order to successfully challenge any particular algorithmic system it will also need to meet a high standard of proof. There can be no shortcuts in assembling the evidence for what will often be particularly novel and complex theories of harm.