Algorithms and Collusion – the debate by the OECD Competition Committee

EU

On 16 May 2017, the OECD Secretariat made available a background note about “Algorithms and Collusion” that was at the centre of the debate by the OECD Competition Committee on 21-23 June 2017. After a general contextualization, the note identifies a number of risks and challenges that algorithms could present in the future for competition law enforcement. It explores the potential for regulating algorithms while acknowledging the theoretical dimension of its approach and emphasizing the necessity of conducting a sound cost-benefit analysis before adopting any kind of regulation whose deleterious impacts on the digital economy could outweigh the potential benefits. The approach adopted in the note has the great merit of summarizing a series of relevant questions and of fuelling the debate about algorithms, which is growing in importance day by day. The note is summarized and commented on below. Two reactions from members of the US Federal Trade Commission are also reported.

1. OECD SECRETARIAT BACKGROUND NOTE ON “ALGORITHMS AND COLLUSION”

1.1 Introduction

On 16 May 2017, the OECD Secretariat made available a background note prepared for the OECD Competition Committee roundtable on “Algorithms and Collusion” that took place on 21-23 June 2017 (hereinafter the “Note”)1.

The OECD Secretariat aims to assess how algorithms could change “the competitive landscape by offering to firms opportunities to achieve collusive outcomes in novel ways that do not necessarily require the reaching of an agreement in the traditional antitrust sense, or may not even require any human interaction”2 (Chapter 1).

1.2 Contextualization

The Note provides general definitions and reports that algorithms improve predictive analysis and optimize processes in many business areas such as engineering, finance, health and biology, offering cost reductions, quality improvements and better allocation of resources. It also describes how algorithms are used by governments to detect patterns of criminal behaviour and how algorithms can be used to detect collusion that would be difficult to identify by other means (Chapter 2).

The Note goes on to describe pro-competitive effects of algorithms generated by dynamic pricing algorithms that improve market efficiency on the supply side and by “digital butlers” that improve comparison on the demand side3 (Chapter 3).

After this introductory and rather descriptive part, the Note then focuses on the concern that algorithms could make collusion between undertakings easier and more likely to be observed in digital markets (Chapter 4).

The OECD Secretariat reports in that context that several competition authorities point out that the increase in market transparency and in the frequency of interaction induced by algorithms could lead to a collusive equilibrium at supra-competitive prices.

The Note assesses the impact of algorithms on demand and supply factors, and on the likelihood of collusion. This impact appears to be uncertain. The Note acknowledges that the model used for this analysis is based on strong underlying assumptions that are not verified in reality. It identifies, nevertheless, a clear risk that algorithms “may facilitate anti-competitive strategies, such as collusion and other market manipulations”4.

The OECD Secretariat identifies four categories of algorithms that may facilitate collusion:

  • Monitoring algorithms that may allow colluding undertakings to avoid price wars (with the caveat that the implementation of collusive monitoring algorithms still generally requires traditional communication between the cartelists, which is not a new situation).
  • Parallel algorithms, i.e. dynamic pricing algorithms that companies share and that could be programmed to help determine anticompetitive prices.
  • Signalling algorithms, through which companies continuously send new signals (such as offers to raise prices) and monitor the reactions of the other cartelists until each of them accepts the offered price by sending the same signal, after which a new “signalling algorithm”-mediated negotiation can be launched.
  • Self-learning algorithms that generate “decisions being taken” within so-called “black boxes” (which are at the heart of artificial intelligence (“AI”)) and that could lead to “collusive behaviour” (a minimal illustrative sketch of such an agent follows this list).
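
To make the last category more concrete, the following is a minimal, purely illustrative sketch (ours, not the Note’s) of two self-learning pricing agents of the kind the Secretariat has in mind: each agent repeatedly picks a price, observes its rival’s price and its own profit, and updates its strategy by trial and error (Q-learning). The market model, price grid and parameters are hypothetical. Crucially, nothing in the code instructs the agents to coordinate; the concern voiced in the Note is that supra-competitive prices could nevertheless emerge from such interaction.

    import random

    PRICES = [1, 2, 3, 4, 5]           # hypothetical discrete price grid
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

    def profit(own, rival):
        """Toy demand model: the cheaper firm captures more of the market."""
        share = 0.5 if own == rival else (0.8 if own < rival else 0.2)
        return (own - 1) * share       # margin over a hypothetical unit cost of 1

    # One Q-table per firm: state = the rival's last price, action = own price.
    q = [{(s, a): 0.0 for s in PRICES for a in PRICES} for _ in range(2)]
    last = [random.choice(PRICES), random.choice(PRICES)]

    for _ in range(100_000):
        states = [last[1], last[0]]    # each firm observes only its rival's last price
        acts = []
        for i in range(2):
            if random.random() < EPS:  # occasionally explore a random price
                acts.append(random.choice(PRICES))
            else:                      # otherwise play the best response learned so far
                acts.append(max(PRICES, key=lambda a: q[i][(states[i], a)]))
        for i in range(2):
            reward = profit(acts[i], acts[1 - i])
            best_next = max(q[i][(acts[1 - i], a)] for a in PRICES)
            q[i][(states[i], acts[i])] += ALPHA * (reward + GAMMA * best_next
                                                   - q[i][(states[i], acts[i])])
        last = acts

    print("final prices:", last)       # may settle above the competitive level

Whether such agents actually converge on supra-competitive prices depends heavily on the demand model and the parameters chosen; the point of the sketch is simply that no “agreement” in the traditional antitrust sense appears anywhere in the mechanism, which is precisely what makes this category analytically difficult.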

1.3 Challenges for competition law enforcement

According to the OECD Secretariat, algorithms, and especially AI/self-learning algorithms, create new risks related to behaviours not covered by the current antitrust rules and raise potential challenges for competition law enforcement. In particular, in a scenario where an AI/robot decides to adopt collusive behaviour, it would be difficult to attribute liability to its owner, and this could raise more general questions regarding the scope of antitrust liability.

The Secretariat acknowledges that this approach may still be fairly theoretical, noting: “Although the use of algorithms by companies is widespread in certain industries, the use of complex algorithms based on deep learning principles may still be relatively rare across traditional sectors of the economy. At the moment, there is still no empirical evidence of the effects that algorithms have on the actual level of prices and on the degree of competition in real markets.”5

Nevertheless, it identifies potential challenges that AI/self-learning algorithms could present in the future for competition law enforcement (multiplication of forms of tacit collusion, the potential need to “revisit” the notion of “agreement” and the scope of antitrust liability) and it refers to the available competition law tools to prevent algorithmic collusion (such as market studies and investigations, ex ante merger control, commitments and remedies) while emphasizing their limits (Chapter 5).

1.4 Potential regulation of algorithms

Against that background, the OECD Secretariat examines the idea of potential regulation of algorithms (Chapter 6).

1.4.1 Justifications

The Note summarizes some of the arguments in favour of regulating algorithms. In terms of such arguments, and without restricting the debate to risks of collusion, it refers in particular to the potential risks related to algorithmic selection. More specifically, it expresses concerns about market failure that could be caused by: (1) imperfect information resulting from algorithm secrecy (and the difficulty of interpreting algorithms even when access is granted), (2) data-driven barriers to entry (algorithms need data, and a refusal of access to the required data may affect the development of competitive pressure), and (3) information bias resulting from algorithms’ information selection, which could affect serendipity and eventually prevent the emergence of innovation (Chapter 6.1).

Following that, the OECD Secretariat mentions possible forms of regulatory intervention. If the market forces themselves or self-regulation appear to be insufficient, the Note reports that some suggest: (1) establishing “new regulatory institutions to govern the digital economy”, such as “the creation of a global digital regulator, a central and independent agency that would be responsible for coordinating and supervising the different regulatory aspects of internet and data”; or (2) establishing “a new AI regulatory regime” and an “agency tasked with certifying the safety of AI systems”6.

It observes, however, that the feasibility and desirability of regulating algorithms remain subject to debate. It notes that the market-oriented approach towards the digital economy adopted since the launch of the internet contributed to the current success of online commerce and the launch of innovative services. It also recalls that “policy makers should cautiously evaluate the risks of over-enforcement, as excessive regulatory interventions could result in new barriers to entry and reduce the incentives of companies to invest in proprietary algorithms, which so far have brought a great value for society” and that the OECD “recommends governments to evaluate the competitive impact of market regulations, emphasising that ‘competition assessment of proposed public policies should be integrated in the policy making process at an early stage’”7.

With this in mind, the OECD Secretariat refers to concerns expressed (and initiatives taken) regarding algorithms’ transparency and accountability in the US (by the FTC’s Bureau of Consumer Protection and the US Public Policy Council of the Association for Computing Machinery (USACM)) and in Europe (by Commissioner Vestager addressing the Bundeskartellamt, and by the German Chancellor Angela Merkel).

The OECD Secretariat points out two specific concerns about transparency and accountability:

  1. Even if full transparency of algorithms were to be obtained, that may not be sufficient to understand how they work (and especially how decisions are taken in the AI “black box”) and additional explanations could be required in this respect.
  2. Regarding algorithms involving cross-border questions that concern “privacy law, transparency law, data protection, intellectual property rights, consumer protection and competition law”8, it will be a challenge to identify the regulator(s) that will be in a position to examine them.

In this context, the OECD Secretariat reports that the European General Data Protection Regulation (GDPR) confers on European citizens a “right to explanation” implying that they can “seek and receive an explanation for decisions made by algorithms, especially if they are using profiling techniques”9. This provision does not create a full transparency obligation for owners of algorithms, but it obliges them – in some circumstances – to be able to provide an explanation of how their algorithms work, if asked.

1.4.2 Propositions of regulation to prevent algorithmic collusion

Although the OECD Secretariat recognizes that the scenario of an algorithmic “virtual” collusion may still be hypothetical, it nevertheless suggests “three potential types of regulatory intervention”10 that could be discussed further because of the potential risks that algorithms could pose to competition. It suggests considering the following approaches:

  1. Introduction of price regulation.
  2. Adoption of policies that alter market transparency (e.g. systems of secret discounts). The OECD Secretariat openly acknowledges that this kind of measure can alter both competition and consumer welfare.
  3. Regulation of algorithm design to make the prohibition on adopting anticompetitive conduct an inherent part of algorithm design (similar to Asimov’s three laws of robotics). This suggestion implies that the regulator should be able to supervise (and understand) algorithms so as to be in a position to verify that they effectively comply with competition rules (a deliberately naive sketch of such a “guardrail” follows this list).
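
To see why the third suggestion is more demanding than it sounds, consider the following deliberately naive sketch (again ours, not the Note’s; all names and values are hypothetical) of a “compliance by design” wrapper around a pricing model. Rules that can be stated mechanically, such as refusing to price on non-public competitor data or keeping an auditable trail, are easy to encode; a prohibition such as “do not tacitly collude” is not.

    from dataclasses import dataclass

    @dataclass
    class PricingInput:
        own_cost: float
        observed_prices: list      # rivals' prices fed to the model
        source_is_public: bool     # provenance flag set by the data pipeline

    audit_log = []                 # regulator-auditable trail of decisions

    def guarded_price(recommendation: float, data: PricingInput) -> float:
        """Apply two mechanically checkable rules to a pricing model's output.

        These rules stand in for the far richer body of competition law
        that full 'compliance by design' would have to encode.
        """
        # Rule 1: never price on non-public competitor information.
        if not data.source_is_public:
            raise ValueError("refusing to price on non-public competitor data")
        # Rule 2: record every recommendation for later supervisory review.
        audit_log.append((data, recommendation))
        return recommendation

    # Hypothetical usage: a recommendation built on public data passes the checks.
    inp = PricingInput(own_cost=4.0, observed_prices=[9.9, 10.1], source_is_public=True)
    print(guarded_price(9.8, inp))

The asymmetry is visible at a glance: both rules concern observable inputs and record-keeping, whereas the conduct the regulator actually cares about, tacit coordination emerging from lawful individual decisions, is a property of the market outcome rather than of any single line of code.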

1.5 The conclusion of the Secretariat

Having explored the question of possible regulation of algorithms to prevent algorithmic collusion, the OECD Secretariat concludes its Note by stressing the need for a cautious and multi-dimensional approach in addressing this question:

However, at this stage, there are still concerns that any regulatory interventions might have severe negative impacts on competition that could outweigh their potential benefits. If regulatory solutions are to be considered, competition concerns would only be an element of such discussion and considerations going beyond the risk of collusion would have to be factored in such discussions. […] Given the multi-dimensional nature of algorithms, policy approaches should be developed in co-operation with competition law enforcers, consumer protection authorities, data protection agencies, relevant sectorial regulators and organisations of computer science with expertise in deep learning. In conclusion, despite the clear risks that algorithms may pose on competition, this is still an area of high complexity and uncertainty, where lack of intervention and over regulation could both pose serious costs on society, especially given the potential benefits from algorithms. Whatever actions are taken in the future, they should be subject to deep assessment and a cautious approach.11

2. COMMENTS

The Note is interesting as it gives an understanding of the framework in which the OECD Competition Committee will debate algorithms and collusion. The Note reflects on doctrinal questions (expressed by academics such as Ariel Ezrachi and Maurice Stucke in their book “Virtual Competition”12) and presents the concerns that have already been expressed by policy makers (such as EU Competition Commissioner Margrethe Vestager). The Note itself appears to attempt to strike a careful balance between the promotion of tools that pave the way for amazing developments and concerns about the misuse of those same tools.

However, certain elements in the Note appear to raise questions and two short comments may be made.

2.1 First comment: “If pricing practices are illegal when implemented offline, there is a strong chance that they will be illegal as well when implemented online”13

One may question whether several of the categories of algorithm that, according to the OECD Secretariat, may facilitate collusion in fact present new challenges for competition law. Indeed, the implementation of monitoring algorithms described in the Note still requires traditional communication between the cartelists, which is not really a new situation. The harmful scenario of collusive parallel algorithms in the Note implies a rather old-fashioned exchange of information at the time of inception of the parallel algorithms. Also, the potential risk generated by signalling algorithms implies, in order to be prohibited, that cartelists agree on the signal before launching the mechanism. So, at least for those categories of algorithm, the exchange of information that takes place between cartelists at various stages, with a view to altering competition, should already fall within the scope of current competition rules and does not require the adoption of new regulation.

A debate around this question has clearly been fuelled by the Note as evidenced by some of the first reactions to it.

For example, a few days after the publication of the Note, Terrell McSweeny, Commissioner of the US Federal Trade Commission, published on the FTC’s website a speech entitled “Algorithms and Coordinated Effects”14. In her speech, she comments on the risk of collusion through pricing algorithms, pointing out that pricing algorithms are tools that can be used to implement a prohibited agreement reached between humans, and that this use will potentially make such an agreement even more difficult to detect and ultimately “may make price fixing attempts more frequent”15. She also expresses some concerns about price discrimination and notes that “Concerns about algorithmic tacit collusion are still largely theoretical at this point”16 but recommends remaining vigilant. Against that backdrop, she states that she will follow the OECD work for which the Note was prepared: “Next month, the OECD will be holding a roundtable on algorithms and collusion and I look forward to reading the contributions of participants.”17

The following day, Maureen K. Ohlhausen, Acting Chairman of the US Federal Trade Commission, published on the FTC’s website another speech: “Should We Fear The Things That Go Beep In the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing.”18

Maureen Ohlhausen states: “I’d like to suggest tonight that although antitrust enforcers should always remain vigilant for new forms of anticompetitive behavior, some of the concerns about algorithms are a bit alarmist. From an antitrust perspective, the expanding use of algorithms raises familiar issues that are well within the existing canon. An algorithm is a tool, and like any other tool, it can be put to either useful purposes or nefarious ends. There is nothing inherently wrong with using mathematics and computers to engage more effectively in commercial activity, regardless of whether that activity is participation in the financial markets or the selling of goods and services.”19

She reminds us that “Setting prices together is illegal, while observing the market and making independent decisions is not” and considers that the existing antitrust framework in the US “is sufficiently flexible and robust that it can already accommodate several of the current concerns applicable to the widespread use of algorithms”20.

About the potential risk that algorithms could facilitate collusion that competition law may not be able to address (and that could ultimately lead to the adoption of a specific regulation as described in the Note), Maureen Ohlhausen comments: “Again, this is fairly familiar territory for antitrust lawyers, and we even have an old-fashioned term for it, the hub-and-spoke conspiracy. Just as the antitrust laws do not allow competitors to exchange competitively sensitive information directly in an effort to stabilize or control industry pricing, they also prohibit using an intermediary to facilitate the exchange of confidential business information. Let’s just change the terms of the hypothetical slightly to understand why. Everywhere the word ‘algorithm’ appears, please just insert the words ‘a guy named Bob’. Is it ok for a guy named Bob to collect confidential price strategy information from all the participants in a market, and then tell everybody how they should price? If it isn’t ok for a guy named Bob to do it, then it probably isn’t ok for an algorithm to do it either.”21

This last argument illustrates well the recurring temptation to regulate (too quickly?) what people do not (yet?) (fully?) understand. In a recent contribution, Thibault Schrepel points out this reflex: “In fact, we should remain prudent not to heed the siren’s call that asks for more interventionism each time there is a new technical/technological evolution. As described by Henry Hazlitt in its Economics in One Lesson, each evolution of production techniques/emergence of new means is always the occasion for some to reintroduce the same old arguments asking for a greater degree of government intervention. It was already the case in the late 19th century (…), in the 1930s, the 1960s and the 1970s (Gunnar Myrdal accused machinery to reduce the amount of work available to humans, calling… for intervention). Today, the New Economy triggers the same desire. The emergence of high-tech has indeed become the pretext to regulate what we understand very little.”22

This leads to the second comment.

2.2 Second comment: “Like an employee (…) an algorithm remains under the firm's control, and therefore the firm is liable for its actions”23

The OECD Secretariat identifies the risk that the implementation of self-learning algorithms could lead to the “decision” within an AI “black box” to adopt a “collusive behaviour”24.

However, the Note is not very clear on one essential point: With whom/what will the AI collude?

Illustrating the risk of “collusion and other market manipulations”25 generated by algorithms, the Note refers to the Flash Crash26 of 6 May 2010. One may, of course, question whether the Flash Crash episode, which happened almost in lab conditions27, is really transposable to the actual digital economy and whether it could be used to justify the potential regulation of algorithms in general, but that is not the point here.

The point here is that the risk of collusion as pointed out in the Note is a risk of collusion between AIs and it is in this context that the OECD Secretariat suggests considering regulating algorithm design in order to make the prohibition on adopting anticompetitive conduct an inherent part of algorithm design.

Although there is very little doubt that interactions between AIs will create challenges for various aspects and fields of law, including competition law, it still seems fair to consider that this should not mean that (very broad and overarching) regulation is necessarily needed. The key issue is to find the appropriate balance between ex ante and ex post approaches.

The suggestion of regulating algorithm design on an ex ante basis seems to be in line with the precautionary principle. From that perspective, it is not unreasonable to assess risks and require the prevention of damage occurring rather than to adopt a wait-and-see approach and try to attribute liability afterwards.

Still, it should be noted that competition rules, in all their aspects and complexities, may be far more sophisticated than Asimov’s three laws of robotics referred to in the Note, and that their inclusion in algorithm design may create a series of very complex challenges that could take a very significant amount of time to overcome, if that is possible at all, before an algorithm design could be “certified” as “fully competition law compliant and without any (future) risk”.

Similarly, there is also the risk that a far-reaching and stringent ex ante approach prevents innovation and produces a series of undue negative off-target effects. It is even possible that the three potential types of regulatory intervention suggested in the Note (and especially the third one) could be more deleterious than the risk of a potential virtual algorithmic collusion in the digital economic world, as the OECD Secretariat itself acknowledges.

The Note is very clear on the multi-dimensional nature of algorithms but seems to focus only on the risks of undesirable side effects of algorithm regulation in the digital economy, while the regulation of algorithms would not only affect this sector but could also affect other domains of human activity of crucial importance.

In life sciences, for example, the development of next-generation sequencing tools makes more human genomic data available every day. Algorithms and AI will very soon be crucial to coping with the volume of data that those sequenced genomes represent, and could allow researchers to identify new treatments for cancer or rare diseases28, for instance. It would obviously be particularly harmful if general regulation of algorithms hampered the development of algorithms and AI in this field.

Further debates on these crucial questions are clearly needed and from this perspective the OECD initiative is very welcome. In this context, due regard should be given to the proposition that at this stage a cautious ex post approach may be adequate, provided it is clear for AI owners that “Like an employee (…) an algorithm remains under the firm's control, and therefore the firm is liable for its actions”29.

This does not mean that competition enforcers should remain passive and sit back and wait. Indeed, it could, in particular, be suggested that competition enforcers should continue to develop their own AIs that will one day target the anticompetitive behaviour of other AIs. Also, there may be a case for (early and compulsory?) knowledge and guidance sharing when actual risks are identified, so that risks do not spread and are mitigated in an appropriate manner. In all these respects, competition enforcers, as well as law practitioners, should probably also begin to get accustomed to programming languages (such as Python) to be able to better understand algorithms (a toy illustration of what a first screening tool might look like follows).
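
As a sketch of what a first piece of in-house tooling might look like (ours alone; neither the Note nor the FTC proposes this code, and all firm names and prices below are hypothetical), even a few lines of Python with no AI at all can screen scraped price data for series that move in suspicious lockstep. A high correlation is of course only a screen, since parallel pricing can be perfectly lawful, but it tells an enforcer where to look first.

    from itertools import combinations
    from math import sqrt

    def pearson(xs, ys):
        """Plain Pearson correlation between two equal-length price series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def flag_parallel_pricing(price_series, threshold=0.95):
        """Return firm pairs whose prices move almost in lockstep."""
        flags = []
        for (a, sa), (b, sb) in combinations(price_series.items(), 2):
            r = pearson(sa, sb)
            if r >= threshold:
                flags.append((a, b, round(r, 3)))
        return flags

    # Hypothetical daily prices scraped from three online sellers.
    prices = {
        "firm_a": [9.9, 10.1, 10.4, 10.4, 10.8, 11.0],
        "firm_b": [9.8, 10.0, 10.5, 10.3, 10.9, 11.1],
        "firm_c": [10.2, 9.7, 10.0, 9.6, 9.9, 10.1],
    }
    print(flag_parallel_pricing(prices))   # e.g. flags the firm_a/firm_b pair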

After the publication of the OECD Secretariat’s Note, FTC Commissioner Terrell McSweeny expressed a very similar position, noting that “as the technology running the algorithm becomes smarter and more autonomous, research should focus on whether it tends to achieve a collusive outcome without being programmed to do so”30 and concluding her speech with the following observation and thought: “One thing I can say with confidence is that the rise of pricing algorithms and AI software will require changes in our enforcement practices. We, as enforcers, need to understand how algorithms and AI software work in particular markets. At the FTC, we have taken steps to expand our in-house expertise by adding the Office of Technology, Research and Investigations, which includes technologists and computer scientists. As I have said before, this is just a first step. I believe that technologists will come to play an increasing role in cases involving pricing algorithms and AI in the future.”31

3. CONCLUSION

Algorithms are reshaping our world, and not only the digital economy.

The OECD Secretariat’s Note has the merit of fuelling the debate about algorithms and their potential regulation, which, if it is ever needed, will be extremely complex to implement due to, among other difficulties, their multi-dimensional nature.

Algorithms are tools that can be misused and there is no doubt that they will be.

The crucial question is likely to be how to identify “the good” and “the bad” and how to ensure that “the bad” does not occur as an unintended consequence of “the good” while not preventing “the good” from contributing to innovation and progress. To answer those questions and determine the extent to which one is liable for what one creates and uses, it will first of all be essential to understand what one is dealing with.

Therefore, regulators, law enforcers and lawyers must make the effort to learn programming languages and, at the very least, acquire a better understanding of algorithms. This should help regulators to try to prevent the misuse of algorithms, law enforcers to try to identify and sanction the actual misuse of algorithms, and lawyers to assist their clients in the design of algorithms and to defend their rights.


1 Contributions from the EU, Italy, Russia, Singapore, Ukraine, the United Kingdom, the United States and others are available at http://www.oecd.org/competition/algorithms-and-collusion.htm.

2 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.5.

3 While making the traditional antitrust SSNIP test obsolete in a new context of individualized prices (Box 11).

4 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.22. Referring to the Flash Crash of 6 May 2010 (Box 7).

5 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.32.

6 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.45.

7 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.46.

8 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.47.

9 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.48.

10 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.48.

11 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.50.

12 Ezrachi, A. and Stucke, M. E. (2016), Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, Harvard University Press, United States.

13 Algorithms and Collusion - Note from the European Union of 14 June 2017, p.9.

14 Terrell McSweeny, Algorithms and Coordinated Effects, 22 May 2017 (online).

15 Terrell McSweeny, Algorithms and Coordinated Effects, 22 May 2017 (online).

16 Terrell McSweeny, Algorithms and Coordinated Effects, 22 May 2017 (online).

17 Terrell McSweeny, Algorithms and Coordinated Effects, 22 May 2017 (online).

18 Maureen K. Ohlhausen, Should We Fear The Things That Go Beep In the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing, 23 May 2017 (online).

19 Maureen K. Ohlhausen, Should We Fear The Things That Go Beep In the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing, 23 May 2017 (online).

20 Maureen K. Ohlhausen, Should We Fear The Things That Go Beep In the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing, 23 May 2017 (online).

21 Maureen K. Ohlhausen, Should We Fear The Things That Go Beep In the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing, 23 May 2017 (online).

22 Thibault Schrepel, Here’s why algorithms are NOT (really) a thing, Concurrentialiste, 15 May 2017 (online).

23 Algorithms and Collusion - Note from the European Union of 14 June 2017, p.9.

24 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.31 (and illustration).

25 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.22.

26 Secretariat of OECD, Algorithms and Collusion, background note of 16 May 2017, p.23 (Box 7), referring to the Flash Crash of 6 May 2010.

27 See: https://en.wikipedia.org/wiki/2010_Flash_Crash.

28 See, for example, the Genome Aggregation Database (gnomAD) and Daniel MacArthur et al., Guidelines for investigating causality of sequence variants in human disease, NATURE, 24 April 2014, doi:10.1038/nature13127 (online).

29 Algorithms and Collusion - Note from the European Union of 14 June 2017, p.9.

30 Terrell McSweeny, Algorithms and Coordinated Effects, 22 May 2017 (online).

31 Terrell McSweeny, Algorithms and Coordinated Effects, 22 May 2017 (online).