Abstract: Pricing algorithms are increasingly used by businesses as a new tool to respond to an old problem of economic theory: setting the right market price for a given product. While such algorithms can have multiple benefits for the consumer and the market, they remain associated with major risks. These dangers mainly consist in infringements of articles 101 and 102 of the Treaty on the Functioning of the European Union: collusion and price discrimination. The main difficulty is then to enjoy the promises of pricing algorithms without suffering their perils. Finding and maintaining such an equilibrium is a very challenging task for European competition law. The escalating sophistication of algorithms, with the growing use of Artificial Intelligence giving rise to intelligent, so-called ‘black box’ algorithms, makes it considerably harder for competition authorities to identify and delineate the side effects of this phenomenon. In addition to this technical difficulty, the main competition authorities agree that the current competition tools appear inadequate to deal efficiently with pricing algorithms.

The purpose of this paper is to reflect on the different perspectives on how to approach pricing algorithms optimally. The current approach favored by the European Commission is oriented towards regulation and governmental intervention. However, regulation carries some important weaknesses and could even be counterproductive if used in excess. An alternative view, such as that of the Chicago School, which originated in the United States, is worth studying. Built on a market-based approach, it offers some interesting elements that would indeed be very useful to reshape the European regulation. Ultimately, it is the task of the agencies and courts to decide how to resolve these issues. This research, by proposing some concrete solutions to supervise pricing algorithms, is one contribution in this direction.

To quote this paper: Anne-Sophie Thoby, “Pricing Algorithms & Competition Law: How to think optimally the European competition law framework for pricing algorithms?”, Competition Forum, 2020, art. n°0009, available at: https://www.competition-forum.com/.

INTRODUCTION

All of us are constantly using, knowingly or not, pricing algorithms in our everyday life, whether it is to order a taxi ride, shop in an online marketplace, book a hotel for our next vacation, or fill the car with gasoline. Pricing strategies are indeed implemented through powerful algorithms hidden behind well-known platforms: Amazon, Uber, and Google among others.

These new tools are an integral part of the globalization process, constantly associated with the digitalization of the markets, “triggering a domino effect that promotes wider use of algorithms in an industry”[1], fuelled by Big Data[2] and Big Analytics[3]. The presence of algorithms specialized in pricing is particularly remarkable in the e-commerce sector: a survey conducted by the European Commission in 2017 revealed that 78% of the retailers that admitted using software to track prices subsequently adjust their own prices to those of their competitors[4]. Beyond the behavior of their competitors, market players are also now able to build pricing strategies based on the behavior of individual consumers, engaging in personalized pricing, even if only a few – 2% of the respondent retailers – admitted it in the survey[5].

While the recent emergence of the phenomenon and its particular relevance in today's economy make it a topic of particular interest to write about, the multidimensional nature of pricing algorithms[6] is even more fascinating to observe. At the interplay between data, market power, and competition law[7], pricing algorithms are also evolving at the crossroads of multiple other areas of law such as intellectual property, privacy, data protection, and consumer protection. Pricing algorithms are, as such, particularly challenging to envisage, often requiring legal but also economic and technical approaches. In this research, we will limit ourselves to the study of the competition-law-related aspects of pricing algorithms.

Another interesting aspect of this topic is tied to its controversy: pricing algorithms are indeed at the heart of fierce discussions involving public actors such as competition agencies and intergovernmental organizations, as well as academics, economists, and lawyers among others. While some argue that these technological advances can have multiple benefits for the economy itself, as well as for market dynamics and consumers, others fear that this “digitalized hand”[8] could lead to competition law infringements, by incentivizing companies to collude and by enabling the larger ones to abuse their market power at the expense of end-consumers, who may be subjected to personalized pricing practices for instance. The negative effects of such algorithms are also a source of concern with regard to the sustainability of fair competition and the potential deregulation of the market.

Despite the numerous attempts of regulators to capture and delineate the subject through reports, many of the implications of pricing algorithms are still unknown. Such implications can also arise outside the retail market. As an example, pricing algorithms are very frequently used to trade in financial markets[9]. In high-frequency trading, an algorithm is able to generate profits at a speed and frequency impossible for a human trader to match. Such “algorithmic traders”, by monitoring the market very efficiently, facilitate gains from trade.

The difficulty faced by policymakers is partly due to the increasing technical sophistication of algorithms, which are adopting deep learning approaches (Jones 2014, Cuéllar 2016). Algorithms are indeed evolving from simple machines, in which a human enters input parameters to obtain an output, to “intelligent” machines able to learn autonomously and produce outputs on their own, without any human intervention. As a consequence, “pricing algorithms” are also very difficult to define and there is to date no consensus on a definition, as admitted by the French and German authorities in their joint report[10] on the subject; the UK competition agency instead uses an informal definition of a pricing algorithm as “an algorithm that uses price as an input, and/or uses a computational procedure to determine the price as an output”[11].

In light of these elements, this research project aims to give some elements of an answer to the following questions: should pricing algorithms be considered a real threat to competitive markets? If so, to what extent should pricing algorithms be regulated?

The first part of this paper will look at the pro-competitive effects that can be induced by pricing algorithms before considering their potential negative impacts on competition, to determine the risks at stake for the consumer and for market dynamics. The second part will then ponder the optimal answer to pricing algorithms and assess whether it would be in the best interest to favor one of two opposing approaches: some authors indeed emphasize the regulatory approach, while others advocate a more liberal view relying on the self-regulation of the market. Finally, the third part will envisage some practical solutions to exploit pricing algorithms without fearing their competition-related issues.

 

I – PRICING ALGORITHMS: WHAT EFFECTS ON THE COMPETITIVE INTERPLAY?

As briefly introduced before, the implications of pricing algorithms have considerably challenged traditional market dynamics, introducing a very powerful technology based on artificial intelligence. By answering customers’ expectations in an ever more advanced and precise way, pricing algorithms can also benefit the whole competitive process by intensifying it and achieving, as such, its initial procompetitive promise[12]. However, these benefits have to be balanced against the potential anticompetitive effects of pricing algorithms (collusion, monopoly, price discrimination…), the so-called “perils” of the algorithm-driven economy[13].

 

Acknowledged benefits of pricing algorithms

It is important to highlight the procompetitive effects of pricing algorithms on the economic process as well as the major benefits of pricing strategies for consumers.

Pricing algorithms obviously benefit firms but also consumers, providing them with new, better, and more tailored products and services[14]. This is mainly due to the increased transparency offered by some pricing algorithms, such as those implemented in price comparison websites. Such increased transparency, by reducing the information asymmetries[15] and transaction costs that used to exist between sellers and customers at the expense of the latter, helps consumers make better-informed choices[16]. More generally, transparency achieves an efficient allocation of resources. Consumers are indeed now able to compare a larger number of offers, and they can decide to switch to a competing firm’s product if they are not satisfied with the first product’s quality-price ratio. This mechanism benefits customers as it helps lower their search costs: they no longer need to move from one place to another to compare the prices of a given item; they can find all the relevant information directly on online platforms. In this way, well-informed customers are less subject to higher or monopolistic pricing[17]. This is partly facilitated by user-defined parameters enabling customers to choose the maximum price they are willing to spend on an item or to see the average user rating[18].

Digital comparison websites functioning with pricing algorithms thus take on the role of “digital half” or “digital butlers”[19], assisting customers in their purchasing decisions[20]. Personalization algorithms, for instance, go one step further to align the recommendations made by online market platforms and refine the subsequent results to fit the consumer’s specific interests and needs[21].

As a consequence of these simultaneous shifts between businesses or from one platform to another[22], the competitive pressure on suppliers is increased, which directly incentivizes them to innovate and compete to preserve their market power[23]. This virtuous mechanism, by putting companies under constant pressure, generates dynamic efficiencies[24], thus improving market efficiency (Weiss and Mehrotra, 2001). By making the market more transparent, algorithms indeed promote a better matching of demand and supply[25]: they can prevent unsatisfied demand and excess supply and ensure that the market is in constant equilibrium[26]. As the customer is now able, using these online tools, to easily identify the provider or product that best matches their needs[27], firms and suppliers can adapt accordingly to optimize their inventory levels and manage their stock. This allows resources to be efficiently allocated. For instance, the practice of dynamic pricing allows prices to be constantly adjusted and individual prices to be optimized according to various factors such as the available stock and the anticipated demand[28]. But the typology of these algorithms is far from fixed, as pricing algorithms are constantly evolving in pursuit of efficiency. For instance, some algorithms are even able to process large amounts of data through trial and error, find patterns, and use them to reach optimal pricing[29].
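
For readers unfamiliar with what such a rule can look like in practice, the following minimal sketch illustrates the logic of inventory-sensitive dynamic pricing. It is purely illustrative: the factors, weights, and caps are hypothetical assumptions, not drawn from any real system described in this paper.

# Hypothetical sketch of a simple dynamic-pricing rule: the price rises when
# anticipated demand outstrips the remaining stock and falls when inventory
# risks staying unsold. Real systems weigh many more signals than these two.
def dynamic_price(base_price: float, stock: int, anticipated_demand: int) -> float:
    """Return an adjusted price given inventory and forecast demand (illustrative only)."""
    if stock <= 0:
        raise ValueError("no stock left to price")
    pressure = anticipated_demand / stock        # ratio > 1 means demand exceeds supply
    adjustment = min(max(pressure, 0.8), 1.5)    # cap the swing at -20% / +50% (arbitrary bounds)
    return round(base_price * adjustment, 2)

if __name__ == "__main__":
    print(dynamic_price(base_price=100.0, stock=40, anticipated_demand=60))   # scarcity: price rises
    print(dynamic_price(base_price=100.0, stock=200, anticipated_demand=60))  # surplus: price falls

Even such a crude rule shows how the price tracks the balance of supply and demand; commercial systems simply apply far richer inputs to the same idea.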

All these elements lead Ezrachi and Stucke to envisage that pricing algorithms could mark the end of the “old world antitrust problems”[30], as they tend to give sellers less market power. Manufacturers and sellers, subject to both intrabrand[31] and interbrand[32] competition, are indeed pressured to reduce their prices, increase the quality of their products, and even provide services to retain customers[33]. Moreover, the fact that customers can now very easily access the relevant information on the product they want and compare its price makes it more difficult for sellers to increase prices or to selectively degrade the quality of a product, which tends to reduce the opportunities for price discrimination to develop[34]. In the same way, collusion seems less likely given the radical change in its form: competitors used to agree “in smoke-filled hotel rooms” to fix prices, and the collusion depended on the trust between the accomplices. With the rise of pricing algorithms, the “format” of collusion becomes radically different: each firm now relies on its own pricing algorithm, and computers do not take into account any parameter of “trust”[35]. Pricing algorithms indeed privilege “cold, profit-maximizing price calculations” over trust[36].

Given these elements, it is undeniable that the digital economy and pricing algorithms may have substantial benefits, not only for consumers but also for the economy in general. However, in practice, the ideal relationship between consumers and pricing algorithms, consecrated by the concept of “algorithmic consumers” (Gal and Elkin-Koren, 2017), is far from being as neutral as it appears to be. There are indeed some concerns, rightly expressed by Ezrachi and Stucke among others, that the competitive gains emanating from these algorithms could in fact be annihilated by anticompetitive practices founded on similar algorithms[37].

 

Potential dangers of pricing algorithms

As many firms and industries migrate towards the adoption of pricing algorithms, there is a growing concern that, instead of the end of collusion as first envisaged by Ezrachi and Stucke, new forms of collusion may appear[38]. By using subtler means, pricing algorithms may potentially act beyond the reach of the law, favoring tacit collusion or amounting to price discrimination, as identified in the competition literature[39].

In order to understand the potential negative consequences of pricing algorithms, it is necessary to understand the way they operate, as the anti-competitive practices at stake will differ according to the types of algorithms involved. The joint report co-written by the German and French competition authorities considers three types of algorithms that may imperil consumer welfare and unbalance market dynamics.

The first risk identified by both competition agencies is tacit collusion. Pricing algorithms could indeed be “facilitators” of collusion[40] in two scenarios developed by Ezrachi and Stucke and integrated into the joint report: the Messenger and the Hub & Spoke.

The Messenger scenario designates the situation in which the pricing algorithm is designed and used by humans as a means to collude. Under this scenario, algorithms are only a “technological extension of human will”[41]. Computers are generally in charge of implementing and policing the cartel[42] through the exchange of information or signaling. In the well-known Topkins case[43], for instance, such an algorithm was at stake and enabled David Topkins and his co-conspirators, who sold posters on Amazon Marketplace, to coordinate their respective prices, violating section 1 of the Sherman Act. In concrete terms, the pricing algorithm was able to collect all the competitors’ pricing information, thus allowing humans to align their prices. Such a price-fixing scheme was punished by the US Department of Justice through criminal prosecution. A similar conspiracy was observed one year later in the UK, known as the “Trod” case[44]. In another case, in 2015[45], five banks were fined a total of $5.7 billion by the US Department of Justice for conspiring to manipulate the price of US dollars and euros exchanged on the foreign currency exchange spot market, using an electronic chatroom and coded language[46].
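
To make the mechanics of this scenario concrete, the following minimal sketch shows the kind of role the software plays under the Messenger scenario: it only gathers competitors' posted prices and flags deviations from a level that the humans have already agreed upon, leaving the collusive decision itself to them. The seller names, prices, and data source are invented for illustration.

# Illustrative "Messenger" sketch: the algorithm merely monitors competitors'
# posted prices and reports any deviation from a price level agreed by humans,
# so that the human conspirators can "police" their cartel. The sellers, prices,
# and data source below are invented; no real marketplace API is used.
AGREED_PRICE = 24.99  # price level fixed by the (human) conspirators

def fetch_competitor_prices() -> dict:
    # Stand-in for a real price-tracking step (e.g. crawling a marketplace page).
    return {"seller_a": 24.99, "seller_b": 22.50, "seller_c": 24.99}

def deviations(prices: dict, agreed: float) -> dict:
    """Report sellers pricing below the agreed level."""
    return {seller: p for seller, p in prices.items() if p < agreed}

if __name__ == "__main__":
    print(deviations(fetch_competitor_prices(), AGREED_PRICE))  # -> {'seller_b': 22.5}

The anticompetitive element lies entirely in the human agreement; the code is only its messenger, which is why this scenario remains comparatively easy for authorities to qualify.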

Regarding the Topkins case, another element is worth noting: the price finally set by the algorithm was disconnected from reality due to an algorithmic bug. As a result, the price was so high and so misaligned with consumer needs that no one actually bought the product, so no harm to the consumer could be characterized. This hypothesis raises the question of whether a real risk for consumers can be characterized when the price charged is the result of an algorithmic bug.

According to Ezrachi and Stucke, this first scenario is not the most dangerous, as computers are “only” used as an intermediary to help humans collude[47]. Collusion thus still depends on the humans’ initial will to cooperate[48], which makes it easier for competition agencies to detect and regulate.

The Hub and Spoke scenario is slightly different from the Messenger scenario: the collusion is induced by the fact that the competing firms use a similar algorithm, for the simple reason that they turned to the same IT companies and programmers for the creation of their algorithm[49]. In this situation, the computer does not merely execute the orders given by humans, and the fact that competitors use the same or a similar pricing algorithm can lead to a situation of tacit coordination[50] by virtue of the mere interaction of algorithms on the market. By stabilizing prices, this can damage competition[51].

In this situation, collusion may become a consequence even if it was not the original intent of the firms (sometimes competitors do not even know that they are using the same algorithm to determine their pricing strategies[52]). Uber provides a good illustration of the Hub and Spoke scenario: Uber Technologies Inc, the online platform, is the ‘hub’ and the Uber drivers are its ‘spokes’. The online platform, relying on a single automated price-setting algorithm, determines a market price that will be charged to customers. Contrary to what one could reasonably think, the price fixed by the platform is not a truly competitive one taking into account factors such as consumer demand in a specific location or the presence of a sufficient supply of available car drivers[53]. In reality, the pricing algorithm is not determining the true market price[54] but relying on its “algorithmic monopoly” situation. The firm indeed takes advantage of the fact that passengers sometimes have no other choice but to pay the high price if they want to get home, the company being the only one to offer car services at certain hours of the night or in certain locations. Such a scenario is very threatening to consumer welfare, as such pricing algorithms do not hesitate to charge high prices, exploiting consumers. This was for example the case during a snowstorm in New York, where the prices of rides were multiplied by 8.25 compared to the normal rate[55], or even during a hostage situation in Sydney[56]!

This second scenario, considered by the CMA as presenting the most “immediate risk” to competition[57], is more challenging for competition authorities: they have to look into the algorithm to determine whether there was an original intent to design it in a way that enables consumer exploitation[58]. Another difficult question (which will be treated later) is to determine the extent to which the third party that developed the algorithm could be held liable for such an anticompetitive effect. Concerning this issue, the European Court of Justice was called upon to rule on the liability of intermediaries who facilitated anticompetitive practices, on the occasion of a preliminary ruling referred to the Court by the Lithuanian Supreme Administrative Court following a first decision by the Lithuanian Competition Council[59]. The competition authority had found that the online booking platform Eturas and thirty other travel agencies colluded to apply a common cap on discounts by communicating through an internal messaging system[60], and fined them. The CJEU considered that, once the travel agencies using the platform knew about the content of the administrator’s messages, they must be presumed to have participated in the collusive agreement, unless they expressly distanced themselves from it.

The joint report then identifies a third situation in which pricing algorithms can lead to collusion. This situation arises when individual algorithms are used without any prior human interaction[61] between the representatives of the competing firms. It covers the hypothesis of self-learning and machine learning, theorized under two types of scenarios by Ezrachi and Stucke: the Predictable Agent and the Digital Eye. Even though the CMA considered in 2016 that both scenarios presented a less immediate threat to consumers[62], innovation in technological markets, fuelled by progress in artificial intelligence, is so quick that they have to be taken very seriously.

The first hypothesis, the Predictable Agent, arises when competing companies each adopt their own profit-maximizing pricing algorithm[63], programmed to monitor price changes, react instantaneously to any competitor’s price reduction, and adapt its own price. Such algorithms are so efficient that they can react and adapt their prices in a matter of milliseconds. This conscious parallelism is not punished in itself by competition law, as there is no express collusive agreement among executives. However, it can still be considered an anticompetitive practice if it can be proven that competitors knew that such pricing algorithms could collude.
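
The dynamics described here can be illustrated with a minimal sketch in which several firms independently apply the same “never undercut, always match a price cut” rule. The starting prices, price floors, and the rule itself are hypothetical assumptions chosen only to show how parallel behavior emerges without any agreement.

# Hypothetical "Predictable Agent" sketch: each firm independently runs a rule
# that never undercuts rivals but instantly matches any rival price cut.
# When every firm applies such a rule, prices align without any explicit agreement.
def react(own_price: float, rival_prices: list, floor: float) -> float:
    """Match the lowest rival price, but never price below the firm's own floor."""
    lowest_rival = min(rival_prices)
    return max(min(own_price, lowest_rival), floor)

if __name__ == "__main__":
    prices = [10.0, 9.5, 9.8]   # three firms start at different prices
    floors = [9.0, 9.0, 9.0]    # each firm's cost-based price floor
    for _ in range(5):          # repeated, simultaneous reactions to one another
        prices = [react(p, [q for j, q in enumerate(prices) if j != i], floors[i])
                  for i, p in enumerate(prices)]
    print(prices)               # the firms quickly converge on the same price

Each firm's conduct, taken in isolation, is an ordinary profit-maximizing reaction to observed prices, which is precisely why this form of conscious parallelism is so hard to capture under the notion of agreement.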

The last scenario, the Digital Eye, embodies the ultimate level of sophistication: self-learning algorithms, also known as “black-box” algorithms. Such algorithms, learning through experience, are able to process high volumes of data in real time to achieve optimal pricing strategies and increase profits. They are so efficient that they are able to anticipate and react to competitive threats in advance, by analyzing feedback from the market, even before any pricing change[64]. Ezrachi and Stucke consider that such algorithms achieve a “God-like view of the marketplace”[65]. Under this scenario, which can lead to tacit collusion, the harm is greater according to both authors as humans are detached from the algorithm: they do not know whether, when, and for how long the algorithms have been colluding[66]. For competition agencies, once the anticompetitive outcome is identified, it remains very difficult to find evidence of any anticompetitive agreement, as they cannot rely on the concept of intent. Moreover, several liability issues must be taken into account. Who will be held liable for the collusion enabled by the self-learning algorithms: the firm, the humans? Either way, competition policymakers will have to complete their currently empty toolkit to address such a scenario, as many companies – following the model of Uber, which recently acquired Geometric Intelligence and launched its own A.I. Labs – are on the verge of investing in Artificial Intelligence and machine learning[67].
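
The following minimal reinforcement-learning sketch gives a flavor of how “learning through experience” can drive prices upward without anyone instructing the algorithm to collude. The payoff function, price levels, and parameters are invented assumptions; real self-learning pricing systems are vastly more complex and genuinely opaque.

# Illustrative "Digital Eye" sketch: a tiny Q-learning agent tries two price
# levels against a rival that keeps a high price, and learns by trial and error
# which level earns more profit. No collusive instruction is coded anywhere,
# yet the agent ends up matching the rival's high price because that maximizes
# its profit. Payoffs and parameters are invented for illustration.
import random

PRICES = [5.0, 8.0]            # low vs. high price the agent may post
RIVAL_PRICE = 8.0              # the rival sticks to the high price

def profit(own: float, rival: float) -> float:
    demand = 10 if own <= rival else 4    # the cheaper (or equal) seller captures more demand
    return own * demand

q = {p: 0.0 for p in PRICES}   # estimated long-run value of each price
alpha, epsilon = 0.1, 0.2      # learning rate and exploration rate

for _ in range(1000):
    own = random.choice(PRICES) if random.random() < epsilon else max(q, key=q.get)
    q[own] += alpha * (profit(own, RIVAL_PRICE) - q[own])   # update the value estimate

print(q)   # the high price ends up with the higher estimated value

Nothing in the agent's code refers to the rival's strategy or to any agreement; the supracompetitive outcome emerges purely from profit feedback, which is exactly what makes an intent-based toolkit so ill-suited to this scenario.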

Besides collusion, there is a second risk covered by article 102 TFEU that potentially arises from pricing algorithms, known as behavioral discrimination, which has negative impacts on both consumers and the competitive process. Pricing algorithms can indeed be used to price-discriminate between consumers through two types of pricing strategies: differentiated pricing and dynamic pricing.

With price differentiation, each firm will adapt and charge different prices for each of the customer groups targeted. By segmenting customers into various categories (students, adults, seniors…), the pricing algorithm is able to identify the demand elasticity of each group of customers, which means the price that each consumer is willing to pay[68] for a given product or service, also called the “customer’s reservation price”[69]. Firms then charge the customer based on the price calculated, enabling them to capture the entire consumer surplus[70] and ensuring their profitability at the expense of consumers[71].
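
A minimal sketch of this mechanism, with wholly invented segments and reservation prices, makes the surplus-capture logic visible: each group is charged a price just below its estimated willingness to pay.

# Hypothetical sketch of differentiated pricing: customers are segmented and
# each segment is charged close to its estimated reservation price, leaving
# almost no consumer surplus. Segments and figures are invented for illustration.
RESERVATION_PRICES = {"student": 8.0, "adult": 15.0, "senior": 10.0}  # estimated willingness to pay

def personalised_price(segment: str, capture_rate: float = 0.95) -> float:
    """Charge a set fraction of the segment's estimated reservation price."""
    return round(RESERVATION_PRICES[segment] * capture_rate, 2)

if __name__ == "__main__":
    for segment in RESERVATION_PRICES:
        print(segment, personalised_price(segment))   # student 7.6, adult 14.25, senior 9.5

The finer the segmentation, the closer each price gets to the individual's reservation price, which is how the practice shades into the personalized pricing discussed in the introduction.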

Dynamic pricing is a different type of pricing strategy. Mainly used in the airline industry, it enables the firm to constantly adapt its prices to supply and demand, taking into account factors such as the availability of seats and additional options.

Dynamic differential pricing, while it can have procompetitive effects such as the regulation of demand and supply, can also be associated with negative effects on consumer welfare. On this point, Professor Yossi Sheffi defines such a practice as “the science of squeezing every possible dollar from customers”[72]. In other words, such a practice leads consumers to pay more[73] and to increase their consumption, and enables an optimized extraction of personal wealth[74]. Another negative effect highlighted by the OECD in 2016 with regard to consumers is the potential for pricing algorithms to amount to social discrimination by categorizing and segmenting customers in this way. While anti-discrimination laws specifically prohibit businesses in the European Union from using factors such as race, skin color, or religion[75], algorithms are permitted to build categories in which people are grouped according to personal factors such as their age, marital status, and religion among others[76]. The treatment of such data by pricing algorithms raises serious ethical concerns. Some authors are even concerned that discrimination could become the new norm with such algorithms, in the absence of any legal intervention[77].

Moreover, as neoclassical economists recall, price discrimination and dynamic pricing operating in such a fashion not only have anticompetitive effects on consumers but also put the competitive process at risk. They can even lead to the exclusion or elimination of market competitors by raising barriers to entry[78]. Non-algorithmic sellers may indeed be unable to compete on an equal footing with these sophisticated pricing systems[79].

Such an abuse of dominant position can easily be achieved by super-platforms such as Amazon or Google through ranking algorithms[80]. These types of algorithms feature a ranking bias, preferring the company’s own service at the expense of its competitors. This was for instance the case with Google Search (Shopping) in 2017[81]. The platform, while providing a free search service to end consumers, is remunerated indirectly through advertisements. In order to place its own comparison shopping service in first position, Google used an algorithm to manipulate the search results at the expense of its competitors. The Commission fined Google €2.42 billion – at the time the largest fine ever imposed by the European Commission on a single company – considering that Google cannot favor its own ads over the ads of its competitors, as this would have the effect of depriving consumers of their choice to buy and compare online. The Commission took a very strict approach concerning Google to characterize the harm, almost treating the online platform as an essential facility[82] (this point will be detailed later), while consumers remain free to visit price comparison websites other than Google[83].

In the same way, Amazon has been under investigation since September 2018 over allegations of anticompetitive conduct[84]. As a “hybrid” platform selling both the products of third-party retailers (Amazon Marketplace) and Amazon’s own line of products, the online merchant is accused of deliberately offering its own products at a lower price than those of the other retailers. Such a mechanism allows the platform to place its own offering in first position on the Marketplace, in the “Buy Box”, as it is the cheapest offer. Thanks to this ranking, consumers are then strongly incentivized to buy this product, at the expense of similar products proposed by the other retailers on Amazon Marketplace[85].

To conclude, pricing algorithms, without disregarding their benefits as detailed above, have nonetheless created new risks for competition enforcement that should not be underestimated[86]. Pricing algorithms cannot, then, be categorized as good or bad per se. This piece of work aims to go beyond this dichotomy and consider which approach should be adopted at the European level to face these increasing risks of collusion and price discrimination threatening consumer welfare. Between the free-market approach praised in the United States and interventionism in the European Union, which is the more efficient? Are the European Union’s current antitrust tools adequate to regulate these new forms of anticompetitive practices[87]?

II – IS THE REGULATION OF PRICING ALGORITHMS THE SOLUTION? MARKET-BASED v. REGULATORY APPROACH

As illustrated before, pricing algorithms can have dual consequences: on the one hand, they can benefit consumers and change the markets for the better; on the other hand, they can threaten the proper functioning of competition. It is then very relevant to ask what approach would be optimal. While some authors argue in favor of a regulatory approach, acknowledging the need for a reform of the current competition tools, others prefer an alternative approach, born in the United States, privileging the free market.

The second part of this paper will present and critically examine the solutions brought by both approaches, and determine whether they successfully answer the need to provide a framework for pricing algorithms.

 

The inadequacy of the current European framework to regulate pricing algorithms

Almost all competition authorities today agree that the current antitrust tools are not the most appropriate to face the reality of the risks posed by pricing algorithms[88]. The House of Lords, in its report on Online Platforms and the Digital Single Market, for instance, acknowledged a “perception that large online platforms are above the law”[89].

Under the current European Union law framework, the Treaty on the Functioning of the European Union prohibits the abuse of a dominant position under article 102(c), but the extent to which it can also cover the hypothesis of price discrimination is not clear[90]. The Court of Justice of the European Union, in the 2018 MEO case[91], indeed considered that price discrimination imposed by a dominant firm to harm its rivals (“primary line injury”) is not in itself an abuse of dominance within the meaning of Article 102 TFEU. But in the case of price discrimination harming directly at the consumer level (“secondary line injury”), things remain unclear: article 102 does not specify the target of the abuses mentioned. In other words, the article does not specify whether it imposes sanctions only for abuses that harm firms or also encompasses harm to final consumers[92]. The OECD added on this point that the abuse of dominance usually does not apply to business-to-consumer relationships[93], before stating that the issue of personalized pricing should instead be regulated by consumer protection or anti-discrimination laws, which constitute a “more appropriate tool” according to the intergovernmental organization.

Concerning collusive practices, as evoked previously, article 101 prohibits “all agreements between undertakings, decisions by associations of undertakings and concerted practices which may affect trade between Member States and which have as their object or effect the prevention, restriction or distortion of competition within the internal market”. However, the notion of ‘agreement’ itself is problematic, as pricing algorithms most of the time implement subtler forms of communication that lead to tacit collusion without being properly characterized as an agreement. This could enable pricing algorithms to escape the scope of application of the article. In the same way, mere parallel conduct, such as a simultaneous rise in prices by competitors, is insufficient to indicate coordination. On this point, the OECD argues that the notion of agreement could be revisited[94], by incorporating a criterion of “meeting of minds” for instance, to encompass situations where collusion is reached with the assistance of algorithms[95]. A clearer definition remains indispensable to help businesses understand whether their practices comply with competition law.

 

Is the regulatory approach the most appropriate answer to pricing algorithms?

In addition to the unfit wording of article 101 TFEU and the fact that the different competition authorities in the European Union may diverge in their appreciation of the notion of “agreement” it contains[96], the regulatory approach is confronted with other difficulties regarding enforcement, temporality, liability, and technical matters.

The problem is mainly due to the fact that competition agencies are ill-prepared for the question of pricing algorithms, which is certainly a “novel issue” for the moment but will arise more frequently in the future. This lack of information puts governments and regulators at “an enormous informational disadvantage relative to technology companies”[97]. As long as they do not completely understand how an algorithm works to facilitate an antitrust infringement[98], it will remain very difficult for them to regulate. As a preliminary step, according to the OECD, competition agencies should distinguish situations in which algorithms amplify conduct already covered under the current legal framework – ‘adaptive’ algorithms[99] – from situations in which algorithms create new risks not yet envisaged by the European competition law framework – ‘learning’ algorithms[100].

These enforcement issues are also tied to the technical difficulties arising from pricing algorithms, which make their deviations harder to detect and punish. Algorithms using deep learning – black-box algorithms – can, for instance, learn by themselves to collude[101]. For example, DeepMind, the algorithm developed by Google, can decide, according to the circumstances and parameters of a given situation, to opt for cooperative strategies and interact with other algorithms or, on the contrary, decide not to cooperate. Such cooperation will emerge only if it enables a joint maximization of profit[102]. This functioning makes it impossible for the regulator to know whether the intention behind the algorithm was to collude or not (Castelvecchi 2016), as its code is constantly evolving.

Another technical difficulty is due to the international nature of pricing algorithms, as well illustrated by Amazon and other high-tech giants present all over the globe. The global scope of the algorithmic phenomenon adds a territorial challenge for regulators and competition authorities concerning the design of regulations[103]. As already mentioned, the fact that online companies operate at the interface of different areas of law (privacy law, data protection, consumer protection…)[104] brings two additional degrees of complexity to enforcement. First, the intervention has to be coordinated across different policy areas, and secondly, this may involve multiple agencies, each specialized in one area of the law (Strowel and Vergote, 2016).

The temporality of enforcement by regulators and agencies is another problem that deserves to be addressed. Algorithms are indeed characterized by their impact on the structures of the market. Pricing algorithms are incredibly fast to adjust to competitors’ prices and to increase the number of interactions between them, which influences the markets, now considered fast-moving, in other words very dynamic. However, the timing of the algorithm is not the timing of the intervention. For instance, Google was finally sanctioned in June 2017 for its ‘Panda’ algorithm, while the infringement had been detected by the competition authorities in 2013 and had been implemented by the online platform since 2008[105]. This delay made the sanction almost ineffective, given that the algorithm had by then already been replaced by another one. According to the House of Lords, the length of time taken to decide the Google case reveals a “wider problem”: as long as the competition case is not concluded, the competitor affected by the infringement perpetrated by the super-platform through its algorithm may suffer irreversible harm[106]. The House of Lords added that this incapacity to respond quickly undermines public confidence in the ability of regulators to hold such online platforms accountable[107].

Conversely, too early an intervention by regulators would potentially impede the welfare gain for the consumer and interfere in the competitive process, according to Marty[108]. The main question, as reformulated by Ezrachi and Stucke, then remains how to know when the government should intervene[109].

Finally, the heart of the problem with the regulatory approach is tied to the difficulty of attributing and recognizing liability for pricing algorithms: can antitrust liability be established when pricing decisions are made by an algorithm rather than by human beings? Mehra (2016), taking the example of the robo-seller, evokes three options for attributing responsibility[110]: it could be attributed to the robo-seller itself, to the humans who deploy it, or to no one[111]. The first option would not have any deterrent effect and appears impossible in practice, as algorithms, unlike natural and legal persons, have no legal personality; the third option is not conceivable either, as it would amount to impunity. Vestager indeed said that “companies can’t escape responsibility for collusion by hiding behind a computer program”. The second option then appears to be the most plausible: algorithms being considered as tools, they have to be connected to the human operators that implement them. But, to put such a responsibility into practice, agencies still have to face another challenge relative to the nature of the link between the agent – the algorithm – and the principal – the human being[112]. Drawing this causal link is far from easy when deep learning algorithms make their pricing decisions in total autonomy. Attributing responsibility to an individual who cannot influence the way in which the algorithm makes its decisions would not be realistic. In the same way, it would not be fair to hold liable an individual who knew that the algorithm had the potential to collude but has no means of knowing whether this is the case in practice. Under these conditions, enforcers can no longer rely on the notion of “intent” to establish liability; agencies need instead to consider the extent to which humans control the activities of the algorithm in order to make their decision, rather than looking at the algorithm’s code, which cannot constitute proof of collusion[113]. The OECD recognizes on this point that determining liability depends on the facts at hand and must be appreciated case by case[114]. Likewise, there is no clear answer as to who should be fined: the company that designed the algorithm, or the firm (or person) that used it?

No one can deny that the current competition tools for dealing with pricing algorithms are insufficient. To remedy this loophole and strengthen the regulation of algorithms, some argue that additional rules on the transparency and accountability of algorithms should be implemented[115]. The European Commissioner for Competition, Margrethe Vestager, stated in a speech that “pricing algorithms need to be built in a way that doesn’t allow them to collude”[116], and added that they should be subject to rules of “compliance by design”. Such ex-ante mechanisms could be associated with ex-post clauses shifting the burden of proof onto companies using the algorithms, requiring them to prove that their algorithm conforms with the competition rules[117]. However, such rules of algorithmic design, by setting maximum price regulation for instance, may on the one hand restrain the ability of the algorithm to innovate[118] and affect the quality of the products[119]; on the other hand, they would also create an additional burden for the agencies in their task of supervising companies’ compliance with competition law. Such design rules could also consist of restraining the speed at which algorithms can adjust their prices, by enforcing lags on price adjustments for instance (Ezrachi and Stucke 2017). These types of rules, by affecting the structure of the market, can also result in severe restrictions of competition by impeding the correct matching of demand and supply[120].

Going a step further in this reasoning, Angela Merkel asked online platforms in 2016 to publicly disclose their algorithms, out of concern for protecting internet users’ (and customers’) right to be informed[121]. However, in practice, imposing such a degree of transparency and accountability is totally counterintuitive: if the law asked developers not to react to market changes, it would be nothing more than asking them to program their algorithms to be under-efficient, as denounced by Marty[122]. Such over-regulation is perfectly illustrated by the requirement of search neutrality imposed by the European Commission on Google. Preventing the platform from gaining a competitive advantage thanks to its pricing algorithm is tantamount to depriving the company of its main purpose, its raison d’être: competing. Very costly in terms of economic efficiency, such measures would go against the DNA of pricing algorithms and disincentivize companies from innovating, ultimately harming end-consumers. It should also be noted that issues of intellectual property and of the protection of the algorithm’s code as a trade secret would raise additional difficulties[123].

All these elements legitimately call into question the adequacy of a fully regulatory approach to apprehending the potential anticompetitive practices of pricing algorithms[124]. According to the OECD, the current toolkit is even more likely to cause harm than to prevent and redress it[125]. In such a state of uncertainty and complexity, it appears that a lack of intervention as well as excessive regulation could, either way, pose a serious threat to the competitive process by disregarding the benefits of algorithms. This is why another approach, mainly influenced by the Chicago School and in favor of lighter intervention, also has its place at the heart of the debate on pricing algorithms. This market-oriented approach should be preferred by governments according to the OECD[126].

 

An alternative approach: market self-regulation by the Chicago School

The deregulation, or minimal interventionism, praised by the Chicago School, which originated in the United States, has influenced the way competition is framed since the eighties. According to this perspective, pricing algorithms are as such only the result of the new online business dynamics born with the digital economy. For the Chicagoans, the economy should not be planned by either public or private actors, be they bureaucrats, CEOs, or the super-platforms, the GAFAs[127]. This approach believes only in the spontaneous ordering of the market economy.

Even if market dynamics may be challenged, the functioning of the market remains the same and does not justify the kind of legal intervention praised in the regulatory approach. As Ezrachi and Stucke put it: “We are dealing with old wine in new bottles”[128]: the competitive dynamics remain the same and do not call for a radical change. These new forms of competition and commerce brought about by the digital economy[129], however, require a careful approach and a recalibration of the approach to markets and interventions[130].

With that in mind, neoclassical economists invite us to reinterpret Adam Smith’s concept of the invisible hand in a modern way[131]. Under this concept, Chicagoans consider that traditional markets, as well as online markets, are autonomous and therefore able to correct themselves[132], and that governmental interventions in the market would cause more harm than good. This assumption can be justified through two arguments.

Firstly, government intervention may have a chilling effect on pro-competitive behaviors by interfering in the allocation of resources efficiently distributed by the market. Imposing neutrality rules to be integrated into algorithmic design, for instance, would have the effect of disincentivizing firms from innovating through new algorithms. This lack of innovation would in fine affect consumers, depriving them of long-term benefits. Being unnecessary and harmful, governmental intervention should be reserved for marginal cases of market failure, such as explicit price-fixing for instance[133].

Secondly, such intervention by courts and agencies would be difficult to justify considering the dynamic industries at stake: in online markets, antitrust benefits are indeed limited and market power is, as such, fleeting[134]. It would then be counterintuitive to regulate in order to protect firms in the old economy from the new economy, while consumer welfare has every opportunity to develop thanks to the emergence of Big Data and machine learning[135].

However, some have started to question the viability of the original concept of the invisible hand and its remaining power, considering the structure of online markets overseen by super-platforms. Is the invisible hand still powerful, or has it been replaced by a “digitalized hand” controlled by algorithms that determine market prices? This question leads us to wonder whether a competitive price really exists or whether such a price is pure fiction created by the digitalized hand.

Either way, free-market ideology, like the regulatory approach, has its own weaknesses when facing the complexity of pricing algorithms. While the vision praised by some in the United States seeks to reduce governmental or legal interventions in the market, there are still cases in which a total absence of regulation would be undesirable, if not dangerous, for consumers and competitors, according to some authors. Ezrachi and Stucke, for instance, argue in favor of a “smart regulation” that may be quite beneficial[136]. The Neo-Brandeisian economists – another US-originated approach, detailed further below – favor regulatory interventions and are closer to the European approach.

After discussing the two dominant approaches of competition law to algorithms to date, the third part of this paper will present some concrete and practical solutions that seem promising for the smart regulation of pricing algorithms.

 

III – OVERVIEW OF PRACTICAL SOLUTIONS

Promoting competition where innovation and investment flourish, while minimizing the potential harms of algorithms, represents a big challenge for regulators and competition agencies[137]. While competition authorities have clearly identified the potential harms arising from pricing algorithms, as illustrated in their reports, they remain unable to find adequate solutions to fix them. And this problem needs to be addressed because, even if “concerns about algorithmic collusion are still largely theoretical at this point, recent examples suggest that the concern is not fanciful” (McSweeny, 2017).

This third part will, in a very concrete way, provide an overview of the solutions that seem the most appropriate to tackle the algorithmic phenomenon by exploring two paths: firstly, by focusing on the urgent need for agencies and regulators to change their approach to pricing algorithms, and secondly, by revisiting structural remedies, which have fallen into disuse but remain very relevant as a practical solution to tackle algorithms’ anticompetitive conduct.

 

Changing the approach to pricing algorithms

Firstly, most of the literature on the subject agrees on one point: the regulatory side suffers from a severe lack of information concerning the use and implications of pricing algorithms in competition law. This phenomenon can be partly linked to the political treatment, marked by mistrust, reserved for online platforms and high-tech giants in Europe.

However, competition agencies have demonstrated their intention to overcome this distrust and change their perspective on pricing algorithms. In concrete terms, this changing approach operates through the increasing adoption of “pro-competition” ex-ante rules instead of relying only on ex-post enforcement. Professor Tirole, qualifying this approach as “participative antitrust”, argues that ex-ante regulation is more beneficial than its ex-post counterpart[138]. When such rules are designed in cooperation with stakeholders, consumer welfare would be better preserved than through enforcement designed for individual cases.[139]

The first step to remedy this lack of information of regulators and governments regarding the algorithmic phenomenon is to provide competition agencies with extensive investigation powers, enabling them to conduct market studies and sector inquiries[140], but also to go a step further by engaging in “market investigations”[141], as is already the case in the United Kingdom, Iceland, and Mexico. Such ex-ante measures would enable the agencies to issue non-binding recommendations to companies implementing pricing algorithms, constraining them to reduce their collusive risks and restore competition in the market before any contentious procedure, while “providing a valuable framework for considering new enforcement tools”[142].

Other authors such as Harrington (2018) and Nicolas Petit (2017) also support the adoption of ex-ante regulation rules. Another example of ex-ante regulation is the incubator of algorithmic collusion. It enables competition agencies to track the behavior of algorithms in conditions very close to those of the market, to deduce their risks of collusion, and to address recommendations to the firms using such algorithms. This type of regulation also enables the competition agency to prohibit pricing algorithms that present a “tendency to collude”[143], following the three-step approach developed by Harrington.
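
In spirit, such an incubator amounts to running the candidate algorithm in a simulated market and measuring where prices settle relative to a competitive benchmark. The sketch below only illustrates that idea and is not Harrington's actual three-step test: the simulated rival behavior, benchmark, and threshold are all invented assumptions.

# Hypothetical sketch of an "incubator" screening: a candidate pricing algorithm
# is run for many rounds in a simulated duopoly (here against a copy of itself),
# and the authority checks whether average prices settle well above a competitive
# benchmark. Benchmark, rival model, and threshold are invented assumptions.
COMPETITIVE_PRICE = 10.0   # assumed competitive benchmark for the simulated market

def candidate_algorithm(own: float, rival: float) -> float:
    # Example algorithm under review: slightly undercut the rival, never below cost.
    return max(rival - 0.1, 8.0)

def tendency_to_collude(algo, rounds: int = 100, threshold: float = 1.2) -> bool:
    """Flag the algorithm if simulated prices average more than 20% above the benchmark."""
    own, rival = 15.0, 15.0
    history = []
    for _ in range(rounds):
        own = algo(own, rival)
        rival = algo(rival, own)          # assume the rival runs the same algorithm
        history.append((own + rival) / 2)
    return sum(history) / len(history) > COMPETITIVE_PRICE * threshold

if __name__ == "__main__":
    print(tendency_to_collude(candidate_algorithm))   # this undercutting rule is not flagged

A rule that instead matched or raised rival prices would be flagged by the same screen, which is the kind of finding an authority could translate into recommendations to the firm before any contentious procedure.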

Marty does not deny the efficiency of such a model but argues that this method also has its limitations: being static, it cannot take into account the effects of the most advanced forms of pricing algorithms, such as machine learning[144]. At the moment, for the Digital Eye scenario, there are no enforceable tools other than market evaluations[145].

Ex-ante regulation nonetheless remains interesting for its influential role on companies and super-platforms. It indeed incentivizes the companies using such algorithms to self-assess their compliance with the competition rules[146]. Such self-regulation could even be formalized through the adoption of codes of conduct by companies[147]. The Digital Competition Expert Panel[148], for instance, recommended that the UK government establish a Digital Markets Unit (within the CMA) to ensure the maintenance of healthy competition through the use of an ex-ante digital platform code of conduct targeting more specifically digital companies holding ‘strategic market status’. Such a voluntary code of practice can also be adopted individually by online platforms and smaller firms implementing pricing algorithms.

On another note, the Furman Report recommended that the government favor cross-border cooperation between competition authorities in sharing best practices[149]. At the European level, rules on platform-to-business trading practices (the ‘P2B’ Regulation) were adopted as a mandatory set of rules to promote fairness and transparency for businesses on online platforms[150]. Even if this regulation does not specifically address pricing algorithms, we can reasonably suppose that it de facto encompasses the anticompetitive practices arising from pricing algorithms implemented by high-tech giants.

A fourth solution to prevent pricing algorithms from colluding or price discriminating could be to introduce auditing mechanisms[151], as argued by Chen et al. (2016), to certify that the algorithms are programmed in a way that complies with competition rules. However, the efficacy of such audits always depends on the type of algorithm concerned. As recalled by Ezrachi and Stucke, algorithms involving deep learning would probably escape the scrutiny of auditing. Besides, algorithms are generally programmed to maximize profit, not to collude, which makes it more difficult for the auditing party to detect any competition concern[152].

The OECD proposes another potential way to regulate pricing algorithms, reiterating the arguments put forward by Saurwein et al. (2015): supply- and demand-side market solutions. Considering that not all the risks of algorithmic selection call for governmental intervention[153], the authors favor “voluntary” changes in market conduct, whether operated through the action of the consumer (demand side) or the action of the firm supplying the algorithm (supply side). In concrete terms, consumers may act in a certain way to protect themselves against risks: for instance, they can decide to refuse to use a given platform and switch to its competitor. This market solution has the advantage of reinforcing consumer power. Likewise, suppliers can decide to take action to prevent the algorithm from being manipulated, by submitting themselves to rules of design for example[154]. According to the intergovernmental organization, such market solutions should be applied as a priority, as they are less detrimental to innovation or new entry than regulation[155].

A sixth solution to the issue of pricing algorithms consists of inter-agency cooperation, which is indispensable. For instance, a pricing algorithm that price discriminates against a consumer generally endangers not only competition principles but also data protection rules. To prevent such an algorithm from behaving in a discriminatory way, rules of privacy by design should be added by default to its code. Otherwise, the algorithm will continue to collect more data than reasonably needed to achieve its pricing strategy, enabling it to discriminate[156]. Cooperation between competition agencies and data protection agencies then appears crucial to apprehending the algorithmic phenomenon coherently. Such cooperation should also be extended to intellectual property and consumer protection authorities, as harms to competition are generally associated with infringements in these areas.[157]

Finally, the last solution could be to make the algorithm “at the same time the object and the vector of the regulation”[158]. In other words, regulators may create their own algorithms and use them as a tool to tackle the anti-competitive practices of pricing algorithms. This idea, put forward in 2018 by Vestager[159], would open a new era in European competition law, but it has not materialized since.

 

Giving structural remedies a seat back at the table

All the concrete solutions mentioned previously are behavioral ones. However, another category of solutions, structural remedies, neglected by regulators until recently, could make a strong return in the European Union. Behavioral remedies indeed appear insufficient to face the challenges that internet technology presents today, notably with the high-tech giants, making structural remedies more relevant than ever.

During her candidacy for the US presidential election, the Democrat Elizabeth Warren used structural separation as one of her major campaign arguments, associating the economic threat with a threat to democratic values. She promised at the time to “make big, structural changes to the tech sector to promote more competition, including breaking up Amazon, Facebook and Google”[160]. Such structural changes would be imposed on all “platform utilities” – large online platforms exceeding $25 billion in annual revenue – considered very threatening for smaller competitors[161]. As an example, Google would be required to separate from DoubleClick and Amazon would be prevented from selling its own branded products through its platform.

Without killing the business model of the online platform, such a separation of business lines would have the potential to avoid anti-competitive practices by preventing platform utilities from owning and participating in a marketplace at the same time[162]. Such a separation would then enable smaller rivals to compete on an equal footing, without fearing that the super-platforms use their market power to apply predatory pricing.

Economic structuralism, largely developed by the New Brandeis School (or “hipster antitrust” to its critics[163]), questions the relevance of the ‘consumer welfare’ standard of the Chicago School[164]. Besides, it considers the current antitrust regime unfit, as it continues to allow a few companies to monopolize certain markets, or at least to become very dominant, while markets should be kept open[165]. The movement advocates governmental intervention to limit a firm’s capacity to abuse its market power, as is the case with the high-tech giants and their ranking algorithms, for instance.

Taking the example of Amazon, Khan, one of the leading figures of economic structuralism, argues that while the platform does not appear contrary to the interests of consumers, since it enables them to benefit from the best prices, the reality is quite different when one looks at the structure of the platform. As explained before, this multi-sided platform[166] (Amazon Web Services and Amazon Marketplace), by using a ranking algorithm, pushes its own products into prominence compared to the products of its competitors. More than discriminating between retailers, such a scheme in fine affects consumers, whose choice is reduced. To remedy this issue, Khan proposes to apply the essential facility doctrine to such dominant platforms[167]. In concrete terms, she considers that public intervention can be a solution to limit the possibilities for a dominant company to abuse its market power as soon as a public interest is threatened by an economic operator. Apart from regulation, the only alternative would be to dismantle the online platform in order to prevent the potential exclusion of competitors that need the facility to access the market[168].
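
The self-preferencing mechanism described here can be pictured with a deliberately simplified, hypothetical ranking function in which an opaque bonus for own-brand listings lifts the platform’s products above better-matching third-party offers. The weights and field names are assumptions made for illustration, not a description of Amazon’s actual algorithm.

```python
# Hypothetical illustration of self-preferencing: a marketplace ranking
# function that adds an opaque bonus to the platform's own-brand listings.

def rank(listings, own_brand_boost=0.3):
    """Sort listings by relevance plus a hidden bonus for own-brand products."""
    return sorted(
        listings,
        key=lambda item: item["relevance"] + (own_brand_boost if item["own_brand"] else 0.0),
        reverse=True,
    )

listings = [
    {"seller": "third_party", "relevance": 0.82, "own_brand": False},
    {"seller": "platform",    "relevance": 0.60, "own_brand": True},
]
# Despite a lower relevance score, the platform's product is displayed first.
print([item["seller"] for item in rank(listings)])   # -> ['platform', 'third_party']
```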

Beyond the focus on price, economic structuralism considers that the goals of competition policy go beyond “low prices” to embody various objectives such as the decentralization of (economic and political) power and innovation[169]. This theory presents a major advantage: it eases regulation, as the risk of abuse of a dominant position is radically reduced. However, being very invasive, such remedies should, according to some authors, be used to frame algorithms only as a last resort, once all behavioral remedies have been exhausted[170].

While structural remedies had fallen into disgrace and remained politically controversial, the European Union has very recently shown interest in using them to regulate digital markets. The European Commission has indeed identified two main structural concerns[171]. Firstly, there are structural risks for competition: the conduct of companies, combined with certain market characteristics, may result in a threat to competition. This is particularly the case in the presence of powerful market players holding a ‘gatekeeper’ position. Secondly, there is a structural lack of competition when oligopolistic market structures present an increased risk of tacit collusion, which can be the case in the presence of algorithms[172]. To address these concerns, the European Commission is currently developing a “new competition tool” (NCT) allowing it to launch detailed market investigations and to impose structural remedies on companies. An additional Digital Services Act package – expected to be unveiled by the European Commission this month – includes a proposal for the ex-ante regulation of digital platforms. The proposal for a new competition tool was submitted to public consultation and fiercely debated by legal scholars. If adopted, such regulation would extend the enforcement powers of competition authorities by allowing them to impose not only behavioral but also structural remedies. Moreover, super-platforms playing the role of gatekeeper would have to comply with additional requirements[173].

In any case, tech giants will probably have to comply with more stringent EU regulation in the future, incorporating not only behavioral but also structural remedies targeted at the platforms’ algorithms, and particularly those responsible for setting pricing strategies.

CONCLUSION

The appearance of the notion of “algorithmic antitrust” attests to the growing importance of algorithms and their impact on the competition landscape in digital markets[174]. We are entering an era in which autonomous machines are increasingly used to perform a multiplicity of tasks. Such learning machines have become so prevalent that some authors have even started referring to “algorithmic governance”[175]. This is why the question of the governability of such algorithms can no longer be disregarded by policymakers. According to Scherer, “the legal system [has] to decide what to do when those machines cause harm and whether direct regulation would be a desirable way to reduce such harm”[176]. However, with regard to the most suitable approach to frame pricing algorithms, there is no consensus among economists, competition agencies, and academics. The Chicagoans and their free-market ideology indeed challenge the more regulatory European vision. Likewise, the Neo-Brandeisians, favorable to structural remedies, were long ignored by the EU, which preferred the application of behavioral remedies.

These divergent opinions make it considerably more complex to answer the initial question: should pricing algorithms be considered a real threat to competitive markets? If so, to what extent should they be regulated?

The answer must be articulated in two phases. Firstly, as long as no appropriate framework is agreed at the European level, pricing algorithms will continue to present growing risks for consumer welfare and the economy itself, with the tech giants becoming ever more powerful on the market thanks to their intelligent algorithms. However, as soon as regulators are able to delineate and control them, pricing algorithms have the potential to become very useful technological tools to enhance competitiveness. Secondly, it is worth reflecting on the adequacy of the regulatory model to address pricing algorithms. As demonstrated above, the regulatory approach is associated with significant weaknesses. Competition agencies most of the time face a lack of information, making it a hard task for them to detect the subtle infringements of competition law enabled by algorithms[177]. Besides, the wording of the European Treaty appears too restrictive to cover the new anticompetitive conduct associated with algorithms[178]. At the level of enforcement, the duration of the investigations conducted by the European Commission is excessive compared with the constant evolution of pricing algorithms, and the sanctions imposed on companies have only a minor deterrent effect. Moreover, rules of accountability and transparency towards regulators, while considered by Pasquale a prerequisite for governing algorithms[179], are in fact very difficult to implement. Requiring total transparency of the algorithm’s code would indeed constitute a serious breach of trade secrets and intellectual property[180] (and remains, in any event, technically impossible for black-box algorithms). Such rules could even prove counterproductive, reducing firms’ incentive to innovate at the expense of consumers’ freedom of choice. Finally, the question of liability for the anticompetitive conduct of the algorithm remains very difficult to answer in the case of fully autonomous algorithms performing without any human interference.
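
The limits of code transparency for learning algorithms can be pictured with a toy sketch: in the hypothetical self-learning pricing agent below, the disclosed code is a generic training loop, and the conduct that matters is encoded in learned numbers that no reading of the source would reveal in advance. Demand, costs, prices and learning parameters are all invented for illustration.

```python
# Toy, purely illustrative sketch of why disclosing source code reveals little
# about a self-learning pricing agent: the behavior sits in learned values
# produced by training, not in any human-readable rule.

import random

PRICES = [8, 9, 10, 11]              # hypothetical discrete price grid
q_values = {p: 0.0 for p in PRICES}  # the learned 'policy' lives here

def profit(price, rival_price=10, unit_cost=5):
    """Stylised demand: sales fall as our price rises above the rival's."""
    demand = max(0, 20 - 2 * price + rival_price)
    return (price - unit_cost) * demand

for _ in range(5000):                # training loop: fully 'transparent' code
    if random.random() < 0.1:        # occasional exploration
        p = random.choice(PRICES)
    else:                            # otherwise exploit current estimates
        p = max(q_values, key=q_values.get)
    q_values[p] += 0.05 * (profit(p) - q_values[p])

# Inspecting the code above says nothing about the resulting conduct; it is
# encoded only in these learned numbers.
print(q_values, "-> price charged:", max(q_values, key=q_values.get))
```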

Given the inadequacy of the current toolkit at the disposal of competition authorities, this paper has envisaged alternative economic theories such as the Chicago School, which relies on the self-regulation of markets without State intervention. Even if the US approach appears too distant from the European tradition, it could still influence the European competition law framework in some respects, as has already happened in the past: on several occasions, the European Commission has indeed incorporated elements of Chicago School theory into its provisions[181].

In terms of regulation, the so-called ‘compliance by design’ rules may be an interesting tool to integrate into the framework applying to pricing algorithms, subject to conditions. According to Vezzoso, this kind of rule has the potential to become an effective tool only if it is built through an “ongoing dialogue with all the interested stakeholders: enforcers, firms, computer science experts, designers of algorithms, academicians and consumers”[182]. In concrete terms, making pricing algorithms antitrust-compliant can be achieved by having recourse to private AI companies tasked with helping firms meet their regulatory compliance needs[183]: this option is known as “regtech” (regulatory technology).
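
In practice, such a ‘compliance by design’ layer could take the form of an automated guard-rail through which every proposed price must pass before publication. The sketch below is a hypothetical, simplified illustration of that idea; the rules, thresholds and names are assumptions rather than any existing regtech product or any authority’s official criteria.

```python
# Hypothetical sketch of a 'compliance by design' guard-rail in the regtech
# spirit: every price proposed by the pricing engine passes through an
# automated antitrust check before publication, and rejected proposals are
# logged for the compliance team.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("antitrust-compliance")

PROTECTED_INPUTS = {"gender", "ethnicity", "health_data"}

def is_compliant(proposal: dict) -> bool:
    """Reject prices built on protected attributes or set below unit cost."""
    if PROTECTED_INPUTS & set(proposal["inputs_used"]):
        log.warning("Rejected (personalisation on protected attribute): %s", proposal)
        return False
    if proposal["price"] < proposal["unit_cost"]:
        log.warning("Rejected (below-cost, potentially predatory): %s", proposal)
        return False
    return True

proposal = {"price": 4.0, "unit_cost": 5.0, "inputs_used": ["stock_level"]}
print("publish" if is_compliant(proposal) else "blocked by compliance layer")
```

The design choice here is that the check runs before publication and leaves an audit trail, which is precisely what would allow both the firm and an external certifier to document compliance.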

Besides private action, public action could also be envisaged with the creation of an agency under the ‘AIDA’[184], tasked with certifying the safety of AI systems, as envisaged by Scherer in his regulatory proposal[185]. In terms of enforcement, such a system would be associated with a tort liability regime for uncertified AI systems, applying to distributors, sellers, and operators[186]. The implementation of this system, rather than direct regulation, has a major advantage: it provides de facto a strong incentive for algorithm developers to comply with competition rules.

Finally, to address the lack of effectiveness of the sanctions – mainly behavioral – currently imposed by the European Commission for anticompetitive conduct, structural remedies, defended by the New Brandeis movement, are to be considered. Long ignored by the Commission, structural remedies are now increasingly being integrated into the European framework through the New Competition Tool (‘NCT’) designed to govern algorithms. Targeted at dominant digital firms – the so-called ‘gatekeepers’ – this tool would enable the European Commission to address structural problems even where no infringement of the antitrust rules can be found. However, such regulation, following the Furman report’s recommendation to establish a code of competitive conduct applicable “only to particularly powerful companies”, could pose serious risks to their innovation. Indeed, if a company is forced to divest some of its businesses on the basis of a structural remedy while no breach of the antitrust rules has been found, the situation becomes so uncertain for companies that they may lose any incentive to innovate.

To conclude, one cannot deny that, despite the willingness of policymakers (Amazon, for instance, is currently facing important EU antitrust charges), pricing algorithms remain a complex and fascinating phenomenon to govern at the European level.

 

Anne-Sophie THOBY

January 2020

 

[1]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.12

 

[2]“Big Data” refers to large amounts of data produced very quickly by a high number of diverse sources. Big Data is associated with three characteristics – the ‘3 Vs’ – according to the definition given by Doug Laney (1997), analyst for Gartner: high volume, high velocity and high variety of data.

 

[3]Ariel Ezrachi and Maurice E Stucke, Virtual Competition(Harvard University Press 2016)

 

[4]Based on the answers of 343 retailers who responded that they were using software to track prices. European Commission, Commission Staff Working Document – Final report on the E-commerce Sector Inquiry (2017) paras.149, p.51

 

[5]European Commission, Commission Staff Working Document – Final report on the E-commerce Sector Inquiry (2017) paras.152, p.52

 

[6]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.48

 

[7]Autorité de la Concurrence and Bundeskartellamt, Competition Law and Data(2016) p.4

 

[8]Ariel Ezrachi and Maurice E Stucke, op.cit. p.209

 

[9]This use of algorithms in trading is often referred as ‘automated trading’ or ‘algorithmic-trading’

 

[10]Autorité de la Concurrence and Bundeskartellamt, Competition Law and Data (2016) p.3

 

[11]Competition and Market Authority, Pricing algorithms – Economic working paper on the use of algorithms to facilitate collusion and personalised pricing(2018)

 

[12]Ariel Ezrachi and Maurice E Stucke, Virtual Competition(Harvard University Press 2016), Preface vii

 

[13]Ibid. p.2

 

[14]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.4

 

[15]OECD, ‘Roundtable on Information Exchanges’ (2010) p.27

 

[16]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.16

 

[17]Ariel Ezrachi and Maurice E Stucke, op.cit.p.5

[18]Ariel Ezrachi and Maurice E Stucke, op.cit.p.5

 

[19]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.16

 

[20]Autorité de la Concurrence and Bundeskartellamt, Competition Law and Data (2016) p. 2

 

[21]Ibid.

 

[22]Digital Competition Expert Panel (Furman), Unlocking digital competition – Report of the Digital Competition Expert Panel (2019)

 

[23]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.18

 

[24]Ibid. p.7

 

[25]Frédéric Marty, ‘Algorithmes de prix, intelligence artificielle et équilibres collusifs’ (2017) Revue Internationale de Droit Economique, p.84

 

[26]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.16

 

[27]J. Yannis Bakos, ‘Reducing Buyer Search Costs: Implications for Electronic Marketplaces’ (stern.nyu.edu, 1997)

 

[28]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.16

 

[29]Ibid.

 

[30]Ariel Ezrachi and Maurice E Stucke, op.cit.p.8

 

[31]Definition of “intrabrand competition”: competition between online and brick-and-mortar retailers selling the same product

 

[32]Definition of “interbrand competition”: competition between different manufacturers/ brands selling the same product

 

[33]Ariel Ezrachi and Maurice E Stucke, op.cit.p.8-9

 

[34]A. Mitra & J.G. Lynch Jr., ‘Towards a Reconciliation of Market Power and Information Theories of Advertising Effects on Price Elasticity’ (1995) Journal of Consumer Research 21, p.644-659

 

[35]C.R. Leslie, ‘Trust, Distrust and Antitrust’ (2004) Texas Law Review 82, p.628

 

[36]Ariel Ezrachi and Maurice E Stucke, op.citp.9

 

[37]Frédéric Marty, op.cit. p.84

 

[38]Ariel Ezrachi and Maurice E Stucke, op.cit.p.36

 

[39]See the reports published by the OECD (2016), the Autorité de la Concurrence and the Bundeskartellamt (2016 et 2019), the Furman Report (2019).

 

[40]Autorité de la Concurrence and Bundeskartellamt, Competition Law and Data (2016) p. 37

 

[41]Ariel Ezrachi and Maurice E Stucke, op.cit.

 

[42]Definition of “cartel”: an agreement between businesses not to compete with each other. Cartel members may agree on price fixing, bid rigging, output quotas/restrictions, or market sharing (source: Office of Fair Trading, ‘Cartels and the Competition Act 1998’ (2005))

 

[43]United States v. David Topkins, CR 15-00201 WHO (N.D. Cal. April 30, 2015), Plea Agreement

 

[44]Online sales of posters and frames (Case 50223), Decision of the CMA of 12 August 2016

 

[45]US banks: JPMorgan Chase and Citigroup; British banks: Barclays Plc and Royal Bank of Scotland; Swiss bank UBS

 

[46]Karen Freifeld, ‘Five global banks to pay $5.7 billion in fines over rate rigging’ (reuters.com 2015)

 

[47]Ariel Ezrachi and Maurice E Stucke, Virtual Competition(Harvard University Press 2016) p.44

 

[48]Ibid. p.45

 

[49]Autorité de la Concurrence and Bundeskartellamt, Competition Law and Data(2016) p. 38

 

[50]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.28

 

[51]Ariel Ezrachi and Maurice E Stucke, op.cit.

 

[52]Ibid. p.48

 

[53]Video ‘Dynamic Pricing/Uber Prices’, YouTube

 

[54]Douglas MacMillan & Telis Demos, ‘Uber Valued at More Than $50 Billion’ Wall Street Journal (2015)

 

[55]Annie Lowrey, ‘Is Uber’s Surge-Pricing an Example of High-Tech Gouging?’ New York Times Magazine (2014)

 

[56]Jay Hathaway, ‘Uber Turned On Surge Pricing for People Fleeing Sydney Hostage Scene’ (gawker 2014)

 

[57]Competition and Market Authority, Pricing algorithms – Economic working paper on the use of algorithms to facilitate collusion and personalised pricing(2018)

 

[58]Ariel Ezrachi and Maurice E Stucke, op.cit.p.54

 

[59]Case C-74/14 Eturas e.a. [2016] European Court of Justice

 

[60]Sophie Lawrance & Marc Linsner, ‘Eturas – Any conclusions on platform collusion…?’ (Kluwer Competition Law Blog 2018)

 

[61]Autorité de la Concurrence and Bundeskartellamt, ‘Competition Law and Data’ (2016) p. 38

 

[62]‘CMA publishes pricing algorithms study’ (Ashurst.com, 2018)

 

[63]Ariel Ezrachi and Maurice E Stucke, op.cit.p.77

 

[64]Ariel Ezrachi and Maurice E Stucke, op.cit.p.71

 

[65]Ariel Ezrachi and Maurice E Stucke, op.cit.

 

[66]Ariel Ezrachi and Maurice E Stucke, op.cit. p.78

 

[67]Mike Isaac, ‘Uber Bets on Artificial Intelligence With Acquisition and New Lab’ The New York Times(2016)

 

[68]OECD – Directorate for Financial and Enterprise Affairs Competition Committee, Executive Summary of the discussion on Personalised Pricing in the Digital Era(2018) p.4

 

[69]Ariel Ezrachi and Maurice E Stucke, op.cit.p.85

 

[70]The consumer surplus is the difference between the price the customer actually pays for a particular good or service and the reservation price of the customer (source: Competition and Market Authority, The commercial use of consumer data (2015))

 

[71]Ariel Ezrachi and Maurice E Stucke, op.cit.p.88

 

[72]James Surowiecki, ‘In Praise of Efficient Price Gouging’ (2014) MIT Technology Review

 

[73]Ariel Ezrachi and Maurice E Stucke, op.cit. p.117

 

[74]Ibid.

 

[75]Article 21, Charter of Fundamental Rights of the European Union 2000; Directive 2006/54/EC of the European Parliament and of the Council on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation

 

[76]Ariel Ezrachi and Maurice E Stucke, op.cit.p.124

[77]Ariel Ezrachi and Maurice E Stucke, op.cit.p.130

 

[78]Ibid. p.118-119

 

[79]Le Chen, Alan Mislove and Christo Wilson, An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace(2016)

 

[80]Frédéric Marty, op.cit. p.89

 

[81]Google Search (Shopping)(Case COMP/AT.39740), Commission decision of June 27, 2017

 

[82]Originating in US antitrust law and recognized for the first time by the European Commission in 1992 in the Sealink/B&I case, the concept of ‘essential facility’ refers to an infrastructure “owned or controlled by a dominant company where independent companies need access to the facility in order to provide their own products or services”. The refusal by the dominant company to grant access to an essential facility has significant restrictive effects on competition. The essential facility concept is very common when the infrastructure in question is a port, an airport or, more generally, in the sector of energy transportation.

(See Suzanne Rab, ‘The evolving essential facilities doctrine’ (LexisNexis))

 

[83]‘Press release: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service (ec.europa.eu, 2017)

 

[84]‘Press release: Antitrust: Commission opens investigations into possible anti-competitive conduct of Amazon’ (ec.europa.eu, 2019)

 

[85]Thomas Höppner & Philipp Westerhoff, ‘The EU’s competition investigation into Amazon Marketplace’ (Kluwer Competition Law Blog 2018)

 

[86]Ariel Ezrachi and Maurice E Stucke, op.cit.p.51

 

[87]Ibid. p.29

 

[88]Notably the UK House of Lords, the French Autorité de la Concurrence and the German Bundeskartellamt (cf. their respective reports)

 

[89]House of Lords, Online Platforms and the Digital Single Market (2016) paras.373, chapter 9

 

[90]As specified by the Autorité de la Concurrence & the Bundeskartellamt in their report “Algorithms and Competition” (2019): the provision explicitly prohibits “applying dissimilar conditions to equivalent transactions with other trading partners, thereby placing them at a competitive disadvantage” (art. 102(2)(c) TFEU)

 

[91]Case C-525/16 Meo – Servicos de Comunicacoes e Multimédia [2018] European Court of Justice

 

[92]Marco Botta & Klaus Wiedemann, ‘To discriminate or not to discriminate? Personalised pricing in online markets as exploitative abuse of dominance’ (2019) European Journal of Law and Economics

 

[93]OECD – Directorate for Financial and Enterprise Affairs Competition Committee, Executive Summary of the discussion on Personalised Pricing in the Digital Era(2018) p.4

[94]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.36

 

[95]Ibid.p.38

 

[96]Frédéric Marty, op.cit. p.86

 

[97]Guillaud H, ‘Comment prouver les pratiques anticoncurrentielles à l’heure de leur optimisation algorithmique ?’ Le Monde – Le blog d’Hubert Guillaud, Xavier de la Porte et Rémi Sussan (2017)

 

[98]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.33

 

[99]Calvano et al. (2018a)

 

[100]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.33

 

[101]Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolo and Sergio Pastorello, ‘Algorithmic Pricing: What Implications for Competition Policy?’ (2019) Review of Industrial Organization 

 

[102]Frédéric Marty, op.cit. p.100

 

[103]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.49

 

[104]Ibid. p.48

 

[105]‘Press release: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison-shopping service (ec.europa.eu, 2017) 

 

[106]House of Lords, Online Platforms and the Digital Single Market (2016) p.103

 

[107]Ibid.

 

[108]Frédéric Marty, op.cit. p.86

 

[109]Ariel Ezrachi and Maurice E Stucke, op.cit. p.218

 

[110]Salil K. Mehra, ‘Antitrust and the Robo-Seller: Competition in the time of algorithms’ (2015) Minnesota Law Review

 

[111]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.39

 

[112]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.39

 

[113]Frédéric Marty, op.cit. p.85

 

[114]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.4-39

 

[115]Frédéric Marty, op.cit. p.103

 

[116]Extract of the speech of Margrethe Vestager “Algorithms and competition” at the Bundeskartellamt 18th Conference on Competition (16 March 2017)

 

[117]Frédéric Marty, op.cit. p.103

 

[118]Florian Saurwein, Natacha Just & Michael Latzer, Governance of Algorithms: Options and Limitations(July 14, 2015). info, Vol. 17 No. 6, pp. 35-49 (SSRN 2015)

 

[119]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.50

 

[120]Ibid.

 

[121]Connolly K, ‘Angela Merkel: internet search engines are “distorting perception”’, The Guardian (2016)

[122]Frédéric Marty, op.cit. p.103

 

[123]OECD, Algorithms and Collusion: Competition Policy in the Digital Age (2017) p.45

 

[124]Ariel Ezrachi and Maurice E Stucke, op.cit.p.218

 

[125]OECD, Algorithms and Collusion: Competition Policy in the Digital Age (2017) p.29

 

[126]Ibid. p.47

 

[127]“GAFAM”: acronym (Google, Apple, Facebook, Amazon & Microsoft) used to designate the five American firms dominating the digital market, also known as “The Five”

See Scott Galloway, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google(Random House 2017)

 

[128] Ariel Ezrachi and Maurice E Stucke, op.cit. p.232

 

[129]Ibid.p.22-23

 

[130]Ibid.p.203

 

[131]Adam Smith, The Wealth of Nations(1776)

 

[132]Ariel Ezrachi and Maurice E Stucke, op.cit. p.25

 

[133]Richard A. Posner, ‘The Chicago School of Antitrust Analysis’ (1978) 127 University of Pennsylvania Law Review 925

 

[134]Ariel Ezrachi and Maurice E Stucke, op.cit. p.25

[135]Ibid. p.26

 

[136]Ariel Ezrachi and Maurice E Stucke, op.cit. p.32

 

 

[137]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017)

 

[138]Allison Schrager ‘A Nobel-winning economist’s guide to taming tech monopolies’ (qz.com, 2018)

 

[139]Maurits Dolmans and Tobias Pesch ‘Should we disrupt antitrust law?’ (clearygottlieb.com)

 

[140]Ariel Ezrachi and Maurice E Stucke, 2017

 

[141]  OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.41

 

[142]Ariel Ezrachi and Maurice E Stucke, op.cit. p.225

 

[143]Joao E. Gata, ‘Controlling Algorithmic Collusion: short review of the literature, undecidability, and alternative approaches’ REM Working Paper 077 (2019)

 

[144]Frédéric Marty, op.cit. p.103

 

[145]Ariel Ezrachi and Maurice E Stucke, op.citp.79

 

[146]Frédéric Marty, op.cit. p.102

 

[147]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.41

 

[148]This panel has written the “Furman Report”

 

[149]Digital Competition Expert Panel (Furman), Unlocking digital competition – Report of the Digital Competition Expert Panel (2019)

 

[150]‘Platform-to-business trading practices’ (ec.europa.eu, 2019)

 

[151]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.43

 

[152]Ariel Ezrachi and Maurice E Stucke, op.cit. p.230-231

 

[153]Florian Saurwein, Natacha Just & Michael Latzer, op. cit.

 

[154]Ibid.  

 

[155]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.46

 

[156]Ariel Ezrachi and Maurice E Stucke, op.cit.p.227

 

[157]Ibid. p.221

 

[158]Frédéric Marty, op.cit. p.103

 

[159]Competition Policy International, ‘EU: Regulator may create algorithms to find anticompetitive pricing’ (competitionpolicyinternational.com 2018)

 

[160]MJ Lee, Lydia DePillis and Gregory Krieg, ‘Elizabeth Warren’s new plan: Break up Amazon, Google and Facebook’ CNN Politics (2019)

 

[161]Herbert Hovenkamp, ‘The Warren Campaign’s Antitrust Proposals’ (2019) The Regulatory Review

 

[162]Elizabeth Culliford, ‘Where U.S. presidential candidates stand on breaking up Big Tech’ (reuters.com 2020)

 

[163]Lina Khan, ‘The New Brandeis Movement: America’s Antimonopoly Debate’ (2018) Journal of European Competition Law & Practice

 

[164]Jake Walter-Warner & William F. Cavanaugh ‘The New Brandeis School Manifesto’ (pbwt 2020)

 

[165]Lillian Garcia, ‘Lina Khan, New Technologies, and Institutional Reform, Working Paper No.29’ (Portland State University 2018)

 

[166]Multi-sided platforms (MSPs) are “technologies, products or services that create value primarily by enabling direct interactions between two or more customer or participant groups” (Professor Andrei Hagiu)

 

[167]Frédéric Marty, ‘Online Platforms and Abuse of Dominant Position: Reflections on Possible Exploitative Abuse and Abuse of Economic Dependence’ (editionsthemis.com 2019) p.99

 

[168]Frédéric Marty, ‘Pouvoirs économiques privés et ordre concurrentiel : une application à l’économie numérique’ (halshs.archives 2018) p.15

 

[169]David Dayden, ‘How to think about breaking up big tech’ The Intercept (2019)

 

[170]Howard A. Shelanski & J. Gregory Sidak, ‘Antitrust Divestiture in Network Industries’ (2001) 68 University of Chicago Law Review 1

 

[171]‘Press release: Antitrust: Commission consults stakeholders on a possible new competition tool’ (ec.europa.eu 2020)

 

[172]Ibid.

 

[173]‘Press release: Antitrust: Commission consults stakeholders on a possible new competition tool’ (ec.europa.eu 2020)

 

[174]Deng A, ‘From the Dark Side to the Bright Side: Exploring Algorithmic Antitrust Compliance’ (Nera.com 2019)

 

[175]Christian Katzenbach & Lena Ulbricht, ‘Algorithmic governance’ (2019) Internet Policy Review, 8(4)

 

[176]Matthew U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29 Harvard Journal of Law & Technology 2. p.400

 

[177]Digital Competition Expert Panel (Furman), Unlocking digital competition – Report of the Digital Competition Expert Panel (2019) p.4

 

[178]OECD, Algorithms and Collusion: Competition Policy in the Digital Age(2017) p.19

 

[179]Frank Pasquale, The Black Box Society – The Secret Algorithms that Control Money and Information (Harvard University Press 2015) p.91

 

[180]OECD, Algorithms and Collusion: Competition Policy in the Digital Age (2017) p.45

 

[181]Dzmitry Bartalevich, ‘The Influence of the Chicago School on the Commission’s Guidelines, Notices and Block Exemption Regulations in EU Competition Policy’ (2016) 54 Journal of Common Market Studies 2

 

[182]Extract of the speech of Simonetta Vezzoso, ‘Competition by Design’ prepared for presentation at 12th ASCOLA Conference Stockholm University (2017) p.23–24

 

[183]Deng A, ‘From the Dark Side to the Bright Side: Exploring Algorithmic Antitrust Compliance’ (Nera.com 2019)

 

[184]Such an agency would be created through the establishment of a new AI regime, the ‘AIDA’ (Artificial Intelligence Development Act), according to Scherer’s regulatory proposal

 

[185]Matthew U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29 Harvard Journal of Law & Technology 2. p.398

 

[186]Ibid. p.398

 
