Abstract: On June 28, 2024, the French Competition Authority (ADLC) published an opinion (24-A-05) on the generative artificial intelligence (AI) sector, identifying significant competitive risks—including high entry barriers, potential market abuses, and structural challenges—and providing recommendations to safeguard innovation, ensure market contestability, and promote fair competition in this rapidly evolving technological landscape.
To quote this paper: C. VANHERRENTHALS, “Generative AI under watch: The French Competition Authority’s perspective on future competitive challenges”, Competition Forum, 2025, n° 0063, https://competition-forum.com.
According to Ronald Coase, a British economist, “The history of regulation in the broadcasting industry […] suggests that lawyers and economists should not be so overwhelmed by the emergence of new technologies as to change the existing legal and economic system without first making quite certain that this is required.”[1]
Generative artificial intelligence (AI) is a striking example of a technological innovation with both beneficial and severe consequences. In recent years, it has generated considerable interest both in public debate and in the economic sector. AI, which is gradually permeating most digital sectors and beyond, raises numerous political, ethical, and especially legal and competitive questions.
On February 8, 2024, the French Competition Authority (ADLC) decided to self-initiate a review of the competitive functioning of the generative AI sector under Article L. 462-4 of the French Commercial Code. In its opinion, published on June 28, 2024, the Authority outlines France’s view on generative AI and its impact on competition law, particularly in terms of economy and innovation. The purpose of this opinion is to provide a competitive analysis of the generative AI market, highlight potential risks from a competition perspective, and offer recommendations for improving market functioning.
In this opinion, the ADLC attempts to answer this question: to what extent is generative artificial intelligence already reshaping the competitive dynamics of markets, and what recommendations can be made to safeguard fair competition in an increasingly digital world?
First, the French Competition Authority (ADLC) defines generative AI, outlines the specific characteristics of the market, and identifies competition risks inherent to this technology (I). It then presents a set of ten recommendations aimed at guiding economic actors and safeguarding free competition and market dynamics. This initiative reflects the growing international interest in generative AI and its associated challenges (II).
I. A deep dive into the competitive challenges of generative AI
After defining generative AI and explaining its potential for exponential innovation (A), the ADLC focused its analysis on the upstream market of generative AI, highlighting significant entry barriers and major competitive risks associated with this sector (B).
A. Generative AI, a disruptive innovation transforming the digital landscape
The ADLC first sets the stage for its analysis and defines key terms. According to the Authority, generative AI is a technology that creates new content such as texts, images, videos, or code, typically in response to a specific request. This definition aligns with that of the European Parliament, which sees AI as a tool enabling machines to replicate human behaviours such as creativity and reasoning. The ADLC emphasizes the importance of large language models (LLMs) and the widespread use of generative AI across various sectors, highlighting its impact on competition, particularly due to the concentration of market power among major tech players.
The ADLC distinguishes generative AI from other forms of AI due to its ability to create original content and address a wide range of requests, making it a “general-purpose” model. This view aligns with the European AI regulation, which classifies large generative AI models as general-purpose AI models. In contrast, the European Commission’s definition focuses more on the technical capabilities of AI models, such as their generality and ability to perform a broad range of tasks.
The ADLC asserts that generative AI, like any major digital evolution, is a groundbreaking innovation that will inevitably have a massive economic impact on the market. The Authority even suggests that generative AI is poised to become the “dominant digital platform.” In fact, as economist Xavier Lambin points out, AI has evolved from a primarily predictive role to a decision-making role, marking a technological breakthrough. Previously, AI was limited to analysing data and providing forecasts or recommendations. Today, with major technical advancements, it can make autonomous decisions and generate substantial responses, influencing complex processes across various sectors.[2]
The Authority’s proactive approach, enabled since 2009[3], allows the ADLC to engage with strategic issues that present particular interest for consumers. The ADLC recognizes that generative AI will disrupt the digital sector and the current balance in place, and thus aims to take pre-emptive action to expose the competitive risks of this emerging sector (B).
B. A market already characterized by a lack of contestability and notable competitive risks
An inherently unbalanced market
Defining the market, as in any competitive analysis, requires caution, especially in the AI sector. Experts from the International Center for Law and Economics note that “the lack of a proper understanding of the outward and inward boundaries of AI markets has practical implications for antitrust policy and regulation because it may lead to inaccurate assessments of market concentration and market power, resulting in both under and over-enforcement of competition law compared to the social optimum.”[4]
In this opinion, the ADLC focuses its analysis on the upstream part of the generative AI value chain—namely, the design, training, and specialization of large language models. By concentrating on the practices of major digital players at the upstream level, the ADLC overlooks the impact of these practices on end consumers and the broader economy. A broader analysis, incorporating the downstream value chain and global challenges, appears necessary for more comprehensive and tailored regulation.
The ADLC’s findings are clear: entering the generative AI market is already difficult for companies seeking to innovate and establish themselves in the sector. First, this market presents extremely high entry barriers. Generative AI operates based on vast amounts of data, and processing these data requires immense computational power, which is only achievable through specialized chips. Accessing these data and leveraging these powerful computing mechanisms requires significant investments. The Authority highlights that investments in the sector have nearly sextupled between 2022 and 2023, with companies raising more than $22 billion in 2023 (around €20 billion).
Under these conditions, digital giants such as Microsoft and Alphabet are in a favourable position, with priority access to the various inputs needed to operate generative AI. Moreover, their vertical integration and presence at different levels of the value chain further advantage them. These large players benefit from their established position in other digital markets to strengthen their entry into the generative AI market. For example, Google, through its YouTube video service, already has privileged access to vast amounts of data, which it can later use to develop its AI services. Finally, digital giants tend to integrate AI tools directly into their own services, reinforcing the non-contestability of the generative AI market.
Behavioural and structural competitive risks
In this market lacking contestability, the competitive risks posed by generative AI are of two types, though they are interconnected: risks of anti-competitive behaviours and risks related to market structure.
The Authority first highlights the risks of abuse of dominance concerning computing components, particularly GPUs, which are essential for generative AI calculations. Companies developing these components, chiefly Nvidia, could lock up the market, engaging in anti-competitive practices such as price-fixing, supply restrictions, or unfair contractual terms. Additionally, Nvidia’s recent investments in cloud service providers like CoreWeave further strengthen this market lock-in and increase the risk of monopolistic practices.
Furthermore, risks of lock-in also arise with cloud service providers. These companies, particularly the sector’s giants, lure AI startups with generous cloud credits, financially tying them to their infrastructure. Technical limitations then make it difficult for these startups to migrate to other providers. Such practices, already regulated by recent laws such as Act No. 2024-449 and the Data Act, could constitute an abuse of dominance if they harm competition by restricting companies’ freedom of choice.
Access to data is another point of tension. As Bruno Lasserre, member of the French National Commission for Information Technology and Liberties (CNIL), points out, the generative AI sector embodies the joint consideration of two crucial imperatives: data protection and free competition. Dominant companies may refuse to share data or impose discriminatory conditions. Exclusive agreements, where major companies reserve exclusive access to essential data, could also harm competing actors[5]. A notable example is the unauthorized use of content to train AI models, as occurred with Google Gemini. Such practices could be considered anti-competitive behaviour or collusion.
Finally, the Authority underscores the significant risk of predatory concentrations, which illustrate the phenomenon of coopetition, blending competition and cooperation. Big Tech companies increasingly tend to invest in small innovative players, capturing their innovations and benefiting from them. While these partnerships may foster innovation, they risk reducing competition by weakening rivalry between the entities involved, limiting market transparency, and locking up certain parts of the value chain. The exclusive partnership between Microsoft and OpenAI is one such example.
As Alain Ronzano points out, the concerns raised by the Authority in this opinion are of paramount importance, as for AI’s benefits to materialize, the competitive functioning of the sector must foster innovation and allow for the presence of a variety of actors[6]. After presenting the generative AI market and the various risks arising from this technological innovation, the Competition Authority puts forward several recommendations aimed at economic actors, thus contributing to a global initiative on AI-related challenges (II).
II. The French response still in development: the limitations and ambitions of competition regulation for AI
After highlighting the various risks threatening competition, the French Competition Authority offers a set of recommendations aimed at guiding economic actors in managing these challenges (A). These proposals are part of a broader approach, seeking to proactively address the complex issues tied to the rise of generative AI and anticipate its impacts on the global economic landscape (B).
A. Pragmatic yet limited recommendations: the need for rapid regulation
The Competition Authority has proposed ten recommendations to guide sector actors and public authorities in preserving free competition in the generative AI market.
First, it suggests that the European Commission pay special attention to Model-as-a-Service (MaaS)[7] offerings and consider designating companies providing these services as gatekeepers under the European Digital Markets Act (DMA). It also encourages the DGCCRF to examine the use of cloud resources in AI, in line with the 2024 law regulating the digital space. The Authority stresses the importance of ensuring that regulation does not hinder the emergence of smaller actors or unduly favour larger players.
Internationally, it calls for coordination to avoid discrepancies between national and international regulations. Major digital companies operate globally, and their practices affect various markets. Therefore, it is crucial for authorities to coordinate in combating abuses in the use and exploitation of generative AI at all levels of the value chain.
The Authority also advocates for fully utilizing competition law tools to respond quickly to the challenges of generative AI. It acknowledges that implementing dedicated AI regulations to address today’s issues is difficult due to the sector’s constant evolution. Regulatory adoption is a lengthy process, making it necessary to rely on the swift tools of competition law and restrictive business practices.
Regarding access to computational power for data processing, the Authority advocates for the development of public supercomputers in Europe, their partial opening to private players, and creating criteria for using generative AI models on these infrastructures. However, this raises questions about public authorities’ intervention modalities and ensuring equal access for all players.
Finally, the Authority calls for greater transparency regarding digital giants’ investments in the AI sector to enhance the monitoring of market concentrations. Ensuring this transparency is complex, however, as most acquisitions escape merger review because they fall below the notification thresholds, which are mainly based on turnover. This raises the question of whether the criteria should change, shifting from turnover-based thresholds to transaction-value-based ones, since turnover does not always reflect a company’s true economic value, particularly in the dynamic and innovative AI sector.
Nonetheless, this opinion from the ADLC is not isolated. It is part of a broader movement of recognizing the challenges related to AI, which is becoming an increasing priority for authorities both in Europe and globally (B).
B. An initiative by the ADLC within a European and international dynamic around AI challenges
On November 1, 2023, the Bletchley Declaration, the first international text on AI safety, was signed by over 54 countries. Although non-binding, this agreement affirms a common ambition: to ensure that AI is developed and used in a manner that is safe, human-centred, trustworthy, and responsible.
In line with these commitments, the European Union reached a decisive milestone with the publication of the AI Regulation in June 2024[8], the first comprehensive legislative framework in this area. This regulation governs the development and use of AI systems, aiming to prevent risks to health, safety, or fundamental rights.
However, beyond ethical and safety considerations, the rise of generative AI raises questions in competition law. Elodie Vandenhende, head of the digital economy unit within the ADLC, notes that since 2023, several competition authorities worldwide have launched investigations or studies on the generative AI sector, including Portugal, the United Kingdom, and Hungary[9]. The European Commission has also called for contributions on this issue and announced that it would examine certain agreements between major players in the digital sector and developers and providers of generative AI. Beyond Europe, the ADLC’s concerns align with those of the US Federal Trade Commission (FTC), which launched an investigation in early 2024 into Alphabet, Amazon, Anthropic, Microsoft, and OpenAI regarding their recent investments and partnerships in AI. Indeed, the FTC declared that “These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.”[10]
The ADLC’s opinion, published on June 28, 2024, underscores the complexities of regulating a rapidly evolving technological sector. As generative AI reshapes numerous industries, a pressing question arises: can national and European regulatory frameworks adapt swiftly enough to safeguard fair competition? In the long term, this may spark a broader discussion about the need for a dedicated authority to oversee AI, equipped to ensure effective oversight in a constantly changing environment.
Clémence VANHERRENTHALS
[1] COASE, R. H. “The Federal Communications Commission.” The Journal of Law & Economics, vol. 2, 1959, pp. 1–40
[2] Y. GUTHMANN, J. THERY, X. LAMBIN, Artificial Intelligence: Competitive Issues, Economic Challenges, and Practical Applications in the Media Sector (Nasse Seminar – Paris, September 19, 2024), November 2024, Concurrences No. 4-2024, Art. No. 121309
[3] LOI n° 2008-776 du 4 août 2008 de modernisation de l’économie
[4] A. ABBOTT & T. SCHREPEL, Artificial intelligence and competition policy, LGDJ, 2024
[5] “Conclusions of the mission on the interplay between data protection and competition” entrusted by Marie-Laure Denis, President of the CNIL, to Bruno Lasserre, President of the CADA, and member of the CNIL board, with the support of the CNIL’s Economic Analysis Unit, November 28, 2024
[6] A. RONZANO, “Intelligence artificielle : l’Autorité de la concurrence recommande d’utiliser les outils existants pour le secteur de l’IA générative”
[7] Model as a Service (MaaS) is a cloud-based AI approach that provides developers and businesses with access to pre-built, pre-trained machine learning models.
[8] Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024
[9] E. VANDENHENDE, Intelligence artificielle, Dictionnaire de droit de la concurrence, Concurrences, Art. N° 120483
[10] Fed. Trade Commission, Staff in the Bureau of Competition & Office of Technology, Generative AI Raises Competition Concerns, FTC (June 29, 2023)