Oliver Howley

Generative AI: 3 critical questions for competition and consumer protection authorities

It is barely a year since the launch of ChatGPT by OpenAI brought generative AI and foundation models to the forefront of public consciousness. The development and use of GenAI have grown at a seemingly exponential pace, and governments are racing to regulate the risks the technology poses without limiting its transformative potential or discouraging AI-related investment in their jurisdictions.

In this context, participants at the UK’s AI Safety Summit on 1 and 2 November 2023 will have much to discuss. The Summit will bring together governments, academics, civil society and company representatives to consider how to manage the misuse and loss-of-control risks arising from recent advances in AI, with a view to promoting international collaboration and best practice.

The Summit will not, however, cover competition and consumer protection issues in GenAI, an area where international cooperation appears limited. As the use and capabilities of GenAI develop, three questions are critical to fostering competition, innovation and informed consumer choice.

Question 1: How can regulators promote GenAI leadership, rather than GenAI dominance?

The leading models and associated tools are offered by a small number of big players. Smaller players are also bringing innovative products to market, but many of them build those products on the models developed by the bigger players.

While recognizing and rewarding the significant investments made by the major players, regulators will need to foster an environment where access to models is provided on fair, reasonable and non-discriminatory terms. The biggest players have their own downstream offerings, as well as strategic investments and partnerships with other AI developers. These arrangements can bring efficiencies, new capabilities and enhanced consumer choice, but they also provide opportunities to foreclose downstream competitors or to lock in the supply of services to downstream operators on terms weighted in favor of the bigger player.

Competition between model operators will also need to be encouraged to support the development of high-quality models and of different monetization strategies, including open-source and closed-source approaches. Limited competition between model operators may also reduce differentiation of downstream products: if the same prompt is submitted by multiple downstream providers to the same underlying model, that model will return the same or a similar result.
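To make the point concrete, here is a minimal Python sketch (hypothetical names only, not any real provider’s API) of two ‘competing’ downstream products that are thin wrappers around the same underlying model: given the same prompt, they can only return the same answer.

```python
# Illustrative sketch only: hypothetical names, no real provider API is assumed.

def shared_base_model(prompt: str) -> str:
    """Stand-in for the single upstream foundation model both products rely on."""
    return f"[base-model answer to: {prompt}]"  # deterministic placeholder response

class DownstreamProductA:
    def answer(self, prompt: str) -> str:
        return shared_base_model(prompt)  # thin wrapper, no added differentiation

class DownstreamProductB:
    def answer(self, prompt: str) -> str:
        return shared_base_model(prompt)  # same upstream dependency as Product A

if __name__ == "__main__":
    question = "Summarise the latest competition ruling."
    print(DownstreamProductA().answer(question) == DownstreamProductB().answer(question))  # True
```

Real models are rarely fully deterministic, but the closer two products sit to the same base model, the less room either has to differentiate on the quality of its output.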

Question 2: Will players commit to GenAI safety over GenAI speed?

GenAI is attracting significant investment and public attention. As a result, certain operators could maximize functionality, cleverness and speed to the detriment of consumer safety: developments may be so profitable, valuable and popular that operators do not build in sufficient safety mechanisms in the rush to bring out new products ahead of the competition.

These risks are exacerbated because many consumers do not currently understand the limitations of foundation models, including the potential for popular tools to produce inaccurate, biased, offensive and infringing outputs. Transparency and education around such limitations are critical while the technology matures to tackle these issues, not least to enable businesses that incorporate AI into their products to satisfy their consumer law obligations.

Industry bodies, such as the Frontier Model Forum, will no doubt facilitate the development and dissemination of best practices. It will be particularly important to ensure that accountability is clear throughout the value chain, creating competition on consumer experience, complaints handling and redress.

Question 3: Can foundation model operators respond to fine-tuning demands without creating a barrier to switching?

Fine-tuning enables foundation models to be refined for specific customer-facing applications. While the usefulness of ‘bespoke’ models to a business is obvious, widespread reliance on such models in their current form may adversely affect market growth.

A customer of a model operator is unlikely to acquire ownership rights in a fine-tuned version of that model: fine-tuning only creates a modified version of the base model, so any grant of rights in a fine-tuned model would likely undermine the operator’s base model ownership. As a result, even if a customer retains ownership of its fine-tuning data and/or acquires fine-tuning parameters from the operator, it may be difficult for the customer to achieve equivalent performance and functionality from another operator without spending significant time and money fine-tuning the new operator’s model.

This may dissuade the large number of customers that rely on fine-tuned models from switching operators, stifling the emergence of new operators in the long term.
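As a rough illustration of that switching cost, the sketch below (entirely hypothetical names and interfaces, not any real operator’s API) shows why a fine-tuned model typically stays with the operator that hosts the base model even when the customer keeps its training data, and why moving elsewhere means repeating the fine-tuning work.

```python
# Minimal sketch under assumed, hypothetical interfaces: fine-tuning yields a
# model ID that only resolves on the operator that hosts the base model.
from dataclasses import dataclass, field


@dataclass
class Operator:
    name: str
    _hosted: set = field(default_factory=set)

    def fine_tune(self, base_model: str, training_file: str) -> str:
        """Return a handle to a derivative model hosted by this operator."""
        model_id = f"{self.name}:{base_model}-ft-{hash(training_file) & 0xffff:x}"
        self._hosted.add(model_id)
        return model_id

    def invoke(self, model_id: str, prompt: str) -> str:
        if model_id not in self._hosted:
            raise PermissionError(f"{model_id} is not portable to {self.name}")
        return f"[{model_id}] answer to: {prompt}"


operator_a = Operator("operator-a")
operator_b = Operator("operator-b")

# The customer keeps its training data, but the tuned model lives with operator A.
tuned = operator_a.fine_tune("base-v1", "customer_examples.jsonl")
print(operator_a.invoke(tuned, "Draft a compliance note."))

# Switching means paying to fine-tune operator B's model from scratch.
try:
    operator_b.invoke(tuned, "Draft a compliance note.")
except PermissionError as err:
    print(err)
```

The fine-tuning data is portable; the tuned weights are not, so a customer that switches must absorb the time and cost of fine-tuning the new operator’s base model before it sees comparable performance.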

Differing regulatory approaches

Antitrust and consumer protection laws typically address issues after they have arisen. Such after-the-fact enforcement is perceived as less effective at resolving digital competition concerns, given the pace at which these markets evolve and the strength of network effects and first-mover advantages. This has spurred consideration of upfront regulation, with different approaches emerging internationally.

The EU has led in regulating the major digital players with the Digital Markets Act, which designates certain large online platforms as “gatekeepers” subject to access, interoperability and fair-treatment requirements. The EU is also ahead on AI-related protections: the EU’s AI Act will govern providers that offer AI systems in the EU (whether or not they are physically present in the EU), and the EU’s AI Liability Directive will make it easier for victims of AI-caused damage to prove liability and receive compensation.

In comparison, the UK intends to create a world-leading AI ecosystem without AI-specific legislation. Instead, individual regulators have published guiding principles, checklists and techniques for the responsible development and provision of AI systems, and will apply sanctions within their respective remits. To this end, the Competition and Markets Authority (CMA) published the initial findings of its AI Foundation Models Review in September 2023, and the Office of Communications (Ofcom) referred public cloud infrastructure services, a critical resource for developers and customers of foundation models, to the CMA for a full market investigation in October 2023. The Digital Markets, Competition and Consumers Bill currently going through Parliament will allow the CMA to impose fairness and transparency obligations on firms designated as having “Strategic Market Status” in relation to foundation models and/or associated software, and will give the CMA significantly enhanced consumer enforcement powers.

The US is considerably behind the EU and UK in regulating AI at the federal level. In July 2023 the White House convened a meeting of leading AI companies that produced a set of voluntary commitments surrounding security testing, bias and privacy research, information risk sharing, and transparency measures. President Biden has also issued an Executive Order that will push federal agencies to develop new AI safety and security standards.

