AI boom may not have positive outcome, warns UK competition watchdog

Dan Milmo, global technology editor

The emergence of ChatGPT has triggered a debate over the impact of generative AI on the economy. Photograph: Nicolas Economou/NurPhoto/Rex/Shutterstock

People should not assume a positive outcome from the artificial intelligence boom, the UK’s competition watchdog has warned, citing risks including a proliferation of false information, fraud and fake reviews as well as high prices for using the technology.

The Competition and Markets Authority said people and businesses could benefit from a new generation of AI systems, but warned that dominance by entrenched players and flouting of consumer protection law posed a number of threats.

The CMA issued the warning in an initial review of foundation models, the technology that underpins AI tools such as the ChatGPT chatbot and image generators such as Stable Diffusion.

The emergence of ChatGPT in particular has triggered a debate over the impact of generative AI – a catch-all term for tools that produce convincing text, image and voice outputs from typed human prompts – on the economy, including fears that it could eliminate white-collar jobs in areas such as law, IT and the media, and be used to mass-produce disinformation targeting voters and consumers.

The CMA chief executive, Sarah Cardell, said the speed at which AI was becoming a part of everyday life for people and businesses was “dramatic”, with the potential for making millions of everyday tasks easier as well as boosting productivity – a measure of economic efficiency, or the amount of output generated by a worker for each hour worked.

However, Cardell warned that people should not assume a beneficial outcome. “We can’t take a positive future for granted,” she said in a statement. “There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”

The CMA defines foundation models as “large, general machine-learning models that are trained on vast amounts of data and can be adapted to a wide range of tasks and operations”, including powering chatbots, image generators and Microsoft’s 365 office software products.

The watchdog estimates about 160 foundation models have been released by a range of firms including Google, the Facebook owner Meta, and Microsoft, as well as new AI firms such as the ChatGPT developer OpenAI and the UK-based Stability AI, which funded the Stable Diffusion image generator.

The CMA added that many firms already had a presence in two or more key parts of the AI ecosystem, with big AI developers such as Google, Microsoft and Amazon owning vital infrastructure for producing and distributing foundation models – datacentres, servers and data repositories – as well as a presence in markets such as online shopping, search and software.

The regulator also said it would closely monitor the impact of investments by big tech firms in AI developers, such as Microsoft in OpenAI and the Google parent Alphabet in Anthropic, with both deals including the provision of cloud computing services – an important resource for the sector.

The CMA said it was “essential” that the AI market did not fall into the hands of a small number of companies, warning of a short-term risk that consumers would be exposed to significant levels of false information, AI-enabled fraud and fake reviews.

In the longer term, such concentration could allow firms that develop foundation models to gain or entrench positions of market power, and result in companies charging high prices for using the technology.

The report says a lack of access to key elements for building an AI model, such as data and computing power, could lead to high prices. Referring to “closed source” models such as OpenAI’s GPT-4, which underpins ChatGPT and cannot be accessed or adjusted by members of the public, the report says development of leading models could be limited to a handful of firms.

“Those remaining firms would develop positions of strength which could give them the ability and incentive to provide models on a closed-source basis only and to impose unfair prices and terms,” the report says.

The CMA added that intellectual property and copyright were also important issues. Authors, news publishers including the Guardian and the creative industries have raised concerns over uncredited use of their material in building AI models.

As part of the report, the CMA proposed a set of principles for the development of AI models:

- Foundation model developers should have access to data and computing power, without early movers gaining an entrenched advantage.
- Both “closed source” models such as OpenAI’s GPT-4 and publicly available “open source” models, which can be adapted by external developers, should be allowed to develop.
- Businesses should have a range of options for accessing AI models, including developing their own.
- Consumers should be able to use multiple AI providers.
- There should be no anticompetitive conduct, such as “bundling” AI models into other services.
- Consumers and businesses should be given clear information about the use and limitations of AI models.

The CMA said it would publish an update on its principles, and how they had been received, in 2024. The UK government will host a global AI safety summit in early November.
