The Conversation
Eerke Boiten, Professor of Cybersecurity, School of Computer Science and Informatics, De Montfort University

Spring budget 2023: AI announcements hint at data grab behind the scenes

The UK government is keen to “maximise the potential” of AI. Shutterstock/TierneyMJ

In the area of digital technologies, including artificial intelligence (AI), the UK budget can be a barometer of technological development and hype. However, there is a worrying drive towards deregulation in the background – combined with an apparent desire to encourage rights holders to share their data with companies involved in AI.

The chancellor’s latest spring budget expresses an understandable keenness for the UK to “maximise our potential in AI”. Also mentioned in the budget is a £900 million investment in an “exascale supercomputer and AI Research Resource” to support the significant need for computing power this AI commitment will entail. There is also an annual £1 million prize for the best scientific progress in AI.

This budget has been competing for news coverage with the latest ChatGPT release by OpenAI. However, it also reveals a desire on the part of those in government to get in on the development of large language models (LLMs) – the technology underlying AI chatbots such as ChatGPT.


Advancing “UK sovereign capability” in this area is a laudable objective, but may be a stretch given that most LLMs have been developed by the larger Silicon Valley players – the ones that can actually afford the millions of dollars required to train these models.

I have full sympathy for the announcement that the associated “AI foundational models taskforce” will inform ministers directly. Some of what we have seen of LLMs in the higher education sector – including the potential for students to use them to complete assignments – suggests the need to change our practices dramatically.

Similarly, I do see the potential for impacts that will need a rapid policy response at the national level, even if I do not subscribe to the belief in an imminent “artificial general intelligence” – where a chatbot could handle intellectual tasks just like a human – or, in what would be an even more advanced step, the emergence of sentience in LLMs.

Junior partner

While quantum computing is still on the agenda, many other developments of the last few years and decades get little emphasis. The metaverse was mentioned by the chancellor in a single breath with Web 3.0 as “the future of web technology” – but without any associated policies for either. There is no mention of blockchains, of smart contracts, or of bitcoin. The only use for cryptocurrencies, according to this budget, is to tax them.

In the newly formed Department for Science, Innovation and Technology, science is clearly the junior partner – occurring rarely by itself in this budget, and then often only as an application area of AI.

To my academic eye, the science that might deliver the next generation of innovations and technologies is mostly absent. So is the re-establishment of UK participation in the European research collaboration Horizon Europe, which the higher education sector has been eagerly awaiting since the apparent progress on Northern Ireland in February.

More generally, the social responsibility aspect of technological development gets little more than lip service in the budget. The most significant line in this respect is:

The government is taking forward all Sir Patrick Vallance’s recommendations on the regulation of emerging digital technologies.

However, these recommendations – set out in a separate review published (together with the government response) on the same day as the budget – present a worrying picture of deregulation for the sake of innovation.

Questions over regulation

While the argument is mostly presented as eliminating regulatory uncertainty, the main thrust is that existing regulation prevents innovation and commercial exploitation of AI in the UK.

Unlike the EU, the UK has no plans for specific regulation of AI. The extent to which existing data protection legislation such as the (UK) GDPR affects it – in particular, its rules around automated decision-making and transparency – has barely been tested, and has hardly been enforced by the Information Commissioner’s Office (ICO), which oversees data protection.

Representation of data. Best-Backgrounds/Shutterstock

This may be related to the fact that in 2017 (under section 108 of the Deregulation Act), the ICO was given a duty to consider “the desirability of promoting economic growth”. This was seen by some as undermining its independence as a data protection regulator.

There is no reference to the ongoing revision of data protection in the UK, which could already be empowering AI at the expense of privacy by relaxing constraints around “research” use – potentially including the training of AI models by private companies. At least Vallance includes a recommendation on “AI as a service” to clarify the data controller and data processor roles – important definitions in GDPR.

This is probably intended to be cryptic, and it remains undigested in the government response. In the context of GDPR, however, it can be read as a battle over who controls what may be done with data in an AI context, who can be held responsible, and in which contexts AI firms can appropriate data for training purposes.

Accessing information

Other rights perceived to be in the way of AI progress are intellectual property rights, such as copyright. Vallance makes the case that AI firms currently face too many barriers in using copyrighted data to train their models.

The government response instructs the Intellectual Property Office (IPO) to force rights holders to offer “a reasonable licence” for their data to any well-behaved AI firm. It is not clear how to reconcile this data grab with the recognition that the UK has world-leading creative industries that are central to realising the government’s growth ambitions. These companies will want to protect their intellectual property, and may not want to hand some of it over to other companies.

The final source of data for AI companies enabled by Vallance’s proposed deregulation is government itself. Following the “unlocking the power of data” slogan familiar, for example, from the 2017 Digital Strategy, data sharing and linkage across the public sector are encouraged.

This includes making it “easier for private sector firms to access this information safely”. The term “safely” is then meaningfully and sensibly expanded, including a reference to privacy-enhancing techniques and safe data platforms such as that of the Office for National Statistics.

Overall, Vallance’s recommendations propose a “flexible” approach to regulation: divergence at an early stage, when different blocs such as the UK and EU have different rules, with international regulatory harmonisation only once technologies are becoming established. However, considering the speed at which AI is developing right now, I do not believe this to be a responsible approach to its regulation.


Eerke Boiten receives funding from UKRI and the Alan Turing Institute/Accenture for research projects involving AI aspects.

This article was originally published on The Conversation. Read the original article.
