A larger-than-life Michelle Donelan beams on to a screen in Google’s London headquarters. The UK science and innovation secretary is appearing via video to praise the US tech behemoth for its plans to equip workers and bosses with basic skills in artificial intelligence (AI).
“The recent explosion in the use of AI tools like ChatGPT and Google’s Bard shows that we are on the cusp of a new and exciting era in artificial intelligence, and it is one that will dramatically improve people’s lives,” says Donelan. Google’s “ambitious” training programme is “so important” and “exceptional in its breadth”, she gushes in a five-minute video, filmed in her ministerial office.
Welcome to the AI arms race, where nations are bending over backwards to attract cash and research into the nascent technology. Google’s move is a “vote of confidence in the UK”, supporting the government’s aim to make the UK “both the intellectual home and the geographical home of AI”, says Donelan.
Few countries have been more accommodating than the UK, with Donelan’s tone underlining the red carpet treatment given by Rishi Sunak’s government to tech firms and his desire to lure AI companies in particular.
Google’s educational courses cover the basics of AI, which it says will help individuals, businesses and organisations gain skills in the emerging technology.
The tuition consists of 10 modules of 45-minute presentations on a variety of topics; two, covering growing productivity and understanding machine learning, are already available.
The courses are rudimentary, and Google says they require no prior technological knowledge.
About 50 people, including small business owners, attended the first course at Google’s King’s Cross offices in London last week, just across the road from where its monolithic £1bn new UK HQ, complete with rooftop exercise trail and pool, is being built.
The UK – home to Google’s AI research subsidiary, DeepMind – is the launchpad for the new training, but the company said it expected to roll it out to other countries in the future. Co-founded in 2010 by Demis Hassabis, a child chess prodigy, DeepMind was sold to Google for £400m in 2014 and now leads Google’s AI development under the Google DeepMind name. It has increasingly embedded itself in the machinery of the state, from controversially partnering with the NHS to build apps to help doctors monitor acute kidney injury, to Hassabis advising the government during the Covid-19 pandemic.
The first sessions are the latest addition to the digital skills training the company has offered in the UK since 2015, which has been accessed by 1m people.
“We see a cry for more training in the AI space specifically,” Debbie Weinstein, the managing director of Google UK and Ireland, tells the Guardian.
“We are hearing this need from people and at the same time we hear from businesses that they are looking for people with digital skills that can help them.”
Google’s pitch is that AI could increase productivity for businesses, including by taking care of time-consuming administrative tasks. It cites a recent economic impact report, compiled for Google by the market research firm Public First, which estimated that AI could add £400bn in economic value to the UK by 2030, through harnessing innovation powered by AI.
The company said the report also highlighted a lack of tech skills in the UK, which could hold back growing businesses.
But there is little mention of any of the feared downsides of AI, including its potential to make roles redundant across huge swathes of the economy. Those attending the inaugural presentations appear keener to learn the basics, such as whether AI can help with tasks including responding to emails and booking appointments.
The charm offensive by Google may also highlight deep unease about the breakneck pace of AI expansion and its potential to completely upend the world of work, and the Silicon Valley company’s nervousness over any backlash.
Google and other tech firms, including Microsoft, Amazon and Meta, are working feverishly to develop AI tools, all hoping to steal a march on rivals in what some believe is a winner-takes-all competition with unlimited earnings potential.
Google launched its Bard chatbot in the US and UK in March as its answer to OpenAI’s ChatGPT and Microsoft’s Bing Chat. The service is capable of answering detailed questions, giving creative answers and engaging in conversation. Facebook’s parent company, Meta, recently released an open-source version of an AI model, Llama 2.
A recent report by the Organisation for Economic Co-operation and Development (OECD) warned that AI-driven automation could trigger mass job losses across skilled professions such as law, medicine and finance, with highly skilled jobs facing the biggest threat of upheaval.
Others are concerned that profit-maximising private tech companies are expanding apace in a fledgling sector that is so far unregulated, with echoes of the early days of the internet, when the land grab by tech companies left regulators and ministers trailing in their wake and eventually forced a belated reckoning for social media giants.
Dr Andrew Rogoyski, of the Institute for People-Centred Artificial Intelligence at the University of Surrey, says Google’s training drive is unlikely to be motivated by altruism. “Making free training available makes absolute sense,” he says. “If you use one company’s training material, you’re more likely to use their AI platform.”
Rogoyski adds that tech firms of all sizes are offering educational courses.
“I think a lot of businesses are struggling at the moment with the feeling that they should be doing something with AI and not knowing where to start,” he says.
“I would like to see more warnings, the things that businesses should be aware of when looking at AI, [that] it’s not just about technical and coding skills to knock something up that you can push out to your website.”
He also wants companies to be aware of potential pitfalls.
“There are much more impactful issues that people need to think about, such as privacy, security, data bias, all of the concerns and limitations that you might feel are being glossed over if [tech firms] are pushing us to try AI and start tinkering.”
Politicians are waking up to the risks of AI. Labour’s digital spokesperson, Lucy Powell, recently said the UK should bar technology developers from working on advanced AI tools unless they have a licence to do so. Powell suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arm’s-length governmental bodies. But both main parties are captivated by the potential prize: Sir Keir Starmer recently held a shadow cabinet meeting at Google’s London office, and the Labour leader and Sunak both focused on AI in their recent London Tech Week speeches.
Globally, governments, including the UK’s, are working out how they can reap the benefits of tech firms such as Google upskilling the wider workforce, while at the same time hoping to rein in those very firms.
Sunak has changed his tone on AI in the past couple of months, and is now planning to host a global summit on safety in the nascent technology, as he aims to position the UK as the international hub for its regulation.
The sudden adoption of AI chatbots and other tools is worrying managers in the UK, leaving them fearful about potential job losses triggered by the technology, as well as the associated risks to security and privacy.
Two in five managers (43%) told the Chartered Management Institute (CMI) they were concerned that jobs in their organisations will be at risk from AI technologies, while fewer than one in 10 (7%) managers said employees in their organisation were adequately trained on AI, even on high-profile tools like ChatGPT.
Anthony Painter, the CMI’s director of policy, who met a group of Google executives and small business representatives on the sidelines of the training launch, says that AI brings “huge opportunity, but also huge risks, and we have to take time to get that right”.
“The practical skills necessary to adopt AI aren’t where they need to be [among businesses],” he says. “But we don’t have the regulatory structure to do that effectively, and it might not be bad to have a bit of a go-slow while we think through regulation, ethics and skills in practical terms.”