The Guardian - US
Technology
Blake Montgomery

Trump heads to China to spread the gospel of American tech

Hello, and welcome to TechScape. I’m your host, Blake Montgomery, US tech editor at the Guardian.

Trump spreads the gospel of American tech in China while emulating Xi on AI

Donald Trump is headed to China this week. If his guest list is any clue, he wants to discuss technology with Xi Jinping, though perhaps after the war in Iran.

On Monday, news broke that outgoing Apple CEO Tim Cook and SpaceX and Tesla CEO Elon Musk would join Trump. Other guests from the tech sphere include Dina Powell McCormick, Meta’s recently appointed president; Sanjay Mehrotra, CEO of computer memory maker Micron; Chuck Robbins, CEO of longtime telecom giant Cisco; and Cristiano Amon, CEO of semiconductor maker Qualcomm, according to a White House official.

Jensen Huang, Nvidia’s CEO, will not be joining the president. Huang is close to Trump but criticized the US’s limitations on chip sales to China in an April interview, saying that he didn’t want a “loser mentality” to cost the US its edge in AI. A major deal on semiconductors seems less likely without the world’s most important chipmaker, though an announcement from Micron seems possible.

In Tim Cook, Trump likely also wants to bring a friendly, familiar face to high-stakes negotiations. Apple’s iPhone 17 has proved enormously successful in China, boosting the company’s quarterly earnings to their highest point ever. Apple still manufactures the majority of its products in China, though it has moved a significant percentage of those operations to India and Vietnam. In Apple’s announcement of Cook’s retirement, the company highlighted his diplomatic skills and said his responsibilities would include dealing with leaders around the world, so visits like this may become a mainstay of his schedule in the future.

Whether Trump’s trip will foster a flurry of tech deals, as his Middle East visit did in May 2025, remains to be seen. But while Trump trots out the US’s best and brightest businessmen, products of his hands-off policy for fostering technological innovation, his administration is taking cues from China’s more stringent approach to AI. China’s laws require AI companies to submit their models to Beijing for review on both security and political sensitivity grounds. The stringent policies prohibit not only threats to national security but also the generation of content that Beijing finds objectionable, a lengthy list.

In the same vein, the White House is getting more involved in the work of frontier labs in the US. Trump is mulling an executive order that would require AI companies to submit their newest models for White House review. The administration has already announced deals with a growing number of major players in the field for national security reviews of their latest releases, including Google DeepMind, Microsoft and xAI last week. The reviews will be conducted by the Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce. The Pentagon’s standoff with Anthropic continues in court over the startup’s qualms about military usage and the department’s designation of the company as a supply chain risk. Vice-President JD Vance has requested that Anthropic not expand access to its powerful cybersecurity-focused model Mythos beyond its initial list of partners, according to the Wall Street Journal.

Meta faces off with regulators

The Musk-OpenAI trial

AI’s evil avatar emerges: an autonomous, always-attacking hacker

In the past week, two developments highlighted how artificial intelligence could take a wrecking ball to the digital walls that keep us all safe online. Researchers in Berkeley, California, observed an AI model replicating itself, and Google researchers rang multiple alarm bells over cyberattacks augmented by AI. Researchers at a major cybersecurity company warned that both are happening at once in autonomous, AI-enabled hacking.

New research finds recent AI systems can independently copy themselves on to other computers. My colleague Aisha Down reports:

Palisade Research, a Berkeley-based organisation, tested several AI models in a controlled environment of networked computers. It gave the models a prompt to find and exploit vulnerabilities, and to use these to copy themselves from one computer to another. The models were able to do this, but not on every attempt.

“We’re rapidly approaching the point where no one would be able to shut down a rogue AI, because it would be able to self-exfiltrate its weights and copy itself to thousands of computers around the world,” said Jeffrey Ladish, the director of Palisade.

Jack Clark, a co-founder of Anthropic, likewise told Axios this week, “My prediction is by the end of 2028, it’s more likely than not that we have an AI system where you would be able to say to it: ‘Make a better version of yourself.’ And it just goes off and does that completely autonomously.” Or, perhaps, the agent might decide to make things worse: Late last month, a rogue agent deleted a startup’s entire production database, an early warning sign of what can go wrong with an autonomous AI.

Taking his projection one step further, Clark asked in the research agenda for Anthropic’s new thinktank, “How effectively can we use AI to govern AI systems?” Whether you find this hypothetical reasonable or frightening depends on what qualities you recognize in AI – animal, elemental or personal. Chimp colonies govern themselves well enough without devastating the jungles around them, but we wouldn’t politely ask a fire to keep itself contained. A colony of humans could follow either path.

Google researchers likewise sounded the alarm last week, not over autonomy but a rapidly rising number of threats to the world’s cybersecurity.

In just three months, AI-powered hacking has gone from a nascent problem to an industrial-scale threat, according to a new report from Google.

It finds that criminal groups, as well as state-linked actors from China, North Korea and Russia, appear to be widely using commercial models – including Gemini, Claude and tools from OpenAI – to refine and scale up attacks.

“There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun,” said John Hultquist, the group’s chief analyst.

In a blog post, the cybersecurity giant Palo Alto Networks said that the dual threats of rogue autonomy and superhuman insight into cybersecurity have already combined. The company was granted early access to Claude Mythos and OpenAI’s GPT-5.5-Cyber and has tested them for several months. Its conclusion: the threat of widespread, automated hacking is arriving, and more quickly than expected. The security-focused AI models performed better in three weeks than human testers did in six months.

“This is more than faster code generation, it is a shift from AI as an assistant to AI as an autonomous agent capable of discovering and chaining flaws at a scale that most defenders aren’t prepared for. These capabilities will not stay confined to controlled environments for long,” the post reads.

The company’s researchers initially predicted that malicious actors would not get their hands on Claude Mythos for six months. They now believe “that timeline has accelerated significantly”. In a cruel twist, the proliferation of AI is one reason for the widening vulnerabilities, as employees at more and more companies write their own code to create AI agents, according to the company.

The wider TechScape
