Fortune
Jeremy Kahn

A leaked Google memo raises the alarm on open-source A.I. But the White House doesn't seem to have gotten it.

Photo of US President Joe Biden. (Credit: Kevin Dietsch—Getty Images)

It was another big week of A.I. news. The White House summoned the heads of four technology companies working on cutting-edge A.I. to a meeting with Vice President Kamala Harris to discuss potential regulation. President Joe Biden dropped by the meeting to issue an on-camera warning to the assembled executives that “what you’re doing has enormous potential and enormous danger.” He also said he was sure the executives were aware of this and that he hoped they could “educate us as to what you think is most needed to protect society.”

A summary of the meeting later provided by the White House said that the executives and Harris had “a frank and constructive discussion” on the need for the companies to be more transparent about their A.I. systems, the importance of there being a way to evaluate and verify the safety, security, and performance of this software, and the need to secure the systems from malicious actors and attacks. The White House also used the occasion to announce several new initiatives: $140 million in funding to establish seven new National A.I. Research Institutes; a major red-teaming exercise in which seven major A.I. companies will voluntarily submit their A.I. models to probing by independent security, safety, and ethics researchers at the DEFCON 31 cybersecurity conference in August; and a policymaking effort from the Office of Management and Budget that will result in guidelines for how the U.S. federal government uses A.I. software.

What got the most attention, however, is who was in the room, and who wasn’t. Meeting with Harris were the CEOs of Microsoft, Google, OpenAI, and Anthropic. (Google DeepMind’s Demis Hassabis also appeared to be present in the video clip of Biden addressing the group.) When asked why only these companies were present, the White House said it wanted to meet with the “four American companies at the forefront of A.I. innovation.” Many read that as a burn on Mark Zuckerberg’s Meta, which has invested heavily in A.I. technology and research but, unlike the companies meeting with Harris, has not integrated the technology into a consumer-facing do-it-all chatbot, and also on Amazon and Apple, both of which are perceived as lagging in A.I. development.

But plenty of other players were absent too: Nvidia is participating in the red-teaming exercise at DEFCON 31 but wasn’t invited to the White House, even though it is an American company, its chips are a linchpin of the current generative A.I. boom, and it is building its own large language models. And what about Cohere, which, though technically Canadian, is also building very large language models, with financial backing and close support from Google?

There also were no representatives from the fast-growing open-source A.I. ecosystem, most notably Hugging Face and Stability AI, both of which are also participating in the DEFCON 31 exercise. Stability is a British company, but Hugging Face is incorporated in the U.S., and its CEO and cofounder, Clem Delangue, although French, lives in Miami. The open-source models these companies are building (and hosting in the case of Hugging Face) are being used by thousands of businesses and individual developers. They are rapidly matching the capabilities of the systems built by OpenAI, Microsoft, and Google. These open-source players really ought to be “in the room where it happens” if the Biden Administration is serious about grappling with A.I. and its risks.

Arguably, the dangers with open-source software are greater than with the proprietary models the big tech companies are building and making available through APIs: While it is often easier to find security vulnerabilities or safety flaws in open-source software, it is also much easier for those with ill intentions, or simply a cavalier attitude toward potential risks, to use these models however they want. If you wanted to create a malware factory, it would make more sense to download an open-source language model like Alpaca from Hugging Face than to rely on OpenAI’s API, since OpenAI could always cut off your access if it discovered your operation. People are also already using open-source software to turn LLMs into nascent agents that can perform actions across the internet. Regulating the open-source A.I. world is a much, much bigger challenge than slapping limits on companies like Microsoft and Google. But any serious effort to govern advanced A.I. is going to have to figure out what to do about open source.
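To make that contrast concrete, here is a minimal sketch of local inference with an open model via Hugging Face’s transformers library; the model ID is a hypothetical placeholder, not a specific released checkpoint.

```python
# A minimal sketch of local inference with an open-source model using the
# Hugging Face transformers library. The model ID below is a hypothetical
# placeholder; substitute any open instruction-tuned model hosted on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/alpaca-7b"  # hypothetical Hub ID, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are cached locally, generation runs entirely on your own
# hardware: there is no API key to revoke and no provider-side usage policy.
inputs = tokenizer("Write a one-sentence product description.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same request routed through a hosted API passes through the provider’s controls and can be shut off at any time, which is precisely the lever regulators lose once open weights are in circulation.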

Which brings me to another very interesting bit of A.I. news from last week: that allegedly leaked Google “We have no moat” memo. Google has neither confirmed nor denied the memo’s legitimacy, but it seems likely to be genuine. The timing of the leak, on the same day as the White House meeting, is certainly suspicious, since the memo makes the case that Google’s A.I. tech is increasingly being matched, if not superseded in some respects, by open-source alternatives, and it could thus bolster arguments that the “big four” called to the White House should not be singled out for any regulatory action.

The leaked memo does a good job of laying out some of the problems with the ultra-large generative A.I. models that Google, Microsoft, and OpenAI have been building their products around: The open-source community has quickly sussed out clever and innovative ways to mimic their performance with smaller models trained at a fraction of the cost, both financial and in terms of energy and carbon footprint. These models often run much faster and allow users to keep any proprietary data private. All of which means these open-source A.I. Ford Focuses and Volkswagens may be preferred, especially by large enterprise customers, over big tech’s A.I. Cadillacs and Rolls-Royces. The open-source community has also, as the memo’s anonymous author notes, not gotten hung up on sensitivities around “responsible release”—it just puts stuff out there as fast as possible.

But as Emad Mostaque, Stability’s cofounder and CEO, tweeted, the memo’s author doesn’t seem to actually understand the concept of “moats” as applied to business strategy. In business, moats are only rarely about a core technology. They are more often built around a product (which includes UX as well as feature sets), data, location, convenience, customer service, and brand. Mostaque reckons that OpenAI, Microsoft, and Google still have big advantages in most of those areas that will be hard for others to match. Proprietary models served through APIs are also much easier for companies with less technical expertise to implement and maintain than open-source models are. And the plugins that OpenAI has created for ChatGPT make that product very sticky, as Mostaque points out.
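To illustrate that integration advantage: calling a hosted model is a few lines against a managed endpoint, with no GPUs, weight files, or serving stack to run. A minimal sketch, assuming the openai Python package (v1-style client) and an OPENAI_API_KEY in the environment:

```python
# A hosted proprietary model reduces deployment to an authenticated API call.
# Assumes the openai package's v1-style client and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draft a polite meeting reminder."}],
)
print(response.choices[0].message.content)
```

Self-hosting an open-source equivalent means provisioning GPUs and handling serving, scaling, and updates yourself, which is exactly the maintenance burden Mostaque points to.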

One thing I think the leaked memo probably does capture accurately, at least in its tone, is the sense of pure panic within Google over the sudden challenge to its position at the forefront of A.I. technology. Tomorrow, Google will unveil a host of new A.I. product enhancements at its annual Google I/O developer conference, a big part of its effort to fight back. We’ll see how successful it is at recalibrating perceptions of its place in the A.I. arms race.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
