Fortune
David Meyer

The A.I. safety debate isn’t just polarized—it’s worse than that

(Credit: Piaras Ó Mídheach/Sportsfile for Web Summit Rio via Getty Images)

What should the average person think about A.I.? As recent polling in the U.S. showed, the vast majority see an existential threat, no doubt because a host of big names keep telling them that A.I. imperils civilization itself. But that’s an extreme position—and there are at least two other extreme positions on the subject that are also jostling for the narrative around the technology.

I got an education on one of them on Monday, at the re:publica festival here in Berlin, where Signal Foundation president Meredith Whittaker gave a keynote on the subject of “A.I., privacy, and the surveillance business model.” 

The former Google A.I. researcher, who had an acrimonious split with that company, frames today’s A.I. push as a continuation of Big Tech’s long-running assault on digital and other rights—an orgy of exploitation (underpaying and downplaying the humans who sort and label datasets; the “indiscriminate siphoning of artistic work”) and nascent authoritarianism (“The world that A.I. companies and their boosters want is a world where robust privacy, autonomy and the ability for self-expression and self-determination are seriously impaired.”).

Like many other critics of the existential-threat brigade, Whittaker smells an attempt to divert the world’s attention from current or near-term A.I. harms—bias, disinformation, and using A.I.-powered phone-scanning to bypass messaging encryption and entrench mass surveillance—to theoretical long-term threats that may never arrive. “There is no evidence A.I. is on the brink of malevolent superintelligence or ever will be,” she scoffed.

However, when I asked Whittaker after her speech if she saw any positive use cases for today’s A.I., she was entirely dismissive: “There’s a billion hypotheticals we could float, but they would require significant structural changes to the incentives that are propelling the companies developing these—again, the Big Tech companies. We can’t pin our hopes on hypotheticals that have no basis in the structural reality of the incentives that are driving the companies.”

And then we have veteran venture capitalist Marc Andreessen, who yesterday waded into the debate with a lengthy screed on “Why A.I. will save the world,” in which “will” becomes “may” a mere 35 words later.

The way Andreessen tells it, “every child will have an A.I. tutor” that will help them “maximize their potential with the machine version of infinite love.” His essay continues in a similar vein regarding A.I.’s benefits, while also straw-manning the heck out of every call for caution—those who are worried about the automation of jobs think “all existing human labor [will] be replaced by machines,” and the “coastal elites” who fret about trust and safety want “dramatic restrictions on A.I. output…to avoid destroying society.”

Andreessen is actually partially aligned with many Big Tech critics when it comes to the existential-risk crowd, noting that “their position is non-scientific” and pointing out that the regulation they’re calling for could “form a cartel of government-blessed A.I. vendors protected from new startup and open source competition.” Like Whittaker, he connects older debates over social-media moderation with newer concerns about bias and hate speech in A.I., though he draws a very different conclusion: We should reject the imposition of “niche morality” and everyone should build A.I. “as fast and aggressively as they can.”

The ward-off-doomsday people currently command the public A.I. narrative, but Andreessen’s laissez-faire take and Whittaker’s firmly negative stance are also powerful in their own ways. For policymakers and the public, all this might be easier to parse if it were merely a polarized debate, but with at least three extremes to consider—email me if you know of more—it will be very difficult to find a nuanced middle ground that mitigates the risks of A.I. while embracing its benefits.

More news below.

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

David Meyer

Data Sheet’s daily news section was written and curated by Andrea Guzman.
