Venture capitalist Marc Andreessen thinks the “thought police” are trying to stop A.I. from reaching its full potential.
In an almost 7,000-word manifesto published Tuesday, the Andreessen Horowitz co-founder called for faster and more expansive development of new A.I. tools while attacking several arguments for guardrails and limits on the technology.
In his post, published on Andreessen Horowitz’s website, Andreessen attacked several of these “doomer” myths, which he saw as part of an “irrational” moral panic.
One such myth, claims Andreessen, is the idea that A.I. might cause significant social harm by working against “human values,” such as through hate speech and misinformation.
The venture capitalist accused “a small and isolated coterie of partisan social engineers” of trying to impose their views on new A.I. technologies, likely to become “the control layer for everything in the world.”
"Don’t let the thought police suppress A.I.," he warned.
Developers ended previous A.I. experiments, like Microsoft’s short-lived Tay, after the programs started spewing racist and other offensive language. A.I. tools have also reflected existing social biases and provided factually incorrect information. Experts are also concerned about A.I.’s ability to create misinformation through deepfakes.
That has led some companies to put guardrails on what their programs can do, in order to avoid such a dangerous—or at least embarrassing—outcome. OpenAI, for example, stops ChatGPT from providing answers to prompts focused on hateful, violent, or adult content.
Yet conservative commentators argue that these controls are beholden to allegedly left-wing ideologies. Elon Musk, for example, has complained about “training A.I. to be woke.” Musk said he was considering developing his own chatbot “TruthGPT,” which he described as a “maximum truth-seeking A.I.” in an interview with Tucker Carlson earlier this year.
Andreessen wrote that the debate over A.I. guardrails reminded him of earlier debates over content moderation on social media, which he argued was another instance of governments and activists imposing their views on digital platforms.
The venture capitalist praised Twitter and Substack as exceptions. The former, under Musk’s ownership, has rolled back many of its earlier moderation decisions, including allowing former U.S. president Donald Trump back on the platform.
Newsletter service Substack also takes a more lax view of content moderation, with the company’s founders saying they would “resist public pressure to suppress voices that loud objectors deem unacceptable.”
In his post, Andreessen did not cite specific examples of what he considered censorship going too far in A.I., and he admitted that any platform would have to abide by some restrictions on speech, such as preventing incitement to violence.
Yet the venture capitalist said that even attempts to moderate “egregiously terrible” content would inevitably lead to a “slippery slope,” risking demands for “even greater levels of censorship and suppression” from outside groups.
Andreessen also claimed that efforts to control what A.I. could say—as well as the broader drive to moderate content deemed harmful—were unpopular. “Most people in the world neither agree with your ideology nor want to see you win,” he wrote.