Reason
Ronald Bailey

How To Restrain the A.I. Regulators

While some A.I. alarmists argue that further development of generative artificial intelligence like OpenAI's GPT-4 large language model should be "paused," licensing proposals floated by boosters like OpenAI CEO Sam Altman and Microsoft President Brad Smith (whose company has invested $10 billion in OpenAI) may inadvertently accomplish much the same goal.

Altman, in his prepared testimony before a Senate hearing on A.I. two weeks ago, suggested that "the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements."

While visiting lawmakers last week in Washington, D.C., Smith concurred with the idea of government A.I. licensing. "We will support government efforts to ensure the effective enforcement of a licensing regime for highly capable AI models by also imposing licensing requirements on the operators of AI datacenters that are used for the testing or deployment of these models," states his company's recent report Governing AI: A Blueprint for the Future.

So what kind of licensing regime do Altman and Smith have in mind? At the Senate hearing, Altman said that the "NRC is a great analogy" for the type of A.I. regulation he favors, referring to the Nuclear Regulatory Commission. Others at the hearing suggested that the way the Food and Drug Administration licenses new drugs could be used to approve the premarket release of new A.I. services. The way the NRC licenses nuclear power plants may be an apt comparison, given that Smith wants the federal government to license gigantic datacenters like the one Microsoft built in Iowa to support the training of OpenAI's generative A.I. models.

What Altman, Smith, and other A.I. licensing proponents fail to recognize is that both the NRC and FDA have evolved into highly precautionary bureaucracies. Consequently, they employ procedures that greatly increase costs and slow consumer and business access to the benefits of the technologies they oversee. A new federal Artificial Intelligence Regulatory Agency would do the same to A.I.

Why highly precautionary? Consider the incentive structure faced by FDA bureaucrats: If they approve a drug that later ends up harming people, they get condemned by the press, activists, and Congress, and may even be fired. On the other hand, if they delay a drug that would have cured patients had it been approved sooner, no one blames them for the unknown lives lost.

Similarly, if an accident occurs at a nuclear power plant authorized by NRC bureaucrats, they are denounced. However, power plants that never get approved can never cause accidents for which bureaucrats could be rebuked. The regulators' credo is better safe than sorry, ignoring that it is often the case that he who hesitates is lost. The consequences of such overcautious regulation are technological stagnation, worse health, and less prosperity.

Like nearly all technologies, A.I. is a dual-use technology, offering tremendous benefits when properly applied and substantial dangers when misused. Doubtless, generative A.I. such as ChatGPT and GPT-4 has the potential to cause harm. Fraudsters could use it to generate more persuasive phishing emails, mount massive trolling campaigns against individuals and companies, and churn out fake news. In addition, bad actors using generative A.I. could mass-produce mis-, dis-, and mal-information campaigns. And of course, governments must be prohibited from using A.I. to implement pervasive real-time surveillance or deploy oppressive social scoring control schemes.

On the other hand, the upsides of generative A.I. are vast. The technology is set to revolutionize education, medical care, pharmaceuticals, music, genetics, material science, art, entertainment, dating, coding, translation, farming, retailing, fashion, and cybersecurity. Applied intelligence will enhance any productive and creative activity.

But let's assume federal regulation of new generative artificial intelligence tools like GPT-4 is unfortunately inevitable. What sort of regulatory scheme would be more likely to minimize delays in the further development and deployment of beneficial A.I. technologies?

R Street Institute senior fellow Adam Thierer, in his new report, recommends a "soft law" approach to overseeing A.I. developments instead of imposing a one-size-fits-all, top-down regulatory scheme modeled on the NRC and FDA. Soft law governance embraces a continuum of mechanisms, including multi-stakeholder conclaves where governance guidelines can be hammered out, government agency guidance documents, voluntary codes of professional conduct, insurance markets, and third-party accreditation and standards-setting bodies.

Both Microsoft and Thierer point to the National Institute of Standards and Technology's (NIST) recently released Artificial Intelligence Risk Management Framework as an example of how voluntary good A.I. governance can be developed. In fact, Microsoft's new A.I. Blueprint report acknowledges that NIST's "new AI Risk Management Framework provides a strong foundation that companies and governments alike can immediately put into action to ensure the safer use of artificial intelligence."

In addition, the Department of Commerce's National Telecommunications and Information Administration (NTIA) issued in April a formal request for comments from the public on artificial intelligence system accountability measures and policies. "This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy," notes the agency. The NTIA plans to issue a report on A.I. accountability policy based on the comments it receives.

"Instead of trying to create an expensive and cumbersome new regulatory bureaucracy for AI, the easier approach is to have the NTIA and NIST form a standing committee that brings parties together as needed," argues Thierer. "These efforts will be informed by the extensive work already done by professional associations, academics, activists and other stakeholders."

A model for such a standing committee to guide and oversee the flexible implementation of safe A.I. would be the National Science Advisory Board for Biosecurity (NSABB). The NSABB is a federal advisory committee composed of 25 voting subject-matter experts drawn from a wide variety of fields related to the biosciences. It provides advice, guidance, and recommendations regarding biosecurity oversight of dual-use biological research. A National Science Advisory Board for A.I. Security could similarly consist of a commission of experts drawn from relevant computer science and cybersecurity fields to analyze, offer guidance, and make recommendations with respect to enhancing A.I. safety and trustworthiness. This more flexible model of oversight avoids the pitfalls of top-down, hypercautious regulation while enabling swifter access to the substantial benefits of safe A.I.
