Chicago Tribune
Sheldon H. Jacobson

Commentary: An AI Bill of Rights would be unenforceable and may do more harm than good

President Joe Biden put forward a “Blueprint for an AI Bill of Rights” that provides five guiding principles for the development and implementation of artificial intelligence. They outline aspirational goals that also align with principles in the Democratic Party platform.

Technology leaders have also expressed concern about the untethered growth of AI and its impact on society, including its effect on work and the spread of misinformation. Looking to the 2024 election, now less than a year and a half away, generative AI has the potential to upend campaigns by swaying and manipulating voters. A classified Senate hearing was recently held to discuss the future of AI and the opportunities and risks it poses.

Though many of the points that Biden and technology leaders have raised are worthy of discussion, the challenge of creating guidelines to rein in AI is that they are inherently unenforceable and may ultimately do more harm than good.

AI is already ubiquitous, and its pathway will grow only more expansive. Countries around the world are making large investments in AI, given its potential to accelerate economic growth, with the United States sitting as the top investor. AI is affecting numerous industries, with few left untouched.

Some people may believe that AI systems will literally take over society, relegating people to subservient roles. This overstates where AI systems currently are in development and even their trajectory for the future. Science fiction movies that portray computers that can reason, like HAL 9000 in “2001: A Space Odyssey,” certainly stir people’s imagination and fear, but these fictional creations are far beyond AI systems in existence today and any that will be available for the foreseeable future.

What AI does exceedingly well is learn — hence the names machine learning, deep learning and reinforcement learning. Given the abundance of information online, these models use training data to learn and then use that learning to recognize patterns or make predictions. Given enough training data, AI systems can give the illusion of thinking and reasoning.

To put this into perspective, AI systems are highly effective at the game of chess because they can learn patterns that lead to optimal decisions for each move without being distracted by emotion that may lead to mistakes. This does not mean that AI systems are more intelligent than chess grandmasters. It just allows AI systems to be more effective at playing chess.

AI developments and advances are being made in industrial labs such as Google Research and in academia. What drives many of these advances is curiosity and ambition. Numerous government agencies, such as the National Science Foundation and the Department of Defense, are making substantial investments to push the boundaries of knowledge. Given this thrust, AI is certain to disrupt and touch every aspect of society.

Considering the investments and attention, asking for restraint or a pause is futile, with any guidelines created unenforceable and likely to be ignored by most stakeholders. There is far too much intellectual energy being expended, far too much money at stake and far too much global competition to stop the growth and proliferation of AI systems.

While there is plenty of buzz about the opportunities AI presents, what of the concerns? A major issue many have raised, one in which the line between AI and politics gets muddied, is the potential for bias.

Bias is an inherently human feature that is learned. All people have biases, whether conscious or unconscious. The partiality in data comes from people injecting their biases, either intentionally or unintentionally. Some people want biases that contradict their own beliefs and values, or that they view as distasteful or unfair, to be “cleaned” from data that informs AI systems.

In essence, biases based on politics, socioeconomics, education and a plethora of other factors that define society will be present in data used to train AI systems. Any manipulation of such data inherently changes the output from AI systems, which is akin to changing society through data.

This is why any proposed guardrails for AI should be crafted by a bipartisan committee; the full spectrum of viewpoints on bias needs to be considered. Engendering trust in the process will make it more likely that any guidelines will provide meaningful information and be taken seriously.

AI is going to affect the future of our nation. No pauses or restraints have the power to stop AI advances. Unfortunately, any positive innovations and benefits will come with negative consequences and pushback. This situation is similar to how people benefit from the open communication offered by social media but also must tolerate the misinformation that it facilitates, even with guardrails in place, some of which are now being relaxed or dismantled.

This is the price to be paid as AI continues to grow and expand its reach. There are certain to be many successes to keep hope alive, as well as hazards that provide perspective and keep expectations in check.

At this stage of AI development, calls for restraint, as well-intentioned as they may be, are neither feasible nor enforceable.

____

ABOUT THE WRITER

Sheldon H. Jacobson is a professor of computer science at the University of Illinois at Urbana-Champaign. As a data scientist and operations researcher, he applies his expertise in data-driven, risk-based decision-making to evaluate and inform public policy.
