The Street
Ian Krietzberg

IBM exec explains how the company differs from prominent AI competitors

Concerns over artificial intelligence are steadily mounting as regulators seek to better understand the technology, corporations work to leverage it and creatives grapple with the gray area of copyright law around the content that has become essential to the training of popular AI models. 

But where OpenAI and Meta (META) have been running into the opening salvos of legal speed bumps — George R.R. Martin, the pen and mind behind "A Game of Thrones," was among a group of novelists to bring the latest class-action suit against OpenAI for copyright infringement — IBM (IBM) has been able to stay out of the fray. The reason, according to Christina Montgomery, IBM's chief privacy officer, comes down to use cases. 

Related: The ethics of artificial intelligence: A path toward responsible AI

OpenAI's flagship ChatGPT model, like Meta's models, operates as a general-purpose tool that users can interact with to produce (or reproduce) creative content.

IBM, by contrast, is more focused on applying AI technology than on offering a generally accessible tool. 

"We're really focused on business applications, enterprise uses of AI," Montgomery told TheStreet in an interview. "And one of our points of view and lead value propositions for our clients is that we're not training our models based on your information unless you ask us to and want us to. So we're sort of a different model."

Christina Montgomery testified alongside Professor Gary Marcus and OpenAI CEO Sam Altman at a Senate hearing on AI oversight in May. (Bloomberg/Getty Images)

Further, she said, IBM regularly ensures that safeguards are in place so that neither the training nor the output of its models violates existing copyright law. 

"We're much more focused on enabling our clients to be AI creators and to use AI in smart and trusted ways across their own businesses," Montgomery said. 

Navigating AI regulation

In an effort to better understand how to regulate AI, Senate Majority Leader Chuck Schumer (D-N.Y.) hosted an AI forum Sept. 13. The crowd he and other senators heard from was largely made up of tech executives, including Elon Musk, Sam Altman and Bill Gates.

IBM CEO Arvind Krishna was also present at the meeting. 

Senate Majority Leader Chuck Schumer (D-N.Y.) hosted Elon Musk, Sam Altman and Bill Gates at his first AI forum Sept. 13. (The Washington Post/Getty Images)

"Everyone ought to have a seat at the table in defining what that regulatory landscape looks like, not just a handful of the largest technology companies," Montgomery said, noting that one of IBM's goals is to foster more open conversation around AI considering the socio-technical impact that AI models could have.

"You can't just have the rules being written by a handful of companies that are the most powerful in the world right now," she said. "We've been very concerned that that's going to influence the regulatory environment in some way that isn't going to be helpful in terms of innovation." 

Her concerns are shared by several prominent AI researchers and ethicists whom TheStreet spoke to earlier in the month. 

And while Altman and Musk have stoked fears of the dangers of superintelligent AI, fears that experts have dismissed as pseudoscience, Montgomery thinks the regulatory effort to rein in AI ought to focus on actionable harms, rather than unlikely hypotheticals. 

"I think there are far more important near-term issues that we should be focusing on from a regulatory perspective," she said. "I think that by thinking about AI from the perspective of a technology that's going to lift all boats if it's deployed in a responsible way, we should be thinking about things like how to ensure it is transparent, it is explainable, we are addressing safety concerns."

Related: ChatGPT update highlights the dangers of AI hype

With AI, 'jobs will get better'

Prominent among regulatory concerns are fears of the economic impact these models could have on global society. Dr. Srinivas Mukkamala, an AI authority, told TheStreet in July that AI could displace untold millions of workers, dramatically deepening inequality and broadening the gap between skilled and unskilled workers. 

IBM's own CEO said in May that he could see a significant percentage of "back-office" jobs at IBM (around 7,800 jobs) replaced by AI over the next five years. 

"We absolutely believe that every job is going to change and some jobs will be eliminated," Montgomery said. "But the opportunity for new work and for better jobs is much greater than the downsides associated with productivity improvements. Jobs will get better."

IBM CEO Arvind Krishna has said that he could see 30% of back-office jobs at IBM replaced by AI in the next few years. (Nathan Howard/Getty Images)

IBM committed Sept. 18 to training two million workers in AI by the end of 2026, specifically targeting underrepresented communities. As part of this effort, the firm is ramping up its collaborations with global universities and is enhancing its own course offerings. 

"We need a workforce that's going to be ready to utilize these technologies and not just to develop them for clients but to use them in their day jobs," Montgomery said. "AI can make us all better at what we do, but we need to ensure people are skilled to understand it."

Related: US Expert Warns of One Overlooked AI Risk

A path forward

Some experts have said that the path toward a positive AI future begins with strong regulation, which will create market demand for responsible AI innovations, which in turn will help instigate a cultural shift toward a world integrated with safe, responsible AI. 

That cultural shift, Montgomery said, is already beginning to happen. 

"Everybody's going to learn how to work with this technology and use it to their benefit over the course of the next five, 10 (years) and future generations," she said. "So it has to be very much holistic across every domain."

To ensure that cultural shift happens, she said, companies have to go beyond the science of making more and more powerful AI. They have to focus on and attempt to resolve the many ethical issues that stem from it; IBM formed a tech ethics lab with the University of Notre Dame in 2019 to accomplish this goal. 

Having the right ethical thoughts and intentions at the beginning, Brian Green, an ethicist with the Institute for Technology, Ethics, & Culture, told TheStreet Sept. 25, is essential to the deployment of safe and responsible AI. 

"I'm hoping when you look at things like drug discovery, climate change, major societal issues, that AI projects can be directed towards helping to protect the planet, helping to lift all tides," Montgomery said. 

Related: The laws to regulate AI are already in place, expert argues

