The Street
Ellen Chang

Microsoft Reportedly Gets Rid of AI Ethics Team

The team that ensured Microsoft's (MSFT) AI products shipped with protections against social harms was cut in the company's recent round of layoffs.

The AI ethics team was among the 10,000 employees let go recently as the tech company slashed its workforce amid a slowdown in advertising spending and fears of a recession, according to an article in Platformer.

Risk increases when the OpenAI technology embedded in Microsoft's products is used, and the ethics and society team's job was to lower that risk.

The team had created a "responsible innovation toolkit," stating that "these technologies have potential to injure people, undermine our democracies, and even erode human rights — and they’re growing in complexity, power, and ubiquity."

'Safely and Responsibly'

The "toolkit" was designed to help Microsoft's engineers anticipate any negative effects its AI could create.

"The Ethics and Society team played a key role at the beginning of our responsible AI journey, incubating the culture of responsible innovation that Microsoft’s leadership is committed to," a Microsoft spokesperson said in an emailed statement. "That initial work helped to spur the interdisciplinary way in which we work across research, policy, and engineering across Microsoft."

"We have hundreds of people working on these issues across the company, including net new, dedicated responsible AI teams that have since been established and grown significantly."

During the past six years, the company has prioritized growing its Office of Responsible AI, which is still functioning.

Microsoft has two other responsible AI working groups that remain active: the Aether Committee and Responsible AI Strategy in Engineering.

OpenAI launched a new version of ChatGPT built on an advanced model called GPT-4, which is being used in Microsoft's Bing search engine, according to a Reuters article.

Self-Regulation Is Not Sufficient

Emily Bender, a University of Washington professor specializing in computational linguistics and ethical issues in natural-language processing, said Microsoft's decision was "very telling that when push comes to shove, despite having attracted some very talented, thoughtful, proactive, researchers, the tech cos decide they're better off without ethics/responsible AI teams."

She also said, via a tweet, that "self-regulation was never going to be sufficient, but I believe that internal teams working in concert with external regulation could have been a really beneficial combination."

Researchers must decline to participate in hype about advances in AI while "advocating for regulation," Bender tweeted.

Last November, OpenAI launched ChatGPT, a chatbot with which humans can converse in natural language. It has become the buzz tool in tech circles.

The Redmond, Washington-based company invested another $10 billion in OpenAI, the company that created ChatGPT. 

The investment valued OpenAI at around $29 billion. 
