
AI is well on its way to becoming intertwined with almost every facet of work, from recruiting job candidates to evaluating employee performance. But a new bill out of California would put some guardrails around exactly how much power artificial intelligence is allowed to have over human beings in the office.
The “No Robo Bosses” Act, also known as SB 7, was introduced in March by State Sen. Jerry McNerney, with California Assembly members Sade Elhawary and Isaac Bryan as coauthors. The bill is endorsed by organized labor, including the California Federation of Labor Unions, an affiliate of the AFL-CIO.
The bill requires human oversight of AI’s involvement in promoting, demoting, firing, or disciplining workers. It would also block LLMs from accessing personal information about a worker, including their immigration status, religious beliefs, or health care history. And it would bar the use of AI to predict a worker’s future behavior in ways that result in adverse actions against them.
“The Act will be a first-of-its-kind as it places significant restrictions on the use of AI in the workplace,” Luana de Mello, assistant general counsel and HR consultant with Engage PEO, tells Fortune. “This will require businesses to take a closer look at their AI systems, including regular audits, and to ensure they are using these systems transparently and in compliance with state regulations.”
California isn’t the only state moving to regulate AI in the workplace; more than 30 states are actively considering their own legislation, according to law firm Fisher Phillips. In 2024, Colorado became the first state to directly address algorithmic discrimination at work. Its law, which takes effect in 2026, covers “high-risk” AI systems that make, or play a role in, major decisions about employees. Employers there will also have to conduct an annual impact assessment, notify employees when a high-risk AI system is being used, and give workers the chance to appeal if the AI contributed to an adverse decision against them.
But looming over the state legislation is a long-discussed 10-year federal moratorium on state laws regulating AI. A version of that moratorium was included in President Trump’s “big, beautiful bill,” the budget reconciliation package dominating political headlines right now, but it was stripped out earlier this week, leaving the future of state AI legislation up for grabs.
“The biggest concern is the conflicting directions that states, and California in particular, are going with respect to AI regulation,” says Danielle Ochs, an attorney and shareholder at Ogletree Deakins, based in San Francisco.
Some lawyers warn that the California bill could impose significant costs on businesses. Those include auditing AI tools to determine whether they are covered by the bill, working with vendors to comply, revising policies, notifying employees, and maintaining records, according to Jason Murtagh, a shareholder and attorney at Buchanan Ingersoll & Rooney in San Diego. He also expects to see an increase in litigation from employees and job applicants.
“Many businesses may seek to slow the rollout or eliminate the use of AI and automated systems, at least until some litigation over the new bill has occurred,” he tells Fortune.
Spencer Hamer, an attorney and shareholder at FBFK Law based in Irvine, believes the burden of litigating these laws will fall particularly heavily on smaller businesses. “The large companies will at least have the resources to try to get legal compliance,” he says. “But for small employers, they’re going to have to find out often through litigation what they did or didn’t do right.”
Angelina Evans, an attorney at Seyfarth Shaw’s Los Angeles office, however, says she has been advising clients for years to create an AI governance team that checks whether the data these tools provide is consistent with what they expect, and investigates any discrepancies. This bill, she says, seems to be an extension of that line of thinking.
“The purpose of these laws really is to provide transparency,” says Evans. “And [protect] the people that might have their rights violated, and they’re not even aware of it.”