The Conversation
Thomas Stuart, Lecturer in Communications, Gustavson School of Business, University of Victoria

DOGE’s AI surveillance risks silencing whistleblowers and weakening democracy

The United States Department of Government Efficiency (DOGE) is reportedly using artificial intelligence to surveil federal agency communications for anti-Donald Trump and anti-Elon Musk sentiment.

AI tools now automate firings and assess U.S. federal employees’ sentiment and alignment with the administration’s “mission.” Musk, who has been appointed a “special government employee” by the U.S. president and leads DOGE, has framed these moves as an attempt to cut waste and increase efficiency.

At least one agency, the Environmental Protection Agency (EPA), has reportedly warned staff to watch what they say, type or do online.

The move has been largely overshadowed by tariff debates and constitutional concerns. But research on AI and governance suggests surveillance may erode the transparency that defines public institutions.

Now, with Musk signalling he may scale back his involvement with DOGE, questions remain about how the system will operate in his absence — and whether anyone will be tasked with dismantling it.

Disruption replaces due process

Musk has presented DOGE as a lean, tech-driven solution to government bloat — a message he has repeated in interviews and on social media. Artificial intelligence, he argues, can cut red tape, trim costs and optimize operations.

However, within federal agencies, AI has been used less to support public servants than to evaluate them — and in some cases, to eliminate them.

Since DOGE assumed control over key functions within the Office of Personnel Management in January, hundreds of federal employees have been dismissed without formal explanation. DOGE also restricted access to cloud systems and sidelined career officials.

Image: The official account of the Department of Government Efficiency (DOGE) seen in the X app on a smartphone. DOGE was established by Trump through an executive order on Jan. 20, 2025, and tasked with cutting federal spending. (Shutterstock)

Concerns over data security soon followed. In March, a federal judge barred DOGE from accessing Treasury systems, citing a “chaotic and haphazard” approach that posed a “realistic danger” of exposing sensitive financial information.

Internally, DOGE operates through tools more familiar to startups than government agencies. Staff use disappearing messages via the Signal messenger app and draft documents in Google Docs rather than approved federal platforms.

Grok, a generative AI chatbot launched by Musk in 2023, has been integrated across departments, though its tasks remain unclear.

How DOGE’s AI targets workers

Earlier this year, thousands of federal employees received an email from the Office of Personnel Management asking them to provide five bullet points listing what they accomplished that week. “Failure to respond,” Musk warned on X, “will be taken as a resignation.”

The message triggered uncertainty across departments. Without clear legal guidance, many workers were left guessing whether silence would mean termination. The Department of Justice and several intelligence agencies warned staff not to respond.


Read more: Musk’s ruthless approach to efficiency is not translating well to the U.S. government


Others, like the U.S. Department of Health and Human Services (HHS) and the Department of Transportation, instructed staff to comply with DOGE’s requests. HHS later warned responses could “be read by malign foreign actors.” The EPA distributed template responses to help staff navigate the demand.

The following week, the Office of Personnel Management clarified participation was voluntary. By then, responses had already been processed.

DOGE reportedly planned to feed the responses into a large language model to determine whether an employee was mission-critical. Musk later denied this, describing the exercise as a test “to see if the employee had a pulse.”
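
To make that reported plan concrete, here is a minimal sketch of what such a pipeline could look like, assuming a generic hosted model behind a stubbed call_llm function. The prompt wording, labels and function names are invented for illustration and are not drawn from any reporting on DOGE’s actual tooling.

```python
# Purely illustrative sketch: nothing in the reporting describes how DOGE's
# system was actually built. It shows the general shape of routing weekly
# bullet-point responses through a large language model for a single verdict.

PROMPT_TEMPLATE = (
    "You are reviewing a federal employee's weekly accomplishments.\n"
    "Bullet points:\n{bullets}\n\n"
    "Answer with exactly one word, CRITICAL or NON-CRITICAL, indicating "
    "whether this work appears mission-critical."
)

def call_llm(prompt: str) -> str:
    # Stand-in for a request to any hosted large language model; a real
    # pipeline would send `prompt` to a vendor API and return its reply.
    return "CRITICAL"

def classify_response(bullets: list[str]) -> str:
    prompt = PROMPT_TEMPLATE.format(
        bullets="\n".join(f"- {b}" for b in bullets)
    )
    label = call_llm(prompt).strip().upper()
    # The model's entire judgment of a week of work collapses into one label.
    return label if label in {"CRITICAL", "NON-CRITICAL"} else "UNREVIEWED"

print(classify_response(["Reviewed 40 grant applications", "Briefed regional office"]))
```

Even in this toy form, the output is a single word per employee, which is what makes the approach attractive for mass processing and why it cannot capture the substance of the work being judged.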

DOGE’s algorithms judge allegiance

According to reports, DOGE’s AI tools have now been deployed across agencies to monitor political sentiment of workers. There is no indication that these systems otherwise assess employee competence or efficacy.

Trump administration officials reportedly said some government employees have been informed that DOGE is examining staff for signs of perceived disloyalty to both the Trump administration and Musk himself.

When AI is used in this way — without transparency or clear performance frameworks — it optimizes for compliance rather than capability.

AI designed to detect dissent offers little support for the work of public service. Rather than recognizing expertise or ethical judgment, these tools reduce complex decision-making to surface-level signs of loyalty.

Effective collaboration between humans and AI depends on clear boundaries. AI might complement the public service by identifying patterns in data, for example, but humans must retain authority over context and judgment. When AI polices allegiance, those boundaries collapse, sidelining human skill and integrity.
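
As a rough illustration of that boundary, the sketch below keeps a model in a pattern-flagging role and records any decision against a named human reviewer. The data structures, thresholds and field names are all assumptions made for the example, not a description of any real system.

```python
# Illustrative only: a minimal human-in-the-loop boundary with invented names.
# A model may surface anomalies, but the decision is recorded against a person.

from dataclasses import dataclass

@dataclass
class Flag:
    employee_id: str
    reason: str    # e.g. "processing backlog doubled in two weeks"
    score: float   # model confidence between 0.0 and 1.0

def triage(flags: list[Flag], threshold: float = 0.8) -> list[Flag]:
    """Machine-side step: narrow attention to high-confidence patterns."""
    return [f for f in flags if f.score >= threshold]

def review(flag: Flag, reviewer: str, decision: str) -> dict:
    """Human-side step: context, judgment and accountability stay with
    the named reviewer, not the model."""
    return {
        "employee_id": flag.employee_id,
        "flag_reason": flag.reason,
        "decision": decision,
        "decided_by": reviewer,
    }

flags = triage([Flag("E-102", "grant reviews behind schedule", 0.91)])
for f in flags:
    print(review(f, reviewer="unit manager", decision="schedule a follow-up conversation"))
```

The record that survives shows the division of labour: the model only narrows attention, while the judgment, and the accountability for it, stays attached to a person.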

AI surveillance rewrites workplace behaviour

The inherent limitations of large language models amplify these risks. These models cannot reliably read nuance, navigate ethical grey areas or understand intent. Assigning surveillance or employee evaluations to these systems invites errors.

Worse, such blunt tools force civil servants into self-censorship to avoid misinterpretation. Public service shifts from informed expertise to performative alignment.

For employees, the consequences extend beyond flawed assessments. AI surveillance deployed through tools like Grok and Signal creates uncertainty about how performance is measured and by whom.

As surveillance systems degrade psychological safety, employees disengage and become discouraged. Far from enhancing productivity, covert monitoring erodes trust in both management and mission.

This atmosphere weakens accountability. Whistleblowing often reflects loyalty to institutional values rather than defiance. By reframing personal beliefs and integrity as disloyalty, DOGE risks silencing the mechanisms that safeguard transparency.

AI surveillance becomes institutional

Musk recently announced his involvement with DOGE “will drop significantly,” likely beginning in May. The move is attributed in part to pressure from Republicans urging Trump to distance himself from Musk, as well as pressure from Tesla investors.

Despite his expected departure, around 100 DOGE employees — and the AI frameworks they manage — will remain embedded across federal departments. Musk’s departure may shift headlines, but it will leave structural risks embedded within federal operations.

Once governments adopt new surveillance tools, they rarely dismantle them, regardless of whether their architect stays to oversee them. With no clear formal oversight beyond presidential discretion, the surveillance system is likely to outlast Musk’s tenure.

Employees monitored for political conformity are less likely to raise concerns, report misconduct or challenge flawed directives.

As human resource protocols are bypassed and oversight is diminished, the balance could shift from policy grounded in principle to regulations grounded in algorithms. Governance risks giving way to control, which could weaken the political neutrality of the civil service.


Thomas Stuart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
