TechRadar
Craig Hale

Bad news employee — most executives admit using AI makes them value human workers less

  • Four in five execs say they are less likely to value human employees after using AI
  • AI still requires human oversight, and many struggle to fully trust it
  • Poor and even negative ROI continues to plague many businesses

A new study by Globalization Partners has revealed more than four in five (82%) company execs say they are less likely to value human employees after using AI tools, positioning human workers as secondary assets after more capable systems.

This sentiment contrasts with the current state of affairs: 60% of the 2,850 senior execs surveyed agreed that humans still lead work operations, with AI merely serving as a productivity booster.

The difference could imply that, while humans remain integral today, managers may place less of an emphasis on the human workforce in the future as AI gets more work done autonomously.

AI is impacting how much top managers value their human workers

The shift likely positions humans as AI managers, rather than administrative workers, with two in three (69%) now spending more time than ever before monitoring and reviewing AI-generated work. A lack of trust still lingers, too, with only 23% having total confidence in AI's accuracy and 61% worried about legal accuracy when using AI on sensitive documents.

However, while some execs see AI as a human replacer, many others are still dissatisfied with their returns. Three-quarters (73%) say ROI has fallen short of expectations, with 16% even reporting negative ROI. As a result, around seven in 10 execs say they're prepared to cut AI budgets this year if goals are not met.

Separately, Gartner VP Analyst Padraig Byrne explained, "AI is everywhere, but most organizations are still figuring out how to monitor and trust these systems."

Giving a sneak peek into where companies might be getting it wrong, the research firm implied that those building AI agents without strong semantic and contextual data foundations are most likely to see hallucinations, unreliable outputs and biases.

Together, the two reports indicate that while execs are increasingly seeing AI as unavoidable, many are still struggling to trust it.

Looking ahead, Gartner calls for the implementation of model monitoring policies to provide quality metrics, along with an increased focus on infrastructure to handle high-volume model telemetry.
