Fortune
David Meyer

A.I.’s exploitation of human workers could come back to bite it

(Credit: Roger Ressmeyer—CORBIS/VCG/Getty Images)

Despite all those rapidly spreading fears about artificial intelligence making people redundant and potentially extinct, the technology’s developers remain deeply reliant on human labor—and are apparently not very good at getting the best out of their hidden workers.

That’s what I’ve taken away from a couple of interesting articles published in the past day. The first piece is a collaboration between New York Magazine and The Verge, in which writer Josh Dzieza looks into the growing ranks of A.I. annotators—people who have the tedious, poorly paid, and sometimes baffling task of sorting and labeling imagery in photos and videos, so the A.I. knows what’s what.

Dzieza himself signed up to annotate stuff for Scale AI, which sells data to OpenAI among others. He found himself having to grapple with 43 pages of very specific directives—“DO label leggings but do NOT label tights...label costumes but do NOT label armor”—which show how “the act of simplifying reality for a machine results in a great deal of complexity for the human.” Those performing the labor in Kenya are paid as little as a dollar an hour, which isn’t exactly likely to elicit the sort of dedication needed to correctly recall and apply such complex instructions.

As my colleague Jeremy Kahn noted in yesterday’s Eye on A.I. newsletter, many enterprising contractors doing this sort of labeling through Amazon’s Mechanical Turk platform have started using A.I. to do their work for them. It’s an understandable hack of what sounds like an incredibly soulless job, but it’s likely to end up worsening the quality of the resulting data.

Meanwhile, The Register published an interview with one of several former employees of a data outfit called Appen who say they were illegally fired for pushing back over their working conditions. Ed Stackhouse, who wrote to Congress about their concerns before his firing, claims that contractors hired to assess the accuracy of Google Bard responses have to do so at excessive speed.

“You can be given just two minutes for something that would actually take 15 minutes to verify,” Stackhouse told the British tech outlet, adding that this hasty feedback could lead Bard to give people bad advice about prescriptions, or to misrepresent historical facts: “The biggest danger is that they can mislead and sound so good that people will be convinced that A.I. is correct.” Google told The Register that Appen was responsible for its working conditions, but did not address the concerns about harm. Fortune asked Appen for comment but had received no response by the time of publication.

It’s not exactly news that the tech industry can be exploitative and prone to cutting corners, but even if one brushes past the moral implications of such practices, there are unwelcome consequences for the end products themselves and the people who use them. Unless the A.I. sector is willing and able to clean up its act, it’s asking for trouble.

More news below.

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

David Meyer

Data Sheet’s daily news section was written and curated by Andrea Guzman.
