Austin Wood

Indie game publisher CEO proposes Black Mirror-grade AI detection for employee burnout, says it's all "hypothetical" amid backlash

Alex Nichiporchik, CEO of Hello Neighbor publisher tinyBuild, recently proposed using AI models to help find "problematic team members" and identify employees at risk of burnout, and while the aim was purportedly to help those employees, boy did everyone dislike that. 

WhyNow Gaming reported on a presentation titled "AI in Gamedev: Is My Job Safe?" that Nichiporchik gave at the Develop: Brighton conference on July 12. The summary blurb says the presentation would "go in-depth into how the publisher adopted AI in daily practices to exponentially increase efficiency. From a 10x increase in key art production efficiency to using AI to gather data and verify community sentiment, we're in the industrial revolution of the digital age. Those who embrace it already have an advantage."

In addition to covering how AI can be used as a tool in game dev disciplines like art and code, the session signup notes that "there will be a human resource reallocation that'll change and optimize how teams work." It's mostly this element that ignited such a fire, with discussion centered on the idea of using AI to single out employees rather than, I don't know, checking in on how they're doing. 

Slides from Nichiporchik's presentation propose an "I, Me analysis" measuring how many times employees refer to themselves as individuals rather than as part of the team (with plurals like "we"). This could be done by, for example, feeding Slack messages or automatic transcripts of Google Meet and Zoom calls through an AI model like ChatGPT after trimming them of any identifying details. Nichiporchik reportedly joked about copyrighting this method during his talk, "because to my knowledge, no one has invented this." 
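
For a sense of how trivially such an analysis can be mocked up, here's a minimal sketch of the pronoun-counting idea in Python. To be clear, this is purely illustrative: the word lists, messages, and function are invented for this example, and nothing in the presentation suggests tinyBuild actually built anything like it.

```python
import re
from collections import Counter

# Hypothetical word lists for the "I, Me analysis" the slides describe:
# singular self-references vs. team-oriented plurals.
SINGULAR = {"i", "me", "my", "mine", "myself"}
PLURAL = {"we", "us", "our", "ours", "ourselves"}

def pronoun_ratio(messages):
    """Return the share of first-person pronouns that are singular."""
    counts = Counter()
    for message in messages:
        for word in re.findall(r"[a-z']+", message.lower()):
            if word in SINGULAR:
                counts["singular"] += 1
            elif word in PLURAL:
                counts["plural"] += 1
    total = counts["singular"] + counts["plural"]
    return counts["singular"] / total if total else 0.0

# Fabricated messages standing in for an anonymized chat export.
messages = [
    "I fixed the build and my branch is green",
    "we shipped the patch together",
]
print(f"{pronoun_ratio(messages):.0%} singular")  # -> 67% singular (2 of 3)
```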

He also discussed searching for employees who take up a "disproportional amount of meeting time talking compared to colleagues" – people he dubbed "time vampires," which is coincidentally the same term that I use for video games with a lot of content. Finally, the talk folded in 360-degree feedback from human resources, which would involve asking employees to list coworkers with whom they've had positive interactions, and then looking into people who don't come up on many (or any) of these lists. 
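
That 360-degree feedback step is just as easy to caricature in code. Here's a similarly hypothetical sketch, with invented names and no claim about how (or whether) tinyBuild does any of this:

```python
from collections import Counter

# Invented survey results: each person lists coworkers they've had
# positive interactions with. All names are made up for illustration.
feedback = {
    "avery": ["blake", "casey"],
    "blake": ["avery", "casey"],
    "casey": ["avery", "blake"],
    "drew": ["avery"],
}

# Tally how often each person appears on someone else's list,
# then flag anyone who was never mentioned.
mentions = Counter(name for listed in feedback.values() for name in listed)
for person in feedback:
    if mentions[person] == 0:
        print(f"{person} was named on no one's list")  # prints: drew ...
```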

By combining this data, the presentation summarizes, "you may just be able to prevent a burnout – from both the person, and their team." Nichiporchik also added that you must "not ever make your team members feel like they are being spied on," and the gap between this warning and the methods proposed did not go down well with game developers. 

If you're thinking this sounds like a dystopian nightmare straight out of Black Mirror, you may be perplexed to hear that the presentation even said: "Welcome to Black Mirror." The assertion that "toxic people are usually the ones about to burn out," with a slide in the presentation equating burnout with toxicity, has also caught some flak. To put it mildly, people don't like it. To put it less mildly: 

"Just a thought, maybe there are less dystopian ways to figure out if your employees are fkn miserable," said Thirsty Suitors narrative designer Meghna Jayanth. 

"Someone who is burnt out is not 'usually toxic.' People can certainly be burnt out and toxic, but to imply that burnout is most often like a failure of character is bizarre," added Dora Breckinridge, director at Armor Games. 

"If you have to repeatedly qualify that you know how dystopian and horrifying your employee monitoring is, you might be the fucking problem my guy," says WB Games writer Mitch Dyer. 

Dan Ahern, QA lead of Radical Forge, argued "you don't need AI to find 'problematic employees' - you can find them simply by looking at who's trying to do horrendous, torment-nexus level bullshit like this. Really disappointing that Develop would allow such an anti-game developer talk to even be hosted there." 

This morning, July 14, Nichiporchik took to Twitter to call out the original story, which he described as coming from "a place of hate," saying his talk was "taken out of context" and that key slides were "conveniently" omitted. WhyNow Gaming has also updated its story with a statement from the CEO clarifying that:

"The HR part of my presentation was a hypothetical, hence the Black Mirror reference. I could’ve made it more clear for when viewing out of context. We do not monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some get into creepy territory."

"This is not about identifying problematic employees, it's about giving HR tools to identify and prevent burnout of people," Nichiporchik reiterated on Twitter. 

"I could've emphasized this much more: We do not use AI tools for HR, this part of the presentation was hypothetical," he said in a reply. 

One Twitter user said they "despise the idea of having my Slack messages collected and fed into any kind of program," and to the confusion of some, Nichiporchik shot back "100% agreed" with no further explanation. 

Asked if tinyBuild employees gave permission to have their messages analyzed this way, Nichiporchik said nobody needed to give permission because "they weren't fed anywhere" – again, I gather, because this was all "hypothetical." 

"The ethics of such processes are most definitely questionable, and this question was asked during Q&A after the presentation," he said in another tweet. "This is why I say 'very Black Mirror territory.' This part was HYPOTHETICAL. What you could do to prevent burnout." 

Hypothetical or not, the very idea has rubbed a lot of people the wrong way, especially coming from an executive at the top of a sizable indie publisher and pitched at an industry conference. I've reached out to tinyBuild directly for an updated comment on the proposal, its stance on this sort of monitoring and AI-led employee intervention, and whether it has considered implementing such a system itself. 

A Persona 5 VA was recently driven off Twitter after criticizing a video cloning her voice with AI, just days after AI voices in NSFW Skyrim mods sparked a separate discussion over the same sort of cloning technology. 
