The Guardian - UK
John Harris

‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases

‘AI impacts people all over the world, and they don’t get to have a say on how they should shape it’ … Timnit Gebru. Photograph: Winni Wintermeyer/The Guardian

“It feels like a gold rush,” says Timnit Gebru. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”

Gebru is talking about her specialised field: artificial intelligence. On the day we speak via a video call, she is in Kigali, Rwanda, preparing to host a workshop and chair a panel at an international conference on AI. It will address the huge growth in AI’s capabilities, as well as something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power.

This gathering, the clunkily titled International Conference on Learning Representations, marks the first time people in the field have come together in an African country – which makes a powerful point about big tech’s neglect of the global south. When Gebru talks about the way that AI “impacts people all over the world and they don’t get to have a say on how they should shape it”, the issue is thrown into even sharper relief by her backstory.

In her teens, Gebru was a refugee from the war between Ethiopia, where she grew up, and Eritrea, where her parents were born. After a year in Ireland, she made it to the outskirts of Boston, Massachusetts, and from there to Stanford University in northern California, which opened the way to a career at the cutting edge of the computing industry: Apple, then Microsoft, followed by Google. But in late 2020, her work at Google came to a sudden end.

As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images. The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.

In response, senior managers at Google demanded that Gebru either withdraw the paper, or take her name and those of her colleagues off it. This triggered a run of events that led to her departure. Google says she resigned; Gebru insists that she was fired.

What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”

Gebru speaking at the TechCrunch Disrupt conference in 2018. Photograph: Kimberly White/Getty Images for TechCrunch

Gebru, who is 40, sometimes speaks dizzyingly quickly, as if the rich details of her life might outrun the hour or so we have to talk. She tends to use the precise, measured vocabulary of a tech insider, leavened with a sense of the absurd that is focused on one particularly howling irony: the fact that an industry brimming with people who espouse liberal, self-consciously progressive opinions so often seems to push the world in the opposite direction.

One of the subjects she returns to repeatedly is racism, including experiences of prejudice in the US education system and Silicon Valley. While she was at high school in Massachusetts, she says, the prejudice she encountered was sometimes blunt (one teacher said: “I’ve met so many people like you who think that they can just come here from other countries and take the hardest classes”) and sometimes passive-aggressive: despite high grades in physics, her request to study the subject further was met with concerns that she might find it too difficult.

“The thing that was very confusing to me as an immigrant was that liberal type of racism,” she says. “People who sound like they really care about you, but they’d be like: ‘Don’t you think it’s going to be hard for you?’ It took me a while to really figure out what was going on.”

Later on came a watershed experience of even more brazen prejudice, when she and a friend – a black woman – were attacked in a bar. “That was the scariest encounter I’ve ever had in the US,” she says. “It was in San Francisco – again, another liberal place. I was being attacked by a bunch of guys and nobody helped me at all. That was the scariest thing to see: being strangled and people just walking by and looking at you.”

She called the police. “And that was worse than not calling them, because first they accused me of lying a number of times, and kept on telling me to calm down. And then they put handcuffs on my friend, who had just been attacked.” Her friend was also detained in a police cell.

At Stanford, although she was often condescendingly asked by some of her white peers if she had got in thanks to an affirmative action programme, her undergraduate years were spent in an environment where senior people at least “talked about diversity a lot, and they had different people from different places”. But after working as an audio engineer for Apple between 2005 and 2007, she went back to Stanford to study for a PhD and had very different experiences.

Her life, she says, became all about “going to an office every day with the same bunch of people – it’s kind of like work. And there was nobody who looked like me at all. It was just shocking.”

Gebru … ‘I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.’ Photograph: Winni Wintermeyer/The Guardian

Gebru began to specialise in cutting-edge AI, pioneering a system that showed how data about particular neighbourhoods’ patterns of car ownership highlighted differences bound up with ethnicity, crime figures, voting behaviour and income levels. In retrospect, this kind of work might look like the bedrock of techniques that could blur into automated surveillance and law enforcement, but Gebru admits that “none of those bells went off in my head … that connection of issues of technology with diversity and oppression came later”.

Soon enough, though, she began to think deeply about how big tech’s innovations often embodied the same inequalities evident in its offices, labs and social activities. In 2015, Google had to apologise when the AI systems that served its Photos app mistakenly identified a black couple as gorillas. The year after, the investigative journalism organisation ProPublica found that software used across the US to assess prison convicts’ chances of reoffending was heavily biased against black people. Meanwhile, Gebru was becoming even more aware of aspects of the tech industry’s culture that lay behind such stories.

Around this time, she attended a big AI conference in Montreal where, at a Google party, a group of white men openly harassed her. “One of them kissed me, one of them took a picture. And I was kind of frozen: I didn’t really do anything. They were having a party at an academic conference with limitless drinks at a bar and they weren’t even making it clear that this was a professional event. Obviously, you should never harass women – or anybody – like that. But that was rampant at these conferences.” The organisers of the conference say that their code of conduct has since been “elaborated”; they now have “a new one-stop contact point for concerns and complaints, which is monitored closely”.

The next year, Gebru made a point of counting other black attenders at the same event. She found that, among 8,500 delegates, there were only six black people. In response, she put up a Facebook post that now seems prescient: “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”

In that context, it might seem surprising that, after a year spent working in Microsoft’s fairness, accountability, transparency and ethics in AI lab, Gebru took a job at Google. In 2018, thanks to Margaret Mitchell, a recently hired specialist in algorithmic bias, she was recruited to co-lead a team dedicated to the ethics of AI. “I was full of trepidation,” she says. “But I thought: ‘Well, Margaret Mitchell is here – we can work together. Who else can I work with?’ But that was how I went into it: I was like: ‘I wonder how long I can last here.’”

“It was a difficult decision,” she says. “Because, by the time I was going to Google, I had heard from several women about sexual harassment, and other kinds of harassment, and they had actually said: ‘Don’t do it.’”

When Gebru arrived, Google employees were loudly opposing the company’s role in Project Maven, which used AI to analyse surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, staff took part in a huge walkout over claims of systemic racism, sexual harassment and gender inequality. Gebru says she was aware of “a lot of tolerance of harassment and all sorts of toxic behaviour”.

Google employees in New York stage a walkout in November 2018. Photograph: Bryan R Smith/AFP/Getty Images

In its quest to highlight some of the moral and political questions surrounding AI, her team hired Google’s first social scientist. She and her colleagues prided themselves on how diverse their small operation was, as well as the things they brought to the company’s attention, which included issues to do with Google’s ownership of YouTube. A colleague from Morocco raised the alarm about a popular YouTube channel in that country called Chouf TV, “which was basically operated by the government’s intelligence arm and they were using it to harass journalists and dissidents. YouTube had done nothing about it.” (Google says that it “would need to review the content to understand whether it violates our policies. But, in general, our harassment policies strictly prohibit content that threatens individuals, targets someone with prolonged or malicious insults based on intrinsic attributes, or reveals someone’s personally identifiable information.”)

Then, in 2020, Gebru, Mitchell and two colleagues wrote the paper that would lead to Gebru’s departure. It was titled On the Dangers of Stochastic Parrots. Its key contention was about AI centred on so-called large language models: the kind of systems – such as OpenAI’s ChatGPT and Google’s newly launched PaLM 2 – that, crudely speaking, feast on vast amounts of data to perform sophisticated tasks and generate content.

These sources are usually scraped from the world wide web and inevitably include material usually subject to copyright (if an AI system can produce prose in the style of a particular writer, for example, that is because it has absorbed much of the writer’s work). But Gebru and her co-authors had an even graver concern: that trawling the online world risks reproducing its worst aspects, from hate speech to points of view that exclude marginalised people and places. “In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further reifying inequality,” they wrote.

When the paper was submitted for internal review, Gebru was contacted by one of Google’s vice-presidents. At first, she says, non-specific objections were expressed, such as that she and her colleagues had been too “negative” about AI. Then, Google asked Gebru either to withdraw the paper, or remove her and her colleagues’ names from it.

She says she told the company that she would not retract it and would remove the authors’ names only if Google specified its objections. If this didn’t happen, she said, she would resign. She also sent a number of emails to women working in Google’s AI division, saying that the company was “silencing marginalised voices”.

Then, in December 2020, while she was on holiday, one of her closest colleagues texted her to ask if an email they had seen saying she had left the company was correct. Subsequent accounts said that Google had cited “behaviour that is inconsistent with the expectations of a Google manager”.

How, I wonder, did she feel? “I was not in thinking mode. I was just in action mode, like: ‘I need a lawyer and I need to get my story out; I wonder what they’re planning; I wonder what they’re going to say about me.’” She pauses. “But I was fired. In the middle of my vacation, on a road trip to visit my mom, in the middle of a pandemic.”

Google HQ in Mountain View, California. Photograph: Anadolu Agency/Getty Images

In response to what Gebru says about workplace harassment and toxic behaviour at Google, her experiences at the party in Montreal and the nature of her departure, the company’s press office emails me a set of “background points”.

“We are committed to building a safe, inclusive and respectful workplace – and we take misconduct very seriously,” it says. “We have strict policies against harassment and discrimination, thoroughly investigate all concerns reported and take firm actions against substantiated allegations. We also have several ways for our workforce to report concerns, including anonymously.”

Five years ago, it goes on, the company overhauled “the way we handle and investigate employee concerns, introducing new care programs for employees who report concerns and making arbitration optional for Google employees”.

On questions about AI systems using copyrighted material, a spokesperson says that Google will “innovate in this space responsibly, ethically, and legally”, and plans to “continue our collaboration and discussions with publishers and the ecosystem to find ways for this new technology to help enhance their work and benefit the entire web ecosystem”.

After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”

The institute’s fellows, she tells me, include a former Amazon delivery driver, plus people with experience of the monotonous and sometimes traumatic job of manually labelling online content – including illegal and toxic material – to train AI systems. Much of this work happens in developing countries. “There’s a lot of exploitation in the field of AI, and we want to make that visible so that people know what’s wrong,” she says. “But also, AI is not magic. There are a lot of people involved: humans.”

Running alongside this is a quest to push beyond the tendency of the tech industry and the media to fixate on worries about AI taking over the planet and wiping out humanity, while questions about what the technology actually does, and whom it benefits and harms, go unheard.

“That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can abdicate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”

How does she feel squaring up to her old employers in Silicon Valley? “I don’t know if we’ll change them or not,” she says. “We’re never going to get, like, a quadrillion dollars to do what we’re doing. I just feel like we have to do what we can. Maybe, if enough people do small things and get organised, things will change. That’s my hope.”

• This article was amended on 23 May 2023 to correct a misquote. Gebru spoke of the abdication, not aggregation, of responsibility.

