The Guardian - UK
Business
Emma Sheppard

Why are virtual assistants always female? Gender bias in AI must be remedied

Alexa, Siri and Google Home are female by default – a defining problem in robotics. Photograph: Maskot/Getty

Asked whether artificial intelligence (AI) has a gender problem, Ivana Bartoletti, founder of the Women Leading in AI network, is in no doubt. “An algorithm is an opinion expressed in code,” she says. “If it’s mostly men developing the algorithm, then of course the results will be biased … You’re teaching the machine how to make a decision.”

There is some expectation that AI and machine learning will reduce bias (pdf) – in situations such as recruitment, for example – but there have been many examples where that is not the case. Researchers from MIT and Stanford in the US recently tested three facial-analysis programs and found the software recognised white men far more reliably than women, particularly women with darker skin tones. Similarly, algorithms trained on Google News text have been found to automatically associate certain words, such as “nurse”, with “female”.
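The “nurse” finding comes from research on word embeddings – numerical vectors learned from large text corpora such as Google News – where a word’s gender association can be measured by comparing its vector’s similarity to gendered anchor words like “he” and “she”. A toy sketch of that measurement technique, using invented three-dimensional vectors purely for illustration (the real embeddings are hundreds of dimensions and trained on news text):

```python
import math

# Hand-made toy vectors for illustration only -- NOT the real
# Google News embeddings. The first dimension loosely encodes gender.
vectors = {
    "he":       [1.0, 0.1, 0.0],
    "she":      [-1.0, 0.1, 0.0],
    "nurse":    [-0.7, 0.5, 0.2],
    "engineer": [0.6, 0.5, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def gender_lean(word):
    """Positive means the word sits closer to 'he'; negative, closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_lean("nurse"))     # negative: leans "she" in this toy space
print(gender_lean("engineer"))  # positive: leans "he" in this toy space
```

In studies of real news-trained embeddings, occupation words such as “nurse” sit measurably closer to female anchor words – the learned bias the article describes.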

The prevalence of feminised machines such as Alexa, Google Home and Siri – all of which have female voices by default, although Google Home and Siri can be switched to a male voice – has alarmed some who are watching the sector evolve. A sociology professor at the University of Southern California recently described it as “a powerful socialisation tool that teaches about the role of women, girls and people who are gendered female to respond on demand”. While these virtual assistants are asked to perform simple tasks, IBM’s question-answering computer Watson is used to play the US quiz show Jeopardy and to inform complex business decisions. Funnily enough, he has a masculine voice.

Alan Winfield, co-founder of the Bristol Robotics Laboratory at the University of the West of England, Bristol, regards AI’s “gender problem” as “one of the top two ethical issues in robotics and AI” (the other being wealth inequality). Winfield was one of the authors of the principles of robotics, published by the Engineering and Physical Sciences Research Council in 2010. One of the five rules states that robots should not be designed to deceive.

“Whether we like it or not, we all react to gender cues,” he explained in a blog post on the issue. “So whether deliberately designed to do so or not, a gendered robot will trigger reactions that a non-gendered robot will not.”

This conviction that robots don’t need to be gendered – and that gendering them is, some argue, a kind of deception in itself – led Kriti Sharma, vice-president of AI and ethics at software company Sage, to propose a gender-neutral robot assistant called Pegg that “doesn’t pretend to be human”, she says. As a woman working in AI, she is disappointed and frustrated that gender bias remains such an issue in the sector.

“There is some kind of fascination in the tech industry that we want to create AI that is as human-like as possible,” she adds. “[But AI] learns from historical data sets, and therefore it can learn a lot of bias. Not just gender bias but also racism, sexism and other opinions … We cannot allow for technology to create more division in society, and more inequality than we already have.”

Research by PwC estimates that AI will contribute $15.7tn to the global economy by 2030 across a multitude of industries. As machine learning is increasingly incorporated into decision-making processes, it’s imperative these biases are addressed.

Maria Axente, AI programme driver at PwC, says part of the firm’s approach has meant building an end-to-end view of the design and implementation process, to identify where bias is introduced and address it early. In the longer term, the firm also aims to attract more women and younger people into technology careers via initiatives such as the Tech She Can Charter and programme, and to move female employees on to AI development teams.

“We also need to continue to foster a culture of inclusion and diversity in the workplace and make sure that diverse points of view are brought into the design and monitoring of these solutions,” says Axente. “[These are] business analysts, ethicists and experience designers [that] bring a diversity of skills, as well as gender diversity.”

While the majority of machine-learning engineers are now male, computer programming was historically the realm of women, she adds. During the second world war, 75% of the codebreakers working at Bletchley Park were female (initially because the men had been deployed to fight). But in the decades that followed, programming shifted from a “low-status, feminised task” to a job seen as central to the control of corporate and state resources. “Women were edged out,” she says.

That’s something Winfield and the others at the Bristol Robotics Lab also want to improve. There were no women working in the lab 20 years ago; today, he estimates up to 40% of the team are female. It’s had a big impact. “The different values, perspectives and contribution of the women in the lab has undoubtedly enriched and improved the work [it] does,” he says.

A report by the World Wide Web Foundation (pdf) recommended that G20 governments take a number of steps to address gender inclusion in AI, such as requiring companies to disclose the gender balance of their design teams, and promoting transparency in machine learning and AI-powered systems. In the UK, the new Centre for Data Ethics and Innovation has launched a consultation to decide where its remit and priorities should lie.

Bartoletti would support more transparency in the sector, particularly around the algorithms and data sets used in the machine-learning process. She is planning to campaign for a quality assurance process, or independent kitemark, to be awarded to algorithms that promote fairness. She says customers also need to feel more confident about asking the right questions. “If there’s [an automated] decision that impacts your life, you should have the right to say: ‘How did you come to that conclusion?’” she adds.

Axente has been involved in the Centre for Data Ethics and Innovation consultation and believes there are steps organisations can and should take before regulation is required. However, she does agree the public should be empowered to demand certain behaviour.

“I think it’s better to incentivise companies and individuals to display diverse behaviour, rather than going through the legislative route,” she says. “We should empower civil society to nurture this behaviour.”
