Input
Tom Maxwell

Twitter’s photo-cropping algorithm is as racist and sexist as you thought

Twitter has confirmed that its photo-cropping algorithm is, essentially, racist. Specifically, the winning entry in a competition to test its algorithm found that it favors faces that are “slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits.”

Open competition —

The findings come a year after complaints first surfaced that Twitter disproportionately cropped the faces of Black people out of image previews. Twitter responded by disabling automatic photo cropping and instead showing photos in full to some users.

Last week it opened a competition in which researchers were given access to the algorithm and asked to test whether it really does treat Black people differently. The first-place winner, announced at DEF CON 2021, used machine learning to generate faces with varying skin tones and other attributes, then applied beauty filters to game the algorithm’s scoring model.
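As a rough illustration of that approach, the sketch below compares how a cropping model scores generated face variants that differ along a single attribute. Both `generate_face` and `saliency_score` are hypothetical stand-ins for illustration, not Twitter’s actual interfaces.

```python
from typing import Callable

def preferred_variant(
    attribute_values: list[str],
    generate_face: Callable[[str], "Image"],     # hypothetical face generator
    saliency_score: Callable[["Image"], float],  # hypothetical cropper score
) -> str:
    """Return the attribute value whose generated face scores highest."""
    scores = {value: saliency_score(generate_face(value))
              for value in attribute_values}
    return max(scores, key=scores.get)

# e.g. preferred_variant(["lighter skin", "darker skin"],
#                        generate_face, saliency_score)
```

If the model consistently prefers one variant across many generated faces, that is evidence of the bias the competition set out to measure.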

Algorithmic bias —

Twitter’s algorithm for photo-cropping was designed with the best intentions. The idea was that showing images in full would take up too much space on a user’s smartphone screen, so the app would instead crop an image to show whatever parts might be deemed most “interesting.” But clearly, that’s subjective.
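In broad strokes, a cropper like this centers a fixed-size window on whatever a saliency model rates as most “interesting.” Here is a minimal sketch of the idea; `predict_saliency` is a hypothetical stand-in, since Twitter’s actual model isn’t reproduced here.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical model call: returns a per-pixel 'interestingness' map."""
    raise NotImplementedError("stand-in for a trained saliency model")

def crop_to_most_salient(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size crop on the most salient pixel."""
    saliency = predict_saliency(image)  # shape (H, W)
    y, x = np.unravel_index(saliency.argmax(), saliency.shape)
    h, w = image.shape[:2]
    # Clamp the crop window so it stays inside the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Whatever the saliency model has learned to call “interesting” decides who stays in the preview, which is exactly where the bias enters.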

Rumman Chowdhury, director of Twitter’s META team (which studies Machine Learning Ethics, Transparency, and Accountability), said during DEF CON that such algorithms are trained on the kinds of photos commonly shared online. People use filters to smooth their skin and whiten their teeth, and because those photos tend to get disproportionate engagement, the algorithm comes to recognize those traits as “most interesting.”
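To make that mechanism concrete, sampling training photos in proportion to engagement over-represents heavily liked (and often heavily filtered) images, as in the sketch below. The field names here are assumptions for illustration.

```python
import random

def engagement_weighted_sample(photos: list[dict], k: int) -> list[dict]:
    """Draw k training photos with probability proportional to engagement,
    so heavily liked (often heavily filtered) photos dominate the set."""
    weights = [p["likes"] + p["retweets"] for p in photos]  # assumed fields
    return random.choices(photos, weights=weights, k=k)
```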

Beauty standards —

This narrow set of characteristics excludes a great many of Twitter’s users, and if the algorithm persistently crops them out of their own photos, they aren’t likely to be pleased. At the very least, that’s bad for Twitter’s bottom line. On a human level, favoring features like light skin is Eurocentric, echoing how the media has long portrayed attractiveness. Part of the fix may be “un-teaching” standards that everyone has been taught to identify as beautiful.

There’s a broader issue with machine learning models: the data they’re trained on reflects the historical biases of the people who supplied it. Companies have begun trying to improve their algorithms by training on more images of Black people, for instance, but Twitter also said that its cropping algorithm favors Latin script over Arabic, perhaps because the company is Western and English-speaking and hasn’t paid much attention to the issue.
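The kind of audit behind findings like these can be as simple as feeding the cropper two-face composite images and counting which face survives the crop. The `cropped_face` helper below is an assumption for illustration, not Twitter’s published methodology.

```python
from collections import Counter

def crop_preference(composites, cropped_face):
    """composites: iterable of two-face composite images; cropped_face(img)
    returns the label ('a' or 'b') of whichever face the crop keeps."""
    counts = Counter(cropped_face(img) for img in composites)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}
```

A result far from a 50/50 split across a large, balanced set of composites is the statistical signature of the disparity users reported.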

“When we think about biases in our models, it’s not just about the academic or the experimental [...] but how that also works with the way we think in society,” said Chowdhury. “I use the phrase ‘life imitating art imitating life.’ We create these filters because we think that’s what beautiful is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.”

At least we can say Twitter is trying to be more transparent about its faults.
