The Guardian - UK
Technology
Kari Paul

TechScape: Are social media giants silencing online content?

People march from Freedom Plaza to the White House to hold a pro-Palestine demonstration and condemn Israeli attacks on Gaza. Photograph: Anadolu Agency/Anadolu/Getty Images

As the ongoing conflict between Israel and Hamas and its devastating effects play out in real time on social media, users are continuing to criticise tech firms for what they say is unfair content censorship – pulling into sharp focus longstanding concerns about the opaque algorithms that shape our online worlds.

From the early days of the conflict, social media users have expressed outrage at allegedly uneven censorship of pro-Palestinian content on platforms like Instagram and Facebook. Meta has denied intentionally suppressing the content, saying that with more posts going up about the conflict, “content that doesn’t violate our policies may be removed in error”.

But a third-party investigation (commissioned by Meta last year and conducted by the independent consultancy Business for Social Responsibility) had previously determined Meta had violated Palestinian human rights by censoring content related to Israel’s attacks on Gaza in 2021, and incidents in recent weeks have revealed further issues with Meta’s algorithmic moderation. Instagram’s automated translation feature mistakenly added the word “terrorist” to Palestinian profiles, and WhatsApp, also owned by Meta, created auto-generated illustrations of gun-wielding children when prompted with the word “Palestine”. Meanwhile, in recent days, prominent Palestinian voices say they are finding their content or accounts limited.

As the violence on the ground continues, emotions are higher than ever – intensifying frustration with these decisions and building pressure on an already volatile situation, digital rights groups and human rights advocates say.

“When it feels like platforms are limiting certain viewpoints, it fans the flames of division and tension because people on all sides of the issue are worried their content is being targeted,” said Nora Benavidez, senior counsel at media watchdog group Free Press. “This kind of worry and paranoia played out across communities helps to create environments that are electric and combustible.”

The moderation disaster unfolding around the Israel-Palestine conflict is renewing calls for more transparency around algorithms, and could bolster support for related legislation. There have long been legislative efforts to address the issue, though none have been successful. The latest attempt is the Platform Accountability and Transparency Act, first announced in 2021 and reintroduced in June 2023, which would require platforms to explain how their algorithmic recommendations work and provide statistics on content moderation actions.

A similar US bill, the Protecting Americans from Dangerous Algorithms Act, was introduced in 2021 but was not passed. Such legislation is in keeping with recommendations from experts and advocates, like Facebook whistleblower Frances Haugen, who in 2021 urged senators to create a government agency that could audit the inner workings of social media firms.

Groups including 7amleh (the Arab Center for the Advancement of Social Media) and the Electronic Frontier Foundation (EFF) have also called on platforms to stop unjustified takedowns of content and to provide more transparency around their policies.

“Social media is a crucial means of communication in times of conflict – it’s where communities connect to share updates, find help, locate loved ones, and reach out to express grief, pain, and solidarity,” said the EFF. “Unjustified takedowns during crises like the war in Gaza deprive people of their right to freedom of expression and can exacerbate humanitarian suffering.”

Twitter’s moderation problem

X owner Elon Musk. Photograph: Jaap Arriens/NurPhoto/Shutterstock

While Instagram, Facebook, and TikTok have been under fire for their handling of Palestine-related content, X is facing its own issues after owner Elon Musk supported an antisemitic tweet and the platform drew criticism for hosting anti-Islamic and antisemitic content.

Musk came under fire for publicly agreeing with a tweet accusing Jewish people of “hatred against whites” – a move that will not only impact the company itself but also represents a “major societal danger”, said Jasmine Enberg, principal analyst at market research firm Insider Intelligence. “Twitter’s influence has always been larger than its user base and ad revenues and, while the platform’s cultural relevance has declined, Musk and X are still very much a major part of public conversation,” she said.

Meanwhile, a study from advocacy group Media Matters showed that advertisements from companies including Apple and Oracle were placed on X next to antisemitic material. It also showed advertisements from NBC Universal and Amazon were placed next to white nationalist hashtags. A separate study from the Center for Countering Digital Hate (CCDH) found that of a sample of 200 posts on X containing hate speech towards Muslims or Jews, the company removed just four – or two per cent.

On Monday, X responded to Media Matters and its report with a lawsuit claiming the group had defamed the platform. As Reuters reported, X is claiming that Media Matters “manipulated” the platform by cherry-picking accounts known to follow fringe content “until it found ads next to extremist posts”. The social media platform is also in dispute with the CCDH, filing a civil complaint against the group alleging that it scared off advertisers. Last week the CCDH filed a motion to dismiss the claim.

Experts say the platform’s actions surrounding the current conflict could hasten the downfall of X – as advertisers including IBM, Apple, Disney and Lionsgate flee or pause spending. “The damage to X’s ad business will be severe,” Enberg said. “A big-name advertiser exodus will inspire other advertisers to follow suit.”

US ad revenue on the site has dropped more than 55% year-on-year since Musk took over, but its newish chief executive, Linda Yaccarino, claimed in September that X would be profitable next year and that engagement was up “dramatically”. (Dan Milmo goes into much more detail about the company’s business issues in this piece.)

The OpenAI soap opera

Sam Altman, former OpenAI CEO. Photograph: Kevin Dietsch/Getty Images

Last week, the board of OpenAI, the company behind the ChatGPT AI chatbot, abruptly fired its star CEO, Sam Altman. Few knew why. Then Microsoft, a major investor in the company, hired Altman and some other illustrious people to work on its surprise new advanced AI team. Oh, and OpenAI’s staff have threatened a mass walkout if he’s not brought back to the artificial intelligence research company.

Kevin Roose and Casey Newton, the very well-informed folks behind Hard Fork, meanwhile, rushed out a now-outdated (but still fun) “emergency pod” about just how little is known about the firing – followed by their interview with the tech leader recorded days before the sacking.

Can’t keep up? Dan Milmo has a very digestible explainer on what happened and what it means, noting that the disruption may not slow down AI development: “Elon Musk’s latest venture, xAI, has shown how quickly powerful new models can be built. It unveiled Grok, a prototype AI chatbot, after what the company claimed was just four months of development.”

Altman, who has been well liked in Silicon Valley since his Y Combinator days, is still trying to return as OpenAI’s CEO, according to The Verge.

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.
