
We face a profound challenge. Polarisation on the web and mental health problems arising from social media use must be addressed. Until they are fixed, it will be hard to get anyone to think about the web in a positive light.
There are already a multitude of good things on the internet, such as email, calendars and podcasts, all of which interoperate. Email apps all work together because they use the same email protocol, which is an open standard. Social networks don’t.
Why? Facebook, LinkedIn, Instagram and X are all silos with no interoperability. Accordingly, we need a new means of connecting and powering the social media we use. We need new open protocols and apps — a pro-democratic layer that empowers users and combats disinformation. Not only that, but to attract users, the new layer has to be much better than the old approach, unlocking fantastic new functionality as well.
Currently, the web is lacking in compassion, but this is not a human failing. This is a design issue. Back in 2016 I realised that there was something wrong with the technical side of the web that was encouraging toxicity. I identified two symptoms, stemming from the same disease.
The most egregious symptom is polarisation. Social media, as currently built, leads users to take extreme political positions and demonise the opposing side. This makes constructive engagement difficult, allows outlandish conspiracy theories to flourish, and promotes demagoguery over deliberation. Soon, civilised discussion about important issues becomes impossible. Polarisation, I fear, might have dire outcomes for humanity, with consequences on a global scale.
The second symptom is more individualised. Many social media users report suffering mental health issues after prolonged usage. The catalogue of ills related to social media is alarming: anxiety, depression, jealousy, inadequacy, feelings of isolation, body image issues — even suicidal thoughts. This mental health epidemic is especially acute among young people.
What is the common design issue that leads to these unfortunate symptoms? Web scientists analysing the information sphere have concluded that there is a direct link between social media and polarisation. Social media companies are using machine-learning techniques to make users addicted to their platforms. These systems are designed to be addictive, feeding people more and more extreme content, making them alternately angry and sad.
So what is the answer? I agree with recent comments made by Yuval Noah Harari, author of the books Sapiens and Nexus. “If a social media algorithm recommends to people a hate-filled conspiracy theory, this is the fault not of the person who produced the conspiracy theory, it is the fault of the people who designed and let loose the algorithm,” he has said.
Harari has called for such algorithms to be regulated by the government. While I generally oppose the regulation of the web, in this instance I agree. My feeling is that regulation should be minimal, and only used when absolutely necessary. But it is clear that, among all the wonderful things on the internet, social media is a particular phenomenon that is causing harm. I want to be clear that I do not think we should get rid of social media in general.
Social media is a fantastic innovation with tremendous potential to bring people together. And given the vast amount of information uploaded to social media every day, we need algorithmic agents to organise the media that we see. All we need is to regulate the addictive algorithms, the ones that guide people into rabbit holes.
Of course, adults are susceptible to the toxic effects of algorithmic feed manipulation, just as teenagers are. This negative engagement cycle has poisoned much online discourse, and in the US, this has led to a broken political environment that just a decade ago would have seemed impossible.
Many people ask me if their kids should be allowed to have a smartphone. My answer to this is, basically, yes. Kids should learn to collaborate, and they should learn to collaborate online with their friends. What we have to block is not smartphones, nor even social media, but the harmful algorithms often used by social media — the psychological equivalent to giving kids access to a slot machine.
What if we trained algorithms for constructive engagement, and to maximise the joy people got out of discussing things, rather than feeding off hatred? Some already exist — Spotify and Apple Music’s algorithms do a pretty good job of selecting music that listeners will enjoy. Pinterest also does a better job than most of promoting positive content.
One way to do this would be to craft algorithms specifically for vulnerable users: one for teenagers, which steered them towards healthier content; another for people who seem to be falling into conspiratorial rabbit holes.
A new age of machine intelligence is arriving, and the immense power of such systems will be as good or evil as we permit. We can still build the future we want.
Extracted from Tim Berners-Lee’s memoir, This Is for Everyone (Pan Macmillan, £25). He is the inventor of the World Wide Web