The Atlantic
Technology
Jaron Lanier

How to Fix Twitter—And All of Social Media

Those debating the future of Twitter and other social-media platforms have largely fallen into two opposing camps. One supports individuals’ absolute freedom of speech; the other holds that speech must be managed through content moderation and through tweaks to the ways in which information spreads.

It sounds like an old-fashioned confrontation between idealists and realists, but in this case both sides are peddling an equally dismal vision. While the current major social-media platforms generally try to moderate speech, their efforts never seem to be enough. Their content has efficiently spread both personal and societal harm, and has fueled authoritarian movements in Hungary, Brazil, and the U.S., among other places. Meanwhile, social-media platforms that exert more aggressive, centralized control of speech contribute to the success of authoritarian regimes, as in China.

My purpose here is to point out a logical third option, one that can and should be tested on a platform such as Twitter. In this approach, a platform would require users to form groups through free association, and then to post only through those groups, with the group’s imprimatur. It might not be immediately obvious why this simple, powerful notion could help us escape the dilemma over online speech. Let me explain.

Think about a seemingly unrelated problem: How can we use finance to improve the lives of deeply impoverished people? Finance depends on trust, but very poor people do not have a credit history. Banks do not have the resources required to evaluate each individual with no starting information at all.

Muhammad Yunus, the pioneer of microlending, found the answer: Ask people to find one another. Groups created through free association—not individuals—applied for loans. The members of these groups demonstrated trust based on knowing one another. The creation of what we might call “quality” arose from below instead of from above. When a member of a group ran into honest trouble, the other members were motivated to help out.

Microlending has been a qualified success. It helps people rise out of abject poverty, but it doesn’t do much more than that. However, we are interested here only in the mechanism of grassroots quality control through having a shared investment in a group. In that sense, microlending works: Loans are repaid more reliably than they are in traditional finance.

Microlending used to be a trendy topic in idealistic tech circles, and a constant trope at TED and Davos conferences. I believe that it gave rise, in part, to the idea that user reviews should guide online commerce. But one of microlending’s core ideas, getting people into groups, was lost along the way.

How might groups take shape on a social-media platform? It would be like starting a zine, a band, or a partnership. You’d find some people with whom you feel compatible, people whom you trust, and then you’d work together to create a brand—a name for your group to be applied to a common feed of posts. Only such groups would be able to post, not individuals, though individuals would still identify themselves, just as they would when playing in a band or writing for a magazine. Individuals could join multiple groups, and groups would self-govern; this is not a heavy-handed idea.

Platforms such as Facebook and Reddit have superficially similar structures—groups and subreddits—but those mostly serve to route notifications and invitations to view and post in particular places. The groups I’m talking about, sometimes called “mediators of individual data” or “data trusts,” are different: Members would share both good and bad consequences with one another, just as a group shares the benefits and responsibilities of a loan in microlending. This mechanism has emerged naturally, to a small degree, on some of the better, smaller subreddits, and even more so on the software-development platform GitHub. A broader movement incorporating this idea, called “data dignity,” has emerged in spots around the world, and in nascent legal frameworks. My proposal here is to formalize the use of data trusts in code and bake them into platforms.

Groups, as they appear on existing platforms, can be of any size. Some number in the millions. The sort of groups I have in mind would be much smaller as a rule. The point is that the people in the groups know one another well enough to take on the pursuit of trust and quality, and to rid their groups of bots. Perhaps the size limit should be in the low hundreds, corresponding to our cognitive ability to keep track of friends and family. Or maybe it should be smaller than that. It’s possible that 60 people, or even 40 people, would be better. I say, test these ideas. Let’s find out.

Whatever its size, each group will be self-governing. Some will have a process in place for reviewing items before they are posted. Others will let members post as they see fit. Some groups will have strict membership requirements. Others might have looser standards. It will be a repeat of the old story of people building societal institutions and dealing with unavoidable trade-offs, but people will be doing this on their own terms.
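
As a concrete illustration of how a platform might encode these rules, here is a minimal sketch in Python. Every name in it (Group, Member, the review_before_posting flag, the 150-member cap) is a hypothetical stand-in, not a description of any existing platform’s design; the cap, in particular, is just one of the sizes proposed above for testing.

```python
from dataclasses import dataclass, field

MAX_MEMBERS = 150  # one candidate cap; the essay also suggests testing 60 or 40

@dataclass
class Member:
    handle: str  # individuals still identify themselves within the group
    vouched_by: list[str] = field(default_factory=list)  # trust via free association

@dataclass
class Post:
    author: str  # the individual byline, like a name on a band's liner notes
    body: str

@dataclass
class Group:
    brand: str                   # the shared name applied to the common feed
    review_before_posting: bool  # some groups vet items first; others post freely
    members: dict[str, Member] = field(default_factory=dict)
    feed: list[Post] = field(default_factory=list)
    pending: list[Post] = field(default_factory=list)

    def admit(self, member: Member) -> None:
        # Membership requirements are the group's own business; the platform
        # enforces only the size cap needed for mutual accountability.
        if len(self.members) >= MAX_MEMBERS:
            raise ValueError(f"{self.brand} is at its membership cap")
        self.members[member.handle] = member

    def post(self, author: str, body: str) -> None:
        # Only members may post, and only under the group's imprimatur.
        if author not in self.members:
            raise PermissionError("individuals post only through a group")
        item = Post(author=author, body=body)
        (self.pending if self.review_before_posting else self.feed).append(item)

    def approve(self, reviewer: str, item: Post) -> None:
        # A group that reviews items moves them to the shared feed once vetted.
        if reviewer not in self.members:
            raise PermissionError("only members review the group's posts")
        self.pending.remove(item)
        self.feed.append(item)
```

The design choice to encode is the essay’s central one: posting is a method on the group, not on the individual, so the brand is implicated in everything its members publish.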

What if a bunch of horrible people decide to form a group? Their collective speech will be as bad as their individual speech was before, only now it will be received in a different—and better—social-cognitive environment. Nazi magazines existed before the internet, but they labeled themselves as such, and were not confused with ambient social perception.

We perceive our world in part through social cues. We rely on people around us to help detect danger and steer attention. (Try pointing at something imaginary on a crowded street and you’ll see the effect.) Facebook’s scientists famously claimed, in a peer-reviewed journal, that they could make people sadder just by tweaking the algorithm that generates their content feeds—and that those affected did not know what was going on. Though the data from such experiments are not fully available for public scrutiny, the evidence suggests that content-feed tweaks more easily generate negative emotions (e.g., vanity or paranoia) than positive ones (e.g., optimism or self-esteem). This is why social media is such a tempting tool for psychological warfare: It can be used to poison a society, perhaps with the help of a bot army.

The sheer number of people who can post overwhelms our individual and institutional abilities to understand the context of the speech that floods around us. When horrible speech is mixed into an ambient feed, the world feels horrible. But when online experience comes only from branded sources—and, once again, these groups would be formed through free association—then we can compartmentalize what we see. On a social platform of the type I’m imagining, the number of groups might be one-hundredth the number of individuals. Groups will restructure the experience of online society into a closer match for the cognitive abilities of individuals.

Groups will encourage better posting, too. When individuals post online, they are motivated to seek attention—or, more charitably, to seek relevance—and that requires constant posting. The virtual hamster wheel tends to make people more abrasive, deepening enmities between one’s followers and opposing camps. After all, you have to keep your followers hooked every day. As a member of a group, one could post less often—and spend more time thinking—and still see the brand succeed.

When someone in a group starts getting cranky or weird, other members of that group will have motivation to speak up. We all act like jerks online every now and then. In a group, though, our fellow members would pay a price for our behavior. To have your friends bug you about how you’re acting may not sound appealing—but it appears to be the least annoying or coercive plan for making online society less malicious. If you get too annoyed, you can leave the group and join another one; or else you might be willing to self-moderate in order to stick with a well-regarded brand.

Groups would also be incentivized to make sure that their members are real, and to purge the bots, because any benefits of membership would be shared by all who joined. I have my own hopes for how this would work: I’d like to see people in groups agree to smooth out the uncertainty of fate by divvying up rewards—money from micropayments, subscriptions, or patronage, for example—such that everyone within the group has some benefits to help them get by, while still amplifying compensation for individual members who contribute a lot. That’s how we manage rewards in tech companies.

But each group will have to work out its own terms. Whatever the reward, whether money or something less tangible, it will have to be distributed among the members according to a zero-sum logic. Because anything that goes to a bot will not go to real members, individuals will, in most cases, be motivated to eject the fake accounts from their groups, instead of hoping that the platform will provide that service.
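
To make that incentive concrete, here is a hedged sketch of one possible payout rule. The function name, the inputs, and the 50/50 division between a smoothing baseline and contribution-weighted bonuses are all arbitrary illustrations, not a prescription; the essential property is the fixed, zero-sum pool, in which any share a bot would have claimed stays with the verified members.

```python
def split_rewards(pool_cents: int,
                  contributions: dict[str, int],
                  verified: set[str]) -> dict[str, int]:
    """Divide a group's earnings (micropayments, subscriptions, patronage)
    among its verified members. The pool is zero-sum: ejecting a fake
    account raises everyone else's share."""
    members = [m for m in contributions if m in verified]
    if not members:
        return {}

    baseline_pool = pool_cents // 2          # smoothing: everyone gets something
    bonus_pool = pool_cents - baseline_pool  # amplified pay for big contributors

    base_share = baseline_pool // len(members)
    total_contrib = sum(contributions[m] for m in members) or 1  # avoid div by zero

    payout = {}
    for m in members:
        bonus = bonus_pool * contributions[m] // total_contrib
        payout[m] = base_share + bonus
    return payout

# Example: "bot_7" appears in the contribution log but was never verified
# by the group, so it receives nothing and its would-be share stays in
# the pool for the real members.
print(split_rewards(
    pool_cents=10_000,
    contributions={"ana": 40, "ben": 10, "bot_7": 300},
    verified={"ana", "ben"},
))  # {'ana': 6500, 'ben': 3500}
```

Each group could, of course, pick a different baseline-to-bonus ratio, or none at all; the point is only that whatever formula a group chooses divides a fixed pot.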

In suggesting all of this, I am arguing against my own character. I don’t want to be a member of anything. I want to be unique and hard to classify. And yet, even though that’s what I say I want, in practice I always seem to find that my stuff gets better when I’m in some established group. I release books via publishers, get my tech designs out through tech companies, publish scientific papers in established journals, and so on.

Getting involved with groups has not blunted my individual weirdness, and it has improved the quality of what I do. You can rely on other people without losing your identity. It might seem strange to have to make this point, but tech culture is rooted in a cowboy mythology that celebrates the individual. (I grew up around real cowboys, in rural New Mexico, and they worked in teams, so this myth is about movie cowboys.)  

Tech culture has created a Wild West of real and simulated individuals, and infested its terrain with manias, biases, and irritability. I’m not suggesting that data dignity is a perfect or complete solution, or that it should replace all of the other ideas in play. But no idea is working well enough right now, despite an urgent need, and data dignity is similar enough to social structures that existed—and were tolerable—in the pre-internet world that we need to give it a try. Would rearranging a platform such as Twitter into small, self-governing groups lead to something better? Let’s find out.
