Elizabeth Nolan Brown

The SAFE TECH Act Is Anything but Safe

The latest anti-tech legislation in Congress (S.560) would seriously threaten free speech online and creators' ability to monetize content while also subjecting tech companies to a flood of frivolous or unfair lawsuits.

The bill—dubbed the "Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act"—comes from Democratic Sens. Mark Warner (Va.), Mazie Hirono (Hawaii), Amy Klobuchar (Minn.), Tim Kaine (Va.), and Richard Blumenthal (Conn.). It has a companion in the House sponsored by Reps. Kathy Castor (D–Fla.) and Mike Levin (D–Calif.).

This year's SAFE TECH Act is a redux of a bill first introduced in 2021. That version—which Techdirt Editor in Chief Mike Masnick called "a dumpster fire of cluelessness"—failed to go anywhere (thank goodness). But now the SAFE TECH Act is back, and it doesn't appear to be any better this time around.

The SAFE TECH Act is yet another stab at undermining Section 230 of federal communications law. As it stands, Section 230 protects tech platforms—large and small—and their users from civil liability for content created by others. It does not protect against liability for content that a tech entity or user creates, nor does it protect against liability for federal crimes.

Significant Changes to Section 230 

Section 230 has two main provisions: one protects against liability for third-party speech allowed (or overlooked) on a web platform, and the other protects against liability for content moderation (that is, blocking certain speech). The first—sometimes referred to as c(1)—says "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The second—c(2), or the "Good Samaritan clause"—says internet platforms and users won't be held liable on account of "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

The first change the SAFE TECH Act would make is to say c(1) doesn't apply when "the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech."

This would open up a huge range of tech companies to more liability. Blogging platforms like WordPress and newsletter and podcast distributors like Substack would be vulnerable, as would any social media platform that offers a paid tier (like Twitter Blue).

So would all sorts of web hosting services—creating huge incentives for providers to cut off hosting access to any person or group that is even slightly controversial.

And this change "would also threaten liability on any service that shares its advertising revenue with creators, for instance as YouTube does," as law professor and blogger Eugene Volokh pointed out when the SAFE TECH Act first came out. This would create incentives for platforms to cut off or severely limit creator monetization schemes, meaning "creators would thus be less likely to earn money from their works."

In addition, "the section would threaten liability whenever any providers provide grants to support local journalism or other such projects (something like the Google News Initiative)," noted Volokh. "Providers would thus become less likely to directly or indirectly support journalism and other expression."

So, already, the SAFE TECH Act would usher in an array of negative incentives—and that's just with its first change. Alas, the bill would also change a lot more.

Throwing Fuel on the Speech Suppression Fire

In essence, the SAFE TECH Act "takes nearly every single idea that people who want there to be less speech online have had, and dumped it all into one bill," noted Masnick of the (nearly identical) original version (emphasis his). "Everything about the bill is designed in a way that opens it up to abuse by the rich, powerful and privileged. Everything about the bill allows them to file costly lawsuits (or threaten to do so) and pressure websites to pull down all sorts of criticism."

The SAFE TECH Act also says that Section 230 c(1) protection wouldn't apply (regardless of whether payment or funding was involved) "to any request for injunctive relief arising from the failure of a provider of an interactive computer service to remove, restrict access to or availability of, or prevent the dissemination of material that is likely to cause irreparable harm."

Injunctive relief means someone bringing a lawsuit is not asking for monetary damages but simply for some sort of action to be taken—in online speech cases, likely that the speech in question be removed.

Keep in mind that this provision isn't about lawsuits stemming from illegal content, just content likely to cause "irreparable harm." And while "irreparable harm" sounds serious, it simply means harm that couldn't be compensated for with money, including harm to someone's reputation.

It's a vague phrase that could open a floodgate of lawsuits over anything and everything objectionable on social media—perhaps particularly speech that is unflattering to the rich and powerful.

The SAFE TECH Act "would not protect users' rights in a way that is substantially better than current law," warned the digital rights advocacy group Electronic Frontier Foundation (EFF) back in 2021. "And it would, in some cases, harm marginalized users, small companies, and the Internet ecosystem as a whole."

Carveouts, Carveouts, Carveouts

Lastly, the bill would carve out a bunch of exceptions to Section 230 protection, including for "any action alleging discrimination on the basis of any protected class, or conduct that has the effect or consequence of discriminating on the basis of any protected class, under any Federal or State law."

As Masnick wrote about a similar provision in the 2021 version of SAFE TECH, "while it may sound good to say this can't be used to block civil rights cases, in actual practice a bunch of recent 'civil rights' cases have involved white supremacists, out-and-out misogynists, and other terrible people claiming that their civil rights were violated by being kicked off of platforms. Enabling such lawsuits seems incredibly short sighted."

And in some states, political affiliation is a protected class, meaning Section 230 wouldn't apply to cases where someone claims their content was blocked or restricted because of their politics. Again: floodgates.

And that's still not all.

The SAFE TECH Act would also amend Section 230 to state that it "shall be construed to prevent, impair, or limit any action brought under Federal or State antitrust law," "any civil action for wrongful death," any action brought under international human rights law, or any "action alleging stalking, cyberstalking, harassment, cyberharassment, or intimidation based, in whole or in part, on sex (including sexual orientation and gender identity), race, color, religion, ancestry, national origin, or physical or mental disability."

If you look at that and think tech companies shouldn't be allowed to violate laws with impunity—well, they aren't. If they create illegal content, they can already be held liable. If they are guilty of federal crimes, they can already be charged just like anyone else. What the SAFE TECH Act would do is open up internet companies to civil lawsuits from individuals and governments if third parties use their services in the course of causing certain harms.

Even if many lawsuits against tech companies over user speech would not stand up to the First Amendment, the absence of Section 230 protection would make these suits more labor- and resource-intensive to fight—upping the likelihood that platforms may decide to crack down on more speech rather than defend themselves in more lawsuits.

The SAFE TECH Act is a dangerous bill that would have far-reaching consequences for content creators, activists, people exposing police violence, whistleblowers, citizen journalists, and basically anyone who uses the internet. Not to mention how it would burden our courts with questionable lawsuits and make life miserable for tech companies large and small.

