Input
Technology
Tom Maxwell

Facebook is suing a dev for software that evades its restrictions on COVID-19 ads

Facebook is suing a developer for allegedly selling software that tricked the company's ad review system into approving misleading COVID-19 ads. The company has been working to stop the spread of misinformation about the deadly virus, but it has stumbled in some instances.

The software, from a company called LeadCloak, shows Facebook's ad review system an innocent-looking website while showing users something completely different, a technique known as cloaking. Facebook's lawsuit specifically says the software was used to conceal websites peddling coronavirus-related scams. As deaths rise in the U.S. and the CDC begins advising people to wear masks, it's no surprise that some people are trying to exploit the ongoing fear and uncertainty.
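Facebook's complaint doesn't spell out how LeadCloak's tool works under the hood, but cloaking software in general fingerprints each incoming request and decides which page to serve. Below is a minimal sketch of that idea in Python, using only the standard library. The user-agent checks and page contents are hypothetical placeholders, not anything from LeadCloak's actual product, and real cloakers reportedly also key off IP ranges, geolocation, and device signals.

```python
# A minimal sketch of request-based cloaking. This is NOT LeadCloak's code;
# the detection rules and page contents below are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical substrings a cloaker might associate with automated reviewers.
REVIEWER_UA_HINTS = ("facebookexternalhit", "bot", "crawler")

BENIGN_PAGE = b"<html><body><h1>Daily Wellness Tips</h1></body></html>"
SCAM_PAGE = b"<html><body><h1>Miracle COVID-19 Cure! Buy Now!</h1></body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        # If the request looks like an ad-review crawler, serve the innocent
        # page; otherwise serve the page the advertiser actually wants seen.
        if any(hint in ua for hint in REVIEWER_UA_HINTS):
            body = BENIGN_PAGE
        else:
            body = SCAM_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CloakingHandler).serve_forever()
```

Because the review system and the end user hit the same URL, an automated check that only crawls the landing page once sees nothing objectionable, which is exactly why this kind of software is hard to catch.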

Consumer Reports recently spotted some dangerous COVID-19 ads on Facebook, including one recommending small daily doses of bleach to stay healthy. Facebook removed them, but not before some users had seen them.

Setting examples —

Facebook has filed quite a few lawsuits against malicious developers in recent years. The company aims to make an example of bad actors and to head off another scandal like Cambridge Analytica, in which a developer of innocuous quizzes was found to be collecting data from Facebook users and selling it to a political consulting firm. Facebook ended up paying a $5 billion settlement over that episode, to say nothing of the public trust it lost. By suing LeadCloak, it also hopes to track down some of the businesses and individuals who used the software to post misleading ads.

The platform problem —

The LeadCloak case highlights a fundamental problem with Facebook's model as a platform. Most advertisements sold on Facebook are never reviewed by a human; they're vetted by automated systems, because that approach is far cheaper and far more scalable. That gives people an incentive to find new and inventive ways to trick the review system. It doesn't help that Facebook has sent its contract moderators home and is instead paying full-time employees to take up some of the slack, which leaves more room for error.

CEO Mark Zuckerberg said in a recent press call, “Our goal is to make it so that as much of the content as we take down as possible, our systems can identify proactively before people need to look at it at all.” He went on to say that by the time a user flags a post, “a bunch of people have already been exposed to it, whereas if our AI systems can get it upfront, that’s obviously the ideal.”
