Newsroom.co.nz
Technology
Marc Daalder

Shooting livestream shows 'grey space' of online harm

"You call it grey, some academics call it borderline content, but effectively it is dog-whistling. That is going to be the defining content of our times," Sanjana Hattotuwa says. Screenshot: YouTube

Analysis: A livestream by a bystander of the mass shooting in Boulder, Colorado raises questions about the "grey space" of online harm, Marc Daalder reports

Livestreamed footage of the aftermath of a mass shooting at a supermarket in Boulder, Colorado - which included at the start some shots of victims' bodies in the parking lot and store - went viral even as police engaged in a shootout with the alleged gunman.

The video, which remains accessible on YouTube, is reminiscent of the livestreaming of terror attacks in Christchurch and Halle, Germany, in 2019. Those streams, however, were filmed by the gunmen and contained much more graphic content. This one - which was criticised as "tasteless" by one counter-extremism researcher - was filmed by a bystander.

While social media platforms have done their best to scrub the footage of the 2019 terror attacks from the internet, YouTube has simply placed age restrictions on the Boulder video and turned off commenting. More than 750,000 people have viewed the video.

New Zealand's Chief Censor David Shanks says he sees considerable differences between the Boulder footage and that from March 15, and he has no plans to rule the US video "objectionable" - which would make it unlawful to access or view in New Zealand. However, he said it raises important questions around borderline content, which may not be explicitly harmful like the March 15 video but still has the capacity to cause harm.

The footage also highlights questions around the journalistic value of some graphic content. The video of the killing of George Floyd by a Minneapolis police officer in 2020 was crucial to exposing an injustice that might have otherwise been covered up - the official police report of the incident merely noted that Floyd "appeared to be suffering medical distress" while being handcuffed - but does the Boulder livestream have the same benefits?

Different from March 15 and Floyd

"This is a different context and a different sort of livestream from what we saw in March 15 and Halle," Shanks told Newsroom. "It is, however, not dissimilar to many, many other recent livestreams filmed by bystanders of horrific world events. As a reference point, I would mention the George Floyd video which wasn't livestreamed at the time but was posted up on internet platforms pretty quickly thereafter.

"We're very much alive to the fact that this is a reality of our world today. Horrific events, attacks will happen. Given that everybody is pretty much carrying around an internet-connected high definition video camera in their pockets, the prospects for content of this kind has always been very much in our viewfinder."

Shanks said one of the key differences was that "the livestreams filmed by perpetrator or the attacker are essentially propaganda for attack. They are carrying out an attack, they are killing people and they are livestreaming it in order to promote whatever twisted cause they are looking to conduct these acts in the name of."

This was "uniquely harmful" in mobilising other viewers to violence. This happened with the March 15 attack, which was replicated in several terrorist strikes overseas against marginalised communities in Western countries.

"That is not to say that filming a horrific attack [as a bystander] cannot have harmful effects in terms of the viewers. It is a potentially traumatising thing to view, particularly for young people, vulnerable people or people who have been impacted by violence," he added.

"If you think about the George Floyd example, it was a really difficult and potentially traumatising video to view but in fact had some manifest effects in documenting and bringing to light a horrific crime. There's a balance point to be struck here and I think traditional media regulatory responses have tried to hit that balance point or meet those competing concerns by saying, if you've got material that may have value, is not unlawful in and of itself, but could be impactful for the young and the vulnerable, then you put age restrictions or warning on that sort of material."

The 'grey space'

Shanks said the Boulder footage inhabited something of a grey area between the March 15 video and the George Floyd one. It didn't have clear journalistic value in uncovering an injustice, but also didn't necessarily have the capacity to inflict the same level of harm as the March 15 livestream.

Kate Hannah, an expert on extremism at the University of Auckland, told Newsroom she also felt the Boulder footage was in a grey area.

"There is a grey space that this seems to be in where there is, to my mind, very limited journalistic or historical or contextual value in it remaining on the platform. The platforms have put in place some restrictions, but at the same time, the measures we have taken in New Zealand - restricting through censorship the testimony and the livestream of the shooter, but also societally deciding that we weren't going to discuss the perpetrator - those haven't necessarily been things that are at play in this circumstance in the United States," she said.

Hannah said the Covid-19 pandemic and the corresponding gap in high-profile mass shootings could increase the impact of the Boulder footage. She also said that some bad actors might seek to take advantage of laxer regulation of this "grey space".

"What we're also seeing is much more sophisticated ability by those who wish to share and spread harmful or hateful extremism to play in the grey space and not meet those criteria that are clearly set for what needs to be taken down. They will play more in that grey space with intent, for that very reason that it means they are more likely to leave that material up to be shared, to go viral and cause harm."

'Defining content of our times'

Sanjana Hattotuwa, a special advisor at ICT4Peace who studies the interaction between extremism and the internet at the University of Otago, agreed extremists might seek to weaponise borderline loopholes.

"You call it grey, some academics call it borderline content, but effectively it is dog-whistling. That is going to be the defining content of our times," he said.

"In 2019, Mark Zuckerberg put up a post around borderline content, which essentially is content that goes right to the edge of what is permissible and is thus not subject to immediate flagging by whatever mechanisms that are engaged in the respective platforms. But of course it does what it is intended to do because it effectively communicates what needs to be communicated by the producer to specific audiences who understand it as such, even though it doesn't contravene the platform's guidelines."

Content in this grey space can also be made more harmful through edits or additional commentary. Hattotuwa said the livestream would have been saved and redistributed numerous times even if it had been taken down early and could be edited to be more graphic.

Shanks said he began viewing the livestream of the shooting's aftermath about 45 minutes after the streamer began to film. At that stage, it didn't have age restrictions and a comment stream of anti-Semitic, racist, extremist and violent invective was still rolling past.

Age restrictions were added shortly after, but the comments were still up until Shanks himself reached out to YouTube and Google about the issue.

"The comments on the livestream were scrolling by at a phenomenal rate. It was very hard to track and read them. But I saw enough to see anti-Semitism, racism, extremist, false flag theories. It was incredibly disturbing," Shanks said.

"From a consumer's point of view, from a viewer's point of view, they are watching a real-time, anxiety-inducing stream while also seeing this constant flow of invective and hatred. That, to me, is another quite concerning dimension to this particular incident that we'll be giving some thought to."

Hattotuwa agreed the comments had potential for additional harm.

"The video itself is fairly banal. Typically American in that sense - tasteless and reckless. But the comments are what YouTube historically has been fairly bad at."

Livestreams and YouTube a challenge

And YouTube, in particular, often escapes scrutiny from regulators, he said. In the United States last week, the heads of big tech companies were called to testify before Congress, but the CEO of YouTube, Susan Wojcicki, was absent.

"In light of Boulder, you would have thought Congress would have called Sue to testify with other CEOs. But for an unfathomable reason, YouTube always escapes legislative scrutiny in the US."

New Zealand has been more willing to take aim at YouTube. After the Royal Commission of Inquiry into the March 15 terror attack concluded the terrorist was radicalised on YouTube, Jacinda Ardern said she would take up the issue with Wojcicki personally.

To some extent, however, the features on social media platforms are structurally challenging to regulate for online harm. Livestreams in particular pose difficult questions for policymakers and platforms, as a stream may be innocuous one moment and unlawful the next.

"This material is being put up in real time and I was watching it going, well, what is going to happen next? At any moment, something truly horrible could happen and no one would have any restriction or control over that and there was 30,000-odd viewers all watching that as it unraveled," Shanks said.

Hattotuwa added: "When these companies put these features out, they haven't thought through what is the consequence of the misappropriation and the abuse and the unintended consequences of these things."

"New Zealand made good out of a horrible event [on March 15] but the record is blemished on these platforms for far longer, around suicide, self-harm, gore, violence and even homicide livestreamed on these platforms and on which nothing or little was done."

How to tackle the grey space

The issue poses what Hattotuwa calls an "enduring question": "How do you deal with this kind of content with a speed and effectiveness of response that minimises risk and harm?"

He said social media platforms and governments alike were just beginning to think this through. The guidelines that YouTube relied on to determine whether the Boulder livestream could remain online, for example, were only launched in December.

"They had rudimentary stuff before. You can see that these platforms are only thinking about these things only now. It's ridiculous."

In order to act faster, Hattotuwa said platforms had to rely on AI and machine learning, but these were imperfect tools.

"Both, however, don't offer an accurate model, particularly in the Global South contexts and in contexts of protracted violence, to distinguish and separate out what is clearly glorifying violence and what is actually bearing witness to. That is a fine line and these companies have sometimes got it very wrong," he said.

"Then that requires human oversight which AI can flag, and then it's entirely dependent on each platform's investments in human moderation - around how many people they have and what context expertise those moderators have as well. For example, you can't necessarily ascertain the merits of something that's happened in Myanmar. You need somebody from the area and versed in the politics and the history and the communities."

Hannah agreed one-size-fits-all approaches to content moderation weren't effective and more human oversight - particularly from affected communities - was crucial.

"Platforms, as all companies do, like to have a set of practices that work for everything. In actual fact, each individual case needs to be taken seriously on the circumstances of the case," she said.

"The space that we need to start moving into is having a far wider area where conversations are had about content. Rather than having a very clear black-and-white line - this meets this mark and therefore it gets taken down or it doesn't get taken down - having people-led places for conversations to happen inside platforms and organisations like [the Global Internet Forum to Counter Terrorism] and the Christchurch Call, etcetera."

This issue would only become more relevant as time passes, Hannah said. The well-meaning desire to record shocking - and sometimes unjust - events dates back to the Arab Spring and can be traced through the Black Lives Matter movement to today. In combination with intentional bad faith actions by extremists, the grey space is something we'll be grappling with well into the future.

"It's definitely going to become more common," Hannah said.

Hattotuwa said: "David Shanks and his comparable officers in the Commonwealth and globally are really trying to figure this out."

"There's no guarantee that they're going to figure it out at the pace at which these are being abused. These are problems that are metastasising at pace. It is very likely that you're going to see borderline content increasing at pace and the companies and regulators struggling to keep pace with it."
