The Christchurch Call: Social Media's Role In Fomenting Extremism

Published in Online Spin, April 26th, 2019

This week, French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern announced that they will co-host a meeting in Paris in May, where they expect to gather other world leaders and top executives from companies like Facebook, Twitter, Google and YouTube.

The meeting comes in response to the horrific terror attack that killed 50 people last month in Christchurch. Its ambition is for the participants to sign onto the Christchurch Call, a pledge to prevent violent extremism and terrorism online.

And while the details of the pledge aren’t yet determined—that's what the meeting is for—its explicit target is the sharing of violent content. In her announcement Ardern reaffirmed the right to freedom of expression, but said, “I don't think anyone would argue that the terrorist had the right to livestream the murder of 50 people. That is what this call is very specifically focused on.”

First: good luck and godspeed. As you may recall from my column immediately following the attack:

“[I]f you are a social media platform, and—as happened yesterday—a mass shooter livestreams his horrific act, get the footage off your site.

Get it off. Get it off. Get it off. Use your algorithms and your human moderators and your community flags and whatever you have to use to get it off.”

So, yeah, I’m on board with what Ardern and Macron are trying to do. I also recognize the magnitude of the challenge, given the scale at which these platforms operate. We don’t yet have technology that can automatically distinguish between breast-feeding and pedophilia. We don’t yet have technology that can recognize sarcasm, or that can grasp the difference between a person of color using the N-word and a neo-Nazi using it. And we don’t yet have technology that can automatically identify a live shooting or a terror attack.

Difficult, but not insurmountable. We could embed ID info into every video that gets uploaded, so taking down one version of it takes down all versions. We could build delays into livestreams until the user’s account is verified in some way. We could require Facebook to spend that $3-$5 billion it has set aside to pay in fines on improving its ability to detect this stuff in real time.
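To make the first idea concrete, here is a toy sketch of how ID-based takedowns could work: fingerprint every upload, and check new uploads against a blocklist of fingerprints. This is purely illustrative—the `UploadFilter` class and its methods are my own invention, not any platform's actual system—and it uses a cryptographic hash, which only catches byte-identical copies; real content-matching systems use perceptual hashes that survive re-encoding, cropping, and re-recording.

```python
import hashlib


class UploadFilter:
    """Toy fingerprint blocklist: one takedown blocks every identical copy.

    Hypothetical sketch; real platforms use perceptual hashing, not SHA-256.
    """

    def __init__(self):
        self.blocked = set()

    def fingerprint(self, data: bytes) -> str:
        # Cryptographic hash as a stand-in for a robust video fingerprint.
        return hashlib.sha256(data).hexdigest()

    def take_down(self, data: bytes) -> None:
        # One moderator action adds the fingerprint to the blocklist...
        self.blocked.add(self.fingerprint(data))

    def allow_upload(self, data: bytes) -> bool:
        # ...and every later byte-identical upload is rejected.
        return self.fingerprint(data) not in self.blocked


f = UploadFilter()
clip = b"...video bytes..."
f.take_down(clip)
f.allow_upload(clip)  # the same clip is now refused
```

The hard part, of course, is the fingerprint itself: a hash that changes when a video is re-encoded defeats the whole scheme, which is why this remains a research and engineering problem rather than a solved one.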

I’ll leave the details to the meeting, and to the hopefully many more meetings to follow. In the meantime, let’s discuss what doesn’t appear to be on the table:

  • The way filter bubbles only show us content that reinforces our existing beliefs, creating a self-reinforcing cycle that lends itself to strong views in every direction.

  • The way those filter bubbles create a warm, wet breeding ground for the bacteria of extremism, long before there’s enough of it to erupt into a public event.

  • The way in which social media platforms get rewarded for provoking shock and outrage, meaning their incentives are aligned against what Ardern and Macron are trying to accomplish.

Ardern said the Paris meeting is about how online platforms were used “during” the terror attack. The issues I just listed were in play well before it, and created environmental conditions ideal for extremist ideology to take root.

For the May meeting, I don’t want these issues to be on the table. I want concrete outcomes, and the way we get those is by being specific and focused, by picking the thing we’re least likely to disagree on (“people shouldn’t get to live-stream murder”) and working out how we’re going to deal with it.

But for our ability to engage in any kind of civil discourse, for our chance at continuing to have functioning democracies, for the sanity and wellbeing of our world, we will have to deal with these issues, and soon.

Kaila Colbin