The Difference Between Free Speech And Amplification

Published in Online Spin, June 21st, 2019

In 2012, a man named Nick Hanauer gave a TED talk about inequality. The folks at TED declined to publish the talk online, saying that it was politically partisan and also that the live audience had only given it mediocre reviews. The decision not to publish turned out to be catnip for Hanauer, who engaged a PR firm to promote his “banned” TED talk.

It was a clever move—after all, the surest way to provoke desire in someone is to tell them they can’t have something. Naturally, everyone wanted to see the talk TED supposedly didn’t want you to see, and the video got many thousands more views than it would have if TED had simply left it alone.

I’m sure the PR company (along with Hanauer) knew they were being misleading in their characterization of the incident. TED, as a private media company, is under no obligation to publish any talk, ever, in much the same way the New York Times is under no obligation to publish a letter you write to them.

Hanauer has his right to free speech. He can speechify in his own videos, on his own blog, on a soapbox on the corner. No one can stop him from doing this, including TED. But Hanauer’s right to free speech does not extend to forcing TED to amplify what he has to say.

It is a critical distinction—not because Hanauer’s talk is particularly important, but because we now live in an era where the vast majority of speech happens online. And the vast majority of that vast majority happens on social media platforms that are private media companies in much the same way TED and the New York Times are.

We’ve also moved well beyond innocuous arguments over whether a talk was a little too partisan. We’re now in an era where it is a trivial matter to create hyper-realistic video content that is misleading or downright false—video content that can wreak havoc on the effective functioning of a free and fair democracy.

Recently, when Facebook banned Louis Farrakhan, Alex Jones, and other extremists, a First Amendment lawyer called it a “dangerous proposition,” adding, “The idea that a single company will decide these questions will create a risk that others whose opinions we value more than say Alex Jones will also get silenced.”

But is it silencing? Alex Jones still has the right to speak. He still has his show. He still has his websites. He’s still selling his dietary supplements. He just can’t use that particular distribution channel anymore. He is not silenced; he is simply failing to be amplified by a certain website.

Shortly thereafter, a doctored video of Nancy Pelosi circulated on Facebook and other platforms. It had been slowed down to make her look like she was slurring her words. According to The Guardian, Facebook didn’t remove the video, but said it would “downgrade its visibility in users’ newsfeeds and attach a link to a third-party factchecking site pointing out that the clip is misleading.”

This week, COO Sheryl Sandberg defended the decision, saying, “When something is misinformation, meaning it's false, we don't take it down, because we think free expression demands the only way to fight bad information is with good information.”

Maybe—as long as Facebook is significantly downgrading the visibility of the misinformation at the same time. Bad information—information that is shocking, horrifying, that triggers disgust, that builds on existing biases and preconceptions—is far more likely to go viral than good information.

People are entitled to freedom of expression. But surely sites like Facebook aren’t obligated to amplify it?

Kaila Colbin