It was the spring of 1960, and a group of military officers had just seized control of the government and the national media, imposing an information blackout to suppress the coordination of any threats to their coup. But inconveniently for the conspirators, a highly anticipated soccer game between Turkey and Scotland was scheduled to take place in the capital two weeks after their takeover. Matches like this were broadcast live on national radio, with an announcer calling the game, play by play. People all across Turkey would huddle around their sets, cheering on the national team.

Canceling the match was too risky for the junta; doing so might incite a protest. But what if the announcer said something political on live radio? A single remark could tip the country into chaos. So the officers came up with the obvious solution: They kept several guns trained on the announcer for the entire 2 hours and 45 minutes of the live broadcast.

It was still a risk, but a managed one. After all, there was only one announcer to threaten: a single bottleneck to control of the airwaves. (italics added)

This is from Zeynep Tufekci, “IT’S THE (DEMOCRACY-POISONING) GOLDEN AGE OF FREE SPEECH,” Wired, January 16, 2018.

This quote from early in the piece is about how relatively easy it was, before the Internet, for an oppressive government to engage in censorship. But now, she explains, it’s much harder.

That’s good, right? Maybe to you, and certainly to me, but not to Professor Tufekci.

Why? She explains in the rest of the article, so if you’re really interested, read it.

But what follows are some excerpts from her case for censorship.

Here’s the first part:

But today that playbook is all but obsolete. Whose throat do you squeeze when anyone can set up a Twitter account in seconds, and when almost any event is recorded by smartphone-­wielding mem­­bers of the public? When protests broke out in Ferguson, Missouri, in August 2014, a single livestreamer named Mustafa Hussein reportedly garnered an audience comparable in size to CNN’s for a short while. If a Bosnian Croat war criminal drinks poison in a courtroom, all of Twitter knows about it in minutes.

Again, that seems good to me, potential downsides notwithstanding. But remember that this is part of her case for censorship. Myself, I’m not big on squeezing throats.

She goes on to argue, correctly, that there’s much fake news out there and that big outlets like Facebook and Google are gatekeepers.

Moreover, she notes, again correctly, that it’s hard to get people to hear your viewpoint when it’s contrary to the one they’re seeing or reading and you have no good way of reaching them:

And the famous American saying that “the best cure for bad speech is more speech”–a paraphrase of Supreme Court justice Louis Brandeis–loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

But is that really much different from the fact that before the Internet, you didn’t have ways to enter people’s living rooms where conversations that contradicted what you knew were taking place? The Internet has definitely expanded the living room that you’re not invited to. But it has also made it much easier to know that there is such a living room. So maybe what you need to do is figure out how to get invited into that living room. Could your success at that have anything to do with how you approach the people in the living room? My gut feeling is that the answer is yes.

Also, I think this problem shouldn’t be overstated. I have acquired friends on Facebook whom I know of, or who know of me, for reasons unrelated to my political views. Here’s what I notice when I post on FB one of my EconLog posts or something else that has political or economic content: crickets. No loves, no likes, no sads, no angries, nada. But it’s not that they’re blocking me. When I post nice pieces about cats, then boom, lots of loves and likes. In short, they’re choosing not to respond and, probably, not even to read. Would I like that not to be so? Of course. But my point is that these people who don’t share my views have clear-cut ways of seeing my views.

Here’s her ending paragraph:

But we don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs–and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-­channeling, harassment, data collection, and algorithmic decision­making. We just need to start the discussion. Now.

So it’s not Facebook and Google, responding to public outcry, that should fix this situation. Whom does that leave? It’s “deeply political.” In other words, Tufekci wants some government intervention to fix it. Which means, almost certainly, censorship.

Notice a little irony in her phone company example: political decisions “determined how much a phone company can surveil its customers.” Political decisions also determined how much the federal government can surveil its residents. How is that working out?

Roman emperor Caligula once said, “Would that the Roman people had but one neck!” The idea was presumably that then he could easily control it, break it, threaten it, or strangle it. Tufekci leads off with a discussion of how easy it was to censor when there was one bottleneck. I wish I were 100% confident that she doesn’t share Caligula’s wish.

HT2 to Brian Doherty, whose analysis slightly overlaps mine. He makes a lot of other good points.