It has been commonplace lately to complain about censorship conducted by Twitter, Facebook, and YouTube.
Here’s the problem: they can’t censor. What they can do, and do do, is prevent users from posting things.
Do they have an agenda? Sure they do. And it often stinks.
But that doesn’t mean that what they’re doing is censorship.
George Washington University law professor Jonathan Turley, whom I respect a lot, writes:
Xi’s coughs came to mind as Twitter and Facebook prevented Americans from being able to read the New York Post’s explosive allegations of influence-peddling by Hunter Biden through their sites.
Notice how Turley misstates the issue to make his point. Twitter and Facebook did not prevent Americans from being able to read the New York Post. The New York Post has a presence on the web and people who are not particularly facile with the web, and that includes me, found it easily. Moreover, what Twitter didn’t take account of is the “Streisand Effect.” My guess is that even more people saw the Post article because of Twitter’s thumb on the scale.
But even if the reference had been to something not on the web so that people would have had huge difficulty in finding it, that doesn’t mean that Twitter censored.
I gave a talk at a Hillsdale College event in Omaha earlier this month and on the panel with me were Dr. Jay Bhattacharya and Dr. James Todaro. Both were excellent. Dr. Todaro was a last-minute replacement and a fine replacement he was. He laid out how he had smelled a rat in the Lancet study of hydroxychloroquine. That was the study that purported to show that hydroxychloroquine actually was unsafe (as opposed to ineffective) for people with COVID-19. He investigated and, lo and behold, found a rat. Lancet retracted the study.
Dr. Todaro noted that after Elon Musk posted Todaro’s findings on Twitter, he had tens of millions of views. Then Twitter took it down.
Dr. Todaro noted other similar incidents, specifically the Bakersfield doctors and the White Coat Summit. Both were on YouTube and YouTube took them down. Dr. Todaro said that this was censorship.
Each speaker was given a minute to respond to the other speakers and I took my minute to take issue with Dr. Todaro’s claim. I granted that what happened sucked and that the social media decision makers who took these things down acted badly. But, I said, they weren’t censoring. Then I said:
Hillsdale College did not invite a Marxist to be on this panel. Does anyone here think that Hillsdale is censoring? No. Hillsdale is using its private property as it wishes. Moreover, if James is saying that he wants the government to step in to deal with this censorship, I can almost guarantee that he’ll like the result even less.
READER COMMENTS
Mark Brady
Oct 20 2020 at 8:23pm
But, David, nowhere in your post do you define the word censorship.
Kevin
Oct 21 2020 at 4:35pm
You got it. This seems like a pointless semantic debate, but according to what I could find…he’s wrong about his definition of censorship.
Loquitur Veritatem
Oct 20 2020 at 9:04pm
David seems to believe that only a government can act as a censor. If that’s his premise, I take issue with it.
From https://www.thefreedictionary.com/censor:
A censor is (n. 2) “any person who controls or suppresses the behaviour of others, usually [but not always] on moral grounds”. Parents do a lot of censoring (or they did when I was growing up).
A censor (v. 2) is therefore anyone who “act[s] as a censor of (behaviour, etc)”.
Twitter, YouTube, Facebook, etc., are most certainly acting as censors, that is, censoring things like the Post’s story. Whether readers can find the story elsewhere is beside the point.
Phil
Oct 21 2020 at 5:39am
I think being able to find the story at the Post matters. Can Twitter “control or suppress people”?
When the left complained about “censorship,” Thomas Sowell wrote: “Usually some school or library officials decide to buy a particular book and then some parents or others object that it is either unsuitable for children or unsuitable in general, for any of a number of reasons. Then the cry of ‘censorship’ goes up, even if the book is still being sold openly all over town.” The government, with its monopoly on force, controls or suppresses.
zeke5123
Oct 20 2020 at 9:09pm
Are you concerned that sufficient network effects can create censorship? Presumably, if Twitter or Facebook had been less heavy-handed, they could have seriously minimized readership.
Agreed that I am far from convinced that government intervention is the solution, but there is a problem here.
GRC
Oct 21 2020 at 9:40am
For this reason, some people think a solution could be to modify the First Amendment and extend its scope to cover social media. If you are bringing government into the game, this is probably one of the least invasive ways to do it.
kingstu
Oct 20 2020 at 10:09pm
Most people consider this censorship even though it is perfectly legal based on private property rights.
The more important issue is: what do we do when 90%+ of Americans get 90%+ of their news from “walled garden” social media sites?
As a Libertarian I will never argue for more government regulations … BUT … I do hope for more balanced treatment of conservative leaning stories.
Biden family political influence peddling has been an open secret for years so I’m not sure why this story is so controversial.
David F
Oct 21 2020 at 12:07am
Ownership indicates control of a bundle of rights with respect to something. If I own Hillsdale College I can exclude Marxists but I can’t exclude you based on your race. I can’t claim ownership of your shoes just because you wore them on my campus (even if it’s in my “terms of service”). I can’t build a factory there, or suspend habeas corpus. I do have property rights but they are subject to constraints and limits, for various reasons.
Proponents of net neutrality believe all data carried over the internet should be treated equally. Why should content not be treated the same? Why should not the same principles apply?
I think the founders would be amazed to discover that the biggest threat to free speech came not from the government but from corporations. I bet they would have crafted the first amendment accordingly if they had been able to foresee the likes of Twitter and Facebook.
IronSig
Oct 21 2020 at 10:39pm
I’m not asking exactly what language you imagine the Framers would have inserted into or changed in the First Amendment. What I’m asking is what general phrases or types of clauses the ’76–’91 generation would have placed in there.
Remember, printing was a hotly contested industry at the time. Several of the Framers made their fortunes by spreading their slanted reporting and contesting accounts from other papers. British government meddling from the time is the reason why “prior restraint” is part of the Free Speech lexicon. How people are influenced about what they believe past their senses is an old concern.
And yet the First Amendment reads as is.
Reconsider your opinion.
Niko Davor
Oct 23 2020 at 4:20pm
Big Tech companies like Google, Twitter, Facebook, Netflix, and others lobbied and advocated for the government to enforce net neutrality regulations against rival ISP companies. They have likely used past net neutrality regulations to stop ISPs from eating into their revenue streams using free market tactics. They are crying deregulation for me but not for thee. I don’t think free market ideologues owe them much sympathy in this.
Jens
Oct 21 2020 at 4:06am
I think that one should group and classify content restrictions along several dimensions. E.g., distinguish
1.) State pre-censorship (short: PC; whatever is to be published has to be approved by the censor agency in advance),
2.) Platform filtering (short: PF; the platform decides according to internal criteria what fits its guidelines and what doesn’t),
and 3.) hate speech laws (short: HL; criminal law governing utterances).
Then one could say, regarding temporal aspects, that PC takes place before something is published, HL happens after something has been published, and PF can be both (especially when automated mechanisms, code-based filters, take effect, things get interesting).
With regard to (content-related) scope, PC and PF are not limited (in the end, almost everything can be affected), while HL is bound by certain criminal law principles (procedural issues: “in dubio pro reo”, “nulla poena sine lege”, and possibly restrictions through fundamental rights). (All legal topics are of course somewhat academic here, because PC regularly does not take place in constitutional states.)
With regard to territoriality, PC and HL usually only work nationally, although there are also states that prosecute crimes outside their territory. PF can be either filtering based on territorial criteria or unrestricted international filtering. In the case of criminal regulations, the separation between the place of the criminal act and the place where the act produces its unlawful result may also play a role. It can be complicated.
The entity that carries out PC and HL is the state. However, in some countries HL is also an offense prosecuted only on application (I don’t know the correct English term; not my mother tongue), i.e. the public prosecutor’s office does not act there of its own accord. And the outcome of criminal proceedings can also have an impact on subsequent civil proceedings. PF is done by private companies, but there is also cooperation with governments, e.g. when copyright infringements are filtered in search results or when Google does not display certain content in China in order to gain market access there. PF can be enforced always or on application. PC usually is comprehensive.
There are differences with regard to the legal process. HL is criminal law, i.e. there must be an initial suspicion. A public prosecutor can theoretically use this to attack unpleasant opinions, but the public prosecutor can also fend off senseless complaints and reports in advance. In constitutional states, evidence must usually be provided that the act was criminal. In the simplest case, a defendant can successfully defend himself through silence. With PC it’s all very different, because the censorship takes place in advance. I don’t know if there is a state in which PC takes place and where there is legal recourse against it. Considering PF, there will theoretically also be (civil law) legal options, but the path should be arduous, and the burden of proof should lie with those who want to bring their content to the platform or who want compensation for unlawful filtering; I can only guess.
Another aspect is consequences. HL can result in punishment. That can go very far, but ultimately it can also be confined to warnings of proceedings or minor penalties, especially if the principle of proportionality is maintained in a constitutional state. PC ultimately prevents punishment, but it is to be expected that in states in which PC takes place regularly, anyone who messes with the censor too often can expect other reprisals. PF initially only means that individual utterances are deleted, filtered or flagged, but in extreme cases it can also lead to lifelong bans for individuals.
There are certainly many errors and misjudgments in my clustering, but I think that one should analyze and compare content constraints along such lines.
IronSig
Oct 21 2020 at 10:53pm
One implication of your analysis is that the platform has a delay and several hobbles before the operators try punishment. However, the operators have to offer a reward (most likely the reach of their network) to entice users in the first place. One or more critical masses have to be reached before the platform operators start creating standards (assuming that the normal course of conversation the users try doesn’t disgust the operators so much that they shut down).
So if the platform is virtual and accounts are opened all the time, then the operators themselves have limited resources to double-check that the user who just started an account isn’t someone who was recently locked out.
Don Boudreaux
Oct 21 2020 at 5:44am
I’m dismayed by some of the comments on David’s post.
Many words have multiple meanings. “Censorship” is one of these words. And this particular word’s most energizing meaning, one well-established by historical use, is the practice of the state to prevent communications of which it disapproves. When people accuse a private company such as Google of censorship, they tap into the long-standing, and wholly justified, liberal hostility to this use of state power. But in doing so, the accusers illegitimately imply that the private company is akin to a state and should, therefore, be treated as such by law.
The irony here is as sad as it is great. By calling the communications choices made by private companies “censorship” and then summoning the state to prevent this so-called “censorship,” the long-standing hostility to state control over the choices that private people make in their communications is subtly tapped into to justify state control over the choices that private people make in their communications. In short, genuine censorship is endorsed in the name of preventing censorship. It’s insidious and dangerous. And those who endorse this use of state power are playing with a flame-thrower that will inevitably be turned on them.
Dylan
Oct 21 2020 at 7:47am
The post conflates multiple issues without making a clear distinction between them.
First up, the definition of the word. Here it is clear that “censor” is regularly used to mean more than just government censorship. Parents, schools, companies, this website…all censor to one degree or another.
Second, do private institutions have the legal right to censor without running afoul of the 1st amendment, again the answer is clear, yep, 100% have that right.
Third question is, should private organizations avail themselves of their right to censor, and should that right always be 100%? It seems pretty clear that organizations will want to practice at least some minimal form of censorship, like a website deleting spam comments or comments that are way off topic. Beyond this, I think, things become a little murkier. I’m generally a believer in a “culture of free speech” and think that it is better for bad ideas to be out in the open and debated rather than hidden away out of sight. However, I am aware that online communities that have arisen with a strong commitment to welcoming all people and ideas seem to quickly turn into a cesspool of the worst ideas that humans have, and that very few people want to hang out there.
This then brings me to the final portion of my comment, should private organizations always have the 100% right to censor? Seems fine when there are a lot of options of places a person can go and express their ideas. Seems less fine if you have a single private entity that controls access to the “town square.” Even a world where we have multiple, competing platforms that form their own echo chamber reinforcing their specific world view with no tolerance for dissent, seems like a bad recipe for democracy.
I don’t see us fully in either of those worlds just yet, but I think having the conversation about how we should think about different scenarios before they play out is valuable.
Vivian Darkbloom
Oct 21 2020 at 2:29pm
Very good comment, Dylan.
Dylan
Oct 21 2020 at 6:35pm
Thank you! I take that as very high praise, since I’m a big fan of the way you typically approach a topic.
AMT
Oct 21 2020 at 11:01am
The problem is that David’s post pretends there is only his definition of the word, and that Turley used it incorrectly. To paraphrase Scott Alexander (slightly edited):
https://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/
https://www.dictionary.com/browse/censor?s=t
Turley quite obviously did not use the meaning that relates solely to government action, and is easily justified in his use of the word censor. The definition of a censor includes “any person who supervises the manners or morality of others.”
Michael
Oct 21 2020 at 6:36am
I agree completely with this. But I just wanted to point out that there is something else they do beyond just not allowing certain things to be posted. Via their algorithms, they also have some control over what gets amplified and thus seen. I don’t know where that fits into the whole debate beyond the obvious: it, too, is not censorship, but it does affect what type of content people see on the platform.
Phil H
Oct 21 2020 at 8:23am
This is a great post. I see the big social media platforms as moving in very much the same direction as the mainstream media moved about 100 years ago, towards a set of standards for credibility that will prove sustainable in the long term.
In the meantime, here’s an example of actual censorship from the British government: schools are no longer allowed to use materials published by anti-capitalist groups: https://news.cgtn.com/news/2020-09-30/Why-has-UK-gov-t-moved-to-ban-anti-capitalist-materials-in-schools-UcM9cyqcA8/index.html
robc
Oct 21 2020 at 9:09am
As schools have to make some choices about materials used, the easy way to avoid that censorship issue is separation of school and state.
A private school would be like Google or Twitter, and not able to censor (as defined by Henderson). Some schools would probably avoid materials published by anti-capitalist groups. Others might avoid materials published by pro-capitalist groups.
Mark Z
Oct 21 2020 at 8:46pm
Good point, and there’s no settling this issue with state-run schools that inevitably deal with politically related content. There’s no intuitive definition of neutrality. Presumably, Britain should ban racist materials from schools? Well, if someone thinks racism and socialism are both bad and harmful, what’s the argument against banning socialist content that isn’t also an argument against banning racist content? Maybe that one is worse than the other? But that depends on the specifics and doesn’t clarify how bad something has to be to warrant being banned.
Mark Z
Oct 21 2020 at 8:21pm
Those standards are already proving to be pretty unsustainable. It’s also worth considering whether one should applaud constraints on discourse simply because they line up with one’s views.
RPLong
Oct 21 2020 at 9:10am
I guess I’m caught in the middle here. I’m sympathetic to Don Boudreaux’s point that we shouldn’t allow this kind of bad behavior to convince us that the government needs to step in. I don’t know, but I can easily imagine that this is one reason why David Henderson went out of his way to stipulate that this is not censorship; and if so, I agree with that impulse.
At the same time, this very definitely does align with at least one valid definition of the word “censorship,” and more importantly, I think this kind of closure is antithetical to the liberal tradition. So, I think it ought to be condemned as such, completely and unapologetically.
But maybe that’s because I often try to draw a clear line between my own personal ethics (speech and debate should be mostly free and unfettered) and my beliefs about what governments should have the power to do (governments shouldn’t compel private entities to publish or not publish a thing).
Jon Murphy
Oct 21 2020 at 9:26am
RPLong-
There doesn’t have to be a conflict here. One can state that it is well within Facebook and Twitter’s rights to behave as they did and still find the behavior blameworthy. But just because it is blameworthy does not imply government should get involved.
There are three claims being made:
First: Facebook et al have the right to forbid views on their platform.
Second: Using their right to forbid certain views may be blameworthy.
Third: The government should not force Facebook et al to allow certain views they find distasteful.
robc
Oct 21 2020 at 9:59am
I agree with all three of those points AND also think there is no issue with calling it censorship. It may be perfectly legal (but blameworthy) censorship, but it is censorship nonetheless.
David Henderson
Oct 21 2020 at 9:37am
I don’t think you’re caught in the middle.
You and I agree. As I wrote, the decision makers acted badly. And they should be allowed to do so.
Notice that as a bonus, this is being talked about and at least in this case the Streisand Effect is coming into play.
Knut P. Heen
Oct 21 2020 at 11:40am
Thinking as a financial economist here. Twitter and Facebook should act in the interest of their shareholders. This may imply giving up some customers to keep others. All businesses do this when they decide their product line.
On the other hand, given the behavior of Twitter and Facebook recently, there seems to be a market for social media that caters to the groups that Twitter and Facebook discriminate against.
I can paint LIBERTY on my house. My neighbor can neither paint MARX nor LIBERTY on my house. It is my house.
KevinDC
Oct 21 2020 at 11:59am
There seems to be a debate going on in the comments about the definition of words – David is using a narrow definition of censorship, and according to that definition private entities cannot censor. Others are protesting that according to a wider definition of censorship, private entities can censor. So to me, the interesting discussion isn’t whose behavior matches censorship by what definition – I’m more interested in which definition makes more sense.
Ultimately, I think the strict definition of censorship makes more sense. I think the broader you define a term, and the more different situations you can make it apply to, the less useful a word is. After all, the whole point of words is to help us distinguish one thing from another. If we wanted to, we could define “purple” to mean the entire spectrum of visible light, but then purple would be useless as a word because it doesn’t help us differentiate things.
Many of the people pushing back against David are employing a definition of censorship that’s so broad it doesn’t really seem useful for differentiation anymore. Loquitur Veritatem employs a definition that includes parenting as censorship, for example. AMT cites a definition that says anyone is a censor who “supervises the manners or morality of others” – which is so vague that pretty much everyone can be called a censor under this definition at some point or other. (Ever tried to calm a friend down when they were losing their temper? Congrats, you’ve engaged in censorship because you were supervising their manners!) My favorite movie theater has a rule that people aren’t allowed to talk during movies. I guess you could call that “censorship” by these definitions. But if your definition commits you to saying that “if you want to watch a movie in our theater you need to stay silent during the film” and “if you express views which the state doesn’t agree with, we will lock you up in jail” should be described using the exact same word, then I think your definition should be rejected on that basis. It’s not a virtue to describe very unlike phenomena the same way.
Since one person has already been kind enough to cite Scott Alexander, I’ll do the same and mention his essay Against Lie Inflation as a generalized example of what I’m trying to say. He argues that we shouldn’t be using ever-expanding definitions of certain words with highly negative associations, because in doing so, we undercut the value of those words. His main examples are people who would define lying to include a person who makes a genuinely believed but false statement because they’re overly optimistic or caught up in cognitive biases, and those who would expand the definition of “abuse” to mean any behavior that makes your partner feel sad or unhappy.
He makes a similar point about lying. I feel like this is what’s being done with the word “censorship.” By expanding its definition to include behavior that virtually everyone engages in, you’re just making the word less useful. When parents supervising the manners of their children qualifies for the label “censorship,” nobody is going to care about censorship anymore.
AMT
Oct 21 2020 at 3:57pm
TL;DR: If people just used a different word, or used the term “private party censorship,” that would be optimal because things would be clearer, but unfortunately that is not the situation, and you have to figure it out through context.
Unfortunately, the English language uses the same word for vastly different meanings, and you just have to figure out from context which meaning is used. E.g.:
https://www.dictionary.com/browse/murder?s=t
I think there is a strong argument that we should just use a different word for a flock of crows, but as the language currently exists, I cannot conclude it is incorrect to use the word in that manner.
While I agree it is best to keep our definitions of words consistent so people are not talking past each other, I have to disagree with this point because it implies that nobody could even tell which definition of the word is being used. Yes, ideally we might use separate words, or just specify whether we are talking about “government censorship” versus “private party censorship,” but people will certainly still care whether the government is censoring people, and will react differently to it than to private party censorship. The downside of using broader terms is that people need a bit more information to understand what kind of “censorship” we are talking about, because the term itself is not perfectly clear.
KevinDC
Oct 21 2020 at 8:47pm
Hey AMT –
Thanks for the reply. From what I can tell, you and I don’t really seem to disagree with any factual matters of substance – we simply disagree about how language ought to be used in this case. For example, when you say “as the language currently exists, I cannot conclude it is incorrect to use the word [murder] in that manner,” I don’t really disagree. I never claimed that one particular definition of any word or concept was “correct” (I’m not a semantic externalist!), I was instead talking about what kinds of definitions for words were more useful.
Some people use the word “censorship” very broadly, while others use it very narrowly. You and I agree that much is true. Similarly, some people use words like “liar” and “abuser” narrowly or broadly. In all of those cases, I argued that the narrower definition of the word is more useful, and when the definition is being disputed, we ought to insist on the narrow definition on that basis. You point out (correctly) that many vague and broad uses of that term exist, and many people use it in the vaguer and more broad way. But I already knew that – otherwise it wouldn’t have made much sense for me to argue that we should insist on sticking with the narrower definitions. My argument wasn’t about how the word is being used – I was arguing for how the word should be used. I don’t see anything in your reply that gives me any reason to reconsider anything I said along those lines.
AMT
Oct 22 2020 at 3:36pm
Regarding how society should use words, of course clearer, more concise communication is better.
I agree you never said it was incorrect, just suboptimal to use a broader definition. My point about word usage not being incorrect is a comment on David’s perspective, since he incorrectly claims “private firms cannot censor,” and that Turley “misstates” the issue.
Mark Z
Oct 21 2020 at 8:38pm
I think the point about ‘lie inflation’ only applies to expanding the definitions of words with intrinsic negative connotation (e.g., no one is going to admit that something is racist, but it’s an ok form of racism; there are innate moral implications to meeting the definition of the word, in most people’s minds). I don’t think this is the case for censorship. A television station censoring swear words is morally neutral, imo. But perhaps to others there are strong negative moral connotations associated with the word censorship, a sense that all censorship is equally bad for the same reason (failure to recognize the moral distinction between private and state censorship, even if one thinks both are bad), and that’s why defining censorship as equal to state censorship is important to many commenters.
Jon Murphy
Oct 21 2020 at 12:14pm
To the question of definition, I do not think (despite the objections of many) that David Henderson is being handwavy with the definition of censor here. It seems to me that he is aligned quite well with the definition and common understanding of the term.
According to Merriam-Webster, a censor is “a person who supervises conduct and morals: such as a: an official who examines materials (such as publications or movies) for objectionable matter b: an official (as in time of war) who reads communications (such as letters) and deletes material considered sensitive or harmful.”
Furthermore, the verb censor means “to examine in order to suppress or delete anything considered objectionable.”
The second sense of “suppress” is “to keep from public knowledge.”
So, we have an official person who is trying to keep something from public view.
As David Henderson notes, that is not what is going on here. Facebook, Twitter, etc. are not trying to keep the NYP story from public view. They cannot. They are just preventing it from being shown on their sites. Anyone can wander over to the NYP site and see the article. Anyone can email it, print it, cross-publish it, etc. There is no suppression going on.
Now, I will say that some people use “censor” in such a way as to describe the behavior of Facebook et al. I suspect this is done in the same way people use the word “injustice” for many things that are not injustices. Or “oppression” for many things that are not oppression. Or fascism, socialism, etc for many things that are not fascist, socialist, etc. These words carry strong connotations and invoke strong passions. But their uses are incorrect. One should not treat such rhetoric as proper uses of the words. Therefore, I reject the arguments put forth here that David is ignoring the colloquial use of the word.
J Mann
Oct 22 2020 at 10:07am
Thanks – I appreciate where you’re coming from, but in my opinion, that’s a pretty aggressive reading of that dictionary definition.
(i) The noun refers to officials only in the “such as” section, which doesn’t necessarily mean it’s exhaustive. If Facebook hires a person to supervise conduct and morals, is that person a censor? (They fit the larger definition of (1), but not (1)(a) or (1)(b)).
(ii) On its face, the verb doesn’t incorporate the noun at all – you are reading the verb so that the act of “censoring” can only be performed by a “censor,” but that’s not necessarily the case. Certainly we agree that the government can censor, even though it would be a strained reading to conclude the government is “an official who examines materials,” etc.
Finally, there’s the prescriptive vs. descriptive problem of language. If censorship is commonly used to describe things like, e.g., a parent going through a kid’s draft letter to Grandma and ordering that objectionable materials be removed, then that’s what it’s understood to mean, at least by some users of the term.
Jon Murphy
Oct 22 2020 at 12:10pm
Good points. Let me respond:
To point I: I meant a broad definition of “official.” Facebook has officials. Rather, my objection rests on “suppress.” Facebook officials cannot suppress information, since they cannot keep it from public view.
To Point II: I say the verb and noun are indeed related. The verb defines the noun. A censor is someone who censors. A butler is someone who buttles. A firefighter is someone who fights fires. To call someone a censor who does not censor makes no sense.
To your final point: this one is trickier, since the affair 1) involves a child and 2) is between entirely private parties. I would argue it is not censorship. The letter is an entirely private affair; it is not meant for public consumption. I’ll agree that people may use that example to mean censorship, but it is not (see my point about rhetoric).
J Mann
Oct 27 2020 at 11:17am
Thanks!
Peter Gerdes
Oct 21 2020 at 1:30pm
Ok, sure they can’t censor. We’ve defined that to mean the use of state coercion to punish speech or otherwise suppress it.
What they can do, when they exert substantial practical control over the means by which citizens receive information and express themselves, is to practically suppress the expression of certain viewpoints in ways that (to a lesser extent) mirror the effects and harms of censorship.
Now you may disagree with that claim. Maybe the reason you think that censorship is wrong depends uniquely on the role of government coercion. However, many people believe many harms of censorship don’t depend on the unique role of the government.
It seems to me that charity demands you interpret their argument as making the claim that these private actions have censorship-like harms, and not dismiss it out of hand by standing on a definition.
Peter Gerdes
Oct 21 2020 at 1:33pm
I mean, I ultimately think that in this case Turley is blowing this out of proportion. However, I do think there are also important societal goods we lose when people stop believing social media gatekeepers are at least somewhat neutral with respect to matters of controversy in our society.
E. M. Crenshaw
Oct 21 2020 at 3:07pm
This is why Facebook, Twitter and a few others should be converted into public utilities. Libertarian logic doesn’t apply here any more than it applies to money, electricity, or the gauge of rail lines. Once something has become (in everything but name) a “common,” someone has to provide some rules, period.
robc
Oct 22 2020 at 8:46am
Private money worked just fine.
Non-utility electric companies worked just fine.
Varying rail gauges work just fine. Standards evolve and tend to, well, standardize over time.
Mark Brophy
Oct 21 2020 at 5:30pm
Google, Twitter and Facebook were given immunity from libel lawsuits based on the idea that they’re neutral common carriers. Since they’re not neutral, Congress should withdraw their immunity from libel lawsuits.
David Seltzer
Oct 21 2020 at 6:27pm
Again very robust, well argued by the commenters. From an econ POV, incentives abound for a platform that does not restrict (I use that term on purpose) comments, opinions, or even contemptible discourse. Oh wait. There is one. It’s called PARLER, with 2.8 million or so followers. In today’s capital-market environment of zero rates, innovators and challengers to the FANGs are raising funding from primary and secondary sources. I suspect we’ll see a new FB come to market at about 38 bucks a share, like FB did in May 2012. In the 16 years since its 2004 founding, it has garnered 2 billion users. What an incentive. Of course, if Zuckerberg captures some regulators, it may take longer.
Anders
Oct 22 2020 at 6:36am
The recent hype around the New York Post article illustrates the basic point here perfectly: by trying to block it, Twitter and Facebook gave it much more attention than it otherwise might have received, regardless of how well founded it was.
BUT I submit that there is something much more iniquitous going on here through the algorithms that they use.
Take this example: I watched ONE video of Bjorn Lomborg, an economist who was slammed as a climate-change denier although he did little more than take the IPCC report as given and work through the analysis (concluding that some pet technologies like wind might not be the best option we have), and ONE video of The Five on Fox News, because I wanted to see another take on the Trump-Biden debate.
The next day, about a third of the recommendations I received came not only from Fox News, but outlets like Rebel Media, the Heartland Institute, and someone called Louder with Crowder. And these were not only on Trump and climate change skepticism (which does not even apply to Lomborg), but also on what I would consider immigration alarmism and stories that compared Portland with Somalia.
It seems like people are systematically being pushed in extreme directions. Had my views been just a little bit more partisan, it is easy to imagine how I would end up in a world where I find little reasonable opposition to my views.
THIS, I submit, is indeed an area where even liberals should consider if some kind of meaningful constraints make sense to make sure we are exposed to more diverse viewpoints. The thought irks me; but the current trends irk me more.
Vivian Darkbloom
Oct 22 2020 at 7:56am
Yes, there is no doubt that the algorithms are giving readers and viewers what they think those customers want. Most (80 percent?) likely willingly succumb to this and the result is to reinforce existing biases and partisanship, to be sure. But, I wonder, is this worse than before the internet days? On the one hand, I think the then major sources of news, such as television, radio and newspapers did try to present more opposing views than they typically do today and readers were “involuntarily” or at least automatically exposed to those opposing views. But, today, you have the option to ignore those recommendations and more easily seek out more challenging, opposing views, if that is what you want.
Perhaps some enterprising and publicly-minded entrepreneur will employ algorithms which expose subscribers automatically to things that challenge their normal patterns of consumption or from which they can choose different algorithms—partisan, challenging, etc. This might work something like the “shuffle” option that usually is a feature of your music library. But, would the majority of people really want that and would those things be commercially feasible?
Barring that, I think it is relatively easy to confuse those algorithms: simply choose to visit a variety of sites and sources with different ideological viewpoints yourself.
J Mann
Oct 22 2020 at 9:58am
I find this kind of semantic debate tiresome. It’s like arguing about whether anti-Semites are racist, given that Jews are not commonly understood as a “race.”
In the case of anti-Semitism, at least we have a word. Can we have a word for when private entities prevent people under their control from expressing specific viewpoints, so we can discuss whether it’s offensive?
For what it’s worth, I think that word is currently “censor.” That’s the word I’d use if you asked me to describe an employer who prevents employees from campaigning for Joe Biden on company property or an email provider who prevents users from advocating for Trump.
Jon Murphy
Oct 22 2020 at 12:21pm
Semantics can be tiring, but I argue it is important in this case. People are calling for the government to step in. Thus, we need an understanding of what censorship is. If government is to pass some legislation, then the definition needs to be precise and accurate. But, as Don points out above, there are all sorts of issues with that.
Jon Murphy
Oct 22 2020 at 12:59pm
Let me expand upon what I mean in my earlier comment.
Let’s take a broad, and what some are calling “commonly understood,” definition of censorship that includes the suppression of any information. If we are asking government, in the name of free speech, to prevent such censorship by powerful entities, then we play a dangerous game. Such a definition of censorship is too broad.
-Let’s say I am driving in my car and my crazy Uncle Rex starts spouting his latest 9/11 conspiracy theory in the seat next to me. I tell him to shut up or I’ll kick him out of the car. Is this not censorship? I am suppressing his views. I am a very powerful entity in that situation, even threatening him. Should the government step in and compel me to let him keep talking?
-When my brother forbade me from interacting with his kids because “they should not be infected with your racist, fascist, misanthropist capitalist lies” does that not constitute censorship? He has complete authority in this situation. Should the government step in and compel him to let me preach to his kids, my niece and nephew?
If we are to argue that Facebook (or Google) is a very powerful influencer, then we need to be more precise. Influence and power are subjective.
This issue is not as precise and clear-cut as people think. Once we start digging, things get quite loose and vague. That is just begging for real, actual censorship to come about.
Dylan
Oct 22 2020 at 1:53pm
The second part doesn’t follow from the first. Sure, some people may make First Amendment arguments aimed at private entities, but those are bad arguments. It has nothing to do with how the language is commonly used. As with the example of “murder” someone used above: just because murder in one sense is illegal everywhere doesn’t mean we’re going to suddenly outlaw groups of crows (much as I would have liked that after they demolished an unattended picnic lunch one day).
There’s a nuanced discussion to be had about what free speech looks like in a world where the commons are controlled exclusively by private companies, but that’s not the discussion I’m hearing.
Niko Davor
Oct 23 2020 at 4:12pm
Google, Twitter, and Facebook lobbied and advocated for “net neutrality” regulations on ISPs. Net neutrality advocates argued that ISPs could censor political views they disliked, which was immoral. Now, Google, Twitter, and Facebook are doing the exact type of suppression of political viewpoints that they decried as immoral and called for regulations against. The more plausible thing that ISPs wanted to do is use free market techniques to capture a larger slice of the revenue streams that Google, Twitter, and Facebook currently enjoy, which those latter companies understandably don’t want to share.
The better argument Henderson could make is that private companies have a right to censor. Not to play games with definitions and say that what normal people would call “censorship” shouldn’t be called censorship.
The Hillsdale College analogy is bad. Twitter and Facebook and YouTube claim to be politically neutral platforms available to all. The right to be a featured speaker at Hillsdale College is well understood to be available to a select few at the full discretion of Hillsdale College. It’s not a public, undifferentiated service that claims to be open to all.
I agree that Turley’s single quote is wrong: people aren’t forbidden from seeing the story and allegations of corruption against Joe Biden. Facebook and Twitter strangled the circulation of those stories, but the news stories are still accessible to determined users. Henderson argues that the actions of Facebook and Twitter will backfire politically. I’m sure there will be a reaction, but I find it plausible that this will reduce the impression the story makes on key swing voters who don’t actively follow news and nudge the election in Biden’s favor.
Dick White
Oct 23 2020 at 11:28pm
Many posts, so I apologize if this observation has already been made. Remember that the government has already put a finger on the scales in favor of Twitter et al. by giving them protection from liability. Were these firms exposed to that liability, it would likely cause them to be more cautious in both restricting and promoting content. The present condition (no liability) is likely not censorship, but as you look at it, you just feel something’s not right. Exposure to liability, like all other firms face, would, I believe, lessen the “not right” perception.
Dylan
Oct 24 2020 at 5:34am
Exposure to liability for content that someone else creates would break the internet as we know it. Community forums and comment sections like this one would likely disappear or be radically reimagined without Section 230 protections. Web hosts, cloud storage, webmail, photo sharing sites, and online reviews are just a few of the services that would be at risk if Section 230 were dismantled.
Ironically, the very big tech companies this would be targeted at, would likely become further entrenched, as they have the resources to comply with the new rules, while potential new upstarts would not.