Almost everyone I meet is unhappy with the way that Twitter moderates content. But they are unhappy in different ways. Why isn’t there a generally accepted approach to content moderation? What makes it so difficult?
I suspect that many people have an overly optimistic view of how easy it is to moderate content. I get the impression that their view runs roughly as follows:
1. They see lots of cases where Twitter makes the wrong decision.
2. They think that if they owned Twitter, they’d stop making those wrong decisions.
I’d like to argue that this is much more difficult than it looks, with one exception. If you believe there should be no content moderation at all, then content moderation is easy. But what if you agree with both Elon Musk and the former management of Twitter that there should be some content moderation? What then?

In that case, you “draw a line” and don’t allow content so objectionable that it falls over the line. This approach is almost inevitable, as advertisers won’t do business with a company that allows highly objectionable content such as child pornography.
Unfortunately, while line drawing is almost inevitable, the difficulty in doing so makes it almost inevitable that most people will be unhappy with the result. Line drawing creates two problems:
1. The content moderator must decide the degree to which objectionable content will be allowed. Imagine a scale of 0 to 100, where 100 is the most objectionable; a moderator might decide that anything above 80 is banned. One way of thinking about Elon Musk’s recent decisions is that he is trying to raise the cutoff point, say from 75 to 90, compared with the relatively strict moderation of the previous management.
2. The content moderator must determine whether specific content crosses the line, and thus is too objectionable to be allowed. It’s not only a question of whether to ban the 10% worst tweets or the 25% worst tweets; you also have to determine which tweets are above the line and which are below it.
The first decision has to do with tolerance for bad tweets. A progressive friend of mine supports Elon Musk because he’s an old school liberal with a high tolerance for offensive speech. The second decision has to do with various forms of bias. People on the left tend to be more offended by fascism, anti-black racism, and denial of the efficacy of vaccines. People on the right tend to be more offended by communism, anti-white racism (or bigotry if you prefer), and the denial of the science of innate differences between genders.
Elon Musk seems to be more right wing than the previous Twitter management, so he’s less likely to put right wing tweets into the “highly offensive” category. He favors less strict standards and less bias against conservative tweets.
So why do I believe that people underestimate the difficulty of content moderation? Here an analogy might be useful. The Twitter debate reminds me of debates over basic ideas in epistemology. Richard Rorty has argued that it is not possible to draw a clear line between different types of knowledge such as subjective/objective, fact/opinion, or belief/truth.
Many people find Rorty’s view to be counterintuitive. There is a common sense view that it is possible to draw a line between things we believe and things that are actually true. In debates, people will often cite obvious examples that fall on each side of the line, to make this point. But those obvious examples don’t prove the utility of the line itself.
With content moderation, people can easily find examples of tweets that they are confident should be allowed, and they can easily find examples of content that should not be allowed. But when you get close to the line, things get much more difficult. This is partly because offensiveness is a matter of degree, while content decisions are all or nothing. Thus for tweets that are right near the line, decisions will inevitably look arbitrary and unjust. And that’s true even if the world contained no political bias, and people merely differed in their toleration for controversy.
Go back to the hypothetical scale of offensiveness, from 0 to 100. Imagine Elon Musk decides that anything above 90 is too offensive, and thus gets banned. In that case, a tweet with an offensiveness of 90.1 will be banned and a tweet with an offensiveness of 89.9 will be allowed. Most people won’t be able to spot the difference, and thus at least one of the two decisions will seem arbitrary and unfair. “If you banned Joe for saying X, why did you allow Fred to say Y?”
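To make this concrete, here is a minimal sketch in Python, assuming a hypothetical numeric offensiveness score and an invented cutoff (this is not Twitter’s actual system); a hard threshold guarantees that nearly indistinguishable tweets receive opposite treatment:

```python
# Minimal sketch of hard-cutoff moderation on a hypothetical 0-100
# offensiveness score. The score and cutoff are invented for illustration.

BAN_CUTOFF = 90.0  # the "line" drawn by the moderator in this example

def moderate(offensiveness: float) -> str:
    """Return 'banned' if the score crosses the line, otherwise 'allowed'."""
    return "banned" if offensiveness > BAN_CUTOFF else "allowed"

# Two tweets most readers couldn't tell apart get opposite outcomes.
print(moderate(90.1))  # banned
print(moderate(89.9))  # allowed
```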
And that’s assuming everyone holds the exact same political views. Now imagine a world where people also disagree about what is objectionable, and they strongly believe that their political views are correct and the other side is wrong. Now the perception of unfairness in content moderation will look far worse, an order of magnitude worse. It will become a thankless task.
I’ve been blogging for 13 years, and I discovered early on that there is no simple way to moderate comments. Wherever you draw the line, there will be complaints.
The decisions made by big companies such as Twitter usually tend to reflect market forces, at least to some extent. But these companies often have a semi-monopoly position in their market niche (due to network effects), which gives them some ability to override market forces. The next few years will provide a test of how much market power Elon Musk possesses. My own preference is for a relatively high tolerance of objectionable tweets, and as little political bias as possible in content moderation. So I wish him well. On the other hand, I’d encourage Musk to delegate this responsibility to others. While his strategic vision may be correct, he doesn’t seem to possess the judicial temperament that you’d like to see in a content moderator.
READER COMMENTS
Andrew_FL
Dec 6 2022 at 6:12pm
This assumes that Twitter content moderation was strictly acting on content over their arbitrary line. Twitter in fact acted against content well within their lines.
Scott Sumner
Dec 7 2022 at 2:58pm
You misunderstood the post. I specifically discussed TWO problems.
Thomas Lee Hutcheson
Dec 6 2022 at 6:47pm
But the higher the bar for offensiveness, the fewer cases will fall close to the hard-to-draw line. Another approach would be not to ban offensive content at all but to “tax” it by training the distribution algorithm to distribute it less widely, allow fewer re-tweets, etc.
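As a rough sketch of that “tax” (assuming a hypothetical offensiveness score; this is not Twitter’s actual algorithm), reach could be scaled down continuously instead of cut off at a line:

```python
# Sketch of the "tax" approach: continuously down-weight distribution as a
# hypothetical offensiveness score rises, instead of a binary ban at a cutoff.

def distribution_weight(offensiveness: float, half_life: float = 20.0) -> float:
    """Reach multiplier that halves for every `half_life` points of offensiveness."""
    return 0.5 ** (offensiveness / half_life)

# Nearly identical tweets now get nearly identical (small) reach, rather than
# one being banned and the other allowed.
print(round(distribution_weight(89.9), 3))  # ~0.044
print(round(distribution_weight(90.1), 3))  # ~0.044
```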
Jon Murphy
Dec 7 2022 at 9:59am
Does your approach solve the problem of moderating content? It’s not obvious to me that it does. The problem of identification remains.
Thomas Lee Hutcheson
Dec 7 2022 at 12:27pm
It mitigates the arbitrariness of binary choice of punishment for marginal differences in offensiveness.
Jon Murphy
Dec 7 2022 at 3:05pm
No, I don’t think it does. We still have the problem of identification. That’s where the arbitrariness lies.
Jim Glass
Dec 7 2022 at 11:55am
Another approach would be not to ban offensive content at all but to “tax” it by training the distribution algorithm to distribute it less widely
They already do that. It’s called ‘shadow banning’.
Jim Glass
Dec 6 2022 at 8:31pm
Also, most of the content moderation is being done by AI, not humans. There aren’t enough humans to do it. (The appeals go to humans.) So the AI has to be trained to deal with all this. Adds to the challenge.
robc
Dec 6 2022 at 10:48pm
There is a means that seems simpler to me, but it doesn’t eliminate the problem. Instead of offensiveness, only ban illegality, like child porn.
Jim Glass
Dec 7 2022 at 12:32am
… only ban illegality.
The problem with that is that the primary purpose of content moderation is to keep advertisers happy. Social media is overwhelmingly advertiser supported.
On YouTube there are a number of excellent educational channels about military history that are demonetized because advertisers don’t want to be associated with wars. They pay their bills via money-donating subscribers. I really enjoy a channel that relates strange stories and urban legends, then debunks them (or not). The Why Files. It’s totally G-rated, yet reports continual issues with demonetization. See if you can figure out why, I can’t.
Andrew_FL
Dec 7 2022 at 9:13am
That’s not necessarily a problem, as long as alternative platforms not beholden to advertisers exist, whether subscription or supporter based.
I wouldn’t be surprised if you could find some of the channels you’re talking about have content on Nebula, for example.
Josh S
Dec 7 2022 at 12:52am
Your idea that legality has a simpler line is charmingly optimistic. Content moderation decisions take a lot less time than legal ones. Organizations generally want a safe distance from doing anything illegal, in large part because of the relatively wide gray area.
robc
Dec 7 2022 at 8:47am
It’s simpler in that there is a lot of offensive stuff that is clearly legal, so it frees up time to focus on the gray areas of illegality.
Jim Glass
Dec 7 2022 at 2:39pm
Yes, they have subscriber support and other platforms – but there is no substitute for that YouTube ad revenue. And they’d be best off with both, of course.
China Uncensored is a pretty good channel that just got hit for publishing video of the recent protests against Xi in Shanghai. They appealed twice and lost both times; in their video they explain the appeals and the $$ impact, and conclude with some anger that Google is politically censoring them (and other China-watching channels) from reporting the news, to please the CCP. But the history channels about WWII are demonetized, and the many channels covering the Ukraine war daily can’t show any actual violence on YouTube (they do on their other platforms), all for the exact same reason: YouTube guidelines prohibit graphic violence, because advertisers don’t want to be associated with it.
Never assume political censorship when advertiser unhappiness will do.
nobody.really
Dec 7 2022 at 12:51am
If you’re trying to write Anna Karenina, it’s already been done.
Majid Hosseini
Dec 7 2022 at 1:33am
In addition to all the points you mentioned, in the context of a platform the size of Twitter, there’s also the issue of scale. There’s absolutely no way that humans can moderate the entire stream of Tweets getting generated. The best one can do is to use machine learning algorithms, combined with some heuristics, in order to find and filter the majority of “objectionable” tweets, and use human moderation for a small subset. But this is much easier said than done, as you can see from how difficult it is to correctly classify spam in email.
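To illustrate the scale problem (a hedged sketch only; the thresholds and the scoring model below are invented, not a description of Twitter’s actual pipeline), the usual pattern is to let the model handle the confident cases and route only the uncertain middle band to human reviewers:

```python
# Illustrative triage only: auto-handle confident cases, send the uncertain
# middle band to humans. Thresholds and the scoring model are hypothetical.

AUTO_REMOVE = 0.95  # model is near-certain the content violates policy
AUTO_ALLOW = 0.40   # model is near-certain the content is fine

def triage(text: str, score_objectionable) -> str:
    """`score_objectionable` is assumed to be an ML classifier returning a value in [0, 1]."""
    score = score_objectionable(text)
    if score >= AUTO_REMOVE:
        return "remove"
    if score <= AUTO_ALLOW:
        return "allow"
    return "queue_for_human_review"  # only a small subset reaches humans

print(triage("some tweet", lambda t: 0.7))  # queue_for_human_review
```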
Nicholas Decker
Dec 7 2022 at 2:18am
If I may put my two cents in, you should probably moderate a bit more aggressively on your home blog, The Money Illusion. There are a few individuals who are consistently obnoxious about how much of a degenerate you supposedly are, and I struggle to see the value they are contributing. Certainly it alienates me from commenting on your posts.
Henri Hein
Dec 7 2022 at 12:47pm
Same here. I will add that it’s an excellent example of Scott’s point. I don’t have any suggestion for a moderation policy. A change in moderation that would make it better in some sense, with no one complaining about the change, probably doesn’t exist. The recent two-comment limit seems like a good first-order attempt, if it could be enforced.
Kenneth Duda
Dec 7 2022 at 3:47pm
I second the suggestion. The trolls who just insult Scott and/or make nonsense arguments that “prove” Scott is operating in bad faith drive me nuts. I wish Scott would just make them go away. (I’m fine with bona fide disagreement.)
Scott Sumner
Dec 9 2022 at 2:50pm
Without content moderation, my comment section would be 10 times worse.
Nicholas Decker
Dec 10 2022 at 3:33am
Honestly, I’m baffled at the level of dedication they have. I’m not convinced it isn’t just one guy with a hell of a vendetta.
Jo VB
Dec 7 2022 at 4:26am
As you might know, the FIFA soccer World Cup is ongoing right now. The offside rule requires a line. With VAR and camera technology, they now draw the line much more precisely than before, and they routinely disallow goals because a player was a few mm over the line with a piece of their arm. It gave no material advantage at all, but once you have a line, you need to enforce it to the mm. If you tolerate a 10mm margin, the debate will move to 9mm versus 11mm over the margin. It feels totally arbitrary, but it also feels utterly unsolvable. Once you have a precise line, close decisions will feel arbitrary.
Michael Sandifer
Dec 7 2022 at 9:19am
This is a good post, but only scratches the surface of the complexity of content moderation at Twitter. In addition to the challenges of satisfying users and advertisers, Twitter must also follow an FTC consent decree and laws which are beginning to diverge a bit by state in the US, and which can vary widely globally. Promoting Nazism, for example, is illegal in Germany. Hate speech generally is illegal in some countries, or political subdivisions thereof.
Then, there are countries like India, with a government that increasingly seeks to limit the speech of minorities and political opponents via their legal process. When Musk says he’ll follow all laws, does that mean even those that suppress free speech for political reasons?
Finally, to cover just one more point, there are a host of actual and potential conflicts of interest regarding Musk ultimately being in charge of Twitter, and his various other business dealings around the world. SpaceX is a big NASA and Pentagon contractor, but Tesla is set up to sell a lot of cars in China, for example. China bans Twitter, but many in China know how to get around those bans, and China does not like tweets that are negative about its regime.
Musk has given no indication that he’s thought about all of these issues in anything like a comprehensive way.
Alex
Dec 7 2022 at 10:22am
How about drawing the line at the First Amendment? Child porn is not protected by the First Amendment. If these platforms edit out content that is protected, I fail to see how they are any different from a publisher and why they should have liability protection.
As for Rorty, his views are a self-contradiction. If there is no objective truth, then that also includes him. The statement that there is no objective truth is also not an objective truth. And he makes his point using language, which he learned using the senses, to tell us that the senses are subjective. Then how does he know that we can understand him?
Jon Murphy
Dec 7 2022 at 11:27am
The 1st Amendment only protects against the government. A private actor may draw the line wherever they wish. Perhaps a private actor will choose to only block illegal speech, but that still leaves a whole host of other undesirable speech.
For example, this is a blog dedicated to economic and liberal ideas. The owners have decided not to allow off-topic comments. If I were to come here and start advertising my services, they would kick me off. Now, advertising is free speech and protected by the 1st Amendment. But permitting such speech here would undermine the blog and weaken its value to the owner.
Alex
Dec 7 2022 at 8:41pm
But this blog doesn’t have liability protection, nor does it claim to be a neutral platform. This blog is like the editorial pages of a newspaper. Only a few specific people create original content; the rest of us can only comment on the specific issues that they raise. Also, they are liable for what they say, as is the blog itself. If a journalist at the New York Times accuses you of murder, you can sue both him and the NYT. A platform like Twitter or YouTube is open; anyone can go and upload content. If someone accuses you of murder on Twitter, you can sue that person, but you cannot sue Twitter, precisely because Twitter claims to be a neutral platform. In that regard it is like a telephone company. The telephone company is not liable for what is said on its lines, but it cannot cut your line because it doesn’t like what you said.
Alex
Dec 7 2022 at 10:15pm
Why?
If a platform edits content out, how is it any different from a publisher? Why does it deserve liability protection?
Jon Murphy
Dec 8 2022 at 7:16am
Again irrelevant. My point is that there are many reasons one may wish to moderate beyond the legal restrictions imposed on government. Restricting moderation to government levels is a poor normative choice.
Scott Sumner
Dec 7 2022 at 3:07pm
“As for Rorty, his views are a self contradiction. If there is no objective truth, then that also includes him.”
You completely missed the point. Rorty agrees with you that his views are not objectively true. You need to do a bit more research before assuming that you know more than a famous epistemologist who devoted his entire life to the subject. Do you really think Rorty never thought of that argument?
Johnson85
Dec 7 2022 at 12:00pm
Moderation is hard, but Twitter’s problem is that they did not make a good faith effort to moderate without bias.
And I also think that pre-Musk Twitter made it much harder for Musk Twitter by picking a side to begin with.
If Twitter had always generally supported free speech, not from an absolutist position, but at least from the position of “we’re not going to censor people for claiming that sex is biological or for claiming sex is a social construct; we are a place for debate and sometimes debate will hurt people’s feelings,” I think it’d be much easier to get companies comfortable with advertising on the platform, provided there is perhaps some algorithm that keeps ads away from those types of tweets. Once they waded in on one side of the debate, going to a neutral free speech position now feels like “taking sides” to the “sex is a social construct” group, which of course makes advertisers more wary.
That would still leave some really difficult line drawing, particularly with respect to things like race where there can be a relatively short jump between verifiable fact (e.g., differences in distribution of scores on certain aptitude/intelligence tests between racial groups) and racist opinions (race x is superior in whatever way because of those different distribution of scores), but the fact that there would still be some difficult line drawing is not a good reason to just throw up your hands and do it in a nakedly political manner.
Mark Barbieri
Dec 7 2022 at 12:38pm
I think we could solve a lot of these problems by moving away from a banning/censoring approach to more of a tagging/filtering approach. You’ll always need to remove content that will put the host in legal jeopardy – threats, libel, infringement. For the rest, tag it – fascist, communist, racist, or whatever and let people set up filters so that they can block the sort of content they don’t want to see.
You could get more sophisticated and score content on how offensive it seems to be so that people could tolerate different levels. Or you could set up a process that allows 3rd parties to tag content.
Using a tagging approach should satisfy people that don’t want to be badgered by trolls and haters. It won’t satisfy people that want to suppress speech that they don’t agree with. I think the biggest problem with the approach is that it might further enable people to craft information bubbles for themselves that allow them to avoid uncomfortably different viewpoints.
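A minimal sketch of the tag-and-filter idea (the tag names and data shapes are invented for illustration): the platform labels content once, and each user decides what to hide:

```python
# Sketch of tag-and-filter moderation: content is tagged, and users filter
# their own feeds. Tags and data shapes are invented for illustration.

def visible_posts(posts: list[dict], blocked_tags: set[str]) -> list[dict]:
    """Keep posts whose tags don't intersect the viewer's blocked set."""
    return [p for p in posts if not set(p.get("tags", [])) & blocked_tags]

feed = [
    {"id": 1, "tags": ["politics"]},
    {"id": 2, "tags": ["racist"]},
    {"id": 3, "tags": []},
]
print(visible_posts(feed, blocked_tags={"racist"}))  # posts 1 and 3 remain
```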
Rajat
Dec 7 2022 at 2:39pm
It’s not a point of principle, but another factor is that moderation at large scale is likely to involve inconsistencies around that 90 level unless a single unchanging algorithm does the sorting. Even Musk himself before and after lunch (or is that for wimps?) is likely to apply different standards, like judges apparently do when adjudicating on cases.
I’m intrigued by this passage: “The decisions made by big companies such as Twitter usually tend to reflect market forces, at least to some extent. But these companies often have a semi-monopoly position in their market niche (due to network effects), which gives them some ability to override market forces.”
By ‘market forces’, you seem to be referring to something like ‘what is preferred by the majority of users and advertisers by value’. Market power is not usually seen as about the ability to deny customers what they want, but the ability to extract value from customers. I guess one could see Musk as extracting value from making Twitter more open, even at the cost of profits, so long as the loss of custom is not so great as to lead to the irrelevance of the platform. I would tend to put that down to his ability (as controlling owner) and willingness to sacrifice or at least gamble his wealth for other forms of utility rather than directly relating to Twitter’s market power – after all, we don’t see much bigger companies like Google or Apple do this sort of thing. Similarly, Meta has less market power than it did 5 years ago, and yet Mark Zuckerberg is making a huge gamble on VR that a more diversely-held company almost certainly wouldn’t make.
JoeF
Dec 7 2022 at 8:19pm
Somehow, the post and all the comments do not address the dreaded “misinformation” (sometimes called disinformation), which is at the heart of Twitter/Facebook/YouTube censorship. Here is Politico (https://www.politico.com/news/2020/10/19/hunter-biden-story-russian-disinfo-430276), informing us in 2020 that “more than 50 former intelligence officials” claimed the Hunter Biden laptop story was disinformation. Was it disinformation? How does Twitter or any entity possibly identify actual misinformation? Is it OK to tweet that Jesus walked the earth? Or that carbon emissions will lead to catastrophic climate change? Or that inflation will probably increase or decrease? Or that wearing masks protects people from Covid? These are all conjecture, but so is just about everything. Any line that is drawn between information and disinformation is purely editorial. I’m with Rorty.
Michael Rulle
Dec 8 2022 at 10:53am
I am thinking of Twitter in this comment.
One could write a book on this and still get nowhere. Everything ultimately does come down to judgement at the edges.
One thing I dislike is crudeness (as if that solves the problem! One must define it, and as I said, everything is on the margin, or at the edges, so to speak), but suppose it is definable to some degree: if we eliminate that, we get rid of the commenters who rely on crudeness, and therefore a lot of the nonsense we often see that goes along with it.
All opinions should be allowed. All statements of fact, whether wrong or right, should be allowed (there is likely room within Scott’s “10%” to moderate certain fake statements of fact).
After all, it is a town square, so to speak; let readers do the arguing.
JdL
Dec 8 2022 at 12:09pm
Why not let other users do the work? Give everyone an option to rate any post they see. Then let each person set a threshold for how low-rated a post they’re willing to view, with posts below that threshold filtered out.
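A rough sketch of that scheme (data shapes and the simple mean rating are invented for illustration; a real system would need to handle small samples, vote manipulation, and so on):

```python
# Sketch of crowd-rated filtering: each viewer sets the lowest average rating
# they are willing to see. Data shapes are invented for illustration.

def filtered_feed(posts: list[dict], min_rating: float) -> list[dict]:
    """Hide posts whose mean user rating falls below the viewer's floor."""
    def mean_rating(post: dict) -> float:
        votes = post.get("ratings", [])
        return sum(votes) / len(votes) if votes else 0.0
    return [p for p in posts if mean_rating(p) >= min_rating]

posts = [{"id": 1, "ratings": [5, 4]}, {"id": 2, "ratings": [1, 2, 1]}]
print(filtered_feed(posts, min_rating=3.0))  # only post 1 is shown
```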
Dylan
Dec 9 2022 at 11:55am
Ars Technica started doing something similar to this a number of years back. Every comment can be voted on; comments where the proportion of votes is too negative first get lightened, then hidden if they go more negative (though a user can choose to show low-voted comments), and eventually they disappear completely.
Sounds good in theory, but I’ve found that what it did was take what was a pretty epistemologically curious crowd and quickly turn it into an echo chamber of low-quality comments and shallow one-liners. Anything that even remotely shows support for free markets gets quickly downvoted to oblivion. Some people seem to really like that, but as someone who likes to be exposed to different points of view, debate things, and hopefully learn something, and for whom the comment section used to be the biggest draw, it is now something I try to avoid.
Jim Glass
Dec 9 2022 at 2:52pm
Anything that even remotely shows support for free markets gets quickly downvoted to oblivion.
Real democracy really sucks.
TGGP
Dec 8 2022 at 11:29pm
This is a good opportunity to bring up Scott Alexander’s Moderation Is Different From Censorship. A fully decentralized system could involve lots of moderation tools so people never have to encounter any social media posts they don’t want to… but the people who DO want to access even the most offensive material would still be able to. There are laws even in the US that limit that sort of freedom and instead impose censorship on illegal material, but there’s a large scope for it. And I’ll say that it strikes me as a reasonable presumption to not censor (remember, different from moderating) material which is not illegal.