Almost everyone I meet is unhappy with the way that Twitter moderates content.  But they are unhappy in different ways.  Why isn’t there a generally accepted approach to content moderation?  What makes it so difficult?

I suspect that many people have an overly optimistic view of how easy it is to moderate content.  I get the impression that their view goes something like this:

1. They see lots of cases where Twitter makes the wrong decision.

2. They think that if they owned Twitter, they’d stop making those wrong decisions.

I’d like to argue that this is much more difficult than it looks, with one exception.  If you believe there should be no content moderation at all, then content moderation is easy.  But what if you agree with both Elon Musk and the former management of Twitter that there should be some content moderation?  What then?

In that case, you “draw a line” and don’t allow content so objectionable that it crosses the line.  This approach is almost inevitable, as advertisers won’t do business with a company that allows highly objectionable content such as child pornography.

Unfortunately, while line drawing is almost inevitable, it is difficult enough that most people are bound to be unhappy with the result.  Line drawing creates two problems:

1. The content moderator must decide the degree to which objectionable content will be allowed.  Imagine a scale of 0 to 100, where 100 is the most objectionable; a moderator might say that anything above 80 is banned.  One way of thinking about Elon Musk’s recent decisions is that he is trying to raise the cutoff point, say from 75 to 90, compared with the stricter moderation of the previous management.

2. The content moderator must determine whether specific content crosses the line, and thus is too objectionable to be allowed.  It’s not only a question of whether to ban the 10% worst tweets or the 25% worst tweets; you also have to determine which tweets fall above the line and which fall below it (the sketch after this list separates these two decisions).
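To make the distinction between these two decisions concrete, here is a minimal sketch in Python.  Everything in it is hypothetical: the 0 to 100 scale comes from the thought experiment above, and the scoring function is just a placeholder, since that judgment is the genuinely hard, contested part.

```python
# A minimal sketch of the two separate moderation decisions.
# Assumes a hypothetical 0-100 offensiveness scale.

CUTOFF = 90  # Decision 1: how much objectionable content to tolerate.
             # Raising this number means a more permissive platform.

def score_offensiveness(tweet: str) -> float:
    """Decision 2: where does this specific tweet fall on the scale?

    In reality there is no agreed-upon function like this; different
    moderators (and different political camps) would score the same
    tweet very differently.  A constant just keeps the sketch runnable.
    """
    return 50.0  # placeholder judgment

def moderate(tweet: str) -> str:
    # Offensiveness is a matter of degree, but the decision is binary.
    return "banned" if score_offensiveness(tweet) > CUTOFF else "allowed"

print(moderate("some example tweet"))
```

Note that the two decisions are independent: you can change CUTOFF without touching the scoring function, and vice versa, which is why tolerance and bias are separate problems.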

The first decision has to do with tolerance for bad tweets.  A progressive friend of mine, an old school liberal with a high tolerance for offensive speech, supports Elon Musk.  The second decision has to do with various forms of bias.  People on the left tend to be more offended by fascism, anti-black racism, and denial of the efficacy of vaccines.  People on the right tend to be more offended by communism, anti-white racism (or bigotry if you prefer), and denial of the science of innate differences between genders.

Elon Musk seems to be more right wing than the previous Twitter management, so he’s less likely to put right wing tweets into the “highly offensive” category.  He favors less strict standards and less bias against conservative tweets.

So why do I believe that people underestimate the difficulty of content moderation?  Here an analogy might be useful.  The Twitter debate reminds me of debates over basic ideas in epistemology.  Richard Rorty has argued that it is not possible to draw a clear line between different types of knowledge such as subjective/objective, fact/opinion, or belief/truth.

Many people find Rorty’s view to be counterintuitive.  There is a common sense view that it is possible to draw a line between things we believe and things that are actually true.  In debates, people will often cite obvious examples that fall on each side of the line, to make this point.  But those obvious examples don’t prove the utility of the line itself.

With content moderation, people can easily find examples of tweets that they are confident should be allowed, and examples that clearly should not be.  But when you get close to the line, things get much more difficult.  That’s partly because offensiveness is a matter of degree, while moderation decisions are all or nothing.  For tweets right near the line, decisions will inevitably look arbitrary and unjust.  And that would be true even in a world with no political bias at all, where people merely differed in their tolerance for controversy.

Go back to the hypothetical scale of offensiveness, from 0 to 100.  Imagine Elon Musk decides that anything above 90 is too offensive, and thus gets banned.  In that case, a tweet with an offensiveness of 90.1 will be banned and a tweet with an offensiveness of 89.9 will be allowed.  Most people won’t be able to spot the difference, and thus at least one of the two decisions will seem arbitrary and unfair.  “If you banned Joe for saying X, why did you allow Fred to say Y?”
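A toy simulation makes the point (again assuming the hypothetical 0 to 100 scale and a cutoff of 90).  If individual raters’ judgments of the same tweet vary by even a few points, tweets near the cutoff get inconsistent verdicts, while tweets far from it do not.

```python
import random

# Toy simulation: raters roughly agree on how offensive a tweet is,
# but their individual judgments vary by a few points.  Near the
# cutoff, that small disagreement flips the verdict; far from the
# cutoff, it doesn't matter.

random.seed(42)
CUTOFF = 90

def ban_rate(true_score: float, raters: int = 1000) -> float:
    """Fraction of raters who would ban a tweet with this true score,
    given a few points of rater-to-rater noise."""
    bans = sum(true_score + random.gauss(0, 3) > CUTOFF for _ in range(raters))
    return bans / raters

for score in (50, 85, 89.9, 90.1, 95):
    print(f"true score {score:5}: banned by {ban_rate(score):.0%} of raters")
```

A tweet scoring 50 is banned by essentially no one and a tweet scoring 95 by nearly everyone, but the 89.9 and 90.1 tweets each get banned roughly half the time, so whichever verdict is issued will look arbitrary to about half the audience.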

And that’s assuming everyone holds the exact same political views.  Now imagine a world where people also disagree about what is objectionable, and strongly believe that their own political views are correct and the other side’s are wrong.  Now the perceived unfairness of content moderation will be far worse, an order of magnitude worse.  Moderation will become a thankless task.

I’ve been blogging for 13 years, and I discovered early on that there is no simple way to moderate comments.  Wherever you draw the line, there will be complaints.

The decisions made by big companies such as Twitter tend to reflect market forces, at least to some extent.  But these companies often have a semi-monopoly position in their market niche (due to network effects), which gives them some ability to override market forces.  The next few years will provide a test of how much market power Elon Musk possesses.  My own preference is for a relatively high tolerance of objectionable tweets, and as little political bias as possible in content moderation.  So I wish him well.  On the other hand, I’d encourage Musk to delegate this responsibility to others.  While his strategic vision may be correct, he doesn’t seem to possess the judicial temperament that you’d like to see in a content moderator.