Last week’s issue of The Economist featured a few articles about disinformation, which it defines as “falsehoods that are intended to deceive.” More precisely, I would define it as the intentional publication or spreading of fact-related information that is almost certainly false, by a person or an organization whose self-interest lies in spreading the lie.

The article “The Truth/Lies Behind Olena Zelenska’s $1.1m Cartier Haul” (“Anatomy of a Disinformation” is the title of the shorter printed version) details a recent case. Clemson University researchers retraced, step by step, the story of the wife of Ukrainian president Volodymyr Zelensky supposedly spending $1.1 million on Fifth Avenue in New York City. The false story, recycled from a previous one, hopped from an Instagram video (probably from somebody in St. Petersburg) reposted on YouTube, to African news sites repeating it, often as “promoted content” (that is, paid-for promotion), to Russian news outlets, to a fake American publication called DC Weekly, and finally to its reposting as a credible piece of news. Ultimately, it was shared at least 20,000 times on Twitter and TikTok. Many people now take the old “news” as proven.

This sophisticated instance of disinformation was almost certainly an operation of the Russian government. Such operations by foreign governments are especially difficult to uncover: no journalist can travel to St. Petersburg to find and interview the woman who is believed to have launched the lie on Instagram. In a freer country, the free press can more easily uncover and publicize government disinformation conspiracies, which makes such operations riskier and less likely.

As The Economist notes, disinformation from rulers has always existed. What has changed is the extent of private disinformation and private amplification of government disinformation. The dramatic drop in the cost of producing and disseminating disinformation has multiplied it. The danger comes as much from the left as from the right, notably from their populist wings. If you are “the people,” your lies become true.

Thirty years ago, an observer of human affairs knew that anything he read in print or heard on TV had been privately verified by some gatekeepers. A news item and its source had been vouched for by at least a journalist and his editor, not to speak of the media’s owners, who had a brand name to protect. Similarly, the ideas and authors of books had to pass through private gatekeepers in the publishing industry. Self-publishing was very costly and identified the author as unknown and potentially unreliable (or, for novels or poetry, uninteresting). Since Gutenberg, much material of questionable value has been published (think of Marxism), but its dissemination faced high costs, and the reader actually had to buy publications or go to a library to read the stuff. Even after the invention of radio and television, where the likes of Father Coughlin were numerous, some private gatekeeping services were provided by station owners or those who financed the maverick broadcasters. While not preventing the circulation and challenge of ideas, the cost barrier eliminated much snake oil.

Nothing was perfect, of course, but what followed carried new dangers. What the World Wide Web did from the mid-1990s, and social media from the first decade of the 21st century, was to allow anybody to broadcast ideas and disinformation alike to the world at very low cost (at the limit, only the speaker’s or repeater’s time). AI is further reducing this cost: one does not even need to know how to write (that is, to put words one after the other in a coherent discourse) to produce disinformation. From the reader’s or listener’s viewpoint, distinguishing serious heterodox ideas from pure disinformation has become more costly—although AI will also provide tools to uncover fakes.

What is the danger? Past a certain point, no free (or more or less free) society could be maintained. An auto-regulated social order must collapse when a certain proportion of its members become hopelessly confused about what is true and what is false, or come to believe that truth does not exist. Even free agreement among individuals (trade is a paradigmatic example) becomes too costly as the probability increases that anyone is a liar and a fraudster. Where the tipping point is, we do not know. But we know it has been reached in countries like Russia (and the former Soviet Union) or China (despite a glimpse of hope after the demise of Maoism and its Red Guards). At that point, only an authoritarian if not totalitarian government can coordinate individual actions, manu militari.


[Image: Demonstration for disinformation, by Pierre Lemieux and DALL-E]