In an article titled “The expert bias toward Covid catastrophe has been exposed” (The Telegraph, February 15, 2022), economist Ryan Bourne of the Cato Institute writes, as the title suggests, about the many expert failures during the Covid pandemic.
He has a very good quote from Tyler Cowen. Bourne writes:
Specialist expert failure these past two years has been pervasive, whether in epidemiological modelling or macroeconomics. In fact, “the people who reasoned best across multiple domains, and made a lot of the right calls, were often generalists with significant experience talking to both political decision-makers and the educated general public,” says American public intellectual Tyler Cowen.
Bourne is right about the huge failures in epidemiological modeling. One of the worst failures was Neil Ferguson’s Imperial College model, a model that Phil Magness has effectively critiqued elsewhere.
I think Tyler’s right that some generalists have done better.
But there’s something missing in Bourne’s treatment. It leaves out two roles that Tyler Cowen played during the pandemic, both of which seemed to indicate Tyler’s own reliance on narrow experts and his own unwillingness to reason across multiple domains.
The first was Tyler’s praise, early in the pandemic, of “expert” modeler Neil Ferguson, and his related action, through his Emergent Ventures project, of sending a large check to Ferguson and Ferguson’s Imperial College team. If you check the link just above, you’ll see that Tyler thought, correctly, that Ferguson had a huge impact on the U.K. and U.S. governments’ responses to Covid. Because Tyler is so well respected, not just in the United States but also internationally, that large payment put an imprimatur on some very bad modeling. It was that modeling that led Boris Johnson and Donald Trump to recommend lockdowns, and Johnson and many U.S. governors to impose them. Would they have done so without Tyler’s support of Ferguson? Probably. But if Tyler had recognized how little we knew at first, had taken more seriously the early seroprevalence studies, such as the one done by Jay Bhattacharya, which showed that the infection fatality rate from Covid was a small fraction of the case fatality rate, and had publicized that, there might have been more pushback against the lockdowns early on.
The second role was Tyler’s attack on the Great Barrington Declaration. The GBD was the ultimate in, if only tersely, “reasoning across multiple domains.” I criticized Tyler for his negative reaction to the GBD (here and here). In a later interview on EconTalk with Russ Roberts, he doubled down, arguing that the GBD couldn’t be any good because it was produced at the American Institute for Economic Research, where Jeffrey Tucker was an employee. Tyler’s crude guilt-by-association argument is not even an example of reasoning across one domain, namely that of logical reasoning.
It’s true that Ryan did not say that Tyler did a good job. He left that issue unaddressed. But if he had wanted to give examples of people who, on the Covid issue, did work across multiple domains and recognized tradeoffs, four of the best candidates are economists Phil Magness and Don Boudreaux, and Doctors Jay Bhattacharya and Scott W. Atlas.
READER COMMENTS
AMW
Feb 16 2022 at 7:54pm
What about Alex Tabarrok?
MikeP
Feb 16 2022 at 9:22pm
I would say Alex has behaved like a pretty good technocrat in this scenario.
After all, even the Soviet Union educated and employed economists. While the government prevented the free market from bringing about emergent order, planners still needed people who could specialize in and master the intricacies of production and consumption and resource allocation to at least provide more sane direction than otherwise.
Similarly, since the government’s pandemic response squashed the emergent order humans and respiratory viruses had mutually developed over millions of years and replaced it with a planned pandemic response, they needed technocrats too.
Alex has been doing a better job than most — certainly better than all the really lousy technocrats employed in leadership positions at the CDC, FDA, NIH, NIAID, etc.
But better than a technocrat would be a Bayesian scientist who looked at what the pandemic plans were in 2019 and subjected every single deviation of governments from that course to the strictest test and scrutiny. Those people are the experts we really needed.
MikeP
Feb 16 2022 at 8:09pm
By far the worst example of expert failure is the CDC.
Their observational studies purporting to prove the effectiveness of masks would not be welcome at a seventh-grade science fair. They are abysmal. Really, really, really bad. Completely unscientific, and used to proclaim and promote conclusions that cannot possibly be drawn from the studies. It is literally beyond belief.
I can’t recall using the word unforgivable. But for people who should know better to place the mantle of science on completely unscientific studies and then push to impose their “conclusions” on hundreds of millions of people through public health authorities and private institutions… yeah, that’s pretty unforgivable.
The very worst one is the Arizona schools masking study, which was permeated through and through with astonishing flaws and unaccounted-for biases. But even that wasn’t the worst of it. The study did nothing but measure cases of COVID acquired in the community as detected in school students. That’s it. That’s all it could do. It comically called more than one case so detected an “outbreak” and therefore excluded schools with zero or one case, which, because of differences in school size, biased the purported conclusion even more. But there is no way the authors could possibly distinguish a case caught at home from a case caught in school, so there is no way they were measuring any effect of masks in school.
Yet that was exactly how the study was sold to the public, through media-ready graphics: “Mask requirements in K-12 schools limited COVID-19 outbreaks. Schools without mask requirements were 3.5x more likely to have COVID-19 outbreaks compared with schools that started the year with mask requirements.”
MikeP
Feb 16 2022 at 8:15pm
Of course I just now got a notification from my kid’s school that they will allow the children to go unmasked outside under certain circumstances… starting February 28.
A big part of the reason for this continuing insanity? The Arizona study and its 3.5x number. They have been pushed by the CDC and Rochelle Walensky to this day.
As I noted when I first saw this study:
Alan Goldhammer
Feb 17 2022 at 8:06am
David would do well to read (or re-read, depending) von Clausewitz’s ‘On War’ or pay heed to the Rumsfeld Paradigm. The battle against Covid in the early days was not unlike a war with a lot of unknowns. Of course, all of us who have done some type of epidemiological work (in my case, drug safety) are aware that the decisions made are only as good as the data going in. The more data one has, the better the decision-making process is. That the Imperial College model was off in the early days was not a surprise; numerous other models were off in all directions as well. 20-20 hindsight always leads to criticism, but one needs to think back to February/March of 2020 and consider what options were available to those in charge of policy.
One can cherry-pick studies done over the past two years to justify almost any conclusion, and confirmation bias continues to be very strong even now. Those of us who have worked on infectious diseases (and Scott Atlas is not one of them) realize that the Covid pandemic could have taken a lot of different turns. I’ll leave it at that, other than to note that I documented in real time what was happening from late March 2020 until the approval of the Pfizer-BioNTech vaccine late in the year. My judgements were sometimes correct and sometimes not, but at the time they were based on the best possible information.
Christophe Biocca
Feb 17 2022 at 9:53am
While you’re right that good input data is necessary for a model to produce accurate results, good data is not sufficient on its own. It’s not hard to write a model that is given the best possible data and still produces nonsense answers, and this seems to have happened in at least two high-profile cases:
The IHME models, when tested against actual data, had the real number fall outside the 95% confidence interval 70% of the time (as opposed to the expected 5%).
The Imperial College model, meanwhile, outputs different results from run to run with identical parameters, and that’s after extensive attempts at fixing various issues, such as internal memory corruption and a broken RNG. This means that we can’t recreate the numbers that went into Report 9.
Vivian Darkbloom
Feb 17 2022 at 4:01pm
The fact is that some folks made (and make) better decisions with available information than others do. There was a clear divergence of opinion among “experts” all along the data-collection process. Some experts simply drew better conclusions and made better policy recommendations from the same available information. I’m thinking, for example, of the authors of the Great Barrington Declaration, whose judgement was based on the same information everyone else (including you) had or potentially had. I think they did a pretty good job with the information at hand. Much better than most, as history is starting to show. I think that was David’s point.
I’m curious as to why you think that Bhattacharya et al didn’t have access to and digest the same data you did (or perhaps even more!)—that is, “the best possible information” as you put it. It seems rather pretentious and presumptuous of you to assume otherwise.
TGGP
Feb 17 2022 at 8:29am
A name you didn’t mention is Greg Cochran, whose success at predicting COVID outcomes Scott Alexander attributed partly to “creepy oracular powers.” He’s a physicist by training, but his one co-authored book, “The 10,000 Year Explosion,” is on human evolution, and he was previously known for predicting the introgression of Neandertal DNA in modern humans. He won every COVID bet he made (and how close he came to actual death counts was indeed creepy), and he was right when supposed epidemiological “experts” said not to worry about the virus mutating, on the assumption that random changes would just degrade it. And Greg says Jay’s research on seroprevalence was all wrong.
Tyler’s colleague Robin Hanson also made correct bets on COVID, which he helpfully aggregated into a blog post, but I don’t recall him addressing David’s preferred experts.
Ryan M
Feb 18 2022 at 4:11pm
Cowen was one of the many people for whom I lost a lot of respect based on their responses to covid. Covid has been an excellent, though unintentional, litmus test.
Todd Moodey
Feb 18 2022 at 10:46pm
Agreed.
Roman Lombardi
Feb 21 2022 at 12:34pm
I quit reading MR a few months into the pandemic. It was shocking to see an intellectual I respected so much display such a weird form of denial. Now whenever I see Tyler’s name, I’m reminded of his TED talk, “Beware of Storytelling.”