
Here’s a guest post from the noble Rob Wiblin of 80,000 Hours. Posted with Rob’s permission.
I’ve periodically seen commenters online claim that for random, unprecedented events (e.g. a total nuclear war) one can’t give meaningful Bayesian probabilities, and that the probability of, say, a nuclear war over the next 100 years is therefore 50/50.
Alex Tabarrok got this response from many people to his series of blog posts on the likelihood of nuclear war. It’s hard to believe these people are serious, but they are, and they insist on it even when pressed.
I don’t know what university course melted their brains, but evidently one did!
The fastest way to show this is wrong is to ask them three probability questions simultaneously:
1. What is the probability of a single total nuclear war over the next 100 years?
2. What is the probability of a single total nuclear war between 2121 and 2221?
3. What is the probability of one or more total nuclear wars occurring over the next 200 years?
Someone with this philosophy must answer 50/50 to all three, which leads to an internal contradiction.
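To see the inconsistency concretely, here is a minimal sketch in Python (the 50/50 figures are the rule’s own, not real estimates): whatever model you hold, the 200-year probability can never fall below the first-century probability, and if the two centuries were independent 50/50 events it would be 75%.

```python
# The "unprecedented events are 50/50" rule, applied to all three questions.
p_century_1 = 0.5  # Q1: total nuclear war in the next 100 years
p_century_2 = 0.5  # Q2: total nuclear war between 2121 and 2221

# Q3: one or more wars over the full 200 years. If the two centuries
# were independent, the rule's own numbers would force:
p_q3 = 1 - (1 - p_century_1) * (1 - p_century_2)
print(p_q3)  # 0.75, not the 0.5 the rule demands

# Even without independence, P(Q3) >= P(Q1), since the 200-year event
# contains the 100-year event; answering 0.5 to both leaves probability 0
# for a first war arriving in the second century.
```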
READER COMMENTS
Joe Denver
Mar 10 2021 at 9:50am
Looking at Alex’s post, he seems to brush off the fact that the probabilities are dependent. But that’s a really important point, as it means his calculation could very well be meaningless.
If there is a particularly low chance that a nuclear war starts in one year, it’s also quite likely that the next year will be similarly low if not lower. If you think this would happen over the entire 75 years, then this drastically reduces the resulting probability.
Alex is right to point out that it could also be higher. But that’s the point: his calculation is meaningless. If you think the experts are overestimating the likelihood of war, even by a little bit, then the calculated figure could be much lower. If you think they are underestimating it, then you should start prepping.
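A minimal sketch of the dependence point above, with purely hypothetical rates: compounding an average 1% annual risk independently over 75 years gives roughly 53%, but if the true annual risk is persistent and merely unknown, the same average rate can imply a much lower cumulative figure.

```python
# Independent compounding of a fixed 1% annual risk over 75 years:
p_independent = 1 - (1 - 0.01) ** 75
print(round(p_independent, 3))  # ~0.53

# Persistent-but-unknown risk: a 50/50 mixture of a "safe" world
# (0.1%/yr) and a "dangerous" world (1.9%/yr), same 1% average rate.
p_safe = 1 - (1 - 0.001) ** 75
p_dangerous = 1 - (1 - 0.019) ** 75
p_persistent = 0.5 * p_safe + 0.5 * p_dangerous
print(round(p_persistent, 3))  # ~0.42: year-to-year dependence lowers the total
```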
Where I probably come together with Wiblin is that I think it is possible to calculate the probabilities. It’s just that any accurate calculation requires complexity likely beyond the reach of current human knowledge.
When you have a deck of cards, estimating the dependent probabilities is quite easy, since you can assume only 52 outcomes. However, what are all of the possible outcomes of one full year of US foreign policy and its effects on subsequent years?
Still, props to Alex for putting numbers to things, even though I don’t trust them. It’s far too easy for people these days to blow hot air without any quantification.
Christophe Biocca
Mar 10 2021 at 10:38am
I agree that 50/50 is a stupid (and inconsistent) rule of thumb, but it has one obvious thing to recommend it: it’s so clearly wrongheaded that no one will actually try to act on it (it even gets prefaced with “one can’t give meaningful Bayesian probabilities” before giving you a probability, which should tell you what weight the people using that approach put on their prediction).
This is superior to the effective altruism article’s approach of combining garbage data in garbage ways and using that to guide recommendations, which lets a motivated reasoner assemble whatever evidence they want and misinterpret it as needed to get you to update in their preferred direction. Saying “this can’t be meaningfully estimated” lets you push back on that kind of manipulation.
You might think I’m being harsh here, but the article is using GJI’s probability for “Nuclear detonation by a state actor causing at least 1 fatality”, which, according to the same source, is 97% likely to leave Russia completely unscathed, directly as an input probability for a nuclear war. The likelihood that some country at some point will use a nuclear bunker-buster weapon against a hardened target as part of some larger conflict doesn’t get to count towards a nuclear war. Neither does a North Korean nuclear test killing some unlucky people when it has a bigger yield than anticipated (as happened with the US Castle BRAVO test).
There are more sophisticated ways to incorporate filtered evidence from motivated arguers, but refusing to play the game is a perfectly acceptable second-best solution.
River (Frank) Bellamy
Mar 10 2021 at 10:41am
I don’t think those three actually lead to a logical contradiction – it could be that in one half of possible futures, there is a single total nuclear war in the next hundred years and a second total nuclear war in the hundred years after that, and in the other half of possible futures, there are no total nuclear wars. That would make the answer to all three questions 50/50. It would lead to some highly implausible conclusions – no futures in which there is only one total nuclear war, no futures in which there are multiple nuclear wars in one of the next two centuries – but it is logically possible. Change the third question to read “exactly one total nuclear war in the next 200 years” and I think you have your logical contradiction.
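That construction can be checked directly, as a minimal sketch (two equally weighted possible futures, recorded as war/no-war per century):

```python
# River Bellamy's counterexample: half of possible futures have one war in
# each of the two centuries; the other half have no wars at all.
futures = [("war", "war"), ("none", "none")]  # equal weight

p_q1 = sum(1 for f in futures if f[0] == "war") / len(futures)  # 0.5
p_q2 = sum(1 for f in futures if f[1] == "war") / len(futures)  # 0.5
p_q3 = sum(1 for f in futures if "war" in f) / len(futures)     # also 0.5
print(p_q1, p_q2, p_q3)
```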
Kevin
Mar 10 2021 at 11:53am
I don’t think this is a convincing method of argument. In general you are not able to assign probabilities to every question constructed by an adversary and keep them internally consistent. People who say “50/50” are basically saying that they do not think there is any useful model, and that it is therefore a bad idea to think in terms of probabilities. If you are making plans, you should plan for either outcome, which is roughly equivalent to treating them as 50/50. You should think of these people as rejecting the idea that a probability can usefully be assigned to these events, rather than as trying to establish a consistent probabilistic model.
zeke5123
Mar 10 2021 at 1:44pm
I don’t know if I can assign percentages to nuclear war. Doesn’t mean I think it is 50:50. However, it seems to me the downside is so great that even a 1:99 risk is gravely serious.
That is, the issue isn’t figuring out what the odds of a nuclear war are; instead, the issue is whether, if there is a nuclear war (i.e., not just one bomb dropped), there will be a human race afterwards.
astew
Mar 10 2021 at 4:18pm
I don’t want to nitpick, but I’m going to. The three bullet points should have been something like:
1. What is the probability of a single total nuclear war between now and 2121?
2. What is the probability of a single total nuclear war between 2121 and 2221?
3. What is the probability of a single total nuclear war between now and 2221?
And then if the person’s reasoning for 1. and 3. requires them to say 50/50 for each, then their reasoning had better also lead them to say 0/50 (i.e., zero probability) for 2., or else they’re not being consistent.
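A quick sketch of that consistency check, reading each question as “at least one total nuclear war” in the stated window: since question 3’s event is question 1’s event plus the futures where the first war arrives in the second century, the 50/50 answers to 1 and 3 pin down the answer to 2.

```python
p_q1 = 0.5  # at least one war between now and 2121
p_q3 = 0.5  # at least one war between now and 2221

# Q3's event = Q1's event, plus futures where the first war
# arrives between 2121 and 2221. So:
p_first_war_in_second_century = p_q3 - p_q1
print(p_first_war_in_second_century)  # 0.0, i.e. "0/50" for question 2
```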
AMT
Mar 10 2021 at 5:25pm
I think 50% is too high of an estimate, but I don’t understand this critique.
1st period: 50%
2nd period: 50%
SO… the probability of it occurring one OR more times over the two periods must be 50%?!
Is it also true that if I flip a coin twice, the probability it lands heads “one OR more times” is only 50% (HH, HT, TH, TT)? Not 75%?
I’m also not sure it’s correct that these people would ascribe a 50% chance to a single total nuclear war. It seems like they would instead say “at least one total nuclear war in the next 100 years” (and the same for the 100 years after that).
Does this presume that if there is a “total” nuclear war in the first period, there cannot be one in the second? I don’t see why that would necessarily be the case. I initially thought maybe “total” nuclear war meant basically the annihilation of all humanity, but then it wouldn’t make sense for the critique’s point 3 to say “or more.” So the only way it makes sense is to mean total annihilation of the nations involved, and I don’t see why there couldn’t be 4 (or more) different nations that nuke each other into oblivion in 2 different wars, and in two different centuries. So the coin flip analogy seems pretty relevant to me.
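AMT’s coin analogy is easy to verify by enumeration, as a minimal sketch:

```python
from itertools import product

# The four equally likely outcomes of two fair coin flips.
outcomes = list(product("HT", repeat=2))  # HH, HT, TH, TT
at_least_one_head = sum(1 for o in outcomes if "H" in o)
print(at_least_one_head / len(outcomes))  # 0.75, not 0.5
```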
cobey.williamson
Mar 11 2021 at 10:24am
We 50:50 people are considering instantiations, so your proof is meaningless: you cannot ask your three probability questions simultaneously, as the realization of any one of them invalidates the rest.
The probability of any single future binary event is 50:50 for that specific instance. Stephen Curry’s shooting percentage has nothing to do with whether a particular shot he takes goes in. The potentiality for both outcomes exists equally until the matter is decided. Schrödinger’s cat is both alive and dead until we open the box.