Famed futurist Eliezer Yudkowsky fears the imminent end of the world at the hands of Unfriendly Artificial Intelligence. I find this worry fanciful. Many people in Eliezer’s position would just dismiss my head-in-the-sandedness, but he’s also genuinely impressed with my perfect public betting record. To bridge the gap and advance our knowledge, we’ve agreed to the following bet, written in the first person by Eliezer. Since I’ve just PayPaled him the money, the bet is now officially on.
Short bet:
– Bryan Caplan pays Eliezer $100 now, in exchange for $200
CPI-adjusted from Eliezer if the world has not been ended by nonaligned AI
before 12:00am GMT on January 1st, 2030.
Details:
– $100 USD is due to Eliezer Yudkowsky before February 1st, 2017 for the bet to
become effective.
– In the event the CPI is retired or modified, or has gone totally bogus under the Trump administration, we’ll use a mutually agreeable inflation index or toss it to a mutually agreeable third party; the general notion is that you should be paid back twice what you bet now, without anything making that amount ludicrously small or large.
– If there are still biological humans running around on the surface of the Earth, the world will not be said to have been ended.
– Any circumstance under which the vast bulk of humanity’s
cosmic resource endowment is being diverted to events of little humane value
due to AGI not under human control, and in which there are no longer biological
humans running around on the surface of the Earth, shall be considered to count
as the world being ended by nonaligned AGI.
– If there is any ambiguity over whether the world has been
ended by nonaligned AGI, considerations relating to the disposition of the bulk
of humanity’s potential astronomical resource endowment shall dominate a
mutually agreeable third-party judgment, since the cosmic endowment is what I
actually care about and its diversion is what I am attempting to avert using your
bet-winning skills. Regardless, if there are still non-uploaded humans
running around the Earth’s surface, you shall be said to have unambiguously won
the bet (I think this is what you predict and care about).
– You win the bet if the world has been ended by an AGI under specific human control, directed by some human who specifically wanted to end it in a specific way and successfully did so. You do not win if somebody who thought it was a great idea just built an AGI and turned it loose (this will not be deemed ‘aligned’, and would not surprise me).
– If it sure looks like we’re all still running around on
the surface of the Earth and nothing AGI-ish is definitely known to have
happened, the world shall be deemed non-ended for bet settlement purposes,
irrespective of simulation arguments or the possibility of an AGI deceiving us
in this regard.
– The bet is payable to whosoever has the most credible claim to being your heirs and assigns in the event that anything unfortunate should happen to you. Whosoever has primary claim to being my own heir shall inherit this responsibility from me if they have inherited more than $200 of value from me.
– Your announcement of the bet shall mention that Eliezer
strongly prefers that the world not be destroyed and is trying to exploit
Bryan’s amazing bet-winning abilities to this end. Aside from that, these
details do not need to be publicly recounted in any particular regard, and just
form part of the semiformal understanding between us (they may of course be recounted
any time either of us wishes).
Notice: The bet is structured so that Eliezer still gets a marginal benefit ($100 now) even if he’s right about the end of the world. I, similarly, get a somewhat larger marginal benefit ($200 inflation-adjusted in 2030) if he’s wrong. In my mind, this is primarily a bet that annualized real interest rates stay below 5.5%. After all, at 5.5%, I could turn $100 today into $200 inflation-adjusted by 2030 without betting. I think it’s highly unlikely real rates will get that high, though I still think that’s vastly more likely than Eliezer’s doomsday scenario.
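For concreteness, here is the break-even arithmetic behind that 5.5% figure (a rough sketch, assuming roughly 13 years of compounding between early 2017 and January 1st, 2030):

\[
\$100 \times (1 + r)^{13} = \$200 \quad\Longrightarrow\quad r = 2^{1/13} - 1 \approx 5.5\% \text{ per year.}
\]

Any sustained real rate below that break-even makes the $200 CPI-adjusted payoff better than simply investing the $100 today.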
I would have been happy to bet Eliezer at the same odds for all-cause end-of-the-world. After all, if the world ends, I won’t be able to collect my winnings no matter what caused it. But a bet’s a bet!
READER COMMENTS
Scott Sumner
Jan 17 2017 at 3:41pm
How is this different from any other loan? If the world ends, do I have to pay back my mortgage?
One possible answer: This isn’t about the money, it’s about two prominent intellectuals going on record with predictions, in a fashion that people will pay attention to. If so, then it’s fine.
But without the “public debate” aspect, it’s really just a loan, isn’t it?
Scott Sumner
Jan 17 2017 at 3:43pm
BTW, I think Bryan’s right—I don’t expect the world to end until sometime in the early 2040s.
Eliezer Yudkowsky
Jan 17 2017 at 5:20pm
Additional backstory:
I made two multi-thousand-dollar bets in 2016 and lost both of them. One was a bet against AlphaGo winning the match against Lee Sedol, the other a bet against Trump winning the presidency. I was betting with the GJP superforecasters in both cases, and lost anyway.
Meanwhile, Bryan won all of his bets, again, including his bet that Donald Trump would not concede the election by Saturday. I was particularly impressed by his winning that one, I have to say.
Thus, to exploit Bryan’s amazing bet-winning ability and my amazing bet-losing ability, I asked if I could bet Bryan that the world would be destroyed by non-aligned AGI before 2030.
So the generator of this bet does not necessarily represent a strong epistemic stance on my part, which seems important to emphasize. But I suppose one might draw conclusions from the fact that, when I was humorously imagining what sort of benefit I could get from exploiting this amazing phenomenon, my System 1 thought that having the world not end before 2030 seemed like the most I could reasonably ask.
Jon Murphy
Jan 17 2017 at 5:35pm
I hope you’re right on this bet. I don’t want the world to end. A lot of the people I like live here.
Kevin Dick
Jan 17 2017 at 6:07pm
Scott, I think you may have missed some causality subtext here.
My guess is that Eliezer estimates a tiny chance that Bryan’s betting abilities have some causal impact on outcomes. Therefore, making this bet is cheap insurance for him ($100 for a minuscule chance to prevent the loss of almost unimaginable human utility), and the important aspect is to make it ‘official’ by negotiating terms in his favor as sincerely as possible.
But then maybe I was mysteriously primed to think in a Newcombian manner.
Or maybe you knew this and were playing along for effect and I’ve just doomed us all.
Or maybe my bringing it up will reinforce it through self-reference.
Ted Sanders
Jan 17 2017 at 6:21pm
This is a bet on three things:
(1) That both parties or their estates will both remember and bother to settle in 2030
(2) That real interest rates stay low
(3) That the world doesn’t end
I think (1) is the most relevant factor in predicting the expected value of this bet.
P.S. I think Bryan’s skill is not in being right, but in finding people to make bad bets. I wish someone would pay me in 2030 too.
blink
Jan 17 2017 at 6:24pm
Bryan, how about you tell us the present value you place on this bet?
With so many variables, it is difficult to infer much about beliefs or disentangle them from risk preferences. For Eliezer, his preferences — he clearly wishes to lose this bet — make even that much impossible. (Then again, he cannot place much probability on your bet-winning skills actually influencing the probability that the world ends…)
Mark Bahner
Jan 17 2017 at 6:41pm
Geez, it seems like taking easy money from Eliezer to me. Even a Skynet-induced nuclear war wouldn’t be “the end of the world.” (It would be the end of the world I’d want to live in, but not the all-humans-eliminated end.)
If I were him, I’d wait much closer to the hypothetical apocalypse to make any such bet.*** For example, if Watson starts giving medical advice that kills some people, then I might make the bet.
***P.S. I probably wouldn’t make the bet even if AIs killed their first million people. It’s just too depressing. Plus, it would be an admission that my predictions for the year 2100 of world life expectancy at birth of 110+ years, and per-capita GDP (year 2011 dollars) of $1,000,000+ were wrong:
Predictions for 2050 and 2100
Scott Sumner
Jan 17 2017 at 8:33pm
Kevin, Yes, I think that’s consistent with what I said. When we debate ideas we often hope that our ideas will lead to positive policy change. Eliezer may have hoped that publicizing the threat (which I agree is real) will help in that regard.
David Condon
Jan 17 2017 at 8:34pm
I think you’ll win essentially for the same reason as Scott Sumner. Even though a doomsday is a plausible outcome of the technological singularity, the singularity is not likely to occur before 2030.
Sam
Jan 17 2017 at 10:19pm
Bryan, you say
Surely you acknowledge that Eliezer’s credit risk also plays into this calculus? Moreover, I’d posit that his credit risk is nontrivially related to his devotion to the AGI x-risk thesis.
I’m not saying he’s necessarily extraordinarily likely to default. But over the next 15 years, as we get a better sense of how AGI risks evolve, I expect you will re-evaluate his credit-worthiness.
Kevin Dick
Jan 17 2017 at 10:33pm
@Scott. I was thinking a little more directly causal than that. Newcomb’s paradox with Bryan as the Predictor. This is a specialty of Eliezer’s.
Brandon
Jan 18 2017 at 12:53am
I could bet 10 Austrian Economists $100 each that the dollar won’t experience hyperinflation between now and 2030. Even if I made $1,000, I don’t think I would be justified in flaunting my perfect betting record. Take a risk! Are you willing to bet on immigration-related claims? A good example might be the idea of an immigration backlash. You have expressed skepticism about the claim. Want to bet on it?
@David
I’m pretty sure Scott was joking.
entirelyuseless
Jan 18 2017 at 8:45am
I paid Eliezer $10 in 2008 and he will pay me $1,000 inflation adjusted if someone creates nonaligned superintelligent AI and the world is not destroyed.
This seems more one-sided than your bet, although the main issue is whether there will be any such superintelligence at all. That any future AI will be “nonaligned” is quite certain, as is the fact that the world will not be destroyed.
Kim
Jan 18 2017 at 11:19am
I wonder if Eliezer is properly updating his Bayesian priors in making this bet? After all, what is so special about AI? We’ve had the technology to end the world for years now, but somehow pesky rational self-interest always gets in the way and votes in favor of extending human life.
Luke
Jan 18 2017 at 1:07pm
Movie idea:
Non-aligned AIs arise sometime shortly before 2030 and seem bent on the annihilation of the human species. Bryan Caplan leads a last-ditch resistance effort for the express purpose of delaying human extinction until 2030 arrives, so that he can technically win the bet and keep his perfect record intact.
Seth Green
Jan 18 2017 at 3:46pm
I think Kevin Dick’s interpretation is right. Yudkowsky places some rather large probability on us living in a simulation (more here) and so his ideas about causal sequencing may be a little foreign to us. That’s how I read the “trying to exploit” language.
Khodge
Jan 18 2017 at 5:18pm
I don’t see how AI gains by destroying the world. Will not a simple cost/benefit analysis allow some humans to survive?
Silas Barta
Jan 18 2017 at 5:33pm
“I signal the sincerity of my beliefs about the most weighty matters of the day by betting trivial sums of money relative to my income over the timespan of the bet.”
Doesn’t seem like a costly bet.
Mark Bahner
Jan 19 2017 at 6:32pm
Suppose AI thinks humans threaten its existence. (Or even inconvenience it to a significant extent.)
If you had a nest of wasps in your house, would a simple cost/benefit analysis show that you should get rid of some of them, but not all of them?
Mark Bahner
Jan 19 2017 at 6:45pm
At the very end, there’s a scene with a small group of the last humans on the planet gathered in Times Square, with the 2030 New Year ball coming down…10, 9, 8, 7…they get to 0, and everyone cheers! Then a Monty Python AI foot squashes them all.
Cue the Liberty Bell March, and roll the credits. A surefire blockbuster.
Andy
Jan 20 2017 at 7:21am
I’m sorry, but I think you guys have it completely reversed. Posit a world in which general A.I. never happens; then what’s our solution to the observation that smart people are breeding below replacement, and that lowered infant mortality rates are increasing the general-fitness-reducing (including IQ-reducing) mutation load? Only genetic engineering is left. Given what’s at stake, we should probably not leave technological solutions off the table. I’d say that the inverse IQ/fertility correlation is far more likely to effectively end civilization than is some general A.I. not being programmed well.
Andy
Jan 20 2017 at 7:24am
^ I mean, I’m aware that this doesn’t somehow negate the legitimacy of the bet; I was referring more to the general topic of discussion.