In my latest essay, I wrote,

In my opinion, James Miller is making a bad bet. If you want to bet against Ray Kurzweil, you should look for patterns of prediction errors. As this essay will show, Kurzweil has been systematically overly optimistic concerning some forms of artificial intelligence. But in forecasting developments based on brute computation and nano-scale engineering — which are what is needed to solve the problem of fabricating a large, perfect diamond — Kurzweil has been, if anything, a bit conservative.

Miller emailed me, asking

So the obvious question I have for you is “will you accept my bet?” How much would I have to pay you today for you to promise to give me a 10-meter-diameter solid diamond sphere in 2030, 2040 or 2050?

My view is that the cost of producing the diamond has a bimodal distribution. It might be something like this:

| Year | Probability that cost = \$100 | Probability that cost = \$10 million |
|------|------|------|
| 2030 | 0.7  | 0.3  |
| 2040 | 0.9  | 0.1  |
| 2050 | 0.99 | 0.01 |

I think that from my end, the bet is worth much less than its expected value, because (a) I am risk averse and (b) I think the value of money will be inversely related to the pace of technological change: if we can fabricate a 10-meter diamond cheaply in 2030, then my “winning” will not mean much, because my family will be so fabulously wealthy anyway. In terms of state-preference theory, the state of the world in which I would really value wealth is the one in which technological progress is slow and the diamond is expensive.

But even if I were risk neutral, given those numbers and a 3 percent real interest rate, I would ask for a lot of money. Yet I think the chances are that it will be quite cheap to produce the diamond.
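To see why even a risk-neutral version of me would want a lot of money, here is a minimal sketch of the arithmetic, using the probabilities above; the 2010 base year for discounting is my assumption, since the post does not say when the money would change hands.

```python
# Expected fabrication cost under the bimodal distribution in the
# table above, discounted at a 3 percent real rate. BASE_YEAR is
# an assumption for illustration.
P_CHEAP = {2030: 0.7, 2040: 0.9, 2050: 0.99}  # P(cost = $100)
COST_CHEAP, COST_DEAR = 100, 10_000_000

def expected_present_value(year, base_year=2010, rate=0.03):
    """Expected cost in `year`, discounted back to `base_year`."""
    p = P_CHEAP[year]
    expected = p * COST_CHEAP + (1 - p) * COST_DEAR
    return expected / (1 + rate) ** (year - base_year)

for year in P_CHEAP:
    print(f"{year}: ${expected_present_value(year):,.0f}")
```

Even for 2050, where the cheap outcome has probability 0.99, the 1 percent chance of a \$10 million cost keeps the expected cost around \$100,000 before discounting.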

UPDATE: In a follow-up email, Miller points out that if I expect that we will be rich in 2030, then I should be eager to take money today in exchange for having to pay out in 2030. In other words, I ought to use a higher discount rate than 3 percent.
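Miller's point can be made concrete with a quick comparison; the 7 percent alternative rate and the 2010 base year are arbitrary choices of mine for illustration.

```python
# Present value of a hypothetical $10 million payout due in 2030,
# at the 3 percent rate used above versus an assumed higher 7
# percent rate reflecting Miller's argument. Base year 2010 is
# an assumption.
payout, years = 10_000_000, 2030 - 2010

pv_at_3 = payout / 1.03 ** years  # roughly $5.5 million
pv_at_7 = payout / 1.07 ** years  # roughly $2.6 million
print(f"PV at 3%: ${pv_at_3:,.0f}; PV at 7%: ${pv_at_7:,.0f}")
```

The higher the discount rate, the less it costs me today to promise a payout in 2030, which is exactly why Miller thinks I should be eager to take his money now.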