A recent Vox post asked 17 experts for their views on the risks posed by artificial intelligence. Bryan Caplan was grouped with those whose views might be described as more complacent, or perhaps less complacent, depending on whether you define "complacency" as being unworried about AI or as being opposed to letting AI transform our society.

AI is no more scary than the human beings behind it, because AI, like domesticated animals, is designed to serve the interests of the creators. AI in North Korean hands is scary in the same way that long-range missiles in North Korean hands are scary. But that’s it. Terminator scenarios where AI turns on mankind are just paranoid. — Bryan Caplan, economics professor, George Mason University

I’m not sure how I should feel about this comment. Let’s start with the first sentence, which suggests that AI is no worse than the humans behind it. So how bad are humans? It turns out that some humans are extremely bad. One terrorist, known as the Unabomber, thought that the Industrial Revolution was a mistake. He was captured, but might a similar human pop up in the age of AI? Imagine someone who thought humans were a threat to the natural environment.

The second sentence compares AI to long-range [presumably nuclear] missiles. I find those to be sort of scary, but I am much more frightened of nukes being put inside a bale of marijuana and smuggled into New York. How good are we at stopping drugs from coming into the country?

The second sentence does offer one reassurance: missiles are controlled by governments. Even very bad governments, such as the North Korean leadership, can be deterred by the threat of retaliation.

But that makes me even more frightened of AI! A future Unabomber who wants to save the world by getting rid of mankind could be (in his own mind) a well-intentioned extremist, willing to sacrifice his life for the animal kingdom. Deterrence won’t stop him.

So what is the plan here? Is use of AI to be restricted to governments only?

Or is it assumed that AI will never, ever develop to the point where it could be used as a WMD?

And how good is the track record of scientists telling us that something isn’t possible? A while back a professor who teaches biotech told me that the cloning of humans was completely impossible, that it would never happen, because we are far too complex. Within two years a sheep had been cloned.

David Deutsch is one of my favorite scientists. In 1985 he came up with the idea of quantum computers. But he also doubted that they could be manufactured:

“I OCCASIONALLY go down and look at the experiments being done in the basement of the Clarendon Lab, and it’s incredible.” David Deutsch, of the University of Oxford, is the sort of theoretical physicist who comes up with ideas that shock and confound his experimentalist colleagues–and then seems rather endearingly shocked and confounded by what they are doing. “Last year I saw their ion-trap experiment, where they were experimenting on a single calcium atom,” he says. “The idea of not just accessing but manipulating it, in incredibly subtle ways, is something I totally assumed would never happen. Now they do it routinely.”

Such trapped ions are candidates for the innards of eventual powerful quantum computers. These will be the crowning glory of the quantum theory of computation, a field founded on a 1985 paper by Dr Deutsch.

And here’s something even more amazing. The Economist’s cover story is on the booming field of quantum computing.

And here’s something even more astounding. Humans don’t even know how these machines will work. For instance, Deutsch says:

“If it works, it works in a completely different way that cannot be expressed classically. This is a fundamentally new way of harnessing nature. To me, it’s secondary how fast it is.”

. . . A good-sized one would maintain and manipulate a number of these states that is greater than the number of atoms in the known universe. For that reason, Dr Deutsch has long maintained that a quantum computer would serve as proof positive of universes beyond the known: the “many-worlds interpretation”. This controversial hypothesis suggests that every time an event can have multiple quantum outcomes, all of them occur, each “made real” in its own, separate world.

One theory holds that machines that fit on a tabletop will manipulate more states than there are atoms in this universe by accessing zillions of alternative universes that we can’t see, while another theory says it will all be done within our cozy little universe. And scientists have no idea which view is correct! (I’m with Deutsch: many worlds.) Imagine if scientists were unable to explain how your car moves. Or, to take a scarier analogy, suppose scientists in 1945 had no idea whether the first atomic bomb would destroy a building, a city, or the entire planet.
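
To get a rough sense of the scale behind that claim: the joint state of n qubits is described by 2^n amplitudes, and the observable universe is commonly estimated to contain on the order of 10^80 atoms. A back-of-the-envelope sketch (using that standard rough estimate; the exact threshold isn’t the point) shows how few qubits it takes for 2^n to pass that number:

```python
# Back-of-the-envelope check: how many qubits n does it take before the
# number of amplitudes describing their joint state (2**n) exceeds the
# commonly cited rough estimate of ~10**80 atoms in the observable universe?

ATOMS_IN_UNIVERSE = 10**80  # standard rough estimate, not a precise figure

n = 0
while 2**n <= ATOMS_IN_UNIVERSE:
    n += 1

print(f"{n} qubits suffice: 2**{n} ~ {float(2**n):.3e} > 1e80")
# Prints: 266 qubits suffice: 2**266 ~ 1.186e+80 > 1e80
```

So a “good-sized” machine in the hundreds of qubits is all it takes; the puzzle is where all that state lives.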

Just to be clear, I don’t think quantum computers pose any sort of threat; I’m far too ignorant to make that judgment. What does worry me is that journalists can put together a long list of really smart people who are not worried about AI, and another list of really smart people who see it as an existential threat. That reminds me of economics, where you can find long lists of experts on both sides of raising the minimum wage, adopting a border tax/subsidy, or cutting the budget deficit. I know enough about my field to understand that this divergence reflects the fact that we simply don’t know enough to know who’s right. If that’s equally true of the existential threats looming in the field of AI, then I’m very worried.

AI proponents should not worry about convincing ignorant people like me that AI is not a threat. Rather they should focus on convincing Stephen Hawking, Elon Musk, Bill Gates, Nick Bostrom and all the other experts who do think it is a potential threat. I’m not going to be reassured until those guys are reassured.

I may know absolutely nothing about AI, but I know enough about human beings to know what it means for experts in a field to be sharply divided over an issue.

PS. Please don’t take this post as representing opposition to AI. Rather I’m arguing that we should take the threat seriously. How we react to that information is another question—one I’m not competent to answer.

PPS. Commenters wishing to convince me that AI is not a threat should also indicate why their arguments are not convincing to people much better informed than I am. Otherwise, well . . . it’s sort of like this . . .

[image]

HT: Tyler Cowen