Economists should listen more to techies on what techs will be feasible at what costs, but techies should also listen more to economists on the social implications of tech costs. Alas, just as economists prefer to rely on their intuitive folk tech forecasts, techies prefer to rely instead on their intuitive folk economics.
His point is that I should back off from my claims that artificial intelligence will be very difficult. For Hanson, I am not enough of an authority on technology. I should defer to authorities.
I try to stay away from arguing from authority. When I cite economic theory as an authority, it is not with a view toward saying “Shut up and listen to economists.” It is instead with a view toward explaining why the economic theory is convincing.
My views on artificial intelligence have been shaped by reading books on it. Here is what I wrote after reading Jeff Hawkins’ On Intelligence in 2004. Later, in 2005, I looked at Ray Kurzweil’s predictions, and I found a pattern of errors.
There are some engineers who say, “Of course someday we can reverse-engineer the human brain.” That is a non-falsifiable statement. We could go 1000 years without reverse-engineering the brain, and somebody could still be saying that we can do it. If engineers are willing to set milestones for the reverse-engineering problem and put money on when we reach those milestones, I probably will be willing to bet against them, even though I do not qualify as an authority.
READER COMMENTS
Tom West
Oct 30 2009 at 12:29pm
Amen. As a techie (although not deeply in the AI field), I keep looking for milestones (and progress against those milestones) from the singularity folk. So far, it doesn’t look encouraging.
I’d say the biological equivalent, a stop to aging, looks more likely. And that’s not likely at all.
Robin Hanson
Oct 30 2009 at 1:07pm
Arnold, are there any subjects where you do defer to those who have studied a subject more, even if you do not understand their reasons for their conclusions?
TomB
Oct 30 2009 at 1:27pm
An expert’s conclusion deserves deference only to the extent the conclusion rests on the expert’s expertise. Many conclusions are based not just on the expert’s expertise, but also on statistical modeling and forecasting (even if such modeling/forecasting is not formal). For example, an economist may be in a better position to assess correlation and causality in a dataset than a biologist. The biologist can collect the data and reach a conclusion, but the economist should not defer to the biologist’s statistically based conclusions. An economist may also be in a better position to assess the weight to be given to an expert’s conclusion given the uncertainty in the initial assumptions.
For example, if 467896545 (the comment above) is a software expert, his conclusions as to the algorithms and possibly the hardware deserve some deference. However, his opinions regarding what is needed to emulate the human brain, human cognition not being his specialty, are entitled to no deference.
Given the current state of technology and the uncertainty involved with future tech progress, an economist may be in a better position to predict the emergence of AI than an engineer, especially if the economist can evaluate the bases for the engineer’s or scientist’s opinion.
fundamentalist
Oct 30 2009 at 1:46pm
Hayek wrote that the brain is a classification system and a classification system must be more complex than the system it tries to analyze and classify. As a result, the human brain can never know itself.
Dan Weber
Oct 30 2009 at 2:00pm
If I have to appeal to authority, there were plenty of PhDs and professors when I was at MIT who thought Kurzweil was just nutty.
Even the AI department was mostly about trying to come up with better ways of solving problems.
steve
Oct 30 2009 at 2:18pm
There is an old saying among engineers.
If an old and accomplished engineer tells you some difficult task can be done, he is probably correct; if he tells you that something can’t be done, ignore him.
As far as respecting the argument from authority goes, I don’t.
As for artificial intelligence, in my humble opinion this is the state of the art:
http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html
Daniel Kuehn
Oct 30 2009 at 2:32pm
I’m not sure why this is argument from authority. He’s just asking you to take Hayek’s advice – don’t assume you know more about something than the people who deal with it every day. That seems sound to me.
Sam Wilson
Oct 30 2009 at 3:11pm
I think I see where Robin is coming from. The quote of yours that he used was one where you invoked Hayek’s fatal conceit, and I think it’s possible to allow for the possibility that the singularity might emerge from the bottom up (teaching a robot to learn [i.e., develop tacit knowledge] on its own, or something). I agree with you on the milestone issue, but I also tend to think that studying economics has shown me the depths of what I now know I don’t know. One of the things I don’t know is in what form AI is likely to arrive (if at all). However, to be intellectually honest, I would also have to admit that a techie would probably have a better idea about it than an economist.
Then again, Robin’s both, so maybe he’s shining his own apple. I know I would if I could straddle the hedge like that.
Zac Gochenour
Oct 30 2009 at 3:35pm
Arnold says, “I try to stay away from arguing from authority. When I cite economic theory as an authority, it is not with a view toward saying ‘Shut up and listen to economists.’ It is instead with a view toward explaining why the economic theory is convincing.”
But often you cannot do this. That is why anyone bothers to become an authority, because some issues require expertise to fully understand. Perhaps the issue is very complicated and you would need several hours to explain how the economic theory applies, that your assumptions are defensible, that your conclusion follows from the assumptions. Perhaps you need to support your idea with empirical evidence and/or extensive narrative and hence write an article or book. Perhaps numerous books and articles have been written on the subject over time.
There are several issues in economics that you need a PhD in economics, or equivalent expertise and familiarity with the discussion up to this point, to fully understand. You have read a few books. If someone had read a few books on economics and made statements/predictions that contradicted conclusions you arrived at after years of careful thought, building on the work of fellow economists past and present, how much weight should their opinion hold?
Let’s expand upon a great Murray Rothbard quote here. It is no crime to be ignorant of any subject; after all, no one can be a specialist in all disciplines. But it is totally irresponsible to have a loud and vociferous opinion on any subject while remaining in this state of ignorance.
Lee Kelly
Oct 30 2009 at 4:32pm
“Arnold, are there any subjects where you do defer to those who have studied a subject more, even if you do not understand their reasons for their conclusions?” – Robin Hanson
Are you an authority on when people should defer to experts?
Arnold Kling
Oct 30 2009 at 4:41pm
Robin,
I defer to other people’s views in many situations. But in other situations, I think for myself. Global warming is an example: I don’t understand *all* of the science, but I understand enough to know that the scientists are quite ignorant.
I don’t defer to the AI experts, because they seem similarly ignorant.
For example, Steve links to a TED talk by Markram, who is yet another scientist who treats the brain as disembodied. I don’t think that is going to work.
When I say that experts are ignorant, I am not saying that I know more. But I have less confidence in their knowledge than they do, and I take a position that is robust if they turn out to be wrong.
steve
Oct 30 2009 at 5:15pm
I would say the key to Rothbard’s quote is “loud and vociferous.” (Part of the reason I like him so much is that he writes so precisely.) At the risk of offending the esteemed author of this article, and most blogs on the internet, I would say:
It is unfortunate if people believe you when you are wrong.
CJ Smith
Oct 30 2009 at 5:15pm
Robin:
Not to gang up on you (much), but I have a conceptual problem with engineers who claim they can map and create an artificial intelligence based on the human brain when we don’t even know how the human brain works. Oh, we can stimulate/suppress various areas through science and medicine, but I have yet to run across a treatise or medical journal that says anything like, “when you put this cluster of neurons with this combination of stimulus, you get this specific psychological/psychiatric result.”
Sounds to me like the engineers are blind men promising to paint a masterpiece – if they could just get that whole darn light concept worked out….
steve
Oct 30 2009 at 6:03pm
Arnold, I agree that it may not work. The idea that it might take the whole body to obtain intelligence is not unreasonable.
But, I think the basic approach is a good avenue to explore. I think it is generally accepted that actually understanding the workings of the human mind in any real detail is beyond the human mind.
Nevertheless, the idea that intelligence is a sum of the parts of a human mind, or, in your case, a whole human body, seems quite likely to me. If this is the case, we can at least eventually understand each piece well enough to model it.
I would suggest something like this. If all intelligence needs is the correct chemistry and sufficient processing power, then we are close.
If that is insufficient and the whole body is needed at the chemical level, we are looking at a longer timeline, maybe a century or two.
However, if, in addition to mind and body at the chemical level, the mind needs accurate modeling clear down to the quantum level, it could take a very long time.
Of course, all we will be able to do in the next couple of decades is prove or discredit the first case.
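For concreteness, here is a back-of-envelope version of that first case in Python. Every number in it is a rough, widely cited ballpark assumption (neuron and synapse counts, firing rate, one floating-point operation per synaptic event), not anything established in this thread:

```python
# Back-of-envelope compute estimate for the "correct chemistry plus
# sufficient processing power" case. All figures below are rough,
# commonly cited ballpark assumptions, not measured facts.

NEURONS = 8.6e10             # approximate neuron count in a human brain
SYNAPSES_PER_NEURON = 1e4    # rough average, giving ~1e15 synapses total
FIRING_RATE_HZ = 100         # assumed update rate per synapse
FLOPS_PER_UPDATE = 1         # assume one floating-point op per synaptic event

required_flops = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ * FLOPS_PER_UPDATE
print(f"Synapse-level simulation: ~{required_flops:.1e} FLOPS")

# Compare against a top supercomputer of 2009, roughly a petaflop machine.
PETAFLOP_MACHINE = 1.1e15
print(f"Gap to a 2009-era petaflop machine: ~{required_flops / PETAFLOP_MACHINE:.0f}x")
```

On those assumptions the gap is only about two orders of magnitude, which is why the first case reads as “close”; make the per-synapse model richer and the answer moves out accordingly.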
CJ Smith
Oct 30 2009 at 6:07pm
Oops, dropped the end of the first paragraph. Should have read “when you put this cluster of neurons with this combination of stimulus, you get this specific psychological/psychiatric result: sentience, intelligence, creativity, intuition, etc.”
Zac and Steve, while I can appreciate your positions and the paraphrase of Rothbard’s quote in the context of an eastern sensei-deshi training relationship, there is also a lot to be said for an equally valid school of thought based upon Socratic discourse, so long as you don’t equate “loud and vociferous” with “unwilling to learn.” Put another way, “Question authority – authority may have missed something.” It’s also a wonderful, fun, and engaging way to learn, as this blog demonstrates.
Robin, to echo my understanding of both TomB’s and Arnold’s positions regarding authority: don’t ask for or expect deference on a subject merely because you’ve spent more time studying or working in the area. Deference, like respect, is earned. Demonstrate that your expertise is truly on point and time-critical, and that you have a demonstrable history of being right with respect to the matter at hand, and you MIGHT be entitled to some deference. Simply saying, “Defer to me because I know more than you – look, I even have a degree!” is the recurring vanity of academics, politicians, and military officers everywhere.
steve
Oct 30 2009 at 6:29pm
I would also point out that modeling the atmosphere down to this degree of detail would require many orders of magnitude more processing power. My gut tells me you would have to convert a significant chunk of the earth’s crust into computers to be able to do it. Sort of like economists explaining how big a trillion is.
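To put a number on that gut feeling, here is the same kind of arithmetic for a molecule-level atmosphere model, using standard physical constants and rough public estimates (assumed for illustration, not taken from any climate model):

```python
# Rough count of the state a molecule-level atmosphere model would track.
# Figures are standard constants and approximate public estimates.

ATMOSPHERE_MASS_G = 5.1e21   # total mass of Earth's atmosphere, grams (approx.)
MEAN_MOLAR_MASS = 29.0       # mean molar mass of air, g/mol (approx.)
AVOGADRO = 6.022e23          # molecules per mole

molecules = ATMOSPHERE_MASS_G / MEAN_MOLAR_MASS * AVOGADRO
print(f"Molecules in the atmosphere: ~{molecules:.1e}")   # ~1.1e44

# Even at one byte of state per molecule, against ~1e12 bytes per
# 2009-era terabyte drive, storage alone needs ~1e32 drives.
DRIVE_BYTES = 1e12
print(f"Terabyte drives just to store one byte per molecule: ~{molecules / DRIVE_BYTES:.0e}")
```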
I am no expert on climate change, but I am very skeptical of their models. Sort of like modeling intelligence with the old top-down symbolic approach to AI, which I think has been largely discredited as a way of producing intelligence. I don’t expect it to work well for even larger chaotic systems.
steve
Oct 30 2009 at 6:37pm
CJ Smith
If my understanding of Rothbard is correct, he would probably tolerate any amount of questioning and objections as long as you don’t turn to violence or the government to ‘answer’ the question.
Dr. T
Oct 30 2009 at 7:52pm
The experts get many things wrong about the future. They are so tunnel-visioned that they don’t see possible flaws in their thinking. Current examples:
We’ll soon have artificial intelligence that mimics the human brain. [AI researchers are not close to accurately describing the physiology of thinking and remembering. It’s hard to mimic what you cannot describe.]
CO2-based global warming will melt the poles and flood the planet. [Climatology isn’t really a science, it’s mathematical modeling using weather data. Climatologists cannot agree on the best way to calculate mean global temperature. They don’t even know if the greenhouse gas effect can be extrapolated to the whole planet and its miles-thick atmosphere.]
I’m an expert in medical laboratory testing. Can I predict what testing will be like in 2100? No. In 2050? No. In 2020? Perhaps. But, I’ve seen others in my field make bold predictions about genetic test this and molecular test that. In the 1970s the “experts” said PCR testing would eliminate bacterial cultures. Thirty years later their prediction came true, but only for the sexually transmitted diseases gonorrhea and Chlamydia. We still culture all the other bugs.
Matthew C.
Oct 30 2009 at 8:13pm
I’ve been a computer programmer since age 10 and an avid lay student of AI since then as well. And, yes, back then I wrote my own Eliza-like clone and “Zork”-style natural language parsing adventure game.
I’m also a paid professional software developer in my day job. And I agree, wholeheartedly, with Arnold’s analysis. The AI emperor is suffering a severe draft on his nether regions. . .
Tom West
Oct 30 2009 at 9:06pm
“The AI emperor is suffering a severe draft on his nether regions”
I’m not so certain about that. I’ve been of the impression (possibly now out of date) that the majority of the AI researchers think whole brain emulation is, as one of my AI profs put it, the specialty of those who read too much sf when they were young.
In other words, I’m moderately certain the authorities are mostly of the opinion that this is a wild-goose chase. (Again, things may have changed in the last 5-6 years.)
steve
Oct 31 2009 at 11:31am
Dr T writes “[AI researchers are not close to accurately describing the physiology of thinking and remembering.]”
I think this is true, but I think the claim is that they can accurately describe the physiology and behavior of a single neuron. After all, it is much easier to describe the behavior of a single computer on the internet than it is to describe and model the flow of data across the internet. That didn’t prevent them from building it.
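To illustrate what “describing a single neuron” looks like in practice, here is a minimal sketch of a standard leaky integrate-and-fire model. The parameters are textbook-style illustrative values, not taken from any particular study:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, is driven up by input current, and fires when it crosses
# a threshold. Parameters are illustrative textbook-style values.

def simulate_lif(input_current_na, t_max_ms=100.0, dt_ms=0.1,
                 tau_ms=10.0, v_rest=-65.0, v_threshold=-50.0,
                 v_reset=-65.0, resistance_mohm=10.0):
    """Return spike times (ms) for a constant input current (nA)."""
    v = v_rest
    spikes = []
    for step in range(int(t_max_ms / dt_ms)):
        # Euler step: decay toward rest plus drive from the input current.
        dv = (-(v - v_rest) + resistance_mohm * input_current_na) / tau_ms
        v += dv * dt_ms
        if v >= v_threshold:            # threshold crossed: record a spike
            spikes.append(step * dt_ms)
            v = v_reset                 # reset the membrane potential
    return spikes

# A 2 nA drive pushes the steady state above threshold, so it spikes regularly.
print(simulate_lif(2.0))
```

One equation per neuron is tractable; the hard part, as the internet analogy suggests, is the joint behavior of ~1e11 of these wired together.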
This doesn’t touch Arnold’s contention that the neurons are not enough, that you need the other cells as well. I don’t see any other way to answer that question than to try it.
Tom West writes “whole brain emulation is, as one of my AI profs put it, the specialty of those who read too much sf when they were young.”
LOL, this is good. Also true, I think, though it doesn’t mean they are wrong.
Zack M. Davis
Oct 31 2009 at 10:42pm
re “If engineers are willing to set milestones for the reverse engineering problem”
Arnold, have you seen the Whole Brain Emulation Roadmap (PDF)?
Tom West
Nov 1 2009 at 3:24pm
Zack, thank you very much for the pointer. I’d never heard of this document, and it’s certainly a big step in keeping the amount of hand-waving down.
It’s now part of my bookmarks for future reference.
Marcus
Nov 2 2009 at 8:15am
What is Arnold’s argument?
Is it (1) that AI cannot be done, or (2) that experts are overly optimistic about how long it will take?
If it is (1), then I disagree. If it is (2), then I agree. People in general, not just computer programmers, have a tendency to underestimate how long some project will take. But that’s not much insight.
AI is a complicated subject with many sub-disciplines. As with Read’s pencil, I have little doubt that when we finally do have a working artificially intelligent machine, there will not be any single person who understands all of it.
Here’s my prediction: when we do have an artificially intelligent machine which can learn on a scale larger than that of a gnat, there will be those who deny it is learning. And their position will be unfalsifiable, because they will always have this nebulous concept of the brain, which no single human being can ever understand in its entirety, to hide in, and they will pretend that the brain is something more than it actually is.
steve
Nov 2 2009 at 9:30am
Marcus,
I think you are correct, up to a point. If an AI ever gets to the point where it comes up with a better theory in some undeniable discipline, say a better theory of physics complete with verifiable predictions, then the “it’s not a brain” crowd will largely retreat to “it’s not a human.” I don’t think they will ever have to retreat from that point.