The Seasteading Conference is Sept. 28-30 in San Francisco. The Singularity Summit is October 3-4 in New York. I won’t be at either, but Robin Hanson appears to be on the bill for the Singularity Summit.
Both of these conferences are relevant to issues that I blog about. Seasteading relates to competitive government. The singularity is a scenario under which government debt is not a problem.
Attending either conference is a signal that you are highly open and have high g. You probably are high in conscientiousness and low on neuroticism. It is likely that you are high on extroversion, but low on agreeableness, so that many people think of you as introverted.
READER COMMENTS
Steven
Aug 24 2009 at 6:11pm
“The singularity is a scenario under which government debt is not a problem”
This is only true if the singularity increases the tax base by more than it increases medical and other government spending. Given the historical relationship between technological progress and the size of government, this may prove to be a dangerous assumption.
Patri Friedman
Aug 24 2009 at 8:02pm
Open-minded, high g, high conscientiousness, low neuroticism, high extroversion, low agreeableness – these are my people! Arnold fits most or all of these, I would think.
drtaxsacto
Aug 24 2009 at 8:42pm
I read Kurzweil’s book on the Singularity when it came out and thought it was one of the oddest tomes I’ve ever read: completely devoid of any ethical framework, Malthusian in its thinking on human capabilities, and almost Wells-like in its estimations of the capabilities of machines. I am not sure why the singularity would positively reduce government debt, but then, given those three limitations, I am not sure why any assumption there would be worthwhile.
Mark Bahner
Aug 24 2009 at 10:37pm
NOTE: I’m modifying comments to adjust the hyperlink, to avoid the spam filter
“Malthusian in its thinking on human capabilities…”
I think you need to re-read the book. Ray Kurzweil predicts that humans will increasingly incorporate machines in their bodies: e.g., circuits replacing hydrocarbon brain cells and micromachines repairing bodies. He in fact predicts that in mere decades humans will effectively be immortal and infinitely smart. I don’t think Malthus ever dreamed of such things.
“I am not sure why the singularity would positively reduce government debt…”
Well, when everyone in the U.S. is earning $1 million a year (year 2009 dollars) it doesn’t much matter what government debt is:
See: http://www.econlib.org/archives/2009/08/resolving_us_in.html#81560
[Link now working. Sorry about that filter problem. It’s a serious annoyance.–Econlib Ed.]
Patri Friedman
Aug 25 2009 at 12:30am
Well, when everyone in the U.S. is earning $1 million a year (year 2009 dollars) it doesn’t much matter what government debt is:
Say what? I think it matters an awful lot whether per capita debt then is $500K or $2M.
Historically, governments spend a fixed fraction of our exponentially growing wealth. Our last order of magnitude jump ($4K/yr to $40K/yr over ~100 years) saw government spending as a fraction of GDP going way up. Perhaps on our next order of magnitude jump it will go up again.
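The implied growth rate behind that order-of-magnitude jump is easy to make explicit. The $4K/yr and $40K/yr figures (over ~100 years) are the commenter's; the arithmetic below just spells out what they imply:

```python
import math

# Commenter's figures: real income went from ~$4K/yr to ~$40K/yr over ~100 years.
start_income, end_income, years = 4_000, 40_000, 100

# Compound annual growth rate implied by a 10x jump over 100 years.
annual_growth = (end_income / start_income) ** (1 / years) - 1
print(f"implied real growth: {annual_growth:.2%} per year")  # ≈ 2.33%

# At a constant rate, the next tenfold jump (to ~$400K/yr) takes the same time.
next_years = math.log(10) / math.log(1 + annual_growth)
print(f"years until the next 10x jump: {next_years:.0f}")  # ≈ 100
```

With constant exponential growth, every order-of-magnitude jump takes the same number of years, which is why the question of whether government's share of GDP ratchets up on each jump matters so much.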
Sure, at high income levels taxes don’t affect whether or not we can eat, but those taxes may be the difference between you affording or not affording immortality treatments…or moving to Ganymede.
Matthew C.
Aug 25 2009 at 7:31am
Seasteading is a potentially practical, realizable technology and social experiment that I expect has a good possibility of success (p~0.5 give or take).
The “Singularity”, at least as usually envisioned with nanobotized immortal biology, superhuman AI, “downloaded” human consciousness and the like, is an ubergeek fantasy with no grounding in actual real science or technologies (we have no idea how physical processes might generate conscious experience, we have no nanobots, and no understanding of the organizing principles of biology that drive its self-replication, regulation, repair and growth functions).
Singularitism is religion for reductionistic atheist technophile nerds who love to look down their noses at the religiosity of the plebes.
8
Aug 25 2009 at 11:23am
I always thought the Singularity presented a pretty good case for the communists.
Mark Bahner
Aug 25 2009 at 11:55am
“Singularitism is religion for reductionistic atheist technophile nerds who love to look down their noses at the religiosity of the plebes.”
I don’t agree. I think Ray Kurzweil makes a very good case for the idea that computer intelligence will be essentially equal to human intelligence in the next 1-4 decades.
If so, that’s fundamentally different from anything in the experience of human beings. Using Ray Kurzweil’s estimates of computer power with respect to the human brain (which he estimates at 20 quadrillion calculations per second) and estimates of the number of personal computers built every year, I’ve estimated the number of “human brain equivalents” added by personal computers every year.
The result for 2000 was only 50 human brain equivalents added. And even in 2010 it’s only 50,000 human brain equivalents added. But in 2020 it’s 50 million. And in 2030, it’s 100 billion. Even if the number for 2030 is too high by a factor of 100, that would still be adding the equivalent of 1 billion human brains…in that year alone.
Nothing in history has been remotely comparable to adding billions of human brain equivalents every year.
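The back-of-the-envelope arithmetic above can be sketched in a few lines. Only the ~20 quadrillion calculations/sec per brain figure comes from the comment (Kurzweil's estimate); the PC baseline speed, doubling period, and shipment numbers below are made-up placeholders, so the point is the exponential shape of the curve, not the exact values:

```python
# Kurzweil's estimate of human brain capacity, per the comment above.
BRAIN_CPS = 2e16  # calculations per second per human brain

def pc_cps(year: int) -> float:
    """Assumed speed of a typical PC, doubling every 1.5 years (hypothetical baseline)."""
    base_2000 = 1e9  # placeholder: ~1 billion calculations/sec for a year-2000 PC
    return base_2000 * 2 ** ((year - 2000) / 1.5)

def pcs_shipped(year: int) -> float:
    """Assumed PCs built per year, growing 10% annually (hypothetical baseline)."""
    base_2000 = 1.3e8  # placeholder: ~130 million PCs shipped in 2000
    return base_2000 * 1.10 ** (year - 2000)

def brain_equivalents_added(year: int) -> float:
    """Human brain equivalents of computing added by PCs built in a given year."""
    return pcs_shipped(year) * pc_cps(year) / BRAIN_CPS

for y in (2000, 2010, 2020, 2030):
    print(y, f"{brain_equivalents_added(y):.3g}")
```

Because PC speed and shipment volume both compound, the human-brain-equivalents figure grows multiplicatively, which is what produces the dramatic jump between decades in the comment's numbers.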
Matthew C.
Aug 25 2009 at 4:58pm
Mark,
I write software for a living.
Yes, computers can perform lots of symbol manipulations quickly. But that’s it. It doesn’t matter how many calculations we can execute, or how fast, since we have no idea what combinations of electronic patterns might in some completely mysterious way manifest in general intelligence, creativity, or conscious awareness.
Yes, many people have lots of faith that the critical breakthrough to create software intelligence is “just around the corner”. I lack that faith, and seriously doubt that information processing is even the right conceptual model for animal and human intelligence.
Richard Hollerith
Aug 25 2009 at 5:35pm
I tested at 3 or 4 on a scale of 10 on conscientiousness. I strongly doubt Patri would score higher than that. I don’t know Arnold well enough to have an opinion on his score. Otherwise I tend to agree about the personality of singularitarians and libertarians like Arnold and Patri.
phineas
Aug 25 2009 at 7:56pm
I too write software for a living, and if anybody said that my work exhibited a lot of “artificial intelligence” I’d be insulted. Artificial intelligence is a pejorative word in my vocabulary and I’ve never met a good programmer who didn’t have the same attitude. If a computer program does something useful, it’s not artificial intelligence by definition (or by conventional vocabulary among productive programmers). In other words, in computer programming circles, the word artificial intelligence means reaching for the unreachable stars, and writing prototype programs that never make it into the real world. Hence when the Singularity Conference organizers prognosticate on their homepage: “Artificial Intelligence is probably the most important technology in humanity’s future”, I can immediately conclude that I wouldn’t be in the slightest bit interested in attending.
Mark Bahner
Aug 25 2009 at 9:57pm
“Mark, I write software for a living.”
Yes, Ray Kurzweil has done a bit of that, too. 😉
“Yes, many people have lots of faith that the critical breakthrough to create software intelligence is “just around the corner”. I lack that faith, and seriously doubt that information processing is even the right conceptual model for animal and human intelligence.”
I don’t have any faith in anything. But I am persuaded by the amazing amount of data Ray Kurzweil has collected and analyzed to demonstrate that there is exponential or “double-exponential” growth in the technologies that are needed so that machines can perform tasks that previously only human beings could do.
And I also read more about technology developments in all fields than probably all but one in 1,000 people: MIT Technology Review, Small Times Magazine, Popular Science, Scientific American.
When I read those magazines, I’m constantly saying, “(Expletive deleted)…that is so cool.”
Name one economically important thing human beings do…my guess is that in a few decades machines will be able to do it better and cheaper.
Build houses and buildings? Sure. And do it 24/7, without pay, without lunch, fluids, or bathroom breaks.
Drive vehicles? Sure. Not only drive vehicles, but drive them so much better than humans that cars will be able to drive at 80 mph with less than a car length between them…and probably making 3 lanes out of a 2-lane road, to boot. Passing each other at right angles at full speed, so there’s no need even for traffic lights.
Surgeons? Robots have no hand tremor at all. They’re never tired or distracted.
Reading mammograms? Pap smears? Machines, machines.
http://biomedicalcomputationreview.org/2/3/all.pdf
Of course, as machines replace humans in all these tasks, many or even most people will say, “Well, that doesn’t take intelligence.”
Dan Weber
Aug 27 2009 at 5:01pm
Seasteading seems a lot more possible than the singularity. Of course, like the evolution of life, the singularity only needs to get lucky once.
Daniel Burfoot
Aug 29 2009 at 7:24am
Here’s another link between AI and competitive government: the nearest-term applications of AI are all potentially terrifying tools in the wrong (government) hands. They are things like computer vision (smart surveillance), speech and language understanding, and robot soldiers.
Comments are closed.