Normal people worry about technological unemployment. Economists keep telling them to relax, but to little avail. You can’t trust a coven of eggheads, can you?
Rather than rehash the textbook arguments, let me propose an easy way for the public to test its own understanding.
Step 1: Create a graph where the x-axis runs from 1948 to the present, and the y-axis shows the overall level of technology.
Step 2: Sketch whatever you personally believe about the evolution of the overall level of technology during this period. Do NOT proceed to Step 3 until you have finished Step 2.
Step 3: Compare your graph to the actual history of U.S. unemployment from 1948-present. I repeat: Do not peek until you’ve completed Step 2.
Step 4: If technology were an important cause of unemployment, the two graphs should look a lot alike: more tech, more unemployment. (If you favor the more sophisticated theory that it’s tech growth, not tech level, that raises unemployment, eyeball that instead).
Step 5: So what do your own eyes tell you?
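The eyeball test can be sketched numerically. Everything below is synthetic — a stylized 3%/year "tech level" and a trendless unemployment cycle standing in for the real series (FRED's UNRATE, if you want the actual data) — but it shows why a monotonically rising curve cannot track a stationary one:

```python
import math
import statistics

years = list(range(1948, 2020))

# "Overall level of technology": monotonically rising (3%/year here,
# an arbitrary illustrative growth rate).
tech = [1.03 ** (y - 1948) for y in years]

# Unemployment: cycles around ~5.7% with no long-run trend
# (a deterministic caricature of the postwar U.S. record).
unemployment = [5.7 + 2.0 * math.sin(2 * math.pi * (y - 1948) / 9)
                for y in years]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A rising series and a mean-reverting series end up nearly uncorrelated.
print(f"correlation(tech level, unemployment) = {corr(tech, unemployment):+.2f}")
```

Swap in the real unemployment series and the conclusion is the same: technology only goes up, while unemployment oscillates around a roughly constant level.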
HT: Inspired by Ben Frey.
READER COMMENTS
Ben Kennedy
Feb 13 2019 at 3:03pm
Unemployment? More relevant is male labor force participation rate. In 1960, it was 97.3%. In May 2014, it bottomed out at 88% and rebounded to 89%. However, it’s a pretty steady slope downward. It’s well documented here:
https://obamawhitehouse.archives.gov/sites/default/files/page/files/20160620_cea_primeage_male_lfp.pdf
With such takeaways as:
“The fall in participation for prime-age men has largely been concentrated among those with a high school degree or less, and participation rates have declined more steeply for black men.”
“In contrast, reductions in the demand for labor, especially for lower-skilled men, appear to be an important component of the decline in prime-age male labor force participation.”
Break it down by educational attainment – in 1971, the gap looked like 97% (HS or less) to 98% (Bachelor’s or more). In 2014, it’s like 83% to 94%. Why the widening spread based on education? Maybe because the cognitive demands of work in advanced economies are high, and people who couldn’t signal their way to a Bachelor’s degree can’t cut it?
I think it’s perfectly reasonable to be worried about things like the impact of technology and globalization.
Joseph K
Feb 13 2019 at 3:38pm
If you cut the data in enough ways and focus on a narrow enough sliver of time, you can always find a correlation. So, male participation has been falling since 1973. Is that caused by technology? Why wasn’t technology causing it before?
Also, female participation has been pretty steadily rising during that time period. Obviously, you want to say that’s not caused by technological unemployment but by other factors, such as cultural ones. Okay, then we can say the male participation rate is also driven by other factors.
Also, the male participation rate for men over 55 is up from its nadir in the 90s. Obviously, that’s not caused by technological unemployment. So what’s going on there?
You can cut up the data to support any thesis. But you really have to squint to say technology is the main factor.
Ben Kennedy
Feb 13 2019 at 4:54pm
“Why was technology not causing it before?”
Because tools and job requirements were simple enough that anyone could do the jobs. There was always work for a big beefy dude. Now? It’s a bit harder.
“Also, the male participation rate for men over 55 is up from its nadir in the 90s. Obviously, that’s not caused by technological unemployment. So what’s going on there?”
Perhaps because jobs are less manual labor and more cognitively focused, older men can work longer? Maybe they are trying to hang in there until 65 for Medicare?
Given the increasing divide between high school and college LFPR over the last 70 years, I don’t think it’s a “you have to squint” thing. Clearly, less educated people today are having more trouble finding work. Is technology the sole reason? Of course not. But it’s surely a factor in why there are fewer jobs available, along with things like minimum wage laws, globalization, etc.
Mark Z
Feb 16 2019 at 6:15am
There still is – or would be – plenty of work available if the unemployed simply bid down the price of labor. What really distinguishes the last few decades from the previous two hundred years, in which most work was also constantly rendered redundant by technology, is that the cost of not working is lower than ever. Since the 1970s, the fraction of prime-age men on disability, unemployment, or other entitlements has skyrocketed. Moreover, most unemployed working-age men live in a household with someone who works, especially now that more and more women are working, so the household income (and therefore standard of living) of the average non-working man has risen enormously in the past half century, reducing the incentive to work. A 50-year-old former factory worker can afford to pass up a job in food service that he sees as beneath him (or accurately estimates would reduce his standard of living by rendering him ineligible for various entitlements), while earlier generations had to take whatever they could get.
In short, I’m skeptical that earlier generations of workers had an easier time replacing old jobs as they disappeared. It may simply be that, with starvation as the alternative, they were motivated to find whatever work they could (including moving halfway across the world; working-class people today seem increasingly less willing to move far away) and to work for less, in much worse conditions, out of necessity.
Thomas Sewell
Feb 14 2019 at 1:55am
It seems fairly obvious that after the end of the Korean War, the supply of labor increased, primarily over time in the form of a rising female labor force participation rate. With greater supply, of course the workers on the margin (lower knowledge and skill levels, etc.) aren’t going to work as much.
If there is any mystery, it’d be why the two trends finally diverged after the female labor rate began trending down almost a decade ago.
Mark Bahner
Feb 13 2019 at 9:35pm
On the contrary, y’all (economists) are not sufficiently egghead. Here is a challenge to all of you economists who do not worry about technological unemployment:
Step 1: Same as your Step 1.
Step 2: a) Assume a computer is roughly the equivalent of a human brain if the computer can perform the same number of calculations per second as a human brain.
b) Assume Ray Kurzweil’s estimate of the capability of a human brain at 20 quadrillion calculations per second is a good middle-of-the road estimate. Call that a “human brain equivalent.”
c) Plot what you think are the total number of “human brain equivalents” represented by all the computers of the world each year from 1948 to the present. Compare that to the human population from 1948 to the present, which has increased from 2.4 billion to 7.7 billion.
d) Now continue the projections for both computer human brain equivalent population and human population for the next 20 years.
Step 3: Does your graph show the computer human brain equivalent population exceeding human population at any time from 1948 to 2039? If so, what year? What is your estimate of the computer human brain equivalent population in the year 2039?
Important note: NO CHEATING! Do not look at any of my many previous comments on this subject! 🙂
Step 4: If I told you that your estimate of the computer human brain equivalent population 20 years from now is probably several orders of magnitude low, would that change your opinion about the potential for future technological unemployment? Would any value for the computer human brain equivalent population 20 years from now change your opinion about future technological unemployment?
P.S. These comments aren’t just to make a point. I truly would like every economist who reads this–definitely starting with you, Bryan–to perform these steps and report his/her results. (There are no wrong answers…but I’ll provide the correct answers after a few economists have reported their estimates. ;-))
Thomas Sewell
Feb 14 2019 at 2:09am
I took your test, then checked the answer. My estimate wasn’t too low; it was actually a little high. (You may consider me to have cheated, though, as besides my economist hobby, I work in technology and am already very familiar with AI.)
To answer your follow-up questions, it didn’t change my opinion and there is no value for the computer human brain equivalent population which would change my opinion about the benefits of potential future technological unemployment.
That indicates to me that perhaps you don’t grasp, at a fundamental level, the principles involved on the economists’ side of the discussion.
As a first principle, there is literally an unlimited amount of possible work for someone to do. In other words, get back to me once we’ve remodeled all of the nearby galaxies to our liking if you want an estimate of how much work remains in reshaping the entire universe (and perhaps even the direction of entropy) to suit us. Most people just don’t think big enough.
As a second, work is the price we pay in order to get something/consume. While there is some truth to the “work as play” model, especially when it comes to the arts or sport, work isn’t something we otherwise need to conserve. For example, I’m fine with everyone having access to anything they might want by today’s measures of wealth in exchange for 5 minutes of their time each year.
Mark Bahner
Feb 14 2019 at 5:29pm
I’m pretty sure I do grasp the principles involved on the economists’ side of the discussion. I don’t think the economists grasp, and/or are aware of, and/or appreciate, the principles involved on my side (really “our” side, because I’m certainly not alone) of the discussion.
Yes, I’m well aware of and understand that. But suppose chimpanzees were part of the workforce. Now suppose they had to be paid the federal or state minimum wage, just like everyone else. How much of the infinite amount of possible work do you think chimpanzees working at minimum wage would be paid to do?
Yes, I’m well aware of and understand that also. But do you understand that in order to be paid to work, even at the minimum wage, one has to be better than other people/entities that are willing to work at the same or a lower wage?
Suppose, for example, some folks came back from the future in a sort of Terminator-like machine–with all the gooey mess that involves–and were able to do jobs just as well or better than anyone, but were willing to work for less than a dollar an hour? (Also suppose that somehow we don’t have to pay them minimum wage because…well, just suppose.)
Why wouldn’t they take all our jobs?
Mark
Christophe Biocca
Feb 14 2019 at 8:21pm
The minimum wage is arguably doing all the heavy-lifting in your argument. It would also do most of the heavy-lifting in any real-life massive-increase-in-automation scenario, as this would put downward pressure on all prices (including that of labor).
But the minimum wage (like any other price-control) is the economics side of the argument, oft-ignored in the popular account of technological unemployment.
Mark Bahner
Feb 14 2019 at 9:38pm
I assume economists are familiar with federal and state minimum wage laws…? 😉
According to Bryan’s account of this technological unemployment argument, the argument is between economists (“eggheads,” who presumably know what they’re talking about) and “normal people” (aka, the great unwashed masses). Per Bryan:
If the economists’ argument relies on assuming that minimum wage laws don’t exist, I think they’re going to have quite a lot of trouble convincing “normal people” to relax!
Mark Z
Feb 16 2019 at 6:31am
Mark Bahner,
You make a strong case that we should stop calling it “technological unemployment” and start calling it “minimum wage unemployment.” And that would be more accurate. The substitution of technology for labor drives down costs – that’s why it’s substituted for labor in the first place – thereby reducing prices. This means workers will have to bid down wages, but it also means they can afford to do so because of the increased purchasing power of their wages. And the cheaper goods allow consumers to find new things to spend their money on, creating demand for labor elsewhere that leaves real wages higher than before the substitution.
Automation may cause structural unemployment because it takes time to adjust, but not chronic unemployment; even the effect of wage floors dissipates. It’s not even that we need to repeal existing minimum wage laws for things to work this way; we just need to stop raising them.
There’s an inherent absurdity in worrying about everyone losing their job from automation: automating everything to the point where human labor is unnecessary means the things being produced are free, and therefore no one would need to work. Is the theory that we’re going to end up living in a society where automated factories mass produce goods and services but still insist on charging us money for which they have no use, because they’re just automated factories?
Mark Bahner
Feb 17 2019 at 11:37pm
Mark Z,
You write:
Yes, but no human driver can afford to bid down his/her wages to compete with an autonomous vehicle. For example, let’s say an autonomous car or truck costs $10,000 more than a human-driven version. If the autonomous car or truck lasts 10,000 hours, that’s $1 an hour.
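The amortization here is worth spelling out (the $10,000 premium and 10,000-hour service life are the comment's own illustrative assumptions, not real industry figures):

```python
# Illustrative figures from the comment: the autonomous system costs
# $10,000 more up front and is assumed to last 10,000 operating hours.
premium_usd = 10_000
service_life_hours = 10_000

# Effective hourly "wage" of the autonomous driving capability.
hourly_cost = premium_usd / service_life_hours
print(f"${hourly_cost:.2f}/hour")

# No human driver can bid wages down to that level, with or without
# a wage floor; the federal minimum is $7.25/hour.
federal_minimum_wage = 7.25
print(f"machine undercuts minimum wage: {hourly_cost < federal_minimum_wage}")
```

The point being that once the amortized machine cost falls an order of magnitude below any plausible human wage, bidding down wages stops being an escape route for that task.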
And the situation is even worse for people like cashiers and retail salespeople. What happens is that autonomous delivery vehicles eliminate even the need for brick-and-mortar stores. So the cashier or salesperson can’t take a cut in pay to keep his/her job, because the job isn’t even there.
No, the prediction is that in the next 5-40 years, when machines begin to eliminate traditional jobs at an increasingly rapid rate, people who have their entire job category essentially eliminated are going to need help if they don’t have pretty substantial wealth to fall back on. Which most people don’t. According to thebalance.com, the bottom 20% of people in the U.S. have a median wealth of -$6,026. (That’s negative six thousand.) And the next 20 percent has a median wealth of $7,263. Even that is not much to sustain a period without a job.
Thomas Sewell
Feb 14 2019 at 9:09pm
As already noted, the minimum wage is the reason people whose labor is worth less than it can’t get a job, not because there is no work they can do. In the case of a chimp, sure, if it were willing to work and understood what work for pay is (which is the real reason they don’t work now, not a lack of the physical/mental ability to do the actions inherent in labor), I’d be happy to employ them in many current capacities.
As for the future-traveler hypothetical, they wouldn’t “take all our jobs.” Instead, if they remained willing to do work for less remuneration than they could get on the labor market, the rest of us would get much richer than we already are, because again, there isn’t a limit on the amount of available jobs, or work, just a limit on the number of people available to do most of that work.
Personally, I’d try to hire as many of the future travelers as possible and start a business. I imagine there’d be a pretty big demand for their services, making possible all sorts of businesses which weren’t viable before they reduced the labor costs involved.
If you want a better, more personal extreme example: what if we all had a limitless supply of personal servants (maids, yard workers, cooks, etc.) who cost us less than a penny a year in real terms? You can’t see that we’d all be wealthier in terms of lifestyle than we are right now?
Mark Bahner
Feb 15 2019 at 12:51pm
The debate is about technological unemployment. If the “economist” side wants to write off technological unemployment as a problem because they think it wouldn’t exist if there weren’t minimum wage laws, well…there are minimum wage laws.
This is a joke, right? (You should probably put a winking emoji afterwards!) If it’s not a joke, and you seriously think that chimps have the physical/mental ability to do the actions inherent in labor, then I have a business suggestion for you. Train some chimps as janitors and start a chimp janitor service. Film them and put it on YouTube. Or even talk to the people who made the movie March of the Penguins. Your chimp janitor service will probably make millions.
Let’s get much more specific and real-world. Suppose you’re one of the 2+ million retail salespeople, 2+ million cashiers, or 2+ million truckers in the U.S. who will be put out of business by autonomous delivery vehicles and the collapse of brick and mortar retail in the next 1-3 decades. You have a high school education. You’re not in great physical condition. You have essentially zero net worth (maybe $0-10,000). Explain to me why you wouldn’t have a technological unemployment problem.
Thomas Sewell
Feb 15 2019 at 8:38pm
I can’t reply directly because we’ve hit the comment depth limit, but compared to your “2+ million” examples, imagine you’re one of the 90% of everyone employed in agriculture. On average, you have an 8th-grade education (or less), and the vast majority who work as laborers have no capital. You’re told that in the future only 5% will still have a job in agriculture.
Is it time to panic?
Of course not, because we know how it turns out. The only difference with the current situation is a lack of imagination, in some quarters, about how it will turn out.
Mark Bahner
Feb 16 2019 at 1:42am
You should first ask, “Over what sort of time frame do we expect this to happen?”
If the answer was: 1790 = 90%; 1860 = 60%; 1895 = 40%; 1935 = 20%; 1960 = 10%; 1975 = 5%…
…then we’re talking about roughly 100 years to drop by more than half (from 90% to 40%), then another 40 years to halve again (from 40% to 20%), then 25 years for another halving. I would say, “Probably at some point over those 160-plus years, farmers should be able to see that the trend is away from farming.”
In contrast, how many people are talking about virtually all retail salespeople and cashiers being put out of jobs over less than a two-decade period?
Christophe Biocca
Feb 17 2019 at 12:31pm
Let’s take that at face value:
Cashier/retail sales positions are about 5% of total US employment. Phasing that out over 20 years is 0.25%/year of the workforce that needs to find other work to do.
Doing the same math on the data you gave for farm labor:
That gives you 0.43%, 0.57%, 0.5%, 0.4%, and 0.33% per year for each interval between those points. So even 100% automated stores in 20 years wouldn’t have the impact that agricultural industrialization did. And 100% automation is unlikely, given the substantial fraction of employment in businesses with 4 or fewer employees; they’ll need to keep at least one person around.
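A quick sketch re-deriving these per-year figures from the numbers quoted in the thread:

```python
# Farm employment shares (percent of workforce) from Mark Bahner's comment.
farm = [(1790, 90), (1860, 60), (1895, 40), (1935, 20), (1960, 10), (1975, 5)]

# Percentage points of the workforce displaced per year, per interval.
rates = []
for (y0, s0), (y1, s1) in zip(farm, farm[1:]):
    rate = (s0 - s1) / (y1 - y0)
    rates.append(rate)
    print(f"{y0}-{y1}: {rate:.2f} points of the workforce per year")

# Cashiers/retail sales: ~5% of employment phased out over 20 years.
retail_rate = 5 / 20
print(f"retail automation scenario: {retail_rate:.2f} points per year")
```

Run it and the farm intervals come out at 0.43, 0.57, 0.50, 0.40, and 0.33 points per year, versus 0.25 for the retail scenario, which is the comparison the comment is making.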
It’s worth noting as well that these jobs have very high turnover rates. This softens the blow of automation because you can let normal attrition get rid of most of the redundant employees anyways.
Jon Murphy
Feb 13 2019 at 10:44pm
I used this in class tonight. The students loved it
Amy Willis
Feb 14 2019 at 11:02am
Awesome! Thanks for sharing, Jon.
Mark Bahner
Feb 14 2019 at 12:04pm
Did any student draw a curve that, for the “overall level of technology” had specific units of “global number of human brain equivalents in existence”?
And did that student tell you, “OK, my own eyes tell me that I should be very worried about technological unemployment.”
And did that student ask you, “Looking at my graph, what do your own eyes tell you?”
Mark Bahner
Feb 14 2019 at 12:25pm
Hi Jon,
In fact, rather than posing that as a hypothetical question, let me volunteer to show that graph to you, for you to share with your students. Then y’all can discuss what your eyes tell you. Check your GMU email–maybe in the spam folder 😉 –in the next hour or so.
Best wishes,
Mark
Jon Murphy
Feb 14 2019 at 12:58pm
Will do!
Thaomas
Feb 14 2019 at 11:05am
The “test” is invalid because it does not have a valid counterfactual. It would be invalid even if the suspect variable were not “unemployment” but medium skill wages.
Hazel Meade
Feb 14 2019 at 3:56pm
Obviously, unemployed workers eventually age out of the workforce.
I think the key issue is not the level of technology but the pace at which technological change renders existing skills obsolete. If your skills become obsolete 1 year after you retire, technology has no effect on you. If they become obsolete 5 years after you graduate with a college degree, it’s a big deal.
Honestly, I’m not sure the pace has quickened, and I’m not sure how that would be measured. Technological development is also limited by the skills of workers. You can’t produce enough freshly trained new developers per year to replace all existing software developers; you have to hire people with skills that are “close enough” and train them, or your technology doesn’t progress. It’s hard to market tools that nobody knows how to use, or to build something with such tools. In other words, technological progress can only lead the workforce so far before it runs into the limitation of not having enough skilled workers to do the next level of development.
robc
Feb 15 2019 at 11:57am
Your scenario assumes the college grad hasn’t learned new skills in his 5 years after college.
Almost everything I do on my job involves skills I learned decades after college.
Thaomas
Feb 15 2019 at 9:57am
Technological change might well make the skills of certain currently employed workers less valuable. Assuming the change is economically productive (if not, why would it be adopted?), the “problem” is one of distribution. Policies that have thus far been sufficient to prevent undue concentrations of income and wealth might not be sufficient in the future. But “unemployment” is the wrong way to frame this problem.
Mark Bahner
Feb 15 2019 at 1:59pm
Hey, economists…where you at?! Bryan? Jon Murphy? Scott? David? Perhaps Robin Hanson?
I got to Step 5 (a long time ago) and my eyes tell me I should definitely be worried about technological unemployment. What do y’all think I’m missing or getting wrong?
Here is what my graph in Step 4 looks like, expressed in tabular form. It has the year, and the global total of “human brain equivalents (HBEs)” represented by all the world’s microprocessors. One HBE is equivalent to 20 quadrillion calculations per second, which is a number Ray Kurzweil has often used for the number of calculations a human brain can perform.
(Note to Jon Murphy…my email table to you was off. I think the table below is correct. It doesn’t appreciably change the situation…with total worldwide computer power increasing by a factor of 1000 every decade, being off by a factor of 10 doesn’t make much difference.)
Year –> Worldwide Number of Human Brain Equivalents (HBEs)
1990 –> 0.02 (Yes, that’s correct. All the computers in the world in 1990 were able to perform less than the number of calculations per second that a single human brain can.)
2000 –> 20 HBEs
2010 –> 20 thousand HBEs
2020–> 20 million HBEs
2030–> 20 billion HBEs
2040–> 20 trillion HBEs
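The table is just the comment's own two assumptions (0.02 HBEs in 1990, worldwide computing power growing 1000x per decade) extrapolated, which a few lines make explicit:

```python
def hbe(year, base_year=1990, base_hbe=0.02, growth_per_decade=1000):
    """Worldwide human brain equivalents (HBEs) in a given year,
    under the comment's assumptions: 0.02 HBEs in 1990, 1000x growth
    per decade, one HBE = 20 quadrillion calculations per second."""
    return base_hbe * growth_per_decade ** ((year - base_year) / 10)

for year in range(1990, 2051, 10):
    print(f"{year} -> {hbe(year):,.2f} HBEs")
```

Note that extending the same pattern one more decade gives 20 quadrillion HBEs in 2050, and that the growth rate (1000x/decade) is itself an assumption the later comments dispute.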
So when I get to Step 5 my own eyes tell me that I should be very worried about technological unemployment in the next 5-25 years. (Not for me personally, since I’m old. But for people who are just starting to work.)
And my own eyes tell me that of course there was no effect on unemployment of computer technology from 1948 to the present, because the total number of human brain equivalents represented by personal computers has been a tiny fraction of the total human population.
What do your own eyes tell you from looking at that table?
Christophe Biocca
Feb 16 2019 at 8:27pm
3 main things:
Using a mathematical model that abstracts away most relevant information, and extrapolating from it exclusively.
Ignoring comparative advantage completely, and the consequences thereof.
Using a very strong set of assumptions, and focusing on a rather trivial set of consequences.
These interplay quite a bit, but addressing each of them individually:
1) The human-brain-equivalents model treats transistor density (as best I can tell) as the driver of calculations per second per dollar, which is then treated as the determining factor of what machines can do, and thus the singular (or at least main) driver of technological unemployment. But this is a poor model in many ways. First, actual operations per second grow more slowly than transistor density. The Wikipedia page for FLOPS unfortunately switches between half/full/double precision, making the improvements look larger than they are; an apples-to-apples comparison (single-precision FLOPS per $1,000 between the August 2012 and October 2017 benchmarks) gives you closer to a 100x/decade improvement. The second problem is that intelligence isn’t just hardware; you have to make the hardware do something. You can take the “easy” way out and simulate neurons, but only at the cost of a slowdown of multiple orders of magnitude (because the firing of a neuron is driven by various flows of neurotransmitters, plus other local environmental factors), and we won’t be sure how much of neurons’ behavior we can simplify to speed up the computation until we start getting actual results. Otherwise, Kurzweil’s 20-quadrillion-calculations-per-second estimate could be bought today for less than $600,000 if half precision is sufficient, $1,200,000 otherwise; the lack of AI prototypes running on even much more expensive supercomputers shows it’s a bit more complicated than that. “Neural nets” as used in research and industry are quite capable, but they are a vastly simplified tool inspired by biological neurons, and we will probably hit roadblocks on the way to full AI where they fail to capture essential attributes. The final hurdle is that meatspace work requires physical as well as mental attributes, so we’d still be limited by how fast we can churn out actuators, electric motors, and the like to replace physical work even if mental work were trivially replaceable.
2) Comparative advantage is a very powerful force. Almost every job has a range of tasks from the trivially automate-able to the very difficult.
I’m a programmer, and thus in the business of replacing humans doing the former with machines. We’ve successfully replaced the “find/put documents in a filing cabinet,” “remember policyholder contact information,” “keep photos and notes organized,” and “make a Word document from the information and convert it to a PDF” parts of an insurance claim adjuster’s job, and I think “estimate total payout” is achievable this decade. But we’re nowhere near replacing the “calm down a policyholder who just lost their home to a fire” or “put pressure on a contractor to get the job finished on time even though they’ll have to pay their workers overtime” parts.
Truck drivers, who you might think have exactly one role, aren’t expected to be replaced fully even with massive adoption of self-driving trucks. Which makes complete sense under the comparative-advantage view: machines will take over the part of the jobs humans are relatively bad at (driving for 16 hours nonstop on the highway while optimizing fuel economy and never having an accident), while humans do the rest (inspecting vehicles between trips, driving out to repair a truck with a flat tire, doing final-mile delivery).
Even factories (the linchpin of 20th century automation efforts) have had a hard time going fully to lights-out manufacturing, leaving humans to do QA and machine supervision/repair, and IIRC we’ve only automated the last remaining step of cotton processing (picking up the bales) recently (hundreds of years after the cotton gin started the automation of that industry). This substantially softens the blow of automation on employment.
3) Finally, the set of assumptions on display (a direct link between transistor density and human-brain equivalence, and from human-brain equivalence to human job replacement) has much more interesting consequences than mere unemployment effects (if those would even arise):
Machine brains working for you would not suffer from principal-agent problems, so you’d have an incentive to outsource your own decision-making a lot.
Machine computing power scales quasi-linearly with money spent, whereas even slightly more skilled humans become exponentially more expensive, and machine intelligence is trivially copyable once trained up, so you’d expect CEOs, radiologists, quantitative analysts, and other high-paid professionals to be the most at risk from encroaching AI (especially in the early days, where the per-brain-equivalent cost is still high).
Computer architectures and software are direct input to AI performance, and AI that’s human equivalent can work on improving those two. The find-improvement-compile-load-into-AI feedback cycle (especially on the software side) is on the order of minutes, can be parallelized aggressively, and run around the clock. This means everyone with capital and the know-how is trying to be the first one to get their AI to go FOOM. After all, once that’s done, you can replace all human labor and all existing machine labor, because you have another 10x advantage.
Henri Hein
Feb 17 2019 at 2:31pm
Mark,
I am not an economist and cannot speak for them. Like Thomas Sewell, though, I find there is no value for the human-brain-equivalent population that would convince me future technological progress will lead to massive unemployment. The reason is that in order to find employment in this future world, humans do not need an absolute advantage over the computers and robots. They just need a comparative advantage, and they will always have a comparative advantage.
Mark Bahner
Feb 17 2019 at 4:33pm
I didn’t show it, but continuing the pattern gives a value for 2050 of 20 quadrillion human brain equivalents. Why would anyone pay a human brain even pennies an hour if there are 20 quadrillion human brain equivalents available?
Henri Hein
Feb 18 2019 at 2:22am
Your question does not make a lot of sense to me. In a $14 trillion economy, why would anyone worry about making an extra buck? In the past, n+1 has always been better than n, even for large n. There is no reason to suspect this pattern will break.
You did not address my main point: as long as lump of labor remains a fallacy, and humans hold a comparative advantage, there will be work for humans.
Mark Bahner
Feb 18 2019 at 1:38pm
As I pointed out in other comments, the median net worth of the bottom 20 percent of people in the U.S. is negative six thousand dollars. Don’t you think that they are and should be worried about making extra bucks…particularly if they happen to be unemployed?
Well, let’s start out by seeing if we agree on what “comparative advantage” is. As I remember it from the Intro to Econ I took at Va Tech (go Hokies! Beat the ‘Hoos tonight in b-ball!), the classic example is a boss and a secretary (aka administrative assistant). The boss can type faster than the secretary, but the boss’s time is worth so much more per hour that the secretary has a “comparative advantage” in typing. So the secretary/administrative assistant should do the typing.
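The textbook example in numbers (the typing speeds and wages below are made up purely for illustration):

```python
# Illustrative figures: the boss has an absolute advantage at typing,
# but the secretary has the comparative advantage, because the boss's
# hour is worth far more spent on other work.
boss_pages_per_hour = 10       # boss types faster...
secretary_pages_per_hour = 6
boss_hourly_value = 200        # ...but the boss's time is worth ten
secretary_hourly_value = 20    # times the secretary's elsewhere

# Opportunity cost of one typed page, in dollars of forgone output.
boss_cost_per_page = boss_hourly_value / boss_pages_per_hour
secretary_cost_per_page = secretary_hourly_value / secretary_pages_per_hour

print(f"boss:      ${boss_cost_per_page:.2f} per page")
print(f"secretary: ${secretary_cost_per_page:.2f} per page")
print("secretary has the comparative advantage in typing:",
      secretary_cost_per_page < boss_cost_per_page)
```

The open question in this thread is whether the logic still bites when the "boss" is a machine whose marginal hour costs almost nothing, which is exactly where the two sides disagree.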
In the world of 2050, if there are indeed 20 quadrillion human brain equivalents, I don’t think humans will have a comparative advantage in anything worth doing.
As I pointed out previously, there are no chimpanzees doing significant labor in the U.S. The reason seems obvious to me…it takes more time to train the chimps to do the chore than is saved by having them do it.
So as I see it, everyone will be unemployed in 2050, if current computer hardware trends continue at their past rates of exponential growth. Or you could say that everyone will be “retired.” And don’t get me wrong, that world will be wonderful (if the robots aren’t Terminators…and I don’t think that possibility should be casually dismissed). Everyone who is unemployed or retired will be fine, because by then computers can do everything. The part I worry about, especially regarding unemployment, is the creative destruction that occurs between now and the robots-can-do-everything world of approximately 2050.
In particular, it really bugs me how economists and their supporters seem to generally ignore the pain that the millions (or billions) of people are going to feel as their present and future jobs are done by computers/machines. There seems to be almost no thought that some sort of assistance ought to be given to those people. It almost makes me look forward to the time when all the economics professors at universities are computers/robots. 😉
Henri Hein
Feb 18 2019 at 11:03pm
Are we talking about the bottom 20 percent, or are we talking about the top 80 percent or the middle 60 percent? The added production capabilities of the AI economy you describe should make it easier to take care of the bottom 20 percent, not harder.
This is probably the core of our disagreement, then. I will attempt to explain in detail why I think humans will retain a comparative advantage.
First, economies tend to expand to fill, or use, available resources, including labor. This is why I mentioned the lump of labor fallacy above. It is such a common mistake people make about economies. I understand your point that humans will add a tiny fraction to production, but it doesn’t matter. A well-functioning economy is not going to stop producing just because a labor resource becomes less efficient.
As for where humans will have a comparative advantage, it is impossible to say for certain, so the following is purely speculation.
One thing humans are particularly good at is entertaining each other. We go to concerts, even though studio music is objectively better in every respect, and more convenient to listen to, and for the popular artists, cheaper too. I believe live music will continue to be a thing well into the AI revolution.
We watch improv theater, even though it is worse in every respect than the practiced kind. Some of us enjoy watching talented actors inventing a scene on the fly.
Some humans will take extra pleasure from receiving a foot rub from another person, rather than a robot. Since this is an area that would require special equipment for robots to perform well, as long as humans are available to do the foot rubs, it is possible this equipment will never be produced. Even in the super-charged AI economy, everything will still have an opportunity cost.
Art lovers might take pleasure in a painting, a poem, an essay, or even a novel produced by another human, especially if it was created specifically for them. Even after the AIs have learned to paint better than Leonardo da Vinci, the Mona Lisa will remain highly valuable, because there is just the one copy. Something similar might take place with living artists.
Perhaps, but chimpanzees have not traditionally been self-starters. If you believe chimpanzees will continue to do what they have been doing, why don’t you believe humans will continue to do what we have been doing? That is, adapt, produce, create.
I share Christophe Biocca’s skepticism that it will be here by 2050. That said, I do take your point. Like any rapid transformation, there are going to be some pain points. It is impossible to predict the nature of those problems, so it is impossible to design solutions to them now. I wonder if that is what comes across as intransigence. It is not that we don’t understand there will be difficulties in transitioning; it’s that we don’t think it’s possible to do much about it at this stage.
Mark Bahner
Feb 23 2019 at 9:36pm
Yes, it will be easier to take care of them…if anybody tries. If people listen to economists, they’re not going to even recognize that the bottom 20% (e.g., cashiers and retail salespeople) are losing their jobs due to rapid technological change. It’s all about recognizing a problem that will clearly occur in the near future (clearly to me, and to many others who see how fast computer technology is changing), and therefore being able to prepare for it beforehand.
I’m not saying the economy is going to stop producing. I have said for over 15 years that economic growth will in fact speed up. I’m saying two things: 1) people will lose their jobs at an unprecedented rate, and many of those people will have little or no wealth to fall back on, and 2) the market wages of many people will fall while others rise spectacularly, so the gap in wages (the U.S. Gini coefficient) should go way up.
I strongly disagree. I can tell you right now that autonomous delivery vehicles will obliterate brick-and-mortar retail. And the millions of cashiers and salespeople in brick-and-mortar retail will lose their jobs. I can even tell you the timing…more than 80 percent of the brick-and-mortar retail stores belonging to Walmart, Target, Walgreens, Kroger, Lowe’s, Home Depot, etc. will be gone somewhere between 15 and 40 years from now.
Humans can plan for expected future problems today. In fact, at this time, we’re probably the only species that can plan for expected future problems several years in the future. We should do so. And it doesn’t help when people who should know about the potential future problems (I’m looking at you, Bryan! :-)) are seemingly oblivious to them.
Comments are closed.