UPDATE BELOW
Betsey Stevenson and Justin Wolfers write:
In the end, all the corrections advocated by the critics shift the average GDP growth for very-high-debt nations to 2.2 percent, from a negative 0.1 percent in Reinhart and Rogoff’s original work. The finding remains that economic growth is lower in very-high-debt countries (see chart). It has been disappointing to watch those on the left seize on the embarrassing Excel errors but ignore this bigger picture.
Now it’s true that I didn’t seize on those errors and it’s also true that even if I had, that wouldn’t have made me a leftist. Even if all crows are black, not all black things are crows.
Still, while I agree with Stevenson and Wolfers that one should not ignore the bigger picture, the mistake in R-R is, itself, pretty big. I think I don’t need to tell readers of this blog that the difference between -0.1 percent growth and +2.2 percent growth, compounded over even five years, is quite substantial.
Like Reinhart and Rogoff, I do worry a lot about the growing federal debt/GDP ratio. And the difference between 3 percent growth and 2.2 percent growth over, say, ten years, is also substantial. But the reality is that the correction of their work by Herndon, Ash, and Pollin is very important.
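To put rough numbers on both comparisons, here is a back-of-the-envelope sketch in Python; the growth rates are the ones quoted above, and everything else is simple compounding arithmetic.

```python
# Compound a constant annual growth rate (in percent) over a horizon of years.
def compound(rate_pct, years):
    return (1 + rate_pct / 100) ** years

# R-R's original figure vs. the critics' corrected figure, over five years.
low, high = compound(-0.1, 5), compound(2.2, 5)
print(f"-0.1% for 5 years: {low:.3f}x   +2.2% for 5 years: {high:.3f}x   "
      f"gap: {100 * (high - low):.1f}% of initial GDP")

# 3 percent vs. 2.2 percent growth, over ten years.
slow, fast = compound(2.2, 10), compound(3.0, 10)
print(f"+2.2% for 10 years: {slow:.3f}x   +3.0% for 10 years: {fast:.3f}x   "
      f"gap: {100 * (fast - slow):.1f}% of initial GDP")
```

The five-year gap works out to roughly 12 percent of initial GDP, and the ten-year gap to roughly 10 percent, which is the sense in which both differences are substantial.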
While I like the Stevenson/Wolfers article for laying out the facts nicely, I think they put too little emphasis on the huge impact the R-R error had on their results.
UPDATE: Correcting Myself.
“mobile” in the comments below and Bob Murphy by e-mail have corrected me. Murphy, as always, says things well and so I will just quote his e-mail:
On your EconLog post about R&R, the casual reader might think that the Excel mistake changed their results from -0.1 to +2.2. But actually, I have seen a few sources say that the Excel mistake by itself only changed the take-away number by 0.3 percentage points.
The causes of the big shift would be inclusion of data that they didn’t have as of 2010, and changing the weighting of the data they did have. I don’t think these latter two things could be called “mistakes.”
Stevenson and Wolfers worded it as “In the end, all the corrections advocated by the critics…”
I don’t think Stevenson and Wolfers were themselves admitting these things were in fact “corrections,” as in, correcting a “mistake”; I think they were just showing that even if you did everything the critics wanted, you’d still get the qualitative result. Indeed, in their piece they explain why the original R&R weighting might make sense, if you thought about the question a certain way.
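To see concretely what the weighting choice does, here is a toy illustration in Python (the numbers are invented, not the R-R dataset): equal weight per country, as in the original paper, versus equal weight per country-year, as the critics preferred.

```python
# Invented growth rates (%) for two "high-debt" countries: A has ten high-debt
# years of ordinary growth, B has a single disastrous high-debt year.
data = {
    "A": [2.5, 2.0, 2.8, 2.2, 2.6, 2.4, 2.3, 2.7, 2.1, 2.5],
    "B": [-8.0],
}

# Equal weight per country: average each country's mean, then average the means.
country_means = [sum(v) / len(v) for v in data.values()]
by_country = sum(country_means) / len(country_means)

# Equal weight per country-year: pool every observation.
pooled = [x for v in data.values() for x in v]
by_country_year = sum(pooled) / len(pooled)

print(f"equal weight per country:      {by_country:.2f}%")       # comes out negative
print(f"equal weight per country-year: {by_country_year:.2f}%")  # comes out clearly positive
```

Neither choice is self-evidently a “mistake”; as Murphy says, which one is right depends on how you frame the question. But the choice clearly matters.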
Suffice it to say that, were I to start this post from scratch, I would have written it differently. But my big point stands: if a reasonable analysis of data moves the key variable, annual growth of real GDP, from -0.1 to +2.2, that’s a big deal. Like Stevenson and Wolfers, I could have done without the “gotcha” tone of some bloggers. But I do think their big 90% result is substantially weakened.
READER COMMENTS
mobile
May 2 2013 at 1:10pm
I thought that the big change in results came from
1. not including postwar data that was not available when the original draft was written, and
2. a choice about how to analyze data where some countries had longer time series of high-debt or low-debt levels
and not from the embarrassing Excel error. Neither of these could charitably be described as an error, much less “the” error in the R-R paper.
Daublin
May 2 2013 at 2:20pm
David, did you change your position on any issue once the correction came out? Do you know of anyone else who did?
David R. Henderson
May 2 2013 at 2:25pm
@Daublin,
David, did you change your position on any issue once the correction came out?
Yes. For an immediate example, see my update above.
Do you know of anyone else who did?
Yes. Julian Simon on population.
Glen S. McGhee
May 2 2013 at 2:31pm
The lesson is simple — peer review doesn’t assure quality. And it doesn’t assure integrity.
Peer “review” amounts to little more than rubber stamping papers if they come from the right schools.
David R. Henderson
May 2 2013 at 2:52pm
@Glen S. McGhee,
peer review doesn’t assure quality. And it doesn’t assure integrity.
True. But as far as I can tell, there was no issue of integrity here.
ChacoKevy
May 2 2013 at 2:56pm
I liked the Stevenson/Wolfers piece. It’s a nice summary. But I was surprised when I read that paragraph. As I read the left critique, they seized upon most things OTHER than the Excel error.
1) The questionable weighting (as has already been noted here)
2) The lack of peer review – Herndon was the first person to receive the RR data set
3) The weak case of causation/correlation
4) They allowed themselves to become the academic force behind austerity policies (even though they didn’t start it)
I suppose we read different lefties!
Steve Roth
May 2 2013 at 4:31pm
All of this chatter about the data is chaff and distraction. (Kudos to R&R for the yeoman’s work of compiling it; shame for not releasing it for vetting right away.)
The crux here is the analysis.
From the moment R&R’s paper hit the streets it was obvious that they had not done even the most simplistic lag analysis to try and suss out the direction of cause and effect.
This type of analysis is of course not definitively dispositive. But under the assumption that the cause *usually* precedes the effect, it’s the first place you turn if you’re clear-eyed, open-minded, and actually curious about what causes what.
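For concreteness, here is a rough sketch of what such a first-pass lag analysis looks like, run on synthetic data rather than the real panel (the data-generating process below is invented purely to show the mechanics; it is not R-R’s or Dube’s actual specification):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
growth = 2.0 + rng.normal(0, 1, T)   # annual real GDP growth, %
debt = np.empty(T)                   # debt/GDP ratio, %
debt[0] = 60.0
for t in range(1, T):
    # In this fake world, causation runs one way only: weak growth raises debt.
    debt[t] = debt[t - 1] - 2.0 * (growth[t - 1] - 2.0) + rng.normal(0, 1)

def ols_slope(y, x):
    # Slope from a simple OLS regression of y on x with an intercept.
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Does yesterday's debt predict today's growth?  (Here it shouldn't.)
print("growth[t] on debt[t-1]:   ", round(ols_slope(growth[1:], debt[:-1]), 3))
# Does yesterday's growth predict today's change in debt?  (Here it does, by construction.)
print("d(debt)[t] on growth[t-1]:", round(ols_slope(np.diff(debt), growth[:-1]), 3))
```

As I said, nothing like this is dispositive on its own, but comparing the two directions is the obvious first pass at the cause-and-effect question.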
Again: they made absolutely no effort to do that kind of analysis. Their work in that paper is on the level of an amateur, untutored econoblogger’s first paltry efforts. (I should know.)
Happily, finally, with the data available, somebody has done that journeyman’s work — the kind that any freshly-minted econometrician of any competence would turn to as a matter of course.
Draw your own judgments:
http://rortybomb.wordpress.com/2011/01/17/guest-post-arindrajit-dube-on-zmp-unemployment/
Now, this is no freshly-minted econometrician. This is one very, very skilled, competent, and experienced practitioner.
Many may find Mr. Dube’s work objectionable, since he was the lead author on the rather magisterial study of adjacent-county minimum wage employment and wage effects, results which throw some quite widely held beliefs into serious question.
But I’m sure that would not prevent anyone from viewing this latest work with unbiased eyes.
Hazel Meade
May 2 2013 at 4:43pm
My personal take is that this is all yet more evidence of the fuzziness of macroeconomics in general. Unlike microeconomics, which is grounded and mathematically tractable, macro seems to be largely a lot of playing with models and manipulation of statistics, with everyone cherry-picking data to push whatever politically motivated agenda they are trying to advance.
Obviously, at such a large scale economies are far too complex to predict, much less control, not unlike attempting to predict the weather or even long-term climate change. Macroeconomists seem to make their living pretending, foolishly, that they can do so, and live as the handmaidens of politicians who will pay them to deliver, oracle-like, a favorable reading of the entrails for whatever policies they have already decided upon.
Even the empirical results seem mushy. Lots of confounding variables, lots of assumptions and models baked into the analyses, weak correlations, large margins of error, and lots of ways in which data can be fudged to produce whatever result the author wants.
Patrick R. Sullivan
May 2 2013 at 6:31pm
It isn’t really from -0.1 to +2.2%. If you use R&R’s median it’s from +1.6 to HAP’s mean of 2.2%. HAP didn’t provide a median number. And, anyway, this is just a measurement of one data set by HAP. R&R had three.
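To see why the mean-versus-median choice matters, here are toy numbers (not the actual R&R or HAP samples) with one long left tail:

```python
growth_rates = [3.1, 2.8, 2.5, 2.4, 2.2, 1.9, 1.7, 1.5, -0.8, -7.9]

mean = sum(growth_rates) / len(growth_rates)
ordered = sorted(growth_rates)
mid = len(ordered) // 2
median = (ordered[mid - 1] + ordered[mid]) / 2   # even number of observations

print(f"mean:   {mean:.2f}%")    # pulled down by the two bad years
print(f"median: {median:.2f}%")  # barely notices them
```

So comparing R&R’s median to HAP’s mean is not an apples-to-apples comparison.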
James Hamilton has several excellent posts on this at Econbrowser. This one has a nice table showing the differences.
Also, it now turns out that HAP made an error of their own.
Jonathan A. Goff (@rocketrepreneur)
May 2 2013 at 9:09pm
Curiosity question from a non-economist. Has anyone redone the analysis looking at the impact of debt levels on other economic outputs?
Say for instance growth in private GDP? Inflation adjusted median household wealth? Etc?
Basically some articles I read, I think here on this blog over the past week or two, have pointed out that GDP itself may be a poor measure of national wellbeing. That GDP counts government inputs as though they were outputs. I wouldn’t be surprised if a country that had a big budget deficit had a lot of growth in the government part of its GDP, but does that actually reflect growth in the actual wealth of the nation? Or is it effectively an accounting trick?
Just curious if someone has run this analysis.
~Jon
Keith Eubanks
May 3 2013 at 11:04am
Why in all of the ruckus is the focus on some potential tipping point in Debt/GDP ratios rather than the mechanisms by which debt might slow economic growth?
Debt is not the problem per se; the issue is what did the debt buy? Did it buy capital that can generate income and finance the debt or did it buy consumption goods so that debt financing must come from other sources?
The real issue is that government borrowing overwhelmingly finances consumption. In doing so, it re-directs somebody’s savings back toward consumption rather than assets that can expand production and exchange: i.e., there is no real savings. This lowers the net investment of the nation and that is what slows economic growth.
The debt/GDP ratio represents the accumulation of forgone investment in proportion to the size of an economy; it also represents the accumulation of forgone economic growth. The damage is done in accumulating the debt, not by the debt itself. The damage is long done by the time a tipping point is reached.
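Here is a toy capital-accumulation sketch of that mechanism (the production function, parameters, and size of the diversion are stylized assumptions of mine, not anyone’s calibrated model):

```python
# Two identical economies, except that one diverts part of private saving into
# deficit-financed consumption each year, so less gets invested.
ALPHA, DELTA, SAVE, DIVERT, YEARS = 0.3, 0.05, 0.20, 0.05, 30

def simulate(diverted):
    k, output = 100.0, []
    for _ in range(YEARS):
        y = 10 * k ** ALPHA                 # simple Cobb-Douglas output
        investment = (SAVE - diverted) * y  # saving not diverted to consumption
        k = (1 - DELTA) * k + investment
        output.append(y)
    return output

base, crowded = simulate(0.0), simulate(DIVERT)
gap = 100 * (base[-1] / crowded[-1] - 1)
print(f"output after {YEARS} years: {base[-1]:.1f} vs {crowded[-1]:.1f} ({gap:.1f}% gap)")
```

The point of the toy is only that the cost shows up as a gradually widening output gap: the damage is done year by year as the debt accumulates.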
Nevertheless, at any point in time, a nation can stop digging and start growing (and grow robustly). It’s just that governments seem to find it oh so difficult to stop digging.
perfectlyGoodInk
May 3 2013 at 2:01pm
Glen S. McGhee: The lesson is simple — peer review doesn’t assure quality. And it doesn’t assure integrity.
Well, nothing assures anything, of course. I think replication and peer review serve as valuable checks on both errors and ideological bias in research. However, R-R’s paper was published in the “Papers & Proceedings” issue of AER, which is not peer-reviewed.
I am not an academic, but as I understand it, academic economists typically leave such publications off their CV for this reason. For similar reasons, I think economists who discuss published work that is not peer-reviewed with policy-makers or the media, or who write op-eds about it, ought to disclose this. I don’t know if this is a norm or not, but it ought to be.
Patrick R. Sullivan: Also, it now turns out that HAP made an error of their own.
Gerald Silverburg has an interesting post on the New Zealand data as well as an interesting comment on your linked Bloomberg article.
Roger McKinney
May 3 2013 at 2:36pm
I was taught in statistics to exclude outliers from the analysis. Any subjects that would skew the outcome as much as those excluded by R&R should be considered outliers. But if you do the statistically correct thing, you’re considered biased. Forcing analysts to include outliers demonstrates bias in favor of the outcomes the outliers produce.
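For what it’s worth, the textbook rule of thumb is mechanical: flag points more than 1.5 interquartile ranges outside the quartiles. The numbers below are invented; whether the observations R&R excluded meet any such rule is exactly what the two sides dispute.

```python
import statistics

growth_rates = [3.4, 2.9, 2.6, 2.3, 2.1, 1.8, 1.6, 1.2, 0.9, -9.1]

q1, _, q3 = statistics.quantiles(growth_rates, n=4)
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in growth_rates if x < low_fence or x > high_fence]

print(f"fences: [{low_fence:.2f}, {high_fence:.2f}]   flagged outliers: {outliers}")
```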
CC
May 3 2013 at 5:13pm
Did anyone else notice the irony of DH saying this about a study that reported the wrong results and then saying “Well yeah, but our main point stands”? 🙂
DH- I’m not picking on you! I just thought it was funny.
genauer
May 5 2013 at 12:44pm
I smiled at all this commotion about the little Excel mishap with the 0.3% difference, and found their weighting and exclusion pretty defensible.
I found HAP’s attack unfair, and the attacks of so many others downright vicious.
But then I had to read
“Austerity is not the only answer to a debt problem”
By Kenneth Rogoff and Carmen Reinhart
http://www.ft.com/intl/cms/s/0/cca28c2e-b1a4-11e2-9315-00144feabdc0.html#axzz2RoOunjAj
and it became very clear that they give a lot of advice about things they know very little about and understand even less.
Critical debt is a multi-dimensional problem, as they have pointed out in their book themselves: not just government debt, but household, corporate, and bank debt as well.
The European imbalance scorecard contains 11 parameters; the IMF DSA about 10.
But Reinhart/Rogoff, 20 years behind the curve on how this is practiced, dish out one-dimensional advice.
Take “high-return infrastructure projects”: the EU has subsidized plenty of infrastructure in these countries in the last 20 years.
My advice to American economists: stick to areas you know something about.
R Davis
May 5 2013 at 4:05pm
It is true that the Excel spreadsheet error changed the results by just 0.3 percentage points. However, the analysis of the data at http://usbudget.blogspot.com/2013/05/is-there-debtgdp-threshold-at-90_4.html shows that the exclusion and weighting of some data were, in fact, mistakes. Consumers of such economic studies need to demand that they be peer-reviewed and that all of the calculations (i.e., the spreadsheets) be released to the public. If we ignore studies that don’t fulfill these requirements, I suspect that most economists would start to do both.
Vangel
May 6 2013 at 9:41am
What I get out of this is the failure of empirical economics. It should be obvious to all of us by now that the data being used by one side or another to push its narrative is not comprehensive, accurate, or meaningful. In a complex system there are many factors in play, and there is no way to find a single variable, or a few variables, whose analysis will give us a valid conclusion about the system as a whole.
In essence, empirical economics seems to be little different from dressed-up voodoo made to look scientific. Macroeconomics is not and cannot ever be a hard science, so let us stop pretending that it is. Not long ago there was an argument here about the failures of the Equation of Exchange, where David made the argument that it had to be useful because the tautology (money paid equals money received) was true. But no rational individual would accept such an argument, because knowing that what is paid is equal to what is received is not useful and cannot tell us very much. For the Equation of Exchange to be useful, we would have to be able to measure the independent variables within an acceptable range of certainty and would have to deal with the constant changes that are typical of a complex system. R&R may have reached a good conclusion, but they certainly cannot use empirical data to support it. No empirical data can ever be used to support any conclusion at such a high level, and as such empirically based macroeconomics is a failure.
Jacob A. Geller
May 9 2013 at 6:43am
I don’t understand why 90% was ever the cutoff. Isn’t the negative relationship between debt and GDP growth continuous?
They just chose three discrete categories of debt (low, medium, high), the highest of which was 90%+. Wasn’t that decision arbitrary?
If they had made the “high” category 80%+ or 100%+, wouldn’t we still see the exact same basic result? And if so, wouldn’t we all now be talking about that bogus 80% or 100% cutoff (or 120%, why not)?
Where did this discrete cutoff at 90% come from? Did it ever mean anything?
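A quick simulation makes the point: if growth simply declines smoothly (and noisily) with debt, with no cliff anywhere, every cutoff you pick will show lower average growth in the “high” bin. The data below are simulated, and the linear slope is an assumption, not an estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
debt = rng.uniform(10, 150, 2000)                       # debt/GDP ratios, %
growth = 4.0 - 0.02 * debt + rng.normal(0, 1.5, 2000)   # smooth decline, no cliff

for cutoff in (80, 90, 100, 120):
    high = growth[debt >= cutoff].mean()
    low = growth[debt < cutoff].mean()
    print(f"cutoff {cutoff}%: mean growth {high:.2f}% above vs {low:.2f}% below")
```

So the binned comparison alone can’t tell you whether 90% marks anything special.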
perfectlyGoodInk
May 9 2013 at 10:53pm
Jacob A. Geller: “They just chose three discrete categories of debt (low, medium, high), the highest of which was 90%+. Wasn’t that decision arbitrary?”
Yes. That is, indeed, one of the main criticisms of R-R’s research and how they presented it to policymakers and to the media.
Jacob A. Geller: “Isn’t the negative relationship between debt and GDP growth continuous?”
Yes, it is. R-R were claiming that they found a discontinuity in the function between debt and GDP growth at 90% (or a “tipping point” or “cliff”). That is why the Excel error is actually quite telling, despite its small effect. Discontinuities in relationships between such variables are so rare and so unlikely, and the error was located right at the one they claimed to have found, that it is very hard to explain why on Earth they wouldn’t have blinked, double-checked, and then found that error.
It calls into question exactly what the goal of their research was.
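One rough way to probe for a genuine cliff, rather than just binning, is to add a jump term at 90% to a simple fit and see how big it comes out. The sketch below uses simulated smooth data, so the estimated jump should come out near zero, up to noise; on the real panel one would also want country effects, lags, and proper standard errors.

```python
import numpy as np

rng = np.random.default_rng(2)
debt = rng.uniform(10, 150, 2000)
growth = 4.0 - 0.02 * debt + rng.normal(0, 1.5, 2000)   # no true cliff built in

# Regress growth on debt plus an indicator for debt >= 90%.
X = np.column_stack([np.ones_like(debt), debt, (debt >= 90).astype(float)])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(f"slope per point of debt/GDP: {beta[1]:.3f}   estimated jump at 90%: {beta[2]:.3f}")
```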