A key reason for the academic disenchantment with these types of models was the view that the identification schemes used were untenable (e.g., why is income in the consumption function but not in the investment function?). Another source was the combined impact of the inflationary 1960's and 1970's and the Lucas Critique.
Chinn ignores the critique of structural macroeconometric models that most influenced me. In this regard, he is like many young macroeconomists today, including some who have taken shots at my blog posts (I don’t feel personally insulted, just bothered by the wrong-headed view of people who think that the only problem you need to solve in macroeconometrics is the Lucas critique). The American Economic Association’s new American Economic Journal: Macroeconomics contains no fewer than three articles that reflect this misguided view. My anger at these articles prompted me to begin an essay called “The Lost History of Macroeconometrics.” When I finish, I may submit it to the journal. In any case, I will post it here, because I really think that younger economists have failed to learn some key lessons. Below, I elaborate on my views.
[Update: Mark Thoma adds color.]

Macroeconometrics is fundamentally an attempt to turn different time periods (say, the 1970’s and the 1990’s) into controlled experiments. There are many challenges to overcome. One is that the size of the economy differs across time periods. There is more income and more consumption in 1990 than in 1970, for reasons having nothing to do with fiscal policy or monetary policy.
How can we make data from different time periods truly comparable? One approach would be to adjust the data for trend factors that affect the scale of the economy, such as population growth and trend productivity growth. Unfortunately, these trend adjustments fail, as demonstrated by the high coefficients of serial correlation that remain in the de-trended data. In layman’s terms, no matter how hard you try to adjust for trends, macro data still send out strong signals saying that time periods far apart are not really comparable.
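To make that concrete, here is a minimal sketch with invented data (plain numpy, not any actual national-accounts series): fit a linear trend to a simulated log series and see how much serial correlation survives in the de-trended residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)

# Simulated "log consumption": trend growth plus a highly persistent cycle.
cycle = np.zeros(T)
for i in range(1, T):
    cycle[i] = 0.95 * cycle[i - 1] + rng.normal(scale=0.01)
log_c = 0.005 * t + cycle

# Remove the trend by regressing on a constant and time.
X = np.column_stack([np.ones(T), t])
beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)
resid = log_c - X @ beta

# Lag-1 autocorrelation of the de-trended series: still close to one.
rho = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print(f"lag-1 autocorrelation after de-trending: {rho:.2f}")
```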
The next thing that you can try is to “difference” the data. That is, instead of focusing on the level of consumption in the fourth quarter of 2003, you take the difference between consumption in the fourth quarter and consumption in the third quarter. In fact, given the high degree of serial correlation, failure to difference the data, or at least quasi-difference the data, would be utterly unsound practice.
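Again a minimal sketch with simulated data: first-differencing a highly persistent series drives the lag-1 autocorrelation toward zero, which is the statistical motivation for the practice.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
# Simulated log series: a near random walk with drift, standing in for macro data.
level = np.cumsum(0.005 + rng.normal(scale=0.01, size=T))

def lag1_corr(x):
    """Lag-1 autocorrelation of a series."""
    return np.corrcoef(x[1:], x[:-1])[0, 1]

diffed = np.diff(level)
print(f"levels:      lag-1 autocorrelation = {lag1_corr(level):.2f}")   # close to 1
print(f"differences: lag-1 autocorrelation = {lag1_corr(diffed):.2f}")  # close to 0
```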
Once you difference the data, however, you greatly amplify noise in the data relative to signal. At this point, if you know anything about the conceptual problems and implementation issues that the statistical agencies have in constructing the data to begin with, you realize how reckless a project it is to simply turn a computer loose trying to find patterns in this noise, which is what vector autoregressions are all about. Instead, you filter out the noise in differenced or quasi-differenced data using “priors” about economic structure. In other words, you bring a point of view about key macroeconomic relationships to the data, and you force your statistical estimates to conform to those relationships. This is the structural approach.
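One stylized way to see what “forcing estimates to conform” to a prior means (all numbers invented; this is generic Bayesian shrinkage, not any particular structural model): estimate a marginal propensity to consume from noisy differenced data while shrinking the least-squares estimate toward a prior belief of 0.6.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120
d_income = rng.normal(scale=1.0, size=T)                  # "differenced income"
d_cons = 0.6 * d_income + rng.normal(scale=2.0, size=T)   # true slope 0.6, lots of noise

# Unrestricted least-squares slope.
b_ols = (d_income @ d_cons) / (d_income @ d_income)

# Posterior mean under a normal prior centered at b0; prior_weight = sigma^2 / tau^2.
b0, prior_weight = 0.6, 50.0
b_post = (d_income @ d_cons + prior_weight * b0) / (d_income @ d_income + prior_weight)

print(f"OLS estimate:              {b_ols:.2f}")   # bounces around with the noise
print(f"estimate shrunk to prior:  {b_post:.2f}")  # held near 0.6 by construction
```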
The structural approach is nothing but a roundabout way of communicating the way you believe the economy works. The estimated equations are not being used to inform the investigator about how the economy works. Instead, the equations are being used by the econometrician to communicate to others the econometrician’s beliefs about how the economy ought to work. To a first approximation, using structural estimates is no different from creating a simulation model out of thin air by making up the parameters.
This “making up out of thin air” critique is logically distinct from the Lucas critique. Telling me that a structural model is robust with respect to the Lucas critique only tells me that you made it up out of thin air in a way that satisfies a particular set of beliefs about how the economy ought to work. It does not tell me that you have found reliable relationships in the data. The relationships are in your own head, and you have used the data as a calibration tool.
In my day, the leading macro model, which was the antecedent to the Federal Reserve model, was abbreviated FMP. This stood for Fed-MIT-Penn, but a common joke was that it stood for “Franco Modigliani’s Priors,” meaning his beliefs about the economy. The FMP model was a showcase for Modigliani’s life-cycle consumption function (one of the ideas cited in his Nobel award). However, his collaborator Albert Ando appeared to me to be the main force behind the FMP model. It might better have been termed “Albert Ando’s priors.” For over thirty years, Flint Brayton at the Fed has been custodian of the model, so today it reflects his priors, which in turn have evolved in response to changes in opinion elsewhere in the profession.
Once again, the second critique of macroeconometrics is this. Structural models do not extract information from data. Instead, they are a method for creating and calibrating simulation models that embody the beliefs of the macroeconomist about how the economy works. Unless one shares those beliefs to begin with, there is no reason for any other economist to take seriously the results that are calculated.
READER COMMENTS
Brian
Feb 5 2009 at 9:27am
Arnold, very nicely said.
Robert Bell
Feb 5 2009 at 9:49am
“The next thing that you can try is to ‘difference’ the data.”
According to section 19.1 of James Hamilton’s “Time Series Analysis,” it isn’t merely a problem of amplifying noise: the problem with unit root processes is that differencing effectively destroys the history that is essential to understanding the relationships between cointegrated variables.
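A minimal simulation of that point (invented data, plain numpy): for two cointegrated series, the lagged gap between them, not the differences alone, carries the predictive information, and differencing alone throws it away.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
x = np.cumsum(rng.normal(size=T))   # random walk
y = np.empty(T)
y[0] = x[0]
for t in range(1, T):
    # y adjusts partway toward x each period, so y and x are cointegrated.
    y[t] = y[t - 1] + 0.3 * (x[t - 1] - y[t - 1]) + rng.normal(scale=0.5)

dy, dx = np.diff(y), np.diff(x)
gap = (x - y)[:-1]                  # lagged equilibrium error

def r2(X, target):
    """R-squared of a least-squares fit of target on X."""
    fit = X @ np.linalg.lstsq(X, target, rcond=None)[0]
    return 1 - np.var(target - fit) / np.var(target)

X_diff_only = np.column_stack([np.ones(T - 1), dx])
X_error_corr = np.column_stack([np.ones(T - 1), dx, gap])
print(f"R^2, differences only:         {r2(X_diff_only, dy):.2f}")   # near zero
print(f"R^2, with lagged (x - y) gap:  {r2(X_error_corr, dy):.2f}")  # much higher
```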
Jason
Feb 5 2009 at 10:17am
Structural models certainly extract information from the data. Indeed, without a theory, one cannot interpret the data in any way. An advantage of the structuralists is that they make clear their assumptions and priors about how the economy works. Michael Keane has a great piece on this:
http://gemini.econ.umd.edu/jrust/research/JE_Keynote_7.pdf
Buzzcut
Feb 5 2009 at 10:29am
Arnold, I have told Menzie that he should be debating you directly. You two should have your own talking heads style blog.
Menzie is the voice of the economics conventional wisdom, and you are the educated countervoice to it.
I think a debate between you two would be a valuable public service.
Sean
Feb 5 2009 at 10:32am
It would appear that the same logic applies to climate modeling to only a slightly lesser degree.
Thomas DeMeo
Feb 5 2009 at 12:09pm
What if we are in a historical period where all of the key macroeconomic relationships are rapidly changing, not just due to policy changes, but also due to massive technological shifts?
Greg Ransom
Feb 5 2009 at 1:19pm
This is a deeply pseudo-scientific understanding of macroeconomics.
More evidence that the economists truly do know less about economics than many non-economists.
Arnold writes:
“Macroeconometrics is fundamentally an attempt to turn different time periods (say, the 1970’s and the 1990’s) into controlled experiments.”
Greg Ransom
Feb 5 2009 at 1:29pm
You keep telling us that economists know more about the economy than non-economists ...
But you keep us that they know _less_ about it — because they believe so many deeply false things about it that non-economists would never believe, and don’t believe.
Economists are often stupider than well-read non-economists when it comes to economics and the economy.
Your examples of what economists believe prove it.
QED
Greg Ransom
Feb 5 2009 at 1:43pm
Make that:
You keep _showing_ us that they know _less_ about the economy, because they believe so many deeply false things about it that non-economists would never believe, and don’t believe.
fundamentalist
Feb 5 2009 at 2:11pm
Greg: “Economists are often stupider than well-read non-economists when it comes to economics and the economy.”
As you know, Mark Skousen has examples of other disciplines that follow Austrian econ in principle without knowing much about it. Just listening to CNBC, it seems to me that financial people understand the economy better than do mainstream economists and they tend toward views similar to those of Austrians.
MattYoung
Feb 5 2009 at 3:02pm
If we mean to stimulate the economy we knew before the unexpected decline, then we will activate new priors. The economy has partially adapted to the shock we are trying to counter.
Hence, just buying the same set of products should yield resistance or crowding out sooner than we planned.
As we try this we should begin to learn what the real economic constraint is, and each effort at a stimulus experiment will get closer and closer to the solution. The economy will adapt out a solution.
Don the libertarian Democrat
Feb 5 2009 at 3:32pm
“Macroeconometrics is fundamentally an attempt to turn different time periods (say, the 1970’s and the 1990’s) into controlled experiments.”
Where any human behavior is involved, I don’t see it as possible to denude your investigation of the actual social and historical context of the particular time. For instance, in the 30s, many people assumed that capitalism was dying and that fascism or communism were the future. It’s hard for me to believe that the methods tried then were not influenced by these presuppositions, and did not in turn have an influence upon how things worked out.
“In other words, you bring a point of view about key macroeconomic relationships to the data, and you force your statistical estimates to conform to those relationships. This is the structural approach.”
A major problem here would be confirmation bias. Indeed, in many of the discussions of the 30s, people seem to be looking for any verification of their views that they can find; they then pronounce their evidence valid, and ignore or exclude contrary findings or explanations. I know that you’ve tried to downplay this, but, from the outside, it does look political.
From my point of view, this is how the world actually works. Keynes provides a map or narrative for our current situation, and, centering on that central narrative, people start offering either assent or dissent, even as people try anything that seems plausible in the real world.
I do the same thing in my own views. For instance, I really enjoy the book called “The Calculus Of Consent”. I have a link to it on my blog/diary. I find that it largely confirms my own views of political economy. However, I hesitate to quote it much because, if I’m not mistaken, Gordon Tullock is still around, and I’ve a bad feeling that he wouldn’t agree with some of my uses of that book. I fear a McLuhan in Annie Hall moment on some blog I suppose.
All of this debate seems healthy, assuming you can accept criticism. I can’t, but I’m not an academic.
MattYoung
Feb 5 2009 at 6:20pm
I will take my favorite topic, the economy in the face of technological shocks.
A shock changes the expected distribution of goods from what we thought, largely because some constraint on the economy is preventing further gains from the technology. The larger the sudden change in prospects, the greater the distortion in the smoothness of goods availability, and hence the sharper the bubble. Large technology innovations cause sharper bubbles as the technology seeks its path through the economy, solving the worst constraint first.
The larger the technological change, the greater must be the re-aggregation of economies of scale. Large continuous movements of goods will be substantially changed in the where and when of being moved.
Each time we push a bubble, we get better, in that sector, at using the technology to gain efficiency. We remember the reverberations in the bubbles, and we remember and price into the future, the limiting constraint for each attempt by the bubble to squeeze efficiency.
But, more to the Keynesian and Hayek point of view, the re-aggregation forced upon the larger, more valued goods flow converges to an increasingly familiar distribution of goods which is organized differently than before. We select, when dealing with the technology, a different expectation model, and restore that model for each bubble attempt. This is how we restructure in the face of technological change.
Keynesian because during transition our term structures must be short-sighted, as we are temporarily de-aggregated. Hayekian because the new form follows a sort of optimum Ramsey covering subgraph, minimizing the estimation error of our general well-being (minimization of transactions).
Valter
Feb 6 2009 at 7:29am
Kling writes: “Structural models do not extract information from data. Instead, they are a method for creating and calibrating simulation models that embody the beliefs of the macroeconomist about how the economy works. Unless one shares those beliefs to begin with, there is no reason for any other economist to take seriously the results that are calculated.”
What about some “other economist” whose beliefs lead to calibrated models that fit the data much worse?
Or do you mean to say that it is impossible to compare the validity/effectiveness/reasonableness of the outputs of two different models, and that all tests for mis-specification or tests of non-nested hypotheses are thus worthless?
Or do you mean that the currently existing such tests are unable to discriminate among the currently existing macro theories?
Jeff
Feb 6 2009 at 3:41pm
There are a couple of ways you can approach macro econometrics. One is to think of economic theory as implying restrictions on an otherwise unconstrained vector autoregression (VAR). You estimate a VAR with and without the restrictions, calculate some statistics, and discover whether or not the data reject the restrictions. If they do, your theory is clearly wrong. If not, it doesn’t mean your theory is correct, but at least it isn’t obviously wrong.
If the restrictions aren’t rejected, you might go further and ask whether or not imposing them improves predictions from the model. If not, in what sense is your theory knowledge? You can quantify “improvements” by comparing information criteria computed with in-sample data (such as the AIC, or the more stringent BIC), or by comparing out-of-sample forecast errors. The latter is a better test.
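As a rough sketch of that first step (simulated data; uses the statsmodels VAR interface), here is a test of the exclusion restriction that lags of x do not help predict y:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
T = 300
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.3 * x[t - 1] + rng.normal()  # x really does feed into y

data = pd.DataFrame({"x": x, "y": y})
res = VAR(data).fit(maxlags=2)

# Wald/F test of the restriction "all lags of x drop out of the y equation".
test = res.test_causality("y", ["x"], kind="f")
print(test.summary())                       # small p-value: the data reject the restriction
print("AIC of the unrestricted VAR:", res.aic)
```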
Clive Granger and others showed in the early 1970’s that simple univariate time series models were better at forecasting inflation and income than the big, hundreds-of-equations economic models. Not long afterward, Robert Litterman and Christopher Sims showed that a Bayesian VAR (BVAR) with random walk priors outperformed both the big economic models and univariate time series models.
Most “structural” modeling today starts with a theory model, derives from it some testable restrictions on a VAR, and then tries to see if the data reject the restrictions. But for macro variables, the small sample sizes usually mean that the VAR with or without the theory restrictions does not forecast as well as a BVAR would. So you have an ordering:
VAR < VAR with theory < BVAR without theory
Just how that is supposed to convince anyone that macro is scientifically worthwhile is beyond me.
It seems the only thing macro theory is actually good for is helping you tell internally consistent stories about how you think the economy works. Keynesian stories don’t even meet that requirement. But just because a story is internally consistent is no reason to believe it.