Two Critiques of Macroeconometrics
One key reason for the academic disenchantment with these types of models was the view that the identification schemes used were untenable (e.g., why does income appear in the consumption function but not in the investment function?). Another was the combined impact of the inflationary 1960’s and 1970’s and the Lucas Critique.
Chinn ignores the critique of structural macroeconometric models that most influenced me. In this regard, he is like many young macroeconomists today, including some who have taken shots at my blog posts (I don’t feel personally insulted, just bothered by the wrong-headed view of people who think that the only problem you need to solve in macroeconometrics is the Lucas critique). The American Economic Association has a new Journal of Macroeconomics containing no fewer than three articles that reflect this misguided view. My anger at these articles prompted me to begin an essay called “The Lost History of Macroeconometrics.” When I finish, I may submit it to the journal. In any case, I will post it here, because I really think that younger economists have failed to learn some key lessons. Below, I elaborate on my views.
[update: Mark Thoma adds color.]

Macroeconometrics is fundamentally an attempt to turn different time periods (say, the 1970’s and the 1990’s) into controlled experiments. There are many challenges to overcome. One is that the size of the economy differs across time periods. There is more income and more consumption in 1990 than in 1970, for reasons having nothing to do with fiscal policy or monetary policy.
How can we make data from different time periods truly comparable? One approach would be to adjust the data for trend factors that affect the scale of the economy: population growth and trend productivity growth. Unfortunately, these trend adjustments fail, as is demonstrated by the high coefficients of serial correlation that remain in the de-trended data. In layman’s terms, no matter how hard you try to adjust for trends, macro data still send out strong signals saying that time periods far apart are not really comparable.
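A small simulation can illustrate the point. The series below is invented, not actual macro data (the drift and noise parameters are arbitrary); it stands in for something like log consumption. Even after a linear trend is fitted and removed, the de-trended series keeps a lag-1 serial correlation close to one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "macro-like" series: a random walk with drift, purely
# illustrative, standing in for a trending aggregate such as log consumption.
n = 200
y = np.cumsum(0.01 + 0.02 * rng.standard_normal(n))

# Crude trend adjustment: fit and remove a linear time trend.
t = np.arange(n)
coeffs = np.polyfit(t, y, 1)
detrended = y - np.polyval(coeffs, t)

# Lag-1 serial correlation of the de-trended series.
rho = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
print(round(rho, 2))  # remains close to 1: trend removal does not cure persistence
```

The high residual correlation is the statistical signal referred to above: observations decades apart are not behaving like independent draws from one experiment.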
The next thing that you can try is to “difference” the data. That is, instead of focusing on the level of consumption in the fourth quarter of 2003, you take the difference between consumption in the fourth quarter and consumption in the third quarter. In fact, given the high degree of serial correlation, failure to difference the data, or at least quasi-difference the data, would be utterly unsound practice.
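Differencing can be sketched in a few lines. Again the series is simulated (the parameters are invented for illustration), but the contrast is the general one: the level of the series is highly persistent, while its quarter-over-quarter differences are not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative series: a random walk with drift standing in for
# quarterly log consumption (simulated, not real data).
level = np.cumsum(0.01 + 0.02 * rng.standard_normal(200))

# First difference: Q4 level minus Q3 level, and so on down the series.
diff = np.diff(level)

def ac1(x):
    """Lag-1 serial correlation of a series."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(round(ac1(level), 2), round(ac1(diff), 2))
# the level is highly persistent; the differences are close to uncorrelated
```

Quasi-differencing works the same way except that you subtract only a fraction (just under one) of the previous quarter's value rather than all of it.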
Once you difference the data, however, you greatly amplify noise in the data relative to signal. At this point, if you know anything about the conceptual problems and implementation issues that the statistical agencies have in constructing the data to begin with, you realize how reckless a project it is to simply turn a computer loose trying to find patterns in this noise, which is what vector autoregressions are all about. Instead, you filter out the noise in differenced or quasi-differenced data using “priors” about economic structure. In other words, you bring a point of view about key macroeconomic relationships to the data, and you force your statistical estimates to conform to those relationships. This is the structural approach.
The structural approach is nothing but a roundabout way of communicating the way you believe the economy works. The estimated equations are not being used to inform the investigator about how the economy works. Instead, the equations are being used by the econometrician to communicate to others the econometrician’s beliefs about how the economy ought to work. To a first approximation, using structural estimates is no different from creating a simulation model out of thin air by making up the parameters.
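A toy sketch can make this concrete. Everything below is invented for illustration: the "data" are simulated, the prior value is deliberately wrong, and the simple shrinkage weight stands in for the far more elaborate machinery of a real structural model. The point is only that when the prior is weighted heavily, the "estimate" mostly echoes the prior rather than the data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated differenced data: income growth x and consumption growth y,
# with a true slope of 0.6 buried in heavy noise (all numbers invented).
n = 120
x = rng.standard_normal(n)
y = 0.6 * x + 1.5 * rng.standard_normal(n)

# Unrestricted OLS slope: let the computer hunt for the pattern in the noise.
beta_ols = (x @ y) / (x @ x)

# "Structural" estimate: shrink the OLS slope toward a prior belief
# (0.9 here, a deliberately wrong prior) with weight w on the prior.
w = 0.8
beta_prior = 0.9
beta_structural = w * beta_prior + (1 - w) * beta_ols

print(round(beta_ols, 2), round(beta_structural, 2))
# the "structural" number sits near the prior, not near the data
```

By construction, the shrunken estimate can never be farther from the prior than the OLS estimate is; that mechanical fact is a crude version of the critique: the data serve mainly to calibrate what the investigator already believed.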
This “making up out of thin air” critique is logically distinct from the Lucas critique. Telling me that a structural model is robust with respect to the Lucas critique only tells me that you made it up out of thin air in a way that satisfies a particular set of beliefs about how the economy ought to work. It does not tell me that you have found reliable relationships in the data. The relationships are in your own head, and you have used the data as a calibration tool.
In my day, the leading macro model, which was the antecedent to the Federal Reserve model, was abbreviated FMP. This stood for Fed-MIT-Penn, but a common joke was that it stood for “Franco Modigliani’s Priors,” meaning his beliefs about the economy. The FMP model was a showcase for Modigliani’s life-cycle consumption function (one of the ideas cited in his Nobel award). However, his collaborator Albert Ando appeared to me to be the main force behind the FMP model. It might better have been termed “Albert Ando’s priors.” For over thirty years, Flint Brayton at the Fed has been custodian of the model, so today it reflects his priors, which in turn have evolved in response to changes in opinions elsewhere in the profession.
Once again, the second critique of macroeconometrics is this. Structural models do not extract information from data. Instead, they are a method for creating and calibrating simulation models that embody the beliefs of the macroeconomist about how the economy works. Unless one shares those beliefs to begin with, there is no reason for any other economist to take seriously the results that are calculated.