The Kling post goes a bit far. Plenty of respected macroeconomists still use macroeconomic models, which is in itself a refutation of the idea that “the profession has decided that this macroeconometric project was a blind alley.” Alan Blinder, for instance, is part of the profession, and so is the president of the Minneapolis Fed, who wrote this paper on macroeconomic modeling.
This is an issue on which I have a bit more background and experience to share with Klein. I will put my discussion below the fold.
[UPDATE: Eric Falkenstein makes good points.]

The essay by Minneapolis Fed President Narayana Kocherlakota that Klein cites says,
Modern macro models can be traced back to a revolution that began in the 1980s in response to a powerful critique authored by Robert Lucas (1976).
…these newer models do imply that government stabilization policy can be useful. However, as I will show, the desired policies are very different from those implied by the models of the 1960s or 1970s.
By Kocherlakota’s definition, the Moody’s model that Blinder and Zandi used is not modern. It is a model from the 1960’s or 1970’s. Kocherlakota’s generation of macroeconomists would reject the approach used in that model.
Kocherlakota would say that modern macro methods are better. I would say that they are different, but not better.
Robert Solow disapproves of the “representative agent” found in modern macro models. So do I. There is no specialization and trade in a representative agent model. Since I think that economic activity consists of sustainable patterns of specialization and trade, in my view these models contain no economic activity. Consequently, they capture none of the characteristics that I think are important for explaining macroeconomic outcomes.
However, the same criticism applies to the older generation of macroeconometric models. They are effectively representative-agent models, also.
The difference is that the older models impose habit-based expectations, while the newer models impose model-based expectations. For an explanation of the difference, see Installment 9. Kocherlakota’s generation of macroeconomists thinks that model-based expectations represent a big improvement. I think that insurmountable problems remain, as explained in my lost history paper.
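As a rough illustration of the contrast (my own toy example, not anything taken from Kocherlakota or from the older models), habit-based expectations update mechanically from past forecast errors, while model-based expectations are computed from whatever model the agents are assumed to believe:

    # Toy contrast between habit-based (adaptive) and model-based expectations.
    # The parameter values and the crude quantity-theory relation are assumed
    # purely for illustration.

    def habit_based_expectation(prev_expectation, last_observed, adjustment=0.5):
        # Revise last period's forecast part of the way toward the last outcome.
        return prev_expectation + adjustment * (last_observed - prev_expectation)

    def model_based_expectation(money_growth, real_growth):
        # Forecast inflation from the model itself (here a quantity-theory
        # relation stands in for "the model").
        return money_growth - real_growth

    print(habit_based_expectation(prev_expectation=0.02, last_observed=0.05))  # about 0.035
    print(model_based_expectation(money_growth=0.06, real_growth=0.03))        # about 0.03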
Klein writes,
It’s also worth noting that the private sector relies extensively on these models, and it would be odd for them to give Moody’s all that money if they thought there was no predictive value.
If you took away Mark Zandi’s government contracts, could he still support his operation? Don’t be too sure. When I was Klein’s age, there were more companies in the modeling business, and I believe that they were more profitable. The industry has shrunk for a reason. In the 1970’s, people learned that you can forecast macro data more accurately and much less expensively using pure time-series statistical techniques.
Elsewhere, Klein interviews Blinder, who says,
I was surprised, and I think Mark was too, by the very large effects we got from the compression of interest rate spreads, which is how we modeled the effects of the financial policies.
Again, Blinder and Zandi are very opaque about what they did here. But:
(a) They attribute the decline in interest-rate spreads to a change on the supply side (government intervention) rather than a change on the demand side (less demand for loans). John Taylor, who tried to use event-study methods to determine the effect of financial policy on spreads, vigorously disputes this.
(b) They proceed to start from a baseline in which interest-rate spreads are low, supposedly due to policy, and then simulate an alternative scenario in which interest-rate spreads remain high. They then mechanically crank through the impact of these higher spreads. As low as spending on consumer durable goods and investment was in 2009, the simulation would say that spending would have been even lower. We cannot gauge the plausibility of this extrapolation, because Blinder and Zandi do not report any sectoral results. For all we know, the model cranked out a negative number for housing starts, and they just took that at face value.
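To see how mechanical the exercise in (b) is, here is a toy version with a single made-up spending equation. The numbers and the spread sensitivity are invented for illustration; they are not Blinder and Zandi’s equations, which they do not report:

    # Toy counterfactual: crank a higher interest-rate spread through one
    # made-up spending equation. All numbers are illustrative, not estimates.

    baseline_spread = 0.02        # assumed spread with the financial policies
    counterfactual_spread = 0.06  # assumed spread without them
    spread_sensitivity = -5.0     # assumed fractional change in durables per unit of spread

    baseline_durables = 100.0     # index of durables spending, arbitrary units

    extra_spread = counterfactual_spread - baseline_spread
    counterfactual_durables = baseline_durables * (1 + spread_sensitivity * extra_spread)

    print(counterfactual_durables)  # roughly 80: the model dutifully reports a deeper slump

Whatever sectoral numbers the real model produced, the logic is the same: the answer is baked into the assumed sensitivity and the assumed counterfactual spread.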
In another part of the interview, Blinder advocates
a WPA-like program of temporary, direct, public hiring. People could work in parks, in maintenance, the many paper-shuffling jobs there are in government. You could save a lot of state and local jobs that would otherwise be terminated.
He is advocating what I would call the creation of unsustainable patterns of specialization and trade. The Keynesian multiplier idea is that people engaged in unsustainable jobs will then spend money that will lead to the creation of sustainable jobs. This is what happens in macroeconometric models, because that is how they are constructed. However, such models do not even recognize the difference between sustainable and unsustainable patterns of specialization and trade.
In macroeconometric models, spending is spending, and spending is good, regardless of where the spending goes. If you think that captures reality, then you might buy into the models. If not, then you would prefer a different model.
I would prefer a different model. However, I am quite pessimistic about the statistical properties of macroeconomic data for the purposes of undertaking a modeling exercise. Again, see my “lost history” paper.
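As a footnote for readers unfamiliar with the bookkeeping, the multiplier logic embedded in such models is just textbook arithmetic (this is the standard formula, not anything specific to the Moody’s model):

    # Textbook Keynesian multiplier arithmetic: spending is spending,
    # regardless of where it goes. The marginal propensity to consume is assumed.

    mpc = 0.8                      # assumed marginal propensity to consume
    multiplier = 1 / (1 - mpc)     # roughly 5.0

    government_spending = 100.0    # in billions, say
    implied_output_boost = multiplier * government_spending

    print(multiplier, implied_output_boost)  # roughly 5.0 and 500.0

Nothing in that calculation distinguishes sustainable from unsustainable patterns of specialization and trade, which is the point.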
READER COMMENTS
Stephan
Jul 30 2010 at 8:42am
I disagree. The concept of the state as “employer of last resort” goes back mostly to Lerner. As the term “last resort” implies, the main objective isn’t to create sustainable patterns of specialization and trade.
What is unsustainable are the social costs incurred by prolonged mass unemployment. There might be waste and inefficiency. So what? The benefits outweigh the costs. And additional spending is certainly also helpful.
Daniel Kuehn
Jul 30 2010 at 8:53am
I take your point about “old” vs. “new” model, but I don’t think that translates into the “blind alley” point as reasonably as you think it does.
It’s hard to make the case that the profession has rejected the model when professionals as prominent as Zandi and Blinder clearly don’t, when, as you say, the CBO doesn’t, when a professional as prominent as Romer doesn’t, and when much of the profession has embraced these results.
“The profession” hasn’t decided it is a blind alley – some of the profession has decided it’s a blind alley. There’s a big difference between those two things. And on this, Ezra Klein has a point. Something’s not right when you critique a whole swath of the profession for doing something that the profession has supposedly rejected.
It’s an odd standard for you to hold anyway. Hasn’t “the profession” more or less decided the Austrian School is a blind alley? I know you’re not quite as explicitly an Austrian as some others, but there are clear alliances and sympathies. Is that professional rejection a valid reason for simply discarding the Austrian school? I think probably not.
This is an “old” model – that I’ll certainly agree with. I don’t think that necessarily means it’s a defunct way of looking at things. I also agree with the limitations you listed – particularly the point on labor dynamics. I’m writing my statement of purpose for PhD programs right now and what I’m proposing as my research agenda is precisely that – working with the role of gross job flows in business cycle models. So it was neat to see you mention that qualification too. But the fact that these models can be improved or the fact that they are “old” isn’t a reason to reject the approach out of hand. I think you make too great a leap from “old” to “rejected by the profession”.
fundamentalist
Jul 30 2010 at 9:39am
The fact that there is a demand for the output of such models doesn’t mean the models are any good. All you have to do is track the forecasts to realize that they are abysmal at forecasting anything more than the next quarter. As Kling wrote, the popularity of huge models declined rapidly when time series techniques, such as ARIMA, came along. Politicians will always demand model output because they need it to fool their voters into thinking some kind of thought has gone into their policies. Politicians use the model output to justify their preconceived policies. They couldn’t care less how wrong the models are.
William Barghest
Jul 30 2010 at 9:50am
“In the 1970’s, people learned that you can forecast macro data more accurately and much less expensively using pure time-series statistical techniques.”
What is the simplest example of the difference between a macroeconomic model and a time-series statistical technique? They both seem to be functions from past data to future predictions. Is the difference that time-series statistical techniques simply make no assumptions about the relationship between the past data and the future predictions? (This seems impossible; they would have to be random functions of the past in order to make no assumptions, and they would also make random predictions.) Or are the assumptions fairly general ones, like smoothness, as opposed to specific relationships between macroeconomic quantities?
William Barghest
Jul 30 2010 at 9:55am
It’s just obvious that demand can exist for products that don’t do what they are supposed to do.
Take for example this product,
http://en.wikipedia.org/wiki/E-meter
The real question is what sort of business modern economics is, for which the popularity of these models is indeed evidence.
david
Jul 30 2010 at 10:13am
Hi Arnold.
Another good post, thanks.
BTW, your link to your “lost history” paper isn’t quite right.
eccdogg
Jul 30 2010 at 10:16am
Exactly, fundamentalist.
Many forecasting models, not just in economics, are used just to justify that something was done for a “Scientific” reason. My wife is a highway engineer and her company does traffic forecasting modeling out for 20-30 years. Given the high level of uncertainty of all the variables and relationships involved, and all the kluges necessary to get “reasonable” results, the output of such models should be thought of as no more than the opinion of the person doing the analysis. Yet governments and private entities pay for them as a CYA so they can say “We made this decision after a rigorous analysis.”
William Barghest
Time series analysis applies no theory and very little structure to the analysis. It essentially forecasts next period’s number based on past numbers, their changes, and the past errors of the model going back several periods.
So a simple time series model would be Yt = Yt-1 + e, which is a random walk saying tomorrow’s value equals today’s value plus an error. To that you can add trends and mean reversion, but at no point do you impose ANY economic theory on the data. The fact that most macro theories have a tough time beating these models is a pretty big indictment of the state of all macro modeling.
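A minimal sketch of what such an atheoretic forecast looks like in code (the data here are simulated, not a real macro series):

    import numpy as np

    # Random walk with drift: Y_t = Y_t-1 + d + e. No economic theory is
    # imposed; the only inputs are the past values of the series itself.

    rng = np.random.default_rng(0)
    y = np.cumsum(0.5 + rng.normal(size=100))   # fake macro series with drift

    drift = np.mean(np.diff(y))                 # average period-to-period change
    forecast_next = y[-1] + drift               # one-step-ahead forecast
    print(forecast_next)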
david
Jul 30 2010 at 10:22am
I meant to mention that I really like your characterization of “rational expectations” as “model-based expectations”. Scott Sumner has sometimes referred to rational expectations as “consistent expectations” which is also good. Both point to what appears to me to be a critical flaw in most modern macro – the assumption that rationality of expectations implies homogeneity of expectations.
eccdogg
Jul 30 2010 at 10:32am
BTW, People like Ezra Klein are exactly the reason there is a market for this type of modeling.
They are either gullible or in favor of a policy and don’t want to ask too many questions as long as the analysis fits what they want.
fundamentalist
Jul 30 2010 at 10:39am
Check out John Taylor’s comment:
http://www.economicpolicyjournal.com/
fundamentalist
Jul 30 2010 at 10:45am
William Barghest,
ARIMA stands for autoregressive integrated moving average. In short, it’s a combination of an autoregressive model, Yt = Yt-1 + e as eccdogg posted, and a moving average model. You can add some dummy variables if you want to indicate special circumstances at specific points. Those tend to forecast better than big econometric models and indicate turning points better. Others moved on to error correction models and co-integration analysis.
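For anyone who wants to try it, fitting such a model today takes only a few lines (a sketch assuming the statsmodels library; the series below is simulated, not real data):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Fit an ARIMA(1,1,1) to a simulated nonstationary series and forecast
    # four periods ahead. Purely illustrative.

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(loc=0.3, size=200))  # fake series with drift

    model = ARIMA(y, order=(1, 1, 1))   # AR(1), one difference, MA(1)
    result = model.fit()
    print(result.forecast(steps=4))     # next four out-of-sample forecasts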
MikeDC
Jul 30 2010 at 10:47am
So basically, Blinder is advocating make work policies to eliminate unemployment caused by other government programs?
And Klein cheers this and his goofy model on?
Doubly shocking.
ThomasL
Jul 30 2010 at 12:42pm
Fabulous post, Dr. Kling.
Ryan
Jul 30 2010 at 1:10pm
Daniel Kuehn:
Hasn’t “the profession” more or less decided the Austrian School is a blind alley?
Generalizations and summaries about the views of the economics profession run into a major survivorship bias problem that I haven’t seen adequately addressed. Those who are educated in economics and who believe in more interventionist policies will find more value in an academic or policy advisory occupation. The educated who come to the opposite conclusion would be less likely to be in an advisory role where, to stay consistent with their views, they’d have to tell politicians and the general public to intervene less.
Wall Street is full of Austrian School folks, while academia is full of more Keynesian folks. This isn’t chance. Generalizations about the views of “economists” or “the profession” shouldn’t be confused with generalization about those educated in economics.
stephen
Jul 30 2010 at 1:33pm
Eric Falkenstein has good thoughts, as usual.
http://falkenblog.blogspot.com/2010/07/chief-economists-are-for-pr.html
Doc Merlin
Jul 30 2010 at 2:06pm
‘In the 1970’s, people learned that you can forecast macro data more accurately and much less expensively using pure time-series statistical techniques.’
Yep, non-economics-based statistical models forecast macro time-series data far, far better than the economics-based models. Macro models are particularly bad at macro modeling.
Elvin
Jul 30 2010 at 2:10pm
Fundamentalist:
I somewhat disagree with Taylor’s and others’ notion that “Poor policymaking prior to TARP helped turn a serious but seemingly controllable financial crisis into an out-of-control panic.” The word I object to here is “controllable”. I think it was uncontrollable. A panic was going to happen as the market realized the severity of the problem. The only question was when. Through dealing with the GSEs, Bear Stearns, forced mergers, etc., the government only delayed the day of reckoning. In my opinion, a bail-out of Lehman just meant a bigger problem down the road.
I guess it’s partly a debate about what’s worse: reactive attempts to stop the tide that misdirect and waste resources (controllable), or just getting the pain over with and letting the recalculation process begin with proper prices (out-of-control). Do we want to be like Japan and wait two decades for growth to appear, or do we want to strive for a higher growth path?
ThomasL
Jul 30 2010 at 2:54pm
@DocMerlin
Yes, I think the proper lesson modelers should take is how little they understand what they are modeling.
Statistical forecasting requires very little understanding of the actual processes, so is handy when gaining a detailed understanding is not possible or not cost-effective to achieve.
Models are at their best when the modeler comprehends all the operating processes. That does not mean that the model itself needs to be comprehensive, but the understanding of the modeler needs to approach comprehensive, since he must know what is safe to leave out, what is safe to leave constant, etc., for his particular purposes.
Whenever statistical forecasting consistently beats models, it is a giant blinking sign that the modelers didn’t really understand what they were modeling, they only imagined they did.
Chris Koresko
Jul 30 2010 at 7:01pm
ThomasL: Whenever statistical forecasting consistently beats models, it is a giant blinking sign that the modelers didn’t really understand what they were modeling, they only imagined they did.
Amen, brother. It’s a truism in science that the test of understanding is nontrivial prediction. A model-free extrapolation of the data represents a trivial prediction. Think of it as the control placebo in a drug test. If the best model anyone can build based on our understanding of principles and processes cannot outperform it, then we cannot claim to understand macroeconomics.
If memory serves, David Henderson has a post somewhere on this site which presents another damning critique of macro, based on an information-theoretical argument. So it’s no surprise to me that the models are no better than the “placebo”.
The big question in my mind is why the models perform worse than the blind extrapolation. One possibility is that they’re being tuned to produce results desired by the researcher. Can anybody suggest a better idea?
ThomasL
Jul 30 2010 at 7:42pm
@Chris
By observation and inference, the phases of the sun and moon, movements of the planets, etc. were known and predictable [though, importantly, not understood] for thousands of years before Galileo’s ideas on heliocentricity were commonly accepted.
What if all that had been discarded and replaced by models based on his enlightened theories?
Neither the physics nor the entire concept of elliptical orbits was comprehended.
A model of Galileo’s theories would almost certainly have been far worse at predicting phases and motion than old-school geocentricity was able to do based solely on historic observation.
Babinich
Jul 30 2010 at 9:43pm
“a WPA-like program of temporary, direct, public hiring. People could work in parks, in maintenance, the many paper-shuffling jobs there are in government. You could save a lot of state and local jobs that would otherwise be terminated.”
Why not pay folks to dig holes and fill them back in? The proposal above is busy work: nonproductive and detrimental to capital formation.
steve
Jul 31 2010 at 11:14am
“Wall Street is full of Austrian School folks, while academia is full of more Keynesian folks.”
Yes, the financial industry is full of Austrians and Objectivists. Most of those regulating them for the last 20-30 years also.
Steve
Chris Koresko
Aug 1 2010 at 12:07am
@ThomasL
Very interesting post. You made me think deeply.
I don’t agree with you completely, though. In the language of my first comment, both the Earth-centered and Sun-centered predictions of the positions of the planets, Moon, and Sun in the sky would fall into the “trivial” category of models, since (as you note in your post) they are simply mathematical fits to data and do not contain any physics at all.
As a side point, I think you’re not right to claim that the Sun-centered calculation would have done worse than the Earth-centered one. If memory serves (it’s been a long time, so I may be wrong here, but I doubt it) the basic approach in each case was to start with perfectly circular orbits (“cycles”) and fit the residuals with additional circular sub-orbits (“epicycles”) centered on the positions predicted by the cycles. The residuals from that would be fit by another level of epicycles, and so on, until the residuals were thought to be too small to justify continuing. In other words, in mathematical terms it’s just a kind of series expansion. It’s not clear to me how far the 17th-century astronomers actually took this.
I would be surprised if the Sun-centered starting point were worse than the Earth-centered one, in the sense of requiring a greater number of epicycles to get the prediction to agree with the data to within the error bars. But maybe you meant that a Sun-centered calculation with no epicycles would do worse than an Earth-centered one with an arbitrary number of epicycles, which I think has to be true.
As I understand it, Kepler’s discovery that elliptical orbits eliminated the need for epicycles was originally sold as a means to simplify the calculation of planetary positions, rather than as a claim that the orbits were really elliptical. It was only Newton’s demonstration that a single physical force, together with his laws of kinematics, was sufficient to account for all of the observations, that took us into the realm of a “principles and processes” model and gave us elliptical orbits as a real physical phenomenon, and not incidentally made the compelling case for a Sun-centered cosmology.
Newton’s model predicts and explains Kepler’s laws, under the assumption that the planets can be treated as massless test particles. When we relax that assumption, the planets influence each other’s orbits in complicated but predictable ways. Thus Newton’s predictions in effect go beyond the data on which they were based, and they led to the discovery of Neptune based on the behavior of the orbit of Uranus. This is the demonstration that Newton understood something about the system; he wasn’t just doing math.
So I continue to hold that the ability to out-predict the best functional fit is a necessary prerequisite to any claim to understand the system we’re trying to model.
NB: You’re giving Galileo way too much credit here. He was a great physicist, but not a great astronomer, in my opinion. He built a crude telescope based on someone else’s design, published some observations, provoked the Church with his egotism, and pushed a bunch of wrong ideas about astronomy. The real heroes in this story are Tycho, Kepler, and Newton.
Carlsson
Aug 1 2010 at 9:24am
ThomasL almost nails it with this comment:
“Whenever statistical forecasting consistently beats models, it is a giant blinking sign that the modelers didn’t really understand what they were modeling, they only imagined they did.”
The only thing I’d change is “didn’t really understand.” In truth, either they didn’t do their homework, or they willfully neglected known research results in order to preserve their own priors about basic Keynesian multiplier theory, as an excuse for more government.
The seminal research was done by Nelson, who showed at least two decades ago that all the important economic time series are non-stationary, i.e., basically random walks without mean-reverting properties. That is a sledgehammer to the kind of models Zandi and Blinder are still trying to sell, because these models are based on the assumption of reversion to mean paths — how else can they estimate their effects?
This criticism, I believe, holds for modernized macro models too. The implicit assumptions about the nature of data have already been rejected by the tests for non-stationarity. I doubt any of these models are worth the electrons they occupy.
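For what it’s worth, the non-stationarity point is easy to check on any series (a sketch assuming the statsmodels library, run here on a simulated random walk rather than real data):

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    # Augmented Dickey-Fuller test on a simulated random walk. A large p-value
    # means we cannot reject a unit root, i.e., no mean reversion to build on.

    rng = np.random.default_rng(1)
    random_walk = np.cumsum(rng.normal(size=500))

    statistic, pvalue, *rest = adfuller(random_walk)
    print(statistic, pvalue)   # the p-value will typically be large here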
Brian Clendinen
Aug 2 2010 at 9:36am
I was going to make a comment on the original post, but John Taylor brought it up in his second point. That point alone undermines the validity of the model and proves it is propaganda, as originally stated. I do not know how many times I have heard some Keynesian on NPR spouting off some multiplier effect that is significantly higher than research has shown in order to justify government intervention. Hiding the same type of argument in a more complex model really does not shock me. Climate models have been using these types of tactics for years so politicians can claim their policies are backed by “science”.
Don’t get me wrong, I am glad for the detailed, rigorous research disproving macro modeling. However, the underlying concept is extremely ostentatious and arrogant to begin with. I mean, at the most fundamental level one is using the model to try to predict the numerous decisions of individuals numbering in the hundreds of millions, who often do not know their own future decisions, on top of the fact that each individual does not have the same impact. Some individuals might have hundreds to thousands of times more influence than others on the model.
I personally do not think mathematics and research can ever come close to modeling the future results of complex human systems with any reasonable accuracy, especially considering humans are unpredictable at times.
Jeff
Aug 2 2010 at 11:10am
The right way to do empirical macro is to start with an a-theoretic statistical model, like a BVAR (Bayesian vector autoregression), that fits the data reasonably well. You then ask yourself what restrictions on the BVAR coefficients are implied by the economic theory you’re interested in. Re-estimate the BVAR with those restrictions imposed, and do a likelihood ratio test. If the test rejects the restrictions, the theory is wrong. If it doesn’t, all that means is that the theory is not obviously wrong.
You’re not done yet, however. You should also try to forecast out of sample with both your original BVAR and the theory-restricted BVAR. If the theory doesn’t improve the forecast, then it doesn’t actually tell you anything useful, even if it isn’t obviously wrong.
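A skeleton of that out-of-sample check, assuming the statsmodels library and leaving the theory-restricted model as a placeholder (a proper BVAR would also put priors on the coefficients, which is omitted here; the data are simulated):

    import numpy as np
    from statsmodels.tsa.api import VAR

    # Fit an unrestricted VAR on a training window, forecast the holdout, and
    # compare forecast errors against a theory-restricted alternative.

    rng = np.random.default_rng(2)
    data = np.cumsum(rng.normal(size=(200, 3)), axis=0)   # three fake series

    train, holdout = data[:150], data[150:]
    lags = 2

    unrestricted = VAR(train).fit(lags)
    forecast = unrestricted.forecast(train[-lags:], steps=len(holdout))
    rmse_unrestricted = np.sqrt(np.mean((forecast - holdout) ** 2))

    # rmse_restricted would come from re-estimating with the theory's
    # restrictions imposed; if it is no lower, the theory adds nothing useful.
    print(rmse_unrestricted)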
The profession has known for a long time how to do this stuff, but you almost never see it done. My impression is that some macro theories get rejected at stage 1, and none of them improve out-of-sample forecasts significantly. The honest conclusion is that we really don’t know much at all about why the macroeconomy behaves as it does. Macro in general would be a good candidate for Penn and Teller’s Bullshit! show.