Nick Rowe writes,

We can’t forecast next year’s Y unless Statistics Canada tells us next year’s X, and they can’t.

His point is that some variables are outside of the model, and to make a forecast in real time you have to forecast those variables.

Which is exactly what model proprietors do. In the traditional model, most government spending (apart from something like unemployment benefits, which depends on other variables in the economy) is forecast based on legislation and budget proposals. The price of oil might be another variable that is projected by the model proprietor rather than determined within the model.

Incidentally, lagged values of variables were really important, so that the ratio of known information to unknown information was actually pretty high for very-near-term forecasts.
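To make the mechanics concrete, here is a minimal sketch, with a toy one-equation model and made-up coefficients and numbers: next year's output depends on the proprietor's projections of exogenous inputs (government spending, the oil price) and on a lagged value that is already in the published data.

```python
# Toy sketch only: a one-equation "model" in which next year's output depends
# on exogenous inputs projected by the model proprietor and on a lagged value
# already known from published data. Coefficients and numbers are invented.

def forecast_output(gov_spending_proj, oil_price_proj, output_lag):
    """One-step-ahead forecast conditional on projected exogenous inputs."""
    return 0.95 * output_lag + 0.3 * gov_spending_proj - 0.1 * oil_price_proj

y_next = forecast_output(
    gov_spending_proj=450.0,  # proprietor's projection from budget proposals
    oil_price_proj=80.0,      # proprietor's oil-price projection
    output_lag=2000.0,        # last observed value, already published
)
print(f"conditional forecast of next year's output: {y_next:.1f}")
```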

Back in the days when macro models were evaluated (see Stephen McNees), one type of study looked at how the models would have performed if all the values of variables outside the model had been known perfectly in advance. In general, this did not make the models perform better. It could even make them perform worse, because model proprietors were fudging their models in ways that corrected for many problems, including bad forecasts for variables outside of the model.
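A rough sketch of the design of that kind of study, purely illustrative and not McNees's actual procedure: run the same forecasting rule twice, once with the proprietor's projected exogenous inputs and once with the values that were eventually realized, and compare the errors.

```python
# Illustrative only (invented numbers, not McNees's actual procedure): compare
# forecast errors when the exogenous inputs are projected versus when they are
# replaced with the values that were eventually realized.

def forecast_output(gov_spending, oil_price, output_lag):
    return 0.95 * output_lag + 0.3 * gov_spending - 0.1 * oil_price

actual_output = 2120.0   # the outcome eventually reported by the statistical agency
output_lag = 2000.0      # known at forecast time

# (a) Forecast conditional on the proprietor's projected exogenous inputs.
err_projected = actual_output - forecast_output(450.0, 80.0, output_lag)

# (b) Same rule, but with the exogenous values as they actually turned out.
err_perfect_foresight = actual_output - forecast_output(470.0, 95.0, output_lag)

print(f"error with projected exogenous inputs: {err_projected:+.1f}")
print(f"error with realized exogenous inputs:  {err_perfect_foresight:+.1f}")
# The finding described above: errors like (b) were often no smaller than (a),
# partly because proprietors' ad hoc adjustments already offset bad projections.
```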

In other words, the problem for model-based forecasts was not that the proprietors could not forecast variables outside of the model. The problem was that, even given correct values for those variables, the errors intrinsic to the models were large.