James C. McWilliams writes,

For many purposes that are well demonstrated with present practices, AOS models are very useful even without the necessity of carefully determining their precision compared with nature. These models are structurally unstable in various ways that are not yet well explored, and this implies a level of irreducible imprecision in their answers that is not yet well estimated. Their value as scientific tools is undeniable, and the theoretical limitations in their precision can become better understood even as their plausibility and practical utility continue to improve. Whether or not the irreducible imprecision proves to be a substantial fraction of present AOS discrepancies with nature, it seems imperative to determine what the magnitude of this type of imprecision is.

His paper is filled with technical jargon. A more readable account is here.

If climate models are indeed inherently structurally unstable, then two simulations of the physical processes of the atmosphere and the oceans, each numerically precise but built with slightly different model structures, will nearly always generate predictions that differ substantially. In that case, it’s unlikely that climate prediction models will come to agree with one another over time.
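A toy sketch of what structural instability means, with no claim to resemble McWilliams’s actual models: take two deterministic maps whose structure differs only by a tiny parameter change (a made-up 0.26% tweak), start them from the identical initial condition, and watch the trajectories diverge.

```python
# Toy illustration of structural instability (not a climate model):
# two logistic maps that differ only in the parameter r,
# iterated from the same starting point.
def trajectory(r, x0=0.5, steps=50):
    """Iterate x -> r*x*(1-x) and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(3.90)   # "model A"
b = trajectory(3.91)   # "model B": a 0.26% change in structure
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest divergence over 50 steps: {gap:.3f}")
```

The two runs share initial conditions exactly; only the model structure differs, yet the paths soon disagree by a large fraction of their whole range. That is the sense in which re-estimating or re-tuning such a model need not bring forecasts into convergence.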

McWilliams cannot prove that climate models are structurally unstable, but he argues in the May 22 Proceedings of the National Academy of Sciences that the evidence points in that direction. “Even though we don’t have a set of all possible reasonable models that we or our children might make,” McWilliams says, “we can begin to see that that set will not converge to an exact answer, and the climate forecasts are not likely to come to significantly greater mutual agreement as we go forward into an era of climate change.”

In the 1970s, macroeconometric models were structurally unstable. That was a polite way of saying that they were falling apart. New data kept landing outside the models’ confidence intervals, so we kept having to re-estimate them. Finally, as people like Chris Sims and Clive Granger thought more carefully about the behavior of economic time series, we began to find that many highly respected macro models were no better at forecasting than a simple random walk with drift, which said that a variable like personal consumption expenditures would grow next year at its average growth rate over the previous 20 years. Many iconoclastic papers were written, my favorite being one by Dick Meese and Ken Rogoff called “Exchange Rate Models: Are They Fit to Survive?” (Unfortunately, the journal editor was chicken-hearted and changed the subtitle to “How Well Do They Fit Out of Sample?”)
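The naive benchmark in that story can be sketched in a few lines. The series below is invented for illustration, not actual consumption data:

```python
# Sketch of the random-walk-with-drift benchmark: forecast next year's
# value as this year's value times the average growth rate over the
# trailing window. The "data" here are a hypothetical index, ~2%/year.
def drift_forecast(series, window=20):
    """Forecast the next value from the average trailing growth rate."""
    growth = [series[i] / series[i - 1] for i in range(1, len(series))]
    recent = growth[-window:]
    avg_growth = sum(recent) / len(recent)
    return series[-1] * avg_growth

pce = [100 * 1.02 ** t for t in range(25)]  # made-up consumption index
print(round(drift_forecast(pce), 2))
```

The embarrassment for the big structural models was that a rule this crude, with no economic content at all, matched or beat their out-of-sample forecasts.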

McWilliams says that the models still offer the best guide for policymakers. Greg Mankiw seems to say the same thing about macroeconometric models. I think instead that the unreliability of the models tells us something important about our ignorance.