Jason Collins reviews Tim Harford’s Adapt.
The concept of “normal accidents”, taken from a book of that title by Charles Perrow, is compelling. If a system is complex, things will go wrong. Safety measures that increase complexity can increase the potential for problems. As such, the question changes from “how do we stop accidents?” to “how do we mitigate their damage when they inevitably occur?” This takes us to the concept of decoupling. When applied to the financial system, can financial institutions be decoupled from the broader system so that we can let them fail?
As I like to say, try to make the system easier to fix, not harder to break.
READER COMMENTS
Brian Moore
Feb 17 2012 at 9:23am
I think the problem with adopting this strategy in general is that people rarely get rewarded or recognized for creating a situation where accidents didn’t happen.
For example, there are many parts of our society where simplicity has meant fewer accidents than we might otherwise have had. Very little time is spent praising those responsible for their foresight.
Rick Hull
Feb 17 2012 at 9:46am
Put another way, we should build systems that are resilient to failure. The assumption that “we cannot tolerate failure” can be very dangerous and painful.
Glen S. McGhee
Feb 17 2012 at 10:48am
Chuck Perrow is an early neo-institutionalist. Just so ya’ know what you are getting into.
IVV
Feb 17 2012 at 10:57am
Yeah, I suspect part of the problem is that we, on the whole, enjoy punishing failure a bit too much.
drobviousso
Feb 17 2012 at 12:22pm
I’m an engineer that got into engineering because I think it’s cool to “figure out how things work.” I started reading econ because I realized it’s really a subset of engineering for “things” = “people.”
This post hits on that analogy perfectly. This is the engineering principle of “Design for Repairability” or “Design for Upgradability,” which naturally leads to a lot of decoupling.
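A rough sketch of what I mean in code, with the interface and class names made up purely for illustration: when callers depend only on a small interface, a broken part can be repaired or swapped without touching the rest of the system.

```python
# Illustrative only: names are invented, not from any real system.
from abc import ABC, abstractmethod


class Clearinghouse(ABC):
    """The only thing the rest of the system is allowed to know about."""

    @abstractmethod
    def settle(self, trade_id: str) -> bool:
        ...


class LegacyClearinghouse(Clearinghouse):
    def settle(self, trade_id: str) -> bool:
        # old, battle-tested settlement logic would live here
        return True


class RebuiltClearinghouse(Clearinghouse):
    def settle(self, trade_id: str) -> bool:
        # replacement logic, swapped in after the legacy unit fails
        return True


def run_trading_day(clearer: Clearinghouse, trades: list[str]) -> int:
    # Callers never name a concrete class, so replacing one is a local
    # repair rather than a system-wide rewrite.
    return sum(clearer.settle(t) for t in trades)


print(run_trading_day(RebuiltClearinghouse(), ["T-1", "T-2"]))  # -> 2
```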
MikeP
Feb 17 2012 at 1:09pm
This offers the opportunity to point out one of the most striking examples of normal accidents.
The passenger supplementary oxygen systems that drop down if an airplane depressurizes have never been recorded to have saved a single life or prevented a single injury. But they resulted in the deaths of 110 people in the crash of ValuJet 592 in the Everglades.
It was a collection of errors that resulted in armed oxygen generator canisters being stored with flammable tires in a cargo hold on that airplane. But if only some regulator somewhere, sometime, had said, “You know, pilots have oxygen. When cabins depressurize, pilots dive to breathable air in a minute or two. Perhaps adding the complexity, weight, and innumerable new handling rules of highly dangerous objects to save nobody in any plausible scenario is not a good idea.”
Complexity breeds accidents — accidents that you’ll never see coming.
Jack
Feb 17 2012 at 1:11pm
Along similar lines, Rick Bookstaber’s excellent “A Demon of Our Own Design” discusses these issues in more detail.
Mark Michael
Feb 17 2012 at 2:53pm
Like commenter drobviousso, I too am an engineer and heartily agree with the principle suggested here.
A previous post suggested that macroeconomists should use the professional engineer’s code of conduct/ethics as a guide for their profession rather than, say, a scientist’s. (I agreed with that idea, too!)
As an engineer, I’d toss in a few more practical ideas for “designing” a system. You could look at the cost of a particular component’s failure to the overall system and pay more attention to those that have a higher cost. In avionics where I worked (before I retired), the flight control system was made super-reliable, since a modern fighter (an F-16 & following) is unstable in flight. If the “fly-by-wire” electronics fails, it’s goodbye aircraft. It’ll drop like a stone. So they made the flight control system “quad redundant”! Other components that are nice to have, but little is lost if they fail, keep ’em low-cost.
In the late 1970s, a big argument raged in the AF avionics community about how much maintainability to build into the avionics, things like ability to easily self-diagnose the hardware. That would reduce the complexity of the ground-based maintenance gear, but increase the flight hardware cost, size, weight, power. (Given the rapid advance in electronics, who won that “argument” should be obvious! But it took a little while.) Or careful modularity to make it easier to identify the failed unit, remove & replace – or even isolate in flight & keep flying.
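A toy version of the voting logic behind that kind of redundancy (the channel readings and tolerance below are made up for the sketch): a single bad channel gets outvoted, so the system degrades gracefully instead of dropping like a stone.

```python
# Toy majority-vote across redundant channels; numbers are illustrative only.
from statistics import median


def vote(readings: list[float], tolerance: float = 0.5) -> float:
    """Return the agreed value from redundant channels.

    Channels farther than `tolerance` from the overall median are treated
    as failed and excluded, so one faulty channel is simply outvoted.
    """
    m = median(readings)
    good = [r for r in readings if abs(r - m) <= tolerance]
    if len(good) <= len(readings) // 2:
        raise RuntimeError("majority of channels disagree; no safe output")
    return median(good)


# Four redundant channels, one of them faulty:
print(vote([10.1, 10.0, 42.7, 9.9]))  # -> 10.0; the bad channel is ignored
```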
Dr. Kling’s point of having a banking system that could maybe “fail gracefully,” or fail in less harmful ways, seems like common sense to any engineer.
Think about our central bank situation – our very own Fed. Just list the Fed chiefs and think about their monetary policies: Wm McChesney Martin (’51-’70), Arthur Burns (’70-’78), G. William Miller (’78-’79), Volcker (’79-’87), Greenspan (’87-’06), and now Bernanke (’06-present). Then visualize a plot of inflation over the last 60 years, recognizing that inflation is “purely a monetary phenomenon” (hopefully, most macroeconomists believe that today). Or more importantly, recognize that the Fed as “bank of last resort” has the responsibility to oversee the riskiness of the operations of the banks under its watch – part of the Reserve System. Then recall the banking crises of the 1980s-early 1990s and then 2006-present. (If you believe Taylor’s Rule is a good guide for setting monetary policy, then the toughest task for the Fed might be monitoring the health of the banks in the system.) What kind of a grade should one give the Fed for its most important task?
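(For reference, Taylor’s original 1993 rule, with the coefficients he proposed rather than anything the Fed officially follows, looks roughly like this:)

```python
# Taylor's (1993) rule with his original coefficients; inputs are in
# percentage points. A rough guide, not an official Fed formula.
def taylor_rate(inflation: float, output_gap: float,
                real_rate: float = 2.0, target_inflation: float = 2.0) -> float:
    """Suggested nominal policy rate."""
    return (real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)


# Example: 3% inflation and output 1% above potential suggest a 6% rate.
print(taylor_rate(inflation=3.0, output_gap=1.0))  # -> 6.0
```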
Could a “redesign” of the Fed’s “system” of health monitoring improve things a bit?
Does this “system” meet the idea of a well-designed “aircraft” avionics system? Or even a cheapy automobile?