Now that my son is 10 it seems that we always wind up talking about cars when we’re driving somewhere. It’s classic father-son bonding, enhanced by shows like Top Gear when we’re not in the car. “They were talking about the new computer-controlled suspensions,” he told me, “But they didn’t like them because when they lose control it happens suddenly, and they preferred to use their own skill as drivers on a manual suspension that gives way slowly.”
Several points came to me quickly. One is that George is definitely just like his Dad on this stuff. The other is that he was talking about something that comes up an awful lot lately – and not just in cars or other engineering design. We live in a world where we’ve learned to control just about everything that fits into our pre-designed limits – and then, like a 10-year-old boy, the world seems to race out to test those limits to see what happens. I don’t even know if there is a good term for this phenom. I’ll call it a “boundary failure”.
The examples of boundary failure are all over our news today. A nuclear reactor, with all its sophisticated controls, chugs along for years with no major incident until there is an earthquake. People live in apparent peace under dictatorships until one day they hit the tipping point and take to the streets. Currencies notoriously trade in narrow ranges until suddenly they jump to a new range, sometimes for no obvious reason. The big economic collapse in 2008 snuck under the radar of many people until one day it was a crisis.
Many of these are examples of “Bounded Chaos” where something moves along apparently at random in ways that are predictable only in that they don’t go over a line of some kind. Most are more like an earthquake itself, where stress builds up to the point where the systems in place can’t handle the strain and it all gives way suddenly – in a way only predictable to those who have studied the boundary failure or at least watch the stress as closely as the movement.
This phenom is the opposite of the more gentle cycles of nature that move up and down in predictable patterns, such as the cycle of the seasons. If you live for a few years on the planet you can at least feel in your guts how the change is the greatest when you cross the midpoint, watching the temperature rise as the days get longer in spring. In the long run it’s all very predictable.
Not so with many man-made systems. They are only as good as our ability to understand the limits – and we’ve gotten so smart that we often think we have the limits nailed as long as things keep on keepin’ on.
One of the many things I learned at Carnegie Mellon as a budding young engineer was “Fault Tree Analysis”, a discipline where you carefully look at how things can fail. It usually comes up in the context of safety: understanding how a reactor can get out of control and the backup systems that are necessary to handle the most urgent circumstances. Imagine, for example, if the Daiichi plant had had many backup generators or a big water tower from which water could flow downhill without pumps in a worst-case scenario. It might look like a big waste of money when things are going well, but a lot of people wish things like this had been in place when it all went bad.
Away from engineered systems we might think of these kinds of things as “insurance”. Our Federal government is the insurer of last resort in a boundary failure, which is why they stepped in when everything stopped and the financial world was left to stand around anxiously staring at each other. The system we had was supposedly insured through a complex system of derivatives and other financial instruments, but these only made the risk appear smaller to the people operating in that world. They didn’t realize, or care, that all they did was socialize the risk to the point where everyone had a share of it whether they knew it or not.
It’s a lot like a driver in a very good car that gradually becomes bold enough to drive it at the very limit. Eventually, they’ll go over the limits. What does the car do then? Does the driver even know that they are right at the edge before it’s too late?
This kind of boundary failure is going to be a bigger part of our world until we recognize that the worst-case scenario is going to happen eventually. Some events may seem so rare that they are as likely as winning the lottery and not worth planning for – yet every few weeks, someone somewhere does indeed win the lottery. Over the long haul any complex system built up by experts in the field is only going to be as good as how well the people in that system understand the boundaries – and the boundary failure mode.
It’s always the assumptions that go wrong. That’s not exactly what you’re saying here but I think it’s a lot of it. Excellent post and a great point as always.
Dale, that’s surely a lot of it! I was thinking about breaking down these kinds of failures into different classes, but I’m not going to pretend I have a comprehensive list. Failure of assumptions is probably one class, buildup of stress might be another, and random/near-random change in boundaries might be another. There may be more.
What I’m much more concerned about here is how, as we have more “appliance users” of technology, they develop a confidence in their various systems that is not all that well deserved. It’s what James Burke talked about in Connections as the dark side of the “trigger effect”, where technology spawns new technology and constantly increases specialization.
I don’t know if this is the same thing at all but this sounds to me like part of the ‘incentive problem’ in public policy. If you encourage something you get more of it – but what happens when you get too much or things change? It seems to me that this is when we run into these kinds of problems at least in economy and politics.
Anna, I think that’s brilliant! It’s not exactly the same thing, but they are related in many ways. A government or any other system that encourages “appliance users” either by design or just because it’s damned good at what it does usually winds up encouraging everyone to test the boundaries. In the financial world I think everyone’s hunt for an “edge” over everyone else will always propel this – so the success in (apparently) eliminating risk only made risky behavior more palatable – and the crash that came when the boundaries of that system’s ability to keep doing its thang were crossed became inevitable.
Or we could put the generators above surge level so they wouldn’t get swamped. Or not store excess disposed fuel so near to the reactor (you mentioned this earlier). Hey, what do you think of the show Electric Dreams on PBS?
Dan: Yes, you basically have to assume that everything might fail one day. That can be a lot to ask from people who are “into” setting up elegant systems of some kind or another. It’s really easy to say, “Yes, but the odds of the electricity from the outside going down AND the diesel backup going down at the same time are really minuscule”. And they are.
But a very tiny chance of a really huge disaster is one of those real problem areas. Economically speaking, a one in a billion chance of a billion dollar disaster is worth, what, about one buck, right? But what if you’re off a bit and it’s more like a one in a million chance of a 100 billion dollar disaster? Oopsie.
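That back-of-envelope arithmetic (expected cost = probability × magnitude) can be sketched in a few lines of code. This is just a minimal illustration of the comment’s point; the dollar figures are the hypotheticals from the comment above, not real risk estimates.

```python
def expected_cost(probability: float, cost: float) -> float:
    """Expected value of a loss: probability of the event times its cost."""
    return probability * cost

# One in a billion chance of a billion-dollar disaster:
# expected cost is about one dollar.
small_risk = expected_cost(1e-9, 1e9)

# But if you're "off a bit" -- one in a million chance of a
# $100 billion disaster -- the expected cost is about $100,000.
big_risk = expected_cost(1e-6, 100e9)

print(f"Original estimate:  ${small_risk:,.0f}")
print(f"Slightly-off case:  ${big_risk:,.0f}")
```

The point of the sketch: a small error in either the probability or the magnitude multiplies through, so the “worth about a buck” intuition can be off by five orders of magnitude.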
Haven’t seen Electric Dreams, I’ll have to watch for it. Haven’t seen a lot of PBS lately for some reason, now that I think of it.