Despite the significant uptick in information security events on display thus far in 2011, and despite the diversity and caliber of the organizations being breached, too many organizations seem to be failing to learn the lessons of the victims.
More than that, it appears that when confronted with risks that require assessment, the vast majority of organizations (and their leaders) are unable to make a valid assessment. They either downplay the likelihood of the risk or are paralyzed by the thought of it. Both attitudes tend toward inaction.
Consider how the Japanese downplayed any real risk of danger to their nuclear facilities, even as all indications pointed to something really, really bad in progress. Similarly, consider how the US is downplaying the flood potential at its own Nebraska nuclear facility. In an earlier version of this same article, we were told that the flood waters were not expected to exceed the limits of the berm. Why such nonchalance in the face of a potential catastrophe (if the assessment is wrong)?
Part of the reason we’re so bad at assessing the threats and risks that can impact us is that we are biased toward our own success. We are illogically optimistic about our own chances in the face of adversity, and equally pessimistic about everyone else’s chances in the same situation. When we look at a situation where everyone else has struggled, we rarely take the middle road of “hmmm, let me evaluate what they did, and see how I could do better.” Instead, we tend toward one of the following:
- “Bah, those fools. I’ll never have that problem, because I’m smarter than that.”
- “Woe is me! This task is impossible!”
Neither of these positions offers much in the way of learning, yet they are the ones most often taken.
We’re also pretty bad at assessing cascading risks. That is, we assume that each risk we face has no bearing on any other. So, the Japanese facility could withstand an 8.0 earthquake OR a moderate tsunami, but did anyone consider that an earthquake of 7.0 or greater increases the likelihood of a tsunami, and that the facility could end up facing *both* threats at the same time? It doesn’t seem so.
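The arithmetic behind that mistake is worth making explicit. Here is a minimal sketch, with entirely invented probabilities, comparing the joint likelihood of two hazards when they are treated as independent versus when one hazard raises the probability of the other:

```python
# All numbers below are illustrative assumptions, not real hazard data.

p_quake = 0.01                # assumed annual chance of a major earthquake
p_tsunami = 0.005             # assumed unconditional annual chance of a major tsunami
p_tsunami_given_quake = 0.5   # assumed chance of a tsunami *given* a major quake

# Naive model: treat the two hazards as independent events.
p_both_independent = p_quake * p_tsunami

# Correlated model: the quake makes the tsunami far more likely.
p_both_correlated = p_quake * p_tsunami_given_quake

print(f"independent model: {p_both_independent:.6f}")
print(f"correlated model:  {p_both_correlated:.6f}")
print(f"underestimate factor: {p_both_correlated / p_both_independent:.0f}x")
```

With these made-up inputs, the independence assumption underestimates the chance of facing both threats at once by two orders of magnitude; the exact factor is arbitrary, but the direction of the error is the point.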
Likewise, how often, when assessing a risk, do we account for the worst-case scenario? It’s always “if X fails, we *should* be able to do Y and Z.” Really? What if, for some reason, you can do neither Y nor Z? What then?
In short, we fail to manage risks properly because we assess them unrealistically. We assume the measures we put in place will work forever, and we never reassess them as circumstances change. When people come to us to discuss risk, we get annoyed because it distracts us from the things we really want to be doing, such as “making money” — even when the risk watcher is trying to protect the money we have already made.
We’re not going to get any better at protecting ourselves until we get better at doing the following:
- Not assuming that other people’s problems could never happen to you
- Recognizing that bad things *will* happen eventually, and typically when you are most unprepared for them
- Accepting that in a crisis, it won’t be the best combination of events that occurs, and may well be the worst
- Understanding that facing more than one risk means there’s a reasonable chance that more than one of those bad things will happen together
- Revisiting past risk mitigations to ensure that they are still valid mitigations and that the threat hasn’t worsened or changed in some manner
The sad part is that this problem is not limited to information security or technology, but extends to truly critical issues like space shuttle safety, nuclear safety, anti-terrorism, and so on.
I wonder when we’ll start taking it all seriously enough to be useful.