Dealing with the Biggies

Now, the challenge with the indications approach we discussed last time is that there is a potentially unlimited number of scenarios. To break out of this, cyber risk analysts must spend significant time contemplating:

  • What is it that we absolutely cannot allow to happen?
  • What cyber risk event would really devastate us?
  • What scenarios has my organization not even considered?

For each of these questions, the analyst must ask, “How hard would it be, and how much would it cost, for the adversary to do that?”

This line of thinking is a fantastic departure from traditional risk management approaches that tell us to rely on concepts like “probability”, “likelihood”, and “frequency”.

Ironically, thinking about the way things have always happened in the past lulls you into the trap of strategic surprise.

Several years ago the US Department of Energy worked with the North American Electric Reliability Corporation (NERC) to produce a report on High Impact Low Frequency (HILF) risk events.

It was a neat effort that brought together an able group of thinkers. But nowhere did it address the issue of adversary cost.

In prioritizing defensive investments for high impact cyber scenarios, we cannot afford to apply historical, model-able, “fit-to-a-curve” approaches. Those models simply do not apply to intelligent adversaries.

Instead, I advocate replacing HILF with HILAR (High Impact Low Adversary Resources). If we don’t, I’m afraid the result won’t be so funny.
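
To make HILAR concrete, here is a minimal sketch of how an analyst might rank scenarios by it. Everything in it is an illustrative assumption on my part: the scenarios, the 1–10 scales, and the simple impact-over-adversary-cost ratio. Treat it as a thinking aid, not a prescribed formula.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    impact: float          # estimated business impact, illustrative 1-10 scale
    adversary_cost: float  # estimated resources the adversary must spend, 1-10

def hilar_priority(s: Scenario) -> float:
    # High Impact, Low Adversary Resources: a big impact that is cheap
    # for the adversary to achieve floats to the top of the list.
    return s.impact / s.adversary_cost

scenarios = [
    Scenario("Ransomware via phished credentials", impact=9, adversary_cost=2),
    Scenario("Supply-chain firmware implant", impact=10, adversary_cost=9),
    Scenario("Defacement of public website", impact=3, adversary_cost=2),
]

for s in sorted(scenarios, key=hilar_priority, reverse=True):
    print(f"{s.name}: priority {hilar_priority(s):.1f}")
```

Notice what the ratio does: the cheap, devastating scenario outranks the exotic, expensive one, even though the exotic one has the higher raw impact. That is the whole point of weighing adversary cost.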

Indication vs. Indicator

For the past few weeks we’ve been discussing cyber risk intelligence. I promised to show you how to answer the burning question:

When will my organization be the victim of a significant cyber incident?

If you’ve been following along, congratulations! We’ve laid the foundation, and are officially arriving at the good stuff.

We start with a point almost entirely lost on cyber security voices today: the difference between an indication and an indicator.

Now, I admit that in common parlance these two terms are used interchangeably, but we are going to insist on their specific meanings.

An indication is a general datapoint that can support a certain conclusion. For example, the compile times in a piece of malware are an indication of the geography in which it was written.

In another example, finding a hash on your computer that matches that of a known remote access Trojan is an indication that you have been compromised.
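
To make that second example tangible, here is a minimal sketch of such a hash check. The hash set is a placeholder; real values would come from a threat intelligence feed.

```python
import hashlib
from pathlib import Path

# Placeholder value only -- real RAT hashes would come from a threat intel feed.
KNOWN_RAT_HASHES = {"0" * 64}

def file_sha256(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path) -> list[Path]:
    """Return files whose hashes match a known RAT: an indication of compromise."""
    return [p for p in directory.rglob("*")
            if p.is_file() and file_sha256(p) in KNOWN_RAT_HASHES]
```

A match here tells you something already happened. Keep that in mind as we turn to indicators, which point forward instead.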

But an indicator is something that you, the security analyst, are looking for before it happens. It is an item on an indicator list.

Indicator list.
Indicator list.
Indicator list.

You see, you get ahead of the threat when you think through what a threat actor could reasonably do to achieve their objective. Please note this differs drastically from the likelihood of an event (more on that next time).

This means that the cyber risk analyst must create a set of scenarios describing, in a detailed way, what specific threat actor types would do to:

  • Establish an attack infrastructure
  • Create a target list
  • Initiate reconnaissance
  • Conduct target systems analysis
  • Examine options that lead to objectives
  • Create weapons
  • Launch campaign
  • Establish a foothold
  • Pivot within the network
  • Maintain presence
  • Take action

Once this detailed list is prepared, the analyst asks, “How could I detect the adversary at each step of the attack?”

These “detections” become your indicator list.
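
Here is a minimal sketch of what that mapping might look like for one scenario. The phases come from the list above; the candidate detections are illustrative guesses of mine, not a vetted catalog.

```python
# One scenario's attack phases mapped to candidate detections.
scenario_detections = {
    "Establish an attack infrastructure": ["newly registered look-alike domains"],
    "Initiate reconnaissance": ["scanning of our external IP ranges"],
    "Create weapons": ["malware samples tailored to our software stack"],
    "Launch campaign": ["spear-phishing emails to named employees"],
    "Establish a foothold": ["beaconing to unfamiliar infrastructure"],
}

# Flatten into the indicator list the analyst will actually watch.
indicator_list = [(phase, d)
                  for phase, detections in scenario_detections.items()
                  for d in detections]

for phase, detection in indicator_list:
    print(f"{phase}: watch for {detection}")
```

The structure matters more than the contents: every phase of every scenario gets at least one entry, even if the entry starts life as a wish rather than a working detection.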

Now, don’t limit yourself by saying, “But I don’t have any way to actually detect the adversary doing these steps.” Instead, start asking yourself how you could get that detection.

At the risk of gross oversimplification: over time, when you see these indicators adding up, you know the adversary is getting closer. You are moving from indicators of interest, to indicators of opportunity, on to indicators of targeting, and finally to confirmations of compromise.
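
Here is a sketch of how those indicators might “add up.” It assumes the analyst has already assigned each indicator to a stage of that ladder; the assignments below are illustrative, and that mapping is where the real analytic work lives.

```python
from collections import Counter

# Escalation ladder from the paragraph above, in order.
STAGES = ["interest", "opportunity", "targeting", "compromise"]

def warning_level(observed: list[str], stage_of: dict[str, str]) -> str:
    """Return the furthest stage with at least one observed indicator."""
    seen = Counter(stage_of[i] for i in observed if i in stage_of)
    furthest = "none"
    for stage in STAGES:
        if seen[stage]:
            furthest = stage
    return furthest

# Illustrative indicator-to-stage assignments.
stage_of = {
    "forum chatter naming our sector": "interest",
    "scanning of our IP ranges": "opportunity",
    "spear-phish to our CFO": "targeting",
    "known RAT hash on a host": "compromise",
}

observed = ["forum chatter naming our sector", "scanning of our IP ranges"]
print(warning_level(observed, stage_of))  # -> "opportunity"
```

In this toy run the adversary is getting closer but has not been confirmed inside, which is exactly the warning posture the ladder is meant to give you time to act on.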

For more information on this approach, see Cynthia Grabo’s Anticipating Surprise: Analysis for Strategic Warning.