Wrap up on cyber risk intelligence process

We’ve done a great series of posts explaining the cyber risk intelligence process. I wanted to take a moment and put it all together in summary form:

  1. Take the time to identify your assets. The more you document, the better off you will be.
  2. Create a list of scenarios — the things you absolutely cannot allow to happen.
  3. Identify the actions an adversary would take as it moved towards executing an attack against you. We call this an indicator list.
  4. Do all you can to understand what is going on in the external threat environment.
  5. Match your feeds from the external threat environment against your internal list of assets.
  6. Check the “matches” you observe against your indicator list. Monitor for “indicator progression”.
  7. Warn your boss when the evidence is beginning to mount — but while you still have time to mitigate.
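The matching steps (5 and 6) can be sketched in a few lines of Python. All of the asset names, feed fields, and stages below are hypothetical illustrations, not any particular product's format:

```python
# Step 1: documented internal assets (hypothetical examples).
assets = {"vpn.example.com", "hr-db-01", "payroll-app"}

# Step 3: the indicator list -- adversary actions we watch for, in order.
indicator_list = ["reconnaissance", "weaponization", "delivery"]

def match_feed(feed_items, assets):
    """Step 5: keep only the feed items that mention one of our assets."""
    return [item for item in feed_items if item["target"] in assets]

def progression(matches, indicator_list):
    """Step 6: which stages of the indicator list have we observed so far?"""
    seen = {m["stage"] for m in matches}
    return [stage for stage in indicator_list if stage in seen]

feed = [
    {"target": "vpn.example.com", "stage": "reconnaissance"},
    {"target": "other-company.com", "stage": "delivery"},
]
matches = match_feed(feed, assets)
print(progression(matches, indicator_list))  # ['reconnaissance']
```

The point of the sketch is the shape of the loop, not the data: external flow in, asset match, indicator-list check, progression out.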

Dealing with the Biggies

Now, the challenge with the indications approach we discussed last time is that there is a potentially unlimited number of scenarios. To break out of this, cyber risk analysts must spend significant time contemplating:

  • What is it that we absolutely cannot allow to happen?
  • What cyber risk event would really devastate us?
  • What scenarios has my organization not even considered?

For each of these questions, the analyst must ask, “How hard would it be, and how much would it cost, for the adversary to do that?”

This line of thinking is a fantastic departure from traditional risk management approaches that tell us to rely on concepts like “probability”, “likelihood”, and “frequency”.

Ironically, thinking about the way things have always happened in the past lulls you into the trap of strategic surprise.

Several years ago the US Department of Energy worked with the North American Electric Reliability Corporation to produce a report on High Impact Low Frequency (HILF) risk events.

It was a neat effort that brought together an able group of thinkers. But nowhere did it address the issue of adversary cost.

In prioritizing defensive investments for high impact cyber scenarios, we cannot afford to apply historical, model-able, “fit-to-a-curve” approaches. Those models simply do not apply to intelligent adversaries.

Instead, I advocate replacing HILF with HILAR (High Impact Low Adversary Resources). If we don’t, I’m afraid the result won’t be so funny.

Indication vs. Indicator

For the past weeks we’ve been discussing cyber risk intelligence. I promised to show you how to answer the burning question:

When will my organization be the victim of a significant cyber incident?

If you’ve been following along, congratulations! We’ve laid the foundation, and are officially arriving at the good stuff.

We start with a point almost entirely lost on cyber security voices today: the difference between an indication and an indicator.

Now, I admit that in common parlance these two terms are used interchangeably, but we are going to rehearse their specific meanings.

An indication is a general datapoint that can support a certain conclusion. For example, the compile times in a piece of malware are an indication of the geography in which it was written.

In another example, finding a hash on your computer that matches that of a known remote access Trojan is an indication that you have been compromised.
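As a sketch, that hash check is just computing a file's digest and testing membership in a known-bad set. The hash below is a made-up placeholder, not a real sample:

```python
import hashlib

# Hypothetical placeholder -- a real set would come from your intel feed.
known_rat_hashes = {"deadbeef" * 8}

def sha256_of(path):
    """Compute a file's SHA-256 digest, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_indication(path):
    """A match against a known RAT hash is an indication of compromise."""
    return sha256_of(path) in known_rat_hashes
```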

But an indicator is something that you, the security analyst, are looking for before it happens. It is an item on an indicator list.

Indicator list.
Indicator list.
Indicator list.

You see, you get ahead of the threat when you think through what a threat actor could reasonably do to achieve their objective. Please note this differs drastically from the likelihood of an event (more on that next time).

This means that the cyber risk analyst must create a set of scenarios describing, in a detailed way, what specific threat actor types would do to:

  • Establish an attack infrastructure
  • Create a target list
  • Initiate reconnaissance
  • Conduct target systems analysis
  • Examine options that lead to objectives
  • Create weapons
  • Launch campaign
  • Foothold
  • Pivot
  • Maintain presence
  • Take action

Once this detailed list is prepared, the analyst asks, “How could I detect the adversary at each step of the attack?”

These “detections” become your indicator list.
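The construction can be sketched as a simple mapping from attack step to candidate detections. Every detection below is a hypothetical example, not a prescription:

```python
# For each attack step in the scenario, ask: "How could I detect the
# adversary here?" The answers, collected, become the indicator list.
scenario_detections = {
    "Initiate reconnaissance": ["scanning spike against our public IP range"],
    "Create weapons": ["new samples tailored to software we run"],
    "Launch campaign": ["phishing that spoofs our suppliers"],
    "Foothold": ["beaconing to newly registered domains"],
}

indicator_list = [d for detections in scenario_detections.values()
                  for d in detections]
```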

Now, don’t limit yourself by saying “but I don’t have any way I could actually detect the adversary doing these steps”. Instead, you must start asking yourself how you could get that detection.

At the risk of gross oversimplification: over time, when you see these indicators adding up, you know the adversary is getting closer. You are moving from indicators of interest, to indicators of opportunity, on to indicators of targeting, and finally to confirmations of compromise.
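One way to sketch that progression, using stage names taken from the paragraph above (the escalation logic is illustrative, not doctrine):

```python
# Escalation ladder: interest -> opportunity -> targeting -> confirmation.
STAGES = ["interest", "opportunity", "targeting", "confirmed compromise"]

def warning_level(observed):
    """Return the most advanced stage for which an indicator was observed."""
    level = "none"
    for stage in STAGES:
        if stage in observed:
            level = stage
    return level
```

As matches accumulate, `warning_level` moves up the ladder, which is what tells you the adversary is getting closer.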

For more information on this approach, see Cynthia Grabo’s work on Anticipating Surprise.

Thinking like an intelligence analyst

In the previous post, we addressed the importance of being able to parse the indications from the feed.

In order to match indications from the external environment, you must have baselined your internal operations for the categories that interest you.

Let’s say you receive a report that a suspected Guatemalan state-sponsored actor known as Sunshine Donkey (AKA Daft Burro / AKA Yummy Burrito) wiped hard drives at the Monterrey, Mexico facility of The Salsa Inc., a Mexico-based agri-food business, presumably in retaliation for Mexico’s policy to charge Guatemalan nationals higher rates for scuba diving permits.

The report includes malware file hashes.

The first thing your mega heavy cyber intel provider will have you do is grab the hashes and search your sensors. No hit = no problem. You are safe. Right?

Wrong.

A forward-looking approach would prompt you to ask the following questions as you attempt to match the external event against your own operations:

  • Do any of the countries where I operate have ongoing scuba permit disputes with Guatemala?
  • How easily could the same attack techniques be used to wipe my hard drives?
  • Do I have any dealings with The Salsa Inc.?
  • Do I have any dealings in agri-foods?
  • Do I have any facilities in Monterrey?
  • Do I have any facilities in Mexico?
  • Do I have Mexican suppliers (in this industry)?
  • Do I have Mexican customers (in this industry)?
  • Do I have any dealings in scuba?
  • Do I have Guatemalan suppliers or customers (in this industry)?
  • Do my key employees have plans to scuba dive in Mexico in the near future?

Now we are getting closer to proactive cyber risk intelligence. The key is to prepare to answer these questions before you get the report. More on that next time.
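Those questions can be sketched as membership checks against a pre-built baseline of your own operations. Every field and value below is hypothetical:

```python
# Baseline prepared in advance -- before any report arrives.
baseline = {
    "countries": {"Mexico", "United States"},
    "industries": {"agri-foods"},
    "suppliers": {"The Salsa Inc."},
}

# Attributes parsed out of the external report.
event = {"country": "Mexico", "industry": "agri-foods",
         "victim": "The Salsa Inc."}

# Each question from the list becomes a lookup against the baseline.
relevance = {
    "facility in the victim's country": event["country"] in baseline["countries"],
    "same industry": event["industry"] in baseline["industries"],
    "victim is one of our suppliers": event["victim"] in baseline["suppliers"],
}
hits = [question for question, answer in relevance.items() if answer]
```

The baseline is the expensive part; the lookups are trivial once it exists, which is exactly why the preparation has to happen first.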

Drinking from the indications firehose

When you purchase intelligence feeds, you are generally purchasing flow emanating from the firehose.

In order to be successful in matching the external environment to your internal situation, you must first be able to parse out or extract the atomic indication (with its relationship data) from the feed.

This means that if you are getting the feed as an email, you have to be able to identify the elements of the email that are relevant to you. Without the ability to parse this out, you will seldom find a match. You can’t rely on your intel guy to read the entire firehose flow, make sense of it, and make good warnings and recommendations.

If you are getting the feed as a JSON stream or an XML document or via API, you need to make sure that the atomic items important to you are readily accessible. If they are not, you will seldom find a match.
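As a sketch, extracting the atomic indications (with their relationship data) from a JSON feed might look like this. The feed schema and values are hypothetical:

```python
import json

# Hypothetical feed excerpt; hashes are placeholders.
raw_feed = """
{"reports": [{"actor": "Sunshine Donkey",
              "victim_country": "Mexico",
              "hashes": ["0123abcd", "4567ef89"]}]}
"""

def extract_indications(raw):
    """Pull each hash out as an atomic item, keeping its relationships."""
    feed = json.loads(raw)
    indications = []
    for report in feed["reports"]:
        for file_hash in report["hashes"]:
            indications.append({
                "hash": file_hash,
                "actor": report["actor"],
                "victim_country": report["victim_country"],
            })
    return indications
```

Once the items are atomic, the matching described earlier becomes a set lookup rather than a reading assignment.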

Finally, if you are only looking for IOCs (ignoring indicators of targeting, interest, and opportunity), you are only worrying about what has already happened. That is important, but it is not the value you really want to get from your intelligence analyst.