Indication vs. Indicator

For the past weeks we’ve been discussing cyber risk intelligence. I promised to show you how to answer the burning question:

When will my organization be the victim of a significant cyber incident?

If you’ve been following along, congratulations! We’ve laid the foundation, and are officially arriving at the good stuff.

We start with a point almost entirely lost on cyber security voices today. That is the difference between an indication and an indicator.

Now, I admit that in common parlance these two terms are used interchangeably, but we are going to rehearse their specific meanings.

An indication is a general datapoint that can support a certain conclusion. For example, compile timestamps in a piece of malware are an indication of the geography in which the malware was written.

In another example, finding a hash on your computer that matches that of a known remote access Trojan is an indication that you have been compromised.

But an indicator is something that you, the security analyst, are looking for before it happens. It is an item on an indicator list.

Indicator list.
Indicator list.
Indicator list.

You see, you get ahead of the threat when you think through what a threat actor could reasonably do to achieve their objective. Please note that this differs drastically from the likelihood of an event (more on that next time).

This means that the cyber risk analyst must create a set of scenarios describing, in a detailed way, what specific threat actor types would do to:

  • Establish an attack infrastructure
  • Create a target list
  • Initiate reconnaissance
  • Conduct target systems analysis
  • Examine options that lead to objectives
  • Create weapons
  • Launch the campaign
  • Establish a foothold
  • Pivot within the environment
  • Maintain presence
  • Take action on objectives

Once this detailed list is prepared, the analyst asks, “How could I detect the adversary at each step of the attack?”

These “detections” become your indicator list.
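The exercise above lends itself to simple structured data. A minimal sketch in Python (the phase names follow the list above; the detection ideas are invented examples, not a vetted list):

```python
# Hypothetical sketch: map each attack phase to candidate detections.
# Phase names follow the scenario list above; detections are invented examples.
ATTACK_PHASES = {
    "establish_attack_infrastructure": [
        "new domains registered that mimic our brand",
        "TLS certificates issued for look-alike hostnames",
    ],
    "initiate_reconnaissance": [
        "scanning of our public IP ranges",
        "unusual crawling of staff pages and job postings",
    ],
    "launch_campaign": [
        "phishing emails reported by employees",
    ],
    "establish_foothold": [
        "execution of unsigned binaries on endpoints",
    ],
}

def indicator_list(phases: dict) -> list:
    """Flatten the phase-to-detections map into a single indicator list."""
    return [(phase, d) for phase, detections in phases.items() for d in detections]

for phase, detection in indicator_list(ATTACK_PHASES):
    print(f"{phase}: {detection}")
```

However you store it, the point is the same: the indicator list is derived from the scenario, phase by phase, not copied from a vendor feed.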

Now, don’t limit yourself by saying, “But I have no way to actually detect the adversary at these steps.” Instead, start asking yourself how you could get that detection.

At the risk of gross oversimplification: over time, when you see these indicators adding up, you know the adversary is getting closer. You are moving from indicators of interest, to indicators of opportunity, on to indicators of targeting, and finally to confirmations of compromise.
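One way to track that escalation, assuming you tag each fired indicator with its tier, might look like this (tier names follow the progression above):

```python
# Hypothetical sketch: tier names follow the escalation described above,
# from adversary interest through confirmed compromise.
TIERS = ["interest", "opportunity", "targeting", "compromise"]

def highest_tier(observed: set):
    """Return the most advanced tier reached by the observed indicators."""
    fired = [t for t in TIERS if t in observed]
    return fired[-1] if fired else None
```

For example, `highest_tier({"interest", "targeting"})` reports `"targeting"`: the adversary has moved past mere interest, and you should be acting accordingly.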

For more information on this approach, see Cynthia Grabo’s Anticipating Surprise: Analysis for Strategic Warning.

Thinking like an intelligence analyst

In the previous post, we addressed the importance of being able to parse the indications from the feed.

In order to match indications of external origin against your own operations, you must have baselined your internal operations for the categories that interest you.

Let’s say you receive a report that a suspected Guatemalan state-sponsored actor known as Sunshine Donkey (AKA Daft Burro / AKA Yummy Burrito) wiped hard drives at the Monterrey, Mexico facility of The Salsa Inc., a Mexico-based agri-food business, presumably in retaliation for Mexico’s policy of charging Guatemalan nationals higher rates for scuba diving permits.

The report includes malware file hashes.

The first thing your mega heavy cyber intel provider will have you do is grab the hashes and search your sensors. No hit = no problem. You are safe. Right?
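The hash sweep itself is trivial to sketch; assuming reported SHA-256 hashes and plain filesystem access (a stand-in for whatever your sensors actually expose):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: Path, reported_hashes: set) -> list:
    """Return files under root whose SHA-256 matches a reported IOC hash."""
    return [p for p in sorted(root.rglob("*"))
            if p.is_file() and sha256_of(p) in reported_hashes]
```

But a clean sweep only tells you that the reported hashes were not seen. It says nothing about whether you are in this actor’s sights.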


A forward-looking approach would prompt you to ask the following questions as you attempt to match the external event against your own operations:

  • Do any of the countries where I operate have ongoing scuba permit disputes with Guatemala?
  • How easily could the same attack techniques be used to wipe my hard drives?
  • Do I have any dealings with The Salsa Inc?
  • Do I have any dealings in agri-foods?
  • Do I have any facilities in Monterrey?
  • Do I have any facilities in Mexico?
  • Do I have Mexican suppliers (in this industry)?
  • Do I have Mexican customers (in this industry)?
  • Do I have any dealings in scuba?
  • Do I have Guatemalan suppliers or customers (in this industry)?
  • Do my key employees have plans to scuba dive in Mexico in the near future?

Now we are getting closer to proactive cyber risk intelligence. The key is to prepare to answer these questions before you get the report. More on that next time.
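These questions can be encoded ahead of time as an exposure profile and scored automatically when a report arrives. A hypothetical sketch (all profile dimensions and values are invented):

```python
# Hypothetical sketch: encode an exposure profile up front, then score
# incoming reports against it. Dimension names and values are invented.
MY_PROFILE = {
    "countries": {"mexico", "united states"},
    "industries": {"agri-foods"},
    "partners": {"the salsa inc"},
    "facility_cities": {"monterrey"},
}

def exposure_matches(report: dict, profile: dict) -> list:
    """Return the profile dimensions on which the report overlaps our operations."""
    hits = []
    for dimension, ours in profile.items():
        theirs = {v.lower() for v in report.get(dimension, [])}
        if ours & theirs:
            hits.append(dimension)
    return hits
```

A report tagged with `{"countries": ["Mexico"], "industries": ["Agri-Foods"]}` would light up two dimensions of the profile before anyone searches a single hash.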

Drinking from the indications firehose

When you purchase intelligence feeds, you are generally purchasing the flow emanating from the firehose.

In order to be successful in matching the external environment to your internal situation, you must first be able to parse out or extract the atomic indication (with its relationship data) from the feed.

This means that if you are getting the feed as an email, you have to be able to identify the elements of the email that are relevant to you. Without the ability to parse this out, you will seldom find a match. You can’t rely on your intel guy to read the entire firehose flow, make sense of it, and make good warnings and recommendations.

If you are getting the feed as a JSON stream or an XML document or via API, you need to make sure that the atomic items important to you are readily accessible. If they are not, you will seldom find a match.
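A sketch of that extraction step for a JSON feed (the schema here is invented; real feed formats such as STIX differ, but the goal of pulling out the atomic indication with its relationship data is the same):

```python
import json

def extract_indications(raw: str) -> list:
    """Pull atomic indications, with minimal relationship data, from one feed entry.

    The field names ("actor", "indications", "type", "value") are a made-up
    schema for illustration, not any particular vendor's format.
    """
    entry = json.loads(raw)
    actor = entry.get("actor", "unknown")
    return [
        {"type": i.get("type"), "value": i.get("value"), "actor": actor}
        for i in entry.get("indications", [])
    ]
```

Each extracted item is now an atomic unit you can match against your own baselines, rather than a paragraph buried in a report.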

Finally, if you are only looking for IOCs (ignoring IOT, IOI, and IOO), you are only worrying about what has already happened. That is important, but it is not the value you really want from your intelligence analyst.

On Indications

The piece of information or intelligence that you get from an intelligence provider is known as an “indication”. For good and ill, these little guys have become a buzzword over the past 7 or so years. Pretty much every security practitioner today is familiar with the concept of “IOC” or “indication of compromise”.

Unfortunately the industry has been quite distracted and parochial about the broader concept of indications.

IOCs generally fall under what I call the “technical data” category. These are IP addresses, file hashes, email senders and subject lines, etc. Supposedly you can put these externally provided items into your internal sensors to see whether you were hit by the same adversaries.

IOCs are essentially reactive — that is, they are backwards looking. Ideally they were learned or observed as part of an attack that occurred somewhere else.

To be successful at cyber risk intelligence over the long term you need to expand beyond indications of compromise (IOCs) to also consider (or maybe even prioritize):

  • Indications of targeting (IOT)
  • Indications of adversary interest (IOI)
  • Indications of adversary opportunity (IOO)

These types of indications can be extracted from the intelligence feed types noted previously:

  • Technical data
  • TTPs
  • Assessment and Estimation
  • Vulnerability

The next post will discuss the paramount importance of indication extraction.

Striking the match with cyber threat intelligence

Our last couple of posts introduced four types of intelligence product/reporting: Technical Data, TTPs, Assessment and Estimation, and Vulnerability.

Intelligence or information in these categories is available from a variety of sources, including paid intelligence providers. Intelligence practitioners call incoming sources of information or intelligence “feeds”. But until you know what to do with them, you will waste vast amounts of money, time, and energy.

So here’s the secret: when reviewing feeds, analysts look for matches between the external world and their internal systems across all four categories.

You will note that these risk intelligence types roughly correspond to activities at the operational, tactical, and strategic levels. In many cases, this also corresponds to a different security role or user within an organization. For example:

  1. Threat Hunters match technical data (such as attacker-controlled domain names) with data in internal sensor networks to identify compromises that have already occurred.
  2. Change Management Teams match TTP information and vulnerability disclosure information (such as an understanding of vulnerabilities exploited in attacks against other organizations) with software operated internally to prioritize patching or other mitigations.
  3. CISOs and CIOs match assessments and estimations of adversary capability (such as those drawn from long term planning documents and military doctrine of non-friendly nation-states) with their own operational geographies and industries.
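That role mapping can be made explicit so incoming intelligence is routed automatically; a hypothetical sketch (category and role names are invented labels for the examples above):

```python
# Hypothetical sketch: route each intelligence category to the role that
# typically consumes it, per the examples above. Labels are invented.
ROUTING = {
    "technical_data": "threat_hunters",
    "ttps": "change_management",
    "vulnerability": "change_management",
    "assessment_estimation": "ciso_cio",
}

def route(category: str) -> str:
    """Return the consuming role for a category, defaulting to analyst triage."""
    return ROUTING.get(category, "intel_analyst_triage")
```

Even a table this simple forces the useful question: for each category you pay for, who inside your organization actually acts on it?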