The piece of information or intelligence that you get from an intelligence provider is known as an “indication”. For good and ill, these little guys have become a buzzword over the past 7 or so years. Pretty much every security practitioner today is familiar with the concept of “IOC” or “indication of compromise”.
Unfortunately the industry has been quite distracted and parochial about the broader concept of indications.
IOCs generally fall under what I call the “technical data” category. These are IP addresses, file hashes, email senders and subject lines, etc. Supposedly you can put these externally provided items into your internal sensors to see whether you were hit by the same adversaries.
IOCs are essentially reactive — that is, they are backwards looking. Ideally they were learned or observed as part of an attack that occurred somewhere else.
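To make the "put these externally provided items into your internal sensors" idea concrete, here is a minimal sketch of IOC matching. The feed structure, field names, and sample values are all invented for illustration; they do not reflect any particular vendor's schema.

```python
# Hypothetical sketch: matching externally provided IOCs against internal log events.
# Feed format, log fields, and all values are assumptions for illustration only.

IOC_FEED = {
    "ips": {"203.0.113.7", "198.51.100.23"},             # known attacker infrastructure
    "file_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},  # known malicious files
    "email_senders": {"billing@invoice-update.example"},  # known phishing senders
}

def match_iocs(log_events, feed):
    """Return the log events that contain any known indicator."""
    hits = []
    for event in log_events:
        if (event.get("src_ip") in feed["ips"]
                or event.get("file_hash") in feed["file_hashes"]
                or event.get("sender") in feed["email_senders"]):
            hits.append(event)
    return hits

events = [
    {"src_ip": "203.0.113.7", "host": "web01"},
    {"src_ip": "192.0.2.10", "host": "web02"},
]
print(match_iocs(events, IOC_FEED))  # only the web01 event matches
```

Note that this kind of matching can only ever confirm what already happened somewhere, which is exactly the reactive quality described above.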
To be successful at cyber risk intelligence over the long term you need to expand beyond indications of compromise (IOCs) to also consider (or maybe even prioritize):
- Indications of targeting (IOT)
- Indications of adversary interest (IOI)
- Indications of adversary opportunity (IOO)
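One way to picture the expanded taxonomy is as a simple tagging scheme applied to incoming items. The class names and example summaries below are hypothetical, invented purely to illustrate how each indication type differs from a classic IOC.

```python
# Hypothetical sketch of an indication taxonomy beyond IOCs.
# The summaries are invented examples, not real reporting.
from enum import Enum
from dataclasses import dataclass

class IndicationType(Enum):
    IOC = "indication of compromise"
    IOT = "indication of targeting"
    IOI = "indication of adversary interest"
    IOO = "indication of adversary opportunity"

@dataclass
class Indication:
    itype: IndicationType
    summary: str
    source_category: str  # which feed category it was extracted from

examples = [
    Indication(IndicationType.IOC, "known C2 IP observed in proxy logs", "technical data"),
    Indication(IndicationType.IOT, "lookalike domain mimicking our brand registered", "technical data"),
    Indication(IndicationType.IOI, "adversary planning documents name our industry", "assessment and estimation"),
    Indication(IndicationType.IOO, "internet-exposed system left unpatched during a transition", "assessment and estimation"),
]
```

The point of the sketch is the shift in emphasis: only the first example looks backwards at a compromise; the other three look forward at targeting, interest, and opportunity.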
These types of indications can be extracted from the intelligence feed types noted previously:
- Technical data
- Assessment and Estimation
The next post will discuss the paramount importance of indication extraction.
Our last couple of posts introduced four types of intelligence product/reporting: Technical Data, TTPs, Assessment and Estimation, and Vulnerability Discovery and Disclosure.
Intelligence or information in these categories is available from a variety of sources, including paid intelligence providers. Intelligence practitioners call incoming sources of information or intelligence “feeds”. But until you know what to do with them, you will waste vast amounts of money, time, and energy.
So here’s the secret: when reviewing feeds, analysts seek matches between the external world and their internal systems across all four categories.
You will note that each risk intelligence type roughly corresponds to activities that can be considered operational, tactical, or strategic. In many cases, this also corresponds to a different security role or user within an organization. For example:
- Threat Hunters match technical data (such as attacker-controlled domain names) with data in internal sensor networks to identify compromises that have already occurred.
- Change Management Teams match TTP information and vulnerability disclosure information (such as an understanding of vulnerabilities exploited in attacks against other organizations) with software operated internally to prioritize patching or other mitigations.
- CISOs and CIOs match assessments and estimations of adversary capability (such as those drawn from long term planning documents and military doctrine of non-friendly nation-states) with their own operational geographies and industries.
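The role examples above can be sketched as a simple routing table that sends each feed category to the team best placed to act on it. The category labels and role names are assumptions drawn from the examples in this post, not a prescribed org chart.

```python
# Hypothetical routing of feed categories to the internal roles that consume them.
# Category keys and role names are illustrative assumptions.
ROUTING = {
    "technical_data": "threat_hunters",
    "ttps": "change_management",
    "vulnerability_disclosure": "change_management",
    "assessment_and_estimation": "ciso_cio",
}

def route(feed_item):
    """Return the role that should review a feed item, based on its category."""
    return ROUTING.get(feed_item["category"], "triage_queue")

print(route({"category": "technical_data"}))  # threat_hunters
```

A mapping like this is what keeps feed spending from being wasted: every incoming item has a defined consumer and a defined matching task.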
Our previous posts established the groundwork for understanding how cyber risk intelligence allows organizations to answer the question “When will my organization be the victim of a significant cyber incident?”
When we last left off, we agreed to discuss four ways cyber risk intelligence analysts could match external or “threat” developments with the internal systems they desire to protect:
- Technical Data
- Tools, Techniques, and Procedures (TTPs)
- Assessment and Estimation (A&E)
- Vulnerability Discovery and Disclosure Data
To give you an idea of how each of these contributes to our objective of anticipating cyber incidents, I have labeled the first three of them on the X axis of the Boom Chart.
You will notice that Technical Data is primarily reactive. It is generally gleaned from incident investigation.
TTPs are also learned from previous attacks, but carry forward due to the insight they provide about how an adversary operates.
Assessment & Estimation is forward looking based on a broad variety of factors that extend beyond bits and bytes level analysis.
The following image of the mind map (discussed previously) is color coded to indicate which elements fall under each category.
While this mind map is somewhat notional rather than complete and detailed: brown represents technical data, yellow represents TTPs, and beige represents assessment and estimation.
The cyber risk intelligence analyst can use several techniques to place events and impacts accurately on the boom chart.
He must start with a foundational understanding of cyber event elements, many of which appear in the following mind map:
An intelligence analyst learns all he can about these items. He is fascinated by context, and terrified by ignorance. He explores relationships and advances hypotheses. He builds on his knowledge and previous estimations. A good analyst knows and readily applies a broad range of analytical techniques, asking question after question — carefully documenting his results — building histories and frameworks.
When trying to anticipate the future, the cyber risk analyst applies all his efforts to match what he knows about the external environment with what he knows about the internal environment he must protect.
We can divide intelligence product types useful in this “matching” effort into four broad categories:
- Technical Data
- Adversary TTPs
- Assessment and Estimation
- Vulnerability Discovery and Disclosure
In the following posts we will look at each of these in greater detail.
When we left off, you were just beginning to wonder “When will my organization be the victim of a significant cyber incident?”
And I told you I would show you how cyber risk intelligence could help us get there. So here goes.
It is the job of the cyber risk intelligence analyst to place all cyber events affecting the organization he serves on the Boom Chart:
The Boom Chart is a conceptual tool the analyst uses to estimate when things go “boom”, and how big the boom will be. The Y axis displays “Impact”, the X axis displays “Time”.
T sub naught (t0), shown on the X axis, represents the present. The cyber intelligence analyst always deals in the notion of time. He must cover both events that have already affected the organization (shown to the left of t0), and events that may affect the organization in the future (shown to the right of t0).
Intelligence analysts often do not learn about events that have impacted their organization until after the event has occurred. The “dwell time” statistic made famous by Mandiant’s annual “M-Trends” reports illustrates this concept nicely (see Mandiant metrics white paper for more detailed discussion about dwell time and its components). We all kind of naturally find ourselves wanting to “get that dwell time down”.
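Dwell time itself is a simple measurement: the elapsed time between initial compromise and its discovery. The dates below are invented for illustration.

```python
# Dwell time = days between initial compromise and its discovery.
# Both dates are illustrative assumptions.
from datetime import date

compromised = date(2023, 6, 1)
discovered = date(2023, 9, 15)

dwell_time_days = (discovered - compromised).days
print(dwell_time_days)  # 106
```

Driving this number toward zero is the conventional goal; the argument below pushes further, toward anticipating the event before the clock even starts.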
Indeed, a cyber intelligence analyst provides the most value to his organization when leadership trusts him to deliver an accurate appraisal of events that will occur in the future — eliminating dwell time altogether. While important caveats exist, logic dictates that event impacts can be mitigated or diminished less expensively and more effectively before the event occurs than they can afterwards.
Next time, we will discuss some specific ways an analyst goes about this important task.