Five reasons ICS security fragility hasn’t mattered — yet

Brian Contos on the Norse security blog offered “Five Reasons ICS-SCADA Security is Fragile”. I especially liked seeing a blog sponsored by a threat intelligence company include a brief discussion of “zones”, an OT security concept unfamiliar to many IT security practitioners.

The post prompted me to ask, “If ICS security is so fragile, then why are known incidents so infrequent?” and “Why aren’t bad things happening to our infrastructure every day?”

I am not the first to ask these questions. And, while I don’t pretend to have all the answers, I’m offering an alternative to Mr. Contos’ perspective.

Five reasons ICS fragility hasn’t mattered — yet.

1. Product is still being produced. At the end of the day, some process outages are tolerated. Until we see a spike in outages that we can positively trace to cyber attacks, there isn’t much reason for concern from a business perspective.

2. Ignorance is bliss. You can’t catch what you can’t observe. Just don’t install any security sensors on your ICS networks, and you are good to go!

3. The “air gaps” separating IT and OT networks at large ICS installations do provide some security. OK, I’m playing devil’s advocate to the customary “air gaps don’t exist”; but look, some gap, some segmentation, is better than none. Very few, if any, large ICS installations are connected directly to the Internet.

4. Process engineering can defeat cyber attacks. I’m not implying that ICS are built to stop intentional attacks, but where dangerous conditions can occur, physical-world engineering helps avoid those conditions.

5. Successful attacks against ICS for a specific, premeditated physical consequence require cross-domain expertise. Your average script kiddie might brick your Modicon PLC using the Modbus function code for firmware upload, but he probably can’t predict what that will do to the process relying on that PLC.
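To make the script-kiddie half of that point concrete: Modbus/TCP carries no authentication at all, so crafting a valid request frame takes a dozen lines of code. Below is a minimal sketch, not a working exploit; the host address is a placeholder, and a benign read-holding-registers request stands in for the vendor-specific firmware-upload function code, which varies by product and is deliberately not reproduced here.

```python
# Minimal sketch: speaking raw Modbus/TCP. Note the complete absence of
# authentication anywhere in the exchange. Host and register addresses
# are placeholders.
import socket
import struct

def modbus_request(host, unit_id, function_code, data=b"", port=502):
    """Send a single Modbus/TCP request and return the raw response."""
    pdu = struct.pack("B", function_code) + data
    # MBAP header: transaction id, protocol id (0 = Modbus), remaining
    # byte count (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(mbap + pdu)
        return s.recv(1024)

# Read 10 holding registers starting at address 0 (function code 0x03):
resp = modbus_request("192.0.2.10", unit_id=1, function_code=0x03,
                      data=struct.pack(">HH", 0, 10))
print(resp.hex())
```

Sending the bytes is the easy part. Knowing what a given register write or firmware operation will do to the physical process behind the PLC is precisely the cross-domain expertise most attackers lack.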

DNI Clapper’s threat assessment: Something sensible, something exaggerated

I read DNI Clapper’s assessment of the cyber threat, as briefed to the Senate Armed Services Committee.


I was pleased to see this statement (on page 1):

Overall, the unclassified information and communication technology (ICT) networks that support US Government, military, commercial, and social activities remain vulnerable to espionage and/or disruption. However, the likelihood of a catastrophic attack from any particular actor is remote at this time. Rather than a “Cyber Armageddon” scenario that debilitates the entire US infrastructure, we envision something different. We foresee an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on US economic competitiveness and national security.

Wow, I thought a U.S. intelligence leader would never learn to temper cyber alarmism.

But then I read this (on page 7):

Computer security studies assert that unspecified Russian cyber actors are developing means to access industrial control systems (ICS) remotely. These systems manage critical infrastructures such as electric power grids, urban mass-transit systems, air-traffic control, and oil and gas distribution networks. These unspecified Russian actors have successfully compromised the product supply chains of three ICS vendors so that customers download exploitative malware directly from the vendors’ websites along with routine software updates, according to private sector cyber security experts.

That sounds like a direct reference to the Havex/Crouching Yeti/Dragonfly malware. Several phrases in there seem overblown:

  • “These systems manage critical infrastructure…”

Yes, ICS manage critical infrastructure, but the placement of the statement makes it seem as though actual “electric power grids, urban mass-transit systems, air-traffic control and oil and gas distribution networks” were infiltrated in this case.

Now, there is some open-source evidence that an electric utility and an oil and gas firm in Norway had Havex on their networks, but it is not clear that it was on their ICS networks. There are probably Havex infections in the United States, but were those infections on ICS in all of those sectors?

  • “have successfully compromised the product supply chains of three ICS vendors…”

Yes, ICS vendor supply chains have been compromised in the past. But in the case of Havex/Crouching Yeti/Dragonfly, the “supply chain” happened to be web pages: not nearly as exciting as it sounds, but a clever move by the attackers nonetheless.

The attackers wrapped the original installers with their own installers. If you 1) are downloading files from the public Interwebs for use in “critical infrastructure”, 2) aren’t verifying file integrity, and 3) aren’t enforcing “run only signed code”, then who knows what could happen to your ICS networks, even from run-of-the-mill malware.
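For perspective, the integrity check that blunts this particular technique is only a few lines. Here is a minimal sketch, assuming the vendor publishes a SHA-256 digest out of band; the digest value and filename below are placeholders.

```python
# Minimal sketch: verify a downloaded installer against a digest published
# out of band by the vendor, before it ever touches an ICS network.
# The expected digest below is a placeholder, not a real vendor value.
import hashlib
import sys

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large installers don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of("vendor_installer.exe")
if actual != EXPECTED_SHA256:
    sys.exit(f"Integrity check FAILED: got {actual}")
print("Installer digest matches the vendor-published value.")
```

A wrapped installer, as in the Havex case, produces a different digest, so even this crude check would have flagged the trojanized download before it reached an ICS network.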

Moreover, the ICS vendors whose websites were compromised were relatively small players. Joel Langill said the attacks could be targeted at the pharmaceutical sector; I wondered about manufacturing that relies on robots; Reid Wightman thought maybe data centers. None of these is an obvious fit for “critical infrastructure”.

So, good job on the early self-restraint, but let’s use more precision on the examples!

Six reasons government-led info sharing won’t die

I read, with a sense of satisfied indignation, the headline “Obama’s info-sharing plan won’t significantly reduce security breaches… according to Passcode’s Influencers Poll”.

Not that the “influencers” know all, or that the poll couldn’t have introduced bias. But there appears to be a significant disconnect between what security leaders and practitioners believe and what the federal government is pushing.

It seems ridiculous. Why is this the case?

Here are my best guesses:

  1. 85% of critical infrastructure is privately owned. The government has a duty to help, right?
  2. Historically, many private organizations learned of breaches only when federal officials paid a visit to their corporate offices. Hence, leaders of these organizations believe the government has some super-secret technology that lets it detect all attacks against the private sector. The government should just automate sharing of that information, and problem solved.
  3. Private organizations aren’t going to turn down “free” (well, taxpayer-funded) assistance from a federal group that holds itself out as “the experts”.
  4. Bureaucracy waxeth. Once government has private industry depending on it for “free” assistance, it can easily make the case that “the private sector loves it” and that it “just needs more funds” to scale up.
  5. Info sharing sounds easy. But it’s not: someone has to actually create the information in the first place, tools have to be created and deployed to consume it, and analysts and practitioners must be trained to act on it.
  6. It’s a dirty compromise. Industry can request info sharing to hold off government regulation. Legislators can easily point to information sharing to show they are addressing the issue.

Monitoring for ICS vulnerabilities vs exploits

I recently realized (though Siemens has been doing it for years) that its vulnerability advisories include not only a CVSS base score but also Temporal metrics. Temporal metrics are the “variables” that can change over time, including “Exploitability”.


Exploitability describes how easily and reliably an adversary can exploit a vulnerability. The official CVSSv2 documentation puts it this way:

This [Exploitability] metric measures the current state of exploit techniques or code availability. Public availability of easy-to-use exploit code increases the number of potential attackers by including those who are unskilled, thereby increasing the severity of the vulnerability.

Choices for Exploitability (per CVSSv2) include: Not Defined, Unproven that exploit exists, Proof of concept code, Functional exploit exists, and High.

Obviously, Exploitability scores can and do change over time (it’s a “temporal metric”, after all). Essentially, that means someone might release exploit code for a vulnerability weeks, months, or even years down the road.
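To put numbers on that drift: CVSSv2 computes the temporal score by multiplying the base score by factors for Exploitability, Remediation Level, and Report Confidence. Here is a minimal sketch using the multipliers from the CVSSv2 specification; the 9.3 base score is just an example of a critical-severity vulnerability.

```python
# Sketch of the CVSSv2 temporal equation:
#   TemporalScore = round(BaseScore * E * RL * RC, 1)
# Multiplier values are taken from the CVSSv2 specification.
EXPLOITABILITY = {
    "Unproven": 0.85,
    "Proof-of-Concept": 0.90,
    "Functional": 0.95,
    "High": 1.00,
    "Not Defined": 1.00,
}

def temporal_score(base, exploitability, rl=0.87, rc=1.00):
    """rl defaults to Official Fix (0.87); rc defaults to Confirmed (1.00)."""
    return round(base * EXPLOITABILITY[exploitability] * rl * rc, 1)

base = 9.3  # an example critical-severity base score
for e in ("Unproven", "Proof-of-Concept", "Functional", "High"):
    print(f"{e:17} -> temporal score {temporal_score(base, e)}")
# Output runs from 6.9 (Unproven) to 8.1 (High): the release of exploit
# code alone moves the score by more than a full point.
```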

As mentioned above, Siemens includes temporal metrics in its vulnerability advisories. ICS-CERT does not include temporal metrics, but does include a section entitled “Existence of Exploit”.

When producing a vulnerability advisory that includes information about the existence of an exploit, you have to be careful not to unintentionally mislead those relying on your advisory some time later.

For example, you would not want the user of a vulnerable ICS product (say, a critical infrastructure operator at a yearly maintenance outage) to rely on your most recent advisory and conclude, based on the information in it, that they can skip a patch because there are no known exploits, when exploits have in fact been released in the intervening time.

This makes me wonder whether Siemens and ICS-CERT are monitoring for exploits (be they free or commercial) against vulnerabilities AFTER advisories have been released, and whether they update the advisories accordingly.
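For what it’s worth, the mechanical half of that monitoring is straightforward; the hard part is curating the exploit feed itself. Here is a minimal sketch that flags any advisory whose last update predates a newly catalogued exploit. All IDs and dates below are made up.

```python
# Sketch: flag advisories that have gone stale, i.e., where an exploit was
# catalogued AFTER the advisory was last updated. All records are made up.
from datetime import date

advisories = {  # advisory id -> (last updated, "existence of exploit" note)
    "ADV-2014-001": (date(2014, 3, 1), "No known public exploits"),
    "ADV-2014-002": (date(2015, 1, 10), "Public exploit available"),
}
exploit_catalog = {  # advisory id -> date an exploit was first seen
    "ADV-2014-001": date(2014, 11, 20),
}

for adv_id, (updated, note) in advisories.items():
    first_seen = exploit_catalog.get(adv_id)
    if first_seen and first_seen > updated:
        print(f"{adv_id}: advisory says '{note}' (updated {updated}), "
              f"but an exploit surfaced {first_seen} -- STALE")
```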

Because exploit disclosures significantly alter the risk associated with a particular vulnerability, the Critical Intelligence Core ICS Intelligence Service provides ongoing monitoring and alerting for ICS vulnerabilities AND exploits.

Shaking foundations: are infosec paradigms in crisis?

I enjoyed reading Dan Geer’s lecture at the NIST Science of Security gathering.


As usual, it is rather heady, academic stuff, but he leavens it with clear flow and witty turns of phrase.

Venturing directly to the heart of the issue, he questioned the adequacy of prevailing paradigms in information security. Among the paradigms he doubted were the concepts of Confidentiality, Integrity, and Availability.

Some call this the C-I-A triad. We might trace the triad at least back to the “Comprehensive” (also called “McCumber”) model proposed in 1991. That model formed the foundation of infosec education for national security systems, and has spread from there. You can find the model in Annex A to NSTISSI 4011.

I tend to agree that the C-I-A triad is overused and not effective in some cases. At best, it is useful at design time — when you are deciding how to build security into a system, but less so for security operations — when you are trying to maintain that system in an evolving threat environment.

Let me give an example:

One time-consuming security operations task is vulnerability and patch management. CVSS is the Common Vulnerability Scoring System. The system relies on impact to confidentiality, integrity, and availability, among other factors, to produce a score that theoretically helps defenders prioritize which vulnerabilities to mitigate.

Look at CVE-2010-2568 — the Microsoft LNK vulnerability. This vulnerability received a base score of 9.3 with a “vector” of (AV:N/AC:M/Au:N/C:C/I:C/A:C).

Interpreting the base score vector requires memorizing the categories and variables, which are described in the official CVSSv2 documentation. Of course, the more severe the variables, the higher the score.
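To illustrate just how mechanical the decoding is, here is a sketch that parses a CVSSv2 base vector and recomputes the score, using the lookup tables and base equation from the CVSSv2 specification. It reproduces the 9.3 for the LNK vector above.

```python
# Sketch: decode a CVSSv2 base vector and recompute the base score.
# Lookup values and the equation are from the CVSSv2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # C/I/A impact

def base_score(vector):
    m = dict(part.split(":") for part in vector.strip("()").split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(base_score("(AV:N/AC:M/Au:N/C:C/I:C/A:C)"))  # -> 9.3
```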

In the case of vulnerability management, I don’t find the C-I-A jargon very useful at all. In theory you can map the C-I-A impacts to the C-I-A requirements you’ve established for each piece of software you operate, but I still don’t think that helps you make decisions any more accurately or quickly than simply saying “denial of service”, “arbitrary code execution”, “privilege escalation”, or even “access to password hashes”. Which is probably how the researcher who found the vulnerability characterized it in the first place!

Various other criticisms have been leveled against CVSS. I don’t want to get into those here. My point is that using C-I-A as the basis for operational security decisions tends to confuse rather than simplify the issue.