Six reasons government-led info sharing won’t die

I read, with a sense of satisfied indignation, the headline "Obama's info-sharing plan won't significantly reduce security breaches… according to Passcode's Influencers Poll".

Not that the "influencers" know all, or that the poll couldn't have introduced bias. But there appears to be a significant disconnect between what security leaders and practitioners believe and what the federal government is pushing.

It seems ridiculous. Why is this the case?

Here are my best guesses:

  1. 85% of critical infrastructure is privately owned. Government has a duty to help, right?
  2. Historically, many private organizations only learned of breaches when federal officials paid a visit to their corporate offices. Hence, leaders of these organizations believe the government has some super-secret technology that detects all attacks against the private sector; the government should just automate sharing of that information, and problem solved.
  3. Private organizations aren't going to turn down "free" (well, taxpayer-funded) assistance from a federal group that holds itself out as "the experts".
  4. Bureaucracy waxeth. Once government has private industry depending on it for “free” assistance, it can easily make the case that “the private sector loves it” and so it “just needs more funds” to scale up.
  5. Info sharing sounds easy. But it's not: let's not forget that someone has to actually create the information in the first place, that tools have to be created and deployed to consume it, and that analysts and practitioners must be trained to act on it.
  6. It’s a dirty compromise. Industry can request info sharing to hold off government regulation. Legislators can easily point to information sharing to show they are addressing the issue.

Monitoring for ICS vulnerabilities vs exploits

I recently realized (though they've been doing it for years) that in its vulnerability advisories, Siemens includes not only a CVSS base score but also Temporal metrics. Temporal metrics are the "variables" that can change over time, including "Exploitability".


Exploitability describes how easily and reliably an adversary can exploit a vulnerability. The official CVSSv2 documentation states:

This [Exploitability] metric measures the current state of exploit techniques or code availability. Public availability of easy-to-use exploit code increases the number of potential attackers by including those who are unskilled, thereby increasing the severity of the vulnerability.

Choices for Exploitability (per CVSSv2) are: Not Defined, Unproven that exploit exists, Proof-of-concept code, Functional exploit exists, and High.

Obviously, Exploitability scores can and do change over time (it’s a “temporal metric”). Essentially that means that someone might release exploit code for a vulnerability weeks, months, or even years down the road.
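To make the temporal adjustment concrete, here is a minimal sketch of the CVSSv2 temporal equation, which scales the base score by the three temporal metrics. The metric weights are taken from the CVSSv2 guide; the function name and interface are my own.

```python
# Sketch of the CVSSv2 temporal-score adjustment:
# Temporal = round(Base * Exploitability * RemediationLevel * ReportConfidence, 1)
# Weights per the CVSSv2 guide.

EXPLOITABILITY = {
    "U": 0.85,   # Unproven that exploit exists
    "POC": 0.90, # Proof-of-concept code
    "F": 0.95,   # Functional exploit exists
    "H": 1.00,   # High
    "ND": 1.00,  # Not Defined
}
REMEDIATION_LEVEL = {
    "OF": 0.87,  # Official fix
    "TF": 0.90,  # Temporary fix
    "W": 0.95,   # Workaround
    "U": 1.00,   # Unavailable
    "ND": 1.00,  # Not Defined
}
REPORT_CONFIDENCE = {
    "UC": 0.90,  # Unconfirmed
    "UR": 0.95,  # Uncorroborated
    "C": 1.00,   # Confirmed
    "ND": 1.00,  # Not Defined
}

def temporal_score(base: float, e: str = "ND", rl: str = "ND", rc: str = "ND") -> float:
    """Adjust a CVSSv2 base score by the three temporal metrics."""
    score = base * EXPLOITABILITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc]
    return round(score, 1)

# A 9.3 base score drops while no exploit is known and an official fix exists...
print(temporal_score(9.3, e="U", rl="OF", rc="C"))  # 6.9
# ...and climbs back once functional exploit code is released.
print(temporal_score(9.3, e="F", rl="OF", rc="C"))  # 7.7
```

This is exactly why the metric is "temporal": the same vulnerability re-scores higher the day exploit code shows up, with no change to the underlying flaw.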

As mentioned above, Siemens includes temporal metrics in its vulnerability advisories. ICS-CERT does not include temporal metrics, but does include a section entitled "Existence of Exploit".

When producing a vulnerability advisory that includes information about the existence of an exploit, you have to be careful not to unintentionally mislead those relying on your advisory some time later.

For example, you would not want the user of a vulnerable ICS product (e.g., an operator of critical infrastructure) to rely on your most recent advisory (perhaps during a yearly maintenance outage) and conclude, based on the information in the advisory, that they can skip a patch because there are no known exploits, when in the intervening time exploits have been released.

This makes me wonder whether Siemens and ICS-CERT are monitoring for exploits (be they free or commercial) against vulnerabilities AFTER advisories have been released, and whether they update the advisories accordingly.
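The re-check such a vendor would need is simple to express: compare each published advisory's "no known exploit" claim against a feed of later exploit disclosures. Here is a hypothetical sketch; the data structures, field names, and dates are invented for illustration and do not reflect any vendor's actual process.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: flag advisories whose "no known exploit" claim has been
# overtaken by a later exploit disclosure. All names and dates are illustrative.

@dataclass
class Advisory:
    advisory_id: str
    cve: str
    published: date
    exploit_known: bool  # what the advisory said at publication time

@dataclass
class ExploitRecord:
    cve: str
    disclosed: date

def stale_advisories(advisories, exploits):
    """Return IDs of advisories that claim 'no known exploit' but have since
    been contradicted by an exploit disclosure."""
    disclosures_by_cve = {}
    for ex in exploits:
        disclosures_by_cve.setdefault(ex.cve, []).append(ex.disclosed)
    stale = []
    for adv in advisories:
        if adv.exploit_known:
            continue  # advisory already acknowledged an exploit
        if any(d > adv.published for d in disclosures_by_cve.get(adv.cve, [])):
            stale.append(adv.advisory_id)
    return stale

advs = [Advisory("SSA-0001", "CVE-2010-2568", date(2010, 7, 20), False)]
exps = [ExploitRecord("CVE-2010-2568", date(2010, 7, 26))]
print(stale_advisories(advs, exps))  # ['SSA-0001']
```

The hard part, of course, is not this comparison but maintaining the exploit feed itself, across free and commercial sources.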

Because exploit disclosures significantly alter the risk associated with a particular vulnerability, the Critical Intelligence Core ICS Intelligence Service provides continuous, ongoing monitoring and alerting for ICS vulnerabilities AND exploits.

Shaking foundations: are infosec paradigms in crisis?

I enjoyed reading Dan Geer's lecture at the NIST Science of Security gathering.


As usual it is rather heady, academic stuff, but he leavens it with clear flow and witty turns of phrase.

Venturing directly to the heart of the issue, he questioned the adequacy of prevailing paradigms in information security. Among the paradigms he doubted were the concepts of Confidentiality, Integrity, and Availability.

Some call this the C-I-A triad. We might trace the triad at least back to the “Comprehensive” (also called “McCumber”) model proposed in 1991. That model formed the foundation of infosec education for national security systems, and has spread from there. You can find the model in Annex A to NSTISSI 4011.

I tend to agree that the C-I-A triad is overused and not effective in some cases. At best, it is useful at design time, when you are deciding how to build security into a system; it is less useful for security operations, when you are trying to maintain that system in an evolving threat environment.

Let me give an example:

One time-consuming security operations task is vulnerability and patch management. CVSS, the Common Vulnerability Scoring System, relies on impact to confidentiality, integrity, and availability, among other factors, to produce a score that theoretically helps defenders prioritize which vulnerabilities to mitigate.

Look at CVE-2010-2568, the Microsoft LNK vulnerability. This vulnerability received a base score of 9.3 with a "vector" of (AV:N/AC:M/Au:N/C:C/I:C/A:C).

Interpreting the base score vector requires memorization of the categories and variables. A description of these can be found here. Of course, the more severe the variables, the higher the score.
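That memorization step can be mechanized. Here is a minimal decoder for a CVSSv2 base vector; the metric names and value meanings follow the CVSSv2 guide, while the function itself is just an illustrative sketch.

```python
# Minimal decoder for a CVSSv2 base vector such as (AV:N/AC:M/Au:N/C:C/I:C/A:C).
# Metric and value names per the CVSSv2 guide.

CVSS2_BASE = {
    "AV": ("Access Vector", {"L": "Local", "A": "Adjacent Network", "N": "Network"}),
    "AC": ("Access Complexity", {"H": "High", "M": "Medium", "L": "Low"}),
    "Au": ("Authentication", {"M": "Multiple", "S": "Single", "N": "None"}),
    "C":  ("Confidentiality Impact", {"N": "None", "P": "Partial", "C": "Complete"}),
    "I":  ("Integrity Impact", {"N": "None", "P": "Partial", "C": "Complete"}),
    "A":  ("Availability Impact", {"N": "None", "P": "Partial", "C": "Complete"}),
}

def decode_vector(vector: str) -> dict:
    """Translate a CVSSv2 base vector string into human-readable metric names."""
    out = {}
    for pair in vector.strip("()").split("/"):
        metric, value = pair.split(":")
        name, values = CVSS2_BASE[metric]
        out[name] = values[value]
    return out

for name, value in decode_vector("(AV:N/AC:M/Au:N/C:C/I:C/A:C)").items():
    print(f"{name}: {value}")
```

For the LNK vulnerability this expands to network access vector, medium access complexity, no authentication, and complete impact to all three of C, I, and A.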

In the case of vulnerability management, I don't find the C-I-A jargon very useful at all. In theory you can map the C-I-A impacts to the C-I-A requirements you've established for each piece of software you operate, but I still don't think it helps you make decisions any more accurately or quickly than simply saying "denial of service", "arbitrary code execution", "privilege escalation", or even "access to password hashes", which is probably how the researcher who found the vulnerability characterized it in the first place!

Various other criticisms have been leveled against CVSS. I don't want to get into those here. My point is that using C-I-A as the basis for operational security decisions tends to confuse rather than simplify the issue.


I came across a GAO publication the other day: “Iranian Commercial Activities Update: Foreign Firms Reported to Have Engaged in Iran’s Energy or Communications Sectors”


This is a recurring report the GAO issues on foreign firms that could be helping Iran with energy or communications infrastructure projects.

I found it interesting from two angles.

First, the report relies exclusively on OSINT (open-source intelligence) to make its determinations:

We searched for the names of firms identified in our January 2014 report as well as for key terms such as “Iran” that appeared within 25 words from “explore,” “drill,” “refinery,” “natural gas,” or “petroleum.” We also searched for locations in Iran where oil, gas, and petrochemical activities were being conducted. In addition, we reviewed company publications, including annual reports; U.S. Securities and Exchange Commission (SEC) filings, if available; firms’ press releases and corporate statements that publicly reported their commercial activities in Iran; and corrected information that had been publicly reported. We excluded firms that reported purchasing crude oil or natural gas from Iran, because these purchases did not meet our definition of commercial activity in Iran’s oil, gas, or petrochemical sectors. We identified firms that were reported as having contracts, agreements, and memorandums of understanding to conduct commercial activity in Iran.
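GAO's core criterion above is a keyword-proximity search: "Iran" within 25 words of an energy-sector term. As a rough illustration of how little machinery that takes, here is a sketch; the function name, tokenization, and keyword handling are my own assumptions, not GAO's actual methodology.

```python
import re

# Illustrative sketch of GAO's proximity criterion: flag text in which "Iran"
# appears within 25 words of an energy-sector keyword. Tokenization and
# function name are assumptions for illustration only.

KEYWORDS = {"explore", "drill", "refinery", "natural gas", "petroleum"}

def within_n_words(text: str, anchor: str = "iran", n: int = 25) -> bool:
    words = re.findall(r"[a-z]+", text.lower())
    # Positions of the anchor term.
    anchor_pos = [i for i, w in enumerate(words) if w == anchor]
    # Positions where any keyword (possibly multi-word) begins.
    hits = []
    for kw in KEYWORDS:
        kw_words = kw.split()
        for i in range(len(words) - len(kw_words) + 1):
            if words[i:i + len(kw_words)] == kw_words:
                hits.append(i)
    return any(abs(a - h) <= n for a in anchor_pos for h in hits)

print(within_n_words("The firm agreed to drill new wells in Iran next year."))  # True
print(within_n_words("The firm sells telecom gear in Europe."))                 # False
```

The interesting part of GAO's process is not the search itself but the manual corroboration that follows it: annual reports, SEC filings, and press releases.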


Second, the report reads like a first-pass targeting list.

Maybe I am imagining too much here. But I can envision this report’s message as: “Here’s what these companies are saying about themselves on the public Internet. Now, let’s pass this thing off to the heavy dudes (for CNE), and see what’s really going on.”

There could even be an implicit threat — something like “You help our adversaries, we will help ourselves to your networks, your data, and the infrastructure you helped build.”


Have you heard of the “pre-emptive cyber strike” doctrine?

I view "preemptive cyber strike" as the digital counterpart to the Bush-era preemptive-strike doctrine expressed in a national security policy document in September 2002. This policy was used to "justify" U.S. actions in Afghanistan and Iraq:

To forestall or prevent such hostile acts by our adversaries, the United States will, if necessary, act preemptively in exercising our inherent right of self-defense. The United States will not resort to force in all cases to preempt emerging threats. Our preference is that nonmilitary actions succeed. And no country should ever use preemption as a pretext for aggression.


While contemplating the implications of preemptive cyber strike for critical infrastructure, I had this novel idea for a NERC CIP-005 R2-“compliant” appropriate use banner:

//       WARNING