Revisiting copycats and Stuxnet

As I read Kim Zetter’s “Countdown to Zero Day,” I was reminded of the copycat discussions sparked by Ralph Langner’s warnings (see pp. 182-183).

“Langner suspected it would take just six months for the first copycat attacks to appear. They wouldn’t be exact replicas of Stuxnet, or as sophisticated in design… but they wouldn’t need to be.”

“[Melissa] Hathaway, the former national cyber security coordinator for the White House… told the New York Times, ‘We have about 90 days to fix this before some hacker begins using it’.”

Did we have copycats? Do we have copycats?

Off the top of my head, I can’t think of any I would call a close copycat. That doesn’t mean there aren’t any, but if there are, they remain virtually unknown.

However, we should recognize that some threat actors seem to have learned what I consider the most valuable lesson from Stuxnet: engineering firms, ICS integrators, and ICS software vendors are high-value targets.

The Stuxnet attackers apparently went after the computers at NEDA and other ICS integrators to get access to Natanz. This means the attackers had access to the engineering details necessary to create highly specific, customized attacks. It also means the attackers had access to the ICS networks themselves (via engineering laptops, at a minimum).

When we think of Stuxnet, we think of Natanz — but broaden your view. What other projects had NEDA and the other targeted ICS integrators worked on? Stuxnet and its cousin code (Duqu, etc.) were all over Iranian (critical) infrastructure.

Back to the copycat thread. Look at Havex. The parties behind Havex certainly targeted ICS integrators and support providers (via Trojanized software from eWon and MBConnectLine). So in 2014 we saw a copycat of a key concept. I fully expect to see more ICS vendors, integrators, and engineering firms targeted by ICS-seeking malware in the near future.

So, if you operate critical infrastructure, consider the following questions:

  • Who are your ICS integrators?
  • Who is providing maintenance to your ICS?
  • What security policies and procedures are you requiring of those parties?

If the answers to these questions are buried in layers of subcontracts, and all you know is that “your control systems work,” chances are there’s not a lot of security oversight going on. Good luck when the next copycats arrive.

Schneider ProClima vulns: ICS or not?

The Schneider ProClima vulnerability disclosures were another interesting case study on ICS security communications.


SecurityWeek ran an article on them, as did Threatpost.

Interesting-ness #1
In the communications from Schneider and DHS, there are two “vulnerabilities,” both classified as Command Injection (CWE-77), yet a total of five CVEs. I understand the reasons for combining analysis in some cases, but am I the only one who thinks each CVE should map to exactly one vulnerability?

Interesting-ness #2
ProClima software would very rarely be found on an industrial network. It is enclosure design software: it helps engineers design control enclosures/cabinets so that they don’t get too hot. It might turn up on ICS engineers’ laptops, but its fundamental purpose is not process control or process design; it is process control cabinet design!

Interesting-ness #3
The CVSSv2 base score for these vulnerabilities is 10.0 (the highest score possible). The vulnerabilities are in an ActiveX control, so even if the software were on an ICS network (it’s not — see #2 above), the vulnerable machine would still have to be browsing the public Internet to get exploited. If your ICS machines can do that, then you have worse problems than some obscure ActiveX vuln. In short, the score here does a poor job of characterizing the potential impact to the actual process being controlled.
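For reference, the 10.0 falls straight out of the CVSSv2 base-score arithmetic. Here is a minimal Python sketch of the v2 base equations, assuming the standard all-maximal vector (AV:N/AC:L/Au:N/C:C/I:C/A:C) that typically sits behind a 10.0 — a vector I have not verified against the advisory itself:

```python
# Sketch of the CVSSv2 base-score equations (per the FIRST.org v2 spec).
# The vector weights below are the standard v2 metric values.

def cvss2_base(av, ac, au, c, i, a):
    """Compute a CVSSv2 base score from the standard metric weights."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Assumed vector AV:N/AC:L/Au:N/C:C/I:C/A:C:
# network access (1.0), low complexity (0.71), no auth (0.704),
# complete C/I/A impact (0.66 each)
score = cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.66, i=0.66, a=0.66)
print(score)  # 10.0
```

Note that nothing in the formula asks where the software actually lives or what it controls — which is exactly why the base score alone says so little about process impact.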

The reason I think these “small” analytical issues matter is that if we are really concerned about protecting critical infrastructure we have to communicate clearly. There is *virtually zero* potential process impact that results from successful exploitation of these vulnerabilities.

If you want to cut the hype and get solid ICS vuln analysis, then subscribe to Critical Intelligence ICS Core Intelligence Service.

Thoughts on “Countdown to Zero Day”

Well, I finished Kim Zetter’s book, Countdown to Zero Day.

Overall, a great story. A good read for anyone who wants to get an idea of the last eight or so years of action in the ICS security space. I’m recommending it to family members and friends who want to “get” what I do.

I had to wonder though about a couple of ideas/concepts/parts in the book.

1. DHS Capabilities

“Within two days, [DHS] had catalogued some 4,000 functions in the code–more than most commercial software packages–and had also uncovered the four zero-day exploits that Symantec and Kaspersky would later find.” (p. 187)

Now, I’ve heard the Stuxnet story from DHS analysts before, but in contrast with Zetter’s descriptions of the Symantec effort, this seemed unrealistic. The idea (apparently expressed by DHS leadership at the time) is that what took Symantec’s brightest minds weeks of painstaking effort (see pp. 52-54), DHS could whip out in two days?

I’m not saying it’s not possible, and maybe I am misinterpreting the story, but there seems to be a stark contrast there.

2. “Getting caught”

Perhaps the biggest consideration of all was the risk of tipping off Iran and other enemies to US cyber capabilities. (p. 191)

This gets back to a fundamental difference between Zetter’s view and mine. I think in the end, what we know as “Stuxnet” was intended to get caught. It was (or at least included) an overt signal to Iran that the USA, and perhaps Israel, were all in their business.

Consider, for example, that the worm recorded every computer it had infected. Its payload was weakly encrypted. Some versions were released after the Natanz target was hit. The code included decipherable references to Iran, Israel, and the USA. With several zero-days and additional propagation vectors, the worm (at least the versions that were found) couldn’t and wouldn’t keep quiet forever.

I don’t believe a highly professional and competent group could/would plan an operation like Stuxnet without carefully considering OPSEC and making intentional choices. I lean towards the idea that at some point Stuxnet’s “going public” wasn’t a surprise or a mistake; it was an intentional statement.

Reconnaissance Exposure

Critical Intelligence launched a new — and unique — service offering for companies that own and operate critical infrastructure. It’s called ReconX.


It’s a different sort of offering from the myriad voices talking about risk consulting or security program building or penetration testing. ReconX is all about the concept of reconnaissance exposure.

What is Reconnaissance Exposure?

It is essentially a benchmark, or baseline, for the important question: “What does an adversary reasonably know about how to attack me?”

Questions examined in the course of an assessment could include (among many others):

  • Who are my key employees (to include ICS engineers and control room operators)?
  • What contact details (including passwords) are public for my employees?
  • What information are those employees leaking via LinkedIn or Instagram?
  • Who are my key suppliers?
  • What information about my company are those suppliers leaking via case studies on their Web sites?
  • Who regulates me? What potentially sensitive or “useful” information exists in publicly accessible government databases about my company?

Examples of bad practice (AKA information leaks) are way more common than you might hope. Here’s a quick one:

A Chinese national attending a U.S. university did an internship at a major electric utility. Numerous details of a substation upgrade were written up as part of an “academic report” and posted to the world wide web.

Your quarterly penetration test is not likely to catch that — because that’s not the objective.

So, you might try something different this time around. For more information, head to the CI Web site, download the glossy and contact Critical Intelligence.

Warning Intelligence and Critical Infrastructure

If you are a security professional looking for a fantastic read… something foundational that you might have overlooked, I suggest Cynthia Grabo’s “Anticipating Surprise: Analysis for Strategic Warning.”


Grabo reportedly wrote the book in the early 1970s, but it remained classified until 2002. There are some fantastic concepts that help the security professional get out of the techno-centric, run-the-software mindset and into a “think ahead” approach. (It is amusing that I am saying it’s useful to look back in time in order to alter perspective for effectively moving ahead.)

Here’s a great quote:

The philosophy behind indicator lists is that [an adversary] in preparation for [an attack] will or may undertake certain [activities], and that it is useful for analysts and collectors to determine in advance what these are or might be, and to identify them as specifically as possible.

At the risk of oversimplification, this means that if you are defending critical infrastructure, you would think through what an adversary may attack, and how that attack might come — getting into the specifics.

For example:

  • What facilities are the most important (to you, to the country, to specific customers)?
  • What equipment is used at those facilities?
  • How is that equipment connected to a network?
  • Who has access to that equipment?
  • What known vulnerabilities affect that equipment?

…and so forth.
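Grabo’s indicator-list idea can even be sketched as data. This is a minimal, hypothetical Python sketch — the class and the example entries are mine, not from the book — pairing each anticipated adversary preparation step with the specific observables a defender could watch for:

```python
# Illustrative sketch only: the Indicator class and the example entries
# are hypothetical, showing one way to capture a Grabo-style indicator
# list as structured data rather than prose.
from dataclasses import dataclass

@dataclass
class Indicator:
    preparation_step: str   # what the adversary "will or may undertake"
    observables: list[str]  # how, specifically, that step might show up

indicator_list = [
    Indicator(
        preparation_step="Reconnaissance against our ICS integrator",
        observables=[
            "phishing reported by the integrator's engineering staff",
            "scans against the integrator's remote-support portal",
        ],
    ),
    Indicator(
        preparation_step="Acquisition of details on our deployed equipment",
        observables=[
            "vendor-manual requests naming our PLC models",
            "unusual firmware downloads for site equipment",
        ],
    ),
]

# Review the list: every anticipated step should carry concrete observables
for ind in indicator_list:
    print(f"{ind.preparation_step}: {len(ind.observables)} observables")
```

The value is less in the code than in the discipline it forces: every anticipated adversary move has to be tied to something specific you could actually observe and collect on.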

In my experience, few defenders are systematically thinking this way. Have a read.