Now, based on my experience working in the commercial cyber threat intelligence space for many years, I bet you didn’t really go through the effort of identifying the question like I asked you to in my last post.
So, I will repeat the question, then read your mind.
The question: What question, if you knew the answer to it, would most significantly improve your security operations?
Think hard… I will wait…
Now for the mind reading: It’s probably something like:
When will my organization be the victim of a significant cyber incident?
So, you thought it. You know you did. But there was a bit of cognitive dissonance because when it crossed your mind, you also thought “I can’t ask that. No one can know the future.”
But that is where the deep magic of intelligence really begins.
After two-and-a-half years, I’ve decided to publicly take up the pen again on professional topics. Mostly it’s for me to record my thoughts — but of course that means you are free to think along!
What is Intelligence?
Intelligence is super simple — so simple, in fact, that it is easily misunderstood. It is the ability to intentionally acquire knowledge. For example, cyber threat intelligence would be the ability to acquire knowledge (AKA learn) about cyber threats.
What is the mysterious intelligence cycle?
It’s a model for how people tasked with acquiring knowledge go about their jobs. It’s not all that different from models like the software development lifecycle: You start out with requirements, figure out a way to meet the requirements, give it a try, show the customer, ask them what they think, and do it again.
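That loop can even be sketched in code. This is a toy illustration of the cycle as I described it — the step names and stub functions are my own invention, not a standard model or API:

```python
# Toy sketch of the intelligence cycle: requirements -> collection ->
# analysis -> dissemination -> feedback -> repeat. All names invented
# for illustration.

def collect(plan):
    """Stand-in for gathering data against a collection plan."""
    return [f"observation gathered per {plan}"]

def analyze(raw):
    """Stand-in for turning raw data into a finished assessment."""
    return f"assessment based on {len(raw)} observation(s)"

def disseminate(product):
    """Stand-in for showing the customer and collecting feedback."""
    return "satisfied"  # in reality, the customer decides this

def intelligence_cycle(requirement, max_iterations=3):
    """Run the cycle until the customer is satisfied or we hit the limit."""
    product = None
    for _ in range(max_iterations):
        plan = f"collection plan for: {requirement}"
        raw = collect(plan)
        product = analyze(raw)
        feedback = disseminate(product)
        if feedback == "satisfied":
            break
        requirement = feedback  # refine the requirement and go again
    return product
```

The point of the sketch is the shape, not the stubs: the cycle is iterative, and the customer’s feedback becomes the next iteration’s requirement — just like a software development lifecycle.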
How do I choose a cyber threat intelligence provider?
Well, let’s not worry about choosing providers until we know exactly what knowledge you want to have. Try answering this as a starting point: What question, if you knew the answer to it, would most significantly improve your security operations?
Once you can identify that question, you are starting to operate your own intelligence cycle. It’s exciting! You are doing intelligence!
I read over the New South Wales (Australia) cyber security audit. I liked that the audit is publicly available, and that it included a close look at the industrial control systems of Sydney Water Corporation (SWC). This helps the average citizen understand where security improvements might need to be made.
I thought the auditors made some concise observations that could well be applied to a number of critical infrastructure operators:
- SWC has an established Information Security Management System (ISMS), but it only covers the corporate data centre and not the engineering systems.
Comment: So the systems that control the service that the corporation exists to provide are not covered by the security management system?
- SWC’s risk management process documents risks and controls at a strategic level but does not cover all operational level risks, such as the potential introduction of USB-based malicious software.
Comment: Ahh, no one has decided how to secure the SCADA. And the strategy they have is so high level that it does not reflect proven threat vectors.
- A range of common specific risks and their mitigating controls have not been documented, including risks associated with non-expiring engineering passwords.
Comment: One password… forever… for your SCADAs. Because you don’t want passwords getting in the way of the engineers doing their job. (But what about former employees?)
- SWC indicated that an assessment is conducted for every SCADA-related alert from the national computer emergency response team (CERT Australia), with emails received on a regular basis by several managers whose job it is to assess impacts. However, the assessment process was not defined, and the analysis documentation provided to support this assertion covered only a minority of the security advisories released by the US Government.
Comment: At least they say they are looking at vulnerabilities, right?
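To make the non-expiring-password finding concrete: flagging stale credentials is a simple audit check. Here is a minimal sketch assuming an account inventory with a recorded password-change date; the field names and the 90-day policy are my assumptions, not from the audit:

```python
# Flag accounts whose password is older than policy allows.
# The 90-day maximum and the inventory field names are assumed
# for illustration.

from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # assumed policy

def stale_password_accounts(accounts, today=None):
    """Return names of accounts whose password exceeds the maximum age."""
    today = today or date.today()
    return [a["name"] for a in accounts
            if today - a["password_last_set"] > MAX_PASSWORD_AGE]

# Example: an engineering account untouched since 2010 gets flagged.
inventory = [
    {"name": "eng-hmi", "password_last_set": date(2010, 1, 1)},
    {"name": "analyst", "password_last_set": date(2015, 5, 25)},
]
```

The hard part, of course, is not the check — it is having an inventory of engineering accounts to check in the first place.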
Those may sound easy enough, but the truth is that they require a change in mentality reaching from corporate management to operations engineers. Will they do it?
And what about your water provider?
I heard some surprising and insightful conversation from a pair of young schoolchildren the other day.
[Photo: a Thomas school bus, by Joedamadman]
They were chatting about riding the bus to school. One of them said, “I don’t understand why the bus driver is the only one who gets a seat belt.”
“Yeah”, came the response, “and he’s the oldest one!”
Although they couldn’t tell you so, these two students were already grasping some important concepts about safety, security and risk.
- They homed in on the purpose of the school bus — to get *the students* to school safely (rather than the driver).
- They did a comparison of useful remaining life — reasoning that it makes more sense to worry about the children, because they have longer to live than the bus driver.
Those same thoughts have a nice analogy to ICS security.
- Whether your business is generating electricity or making cookies, your most important computing assets are those controlling the industrial process — it’s the production network that is making you money! You should not ignore its safety and security while only investing in protection for the “enterprise side”.
- With a limited budget, you’re going to have to choose where to dedicate the most resources. There may be significant challenges with securing older plants. If you are going to build a new facility, it is your chance to invest in a lifetime of a more secure, safer control system. Focus your security efforts and budget where they will serve you the longest.
I came across what I thought was an interesting concept: the Cyber Attack Automated Unconventional Sensor Environment (CAUSE) program from IARPA.
The briefing provided by IARPA to entities who may propose solutions is fascinating.
In short, the idea is to get beyond the *reactive* nature of indicators of compromise (IOCs). While I am a proponent of information sharing, I am well aware that IOCs are a treadmill game much of the time. I don’t think IOCs are useless, but I generally view them the same way I view antivirus signatures.
IARPA wants to progress towards an *automated* and *predictive* approach that combines a wide variety of information (about industries, markets, geopolitics, etc.) with cyber security indicators.
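As a purely hypothetical illustration of what “combining a wide variety of information” might look like, here is a toy fusion model that turns several normalized signals into a single warning score. The signal names, weights, and threshold are invented for illustration — CAUSE does not prescribe this (or any particular) model:

```python
# Toy warning-score model: a weighted sum of heterogeneous signals,
# each normalized to [0, 1]. Signals, weights, and threshold are
# invented for illustration.

SIGNAL_WEIGHTS = {
    "dark_web_chatter": 0.3,        # non-cyber / open-source signal
    "geopolitical_tension": 0.2,    # non-cyber signal
    "industry_targeting_trend": 0.2,
    "anomalous_scanning": 0.3,      # traditional cyber indicator
}

def warning_score(signals):
    """Weighted sum of known signals, clamping each value to [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in signals.items()
               if name in SIGNAL_WEIGHTS)

def generate_warning(signals, threshold=0.6):
    """Emit a warning decision when the fused score crosses the threshold."""
    score = warning_score(signals)
    return {"score": round(score, 2), "warn": score >= threshold}
```

The interesting part of CAUSE is not the arithmetic — any fusion model is easy to write down — but whether the warnings it produces can be validated against ground truth, which is exactly what IARPA proposes to measure.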
According to the slide deck, IARPA has a whole plan set up to *measure* the effectiveness of each proposed solution. The measurements will even include a “ground truth” component theoretically allowing IARPA to compare warnings generated with actual events.
Lots of good thoughts in this one. Will it work? At least they have a plan to find out.
Owners and operators of critical infrastructure might benefit from re-examining their own use of cyber intelligence and how they plan to move beyond the reactive IOC treadmill.