31 October, 2006
I know that, strictly speaking, a vulnerability is not the inverse of a control (it is sort of apples and oranges), but there does seem to be a connection.
In one of my earlier posts I talked about threat classifications, but fleshing that out to include what the environment is like and how to weigh assessments of threat likelihood against existing controls would be valuable.
30 October, 2006
The main reason is that you have to have a means of learning what the threats and vulnerabilities are.
This is a particularly interesting project along those lines. I am not sure if it is still in progress but suspect it is. I was there at its inception but have not been involved for some time. Even if it is nascent Ray Parks and Sandia are leaders in the field so a visit is worth it.
The NAC comments are particularly on the mark. NAC will be a valuable addition to our controls arsenal but if I hear another vendor claim it will solve world hunger I will hang up on them.
I also agree that in 36 to 48 months a given tech starts to lose its teeth. The one comment I will make on this is that the need for it almost never goes away.
This becomes clear when people discuss items like deperimeterization. I don't think that anyone who advocates that path says we should do away with all firewalls completely. Instead, what they are saying is that people should recognize when a control like a firewall is mostly useless against the key threats of the time and adjust accordingly. In some cases this will mean eliminating firewalls (or at least simplifying their configurations), but in most it will mean adding new controls. For the last few years those have been NIDS and NIPS. This has evolved somewhat into HIPS and HIDS.
The people who understand the mechanisms these controls use to protect realize the strengths and limitations, but as Richard's post shows, it is an evolving world.
What it ultimately comes down to is which mix of controls best fits the needs of the organization you are part of and protects against the most likely threats you will face.
28 October, 2006
Sort of like talking about conspiracy theories. It can be fun, but when someone lets it change the way they act, things are going too far.
Talk and play mind games with the edgy stuff but live in the real world.
Some of our FUD attacks fall into this category. We get paid to anticipate the uncommon, but we need to keep our feet firmly grounded.
One comment on the Register article though. Personal attacks on an author or advocate are not the same as refutations of their arguments. The foundations of the conclusions in "Singularity" were laid in Drexler's work, Moore's law, and current trends. My comment is that prognosticators are almost always mostly wrong.
27 October, 2006
"Here is the big leap in logic . . . perhaps products and services for SCADA security are going to have to come from the control system software and hardware vendors, not the IT security market. This has its own challenges because doing security right is very difficult and not a skill set found in most control system vendors. Verano, MTL, Honeywell and others are starting down this path."
I think that the security of SCADA systems can be made into a key marketing differentiator for SCADA vendors. In some ways it already is. Their clients are certainly factoring it into their purchasing decisions. It just hasn't made the leap to vendors using it in their value-add proposition or ROI. Taken a step further, it could be an easy profit-margin enhancement if played properly.
This is where I think IT side security vendors are missing out. There is significantly more cost tolerance in the PCN/SCADA world if you can show value than in the standard IT world. Standard IT security vendors that team up with SCADA vendors will have a significant new market that they can let someone else sell for them and probably at larger margins.
Eric is at the edge of this play right now and even doing it in a safe manner.
The SCADA vendors I mentioned here are the ones that are already noticing the benefits of differentiation.
26 October, 2006
Mike Rothman chimed in on the 100% compliance piece and gave a far neater and faster summary of what I was trying to say.
This part brings me full circle to the original conversation on Risk Units and some of the differences between risk management and best practices.
Essentially, best practices are a bunch of (hopefully) smart guys sitting around at Gartner, Forrester, D&T, PWC, E&Y, SANS, and other groups, coming to a consensus on which controls come closest to 100% coverage for a given threat and which are the best controls to put in place.
(yes, yes, I know this is going to bring an avalanche of "what about this or that group")
This is great. It gives us an outside look at how various actions and tools compare to each other to help prevent problems, but it doesn't factor in all of the variables that each company and organization has.
It establishes a solid baseline and goals.
Coming up with best practices by definition includes dealing with the vendor marketing apparatus and all the fluff therein. It is also heavily based on current trends, hype cycles, and opinions of what is really at issue.
In some companies a given best practice is just not possible because of political, environmental, architectural, economic or any number of other reasons. This is why it is more important to focus on what the real risk of an issue is.
There are a number of questions I like to keep in mind when looking at the effectiveness and appropriateness of controls being considered.
What threats does a control provide protection from and how?
Are there overlaps with other controls and for which threats/vulnerabilities?
In a perfect environment how much protection from any given threat does a control provide?
How much coverage can I afford to get with the given control?
How much does the control interfere with existing work?
How much does the control interfere with changes and limit future flexibility?
I would love to hear others.
This is why I am so interested in the “Units” and math that might be associated with it. I picture a type of finite element analysis that can be applied to Information Security controls.
(and before any structural engineers start laughing at me yes I know it is not the same. I am using it as an example not as a literal mathematical equivalency)
Even if we could come up with detailed equations for this stuff, I realize most of the time they wouldn't be used. I wouldn't expect them to be. When I was a Reactor Operator I didn't do all of the full equations for every variable for every shim or pump switch. I did, however, have a thorough understanding of them, and because of that I knew exactly what would happen before I did it.
A different series of questions might be: what are the disparate pieces that make up a control? How do they interrelate? How do they fit into the larger picture of impact times likelihood?
What I really want to know is if Batting Avg. or On base % is going to get me more scores in the end.
Securosis does a better job at describing the "Best Practices" process.
I love this quote from it.
"Analyst best practices will make you really fracking secure, but probably cost more than a CEOs parachute and aren’t always politically correct."
I am very aware of how the process works, so my hopes aren't dashed. His points are valid and more descriptive, but that level of detail wasn't essential for the point.
Still, more detail (and more accuracy as well) is better, so thanks for the critique.
25 October, 2006
Some other companies seem to just be trying to ignore the problem or worse blame it on the purchaser.
Part 1 here
Part 2 here
So if you can’t get 100% with a single control how do you get 100% or close to it?
I'll use worms as the example because it is easy, not because I think they are the most likely current threat.
If you can stop 80% of the worms with your company's external firewall.
Then stop 80% of the remaining worms with segmentation to your PCN.
Then stop 80% with a NIPS device
Then stop 80% of the remaining with a Host based firewall
Then 80% with patching
Then 80% with HIPS
Then 80% with Memory Based Protection
If you can get an 80% reduction at each layer, then the seven controls above bring you down to roughly 0.001% likelihood (0.2^7 ≈ 0.0013%), even if you had 100% certainty of the threat event occurring to begin with.
So the trick is identifying the applicable controls, determining how (and how much) they reduce the likelihood, and whether they can be layered with other controls.
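The layered arithmetic above can be sketched in a few lines. The layer names and 80% figures are the illustrative numbers from the example, not measured values, and the math assumes each layer fails independently:

```python
# Illustrative sketch: residual likelihood after layered controls,
# assuming each layer's effectiveness is independent of the others.
layers = {
    "external firewall": 0.80,
    "PCN segmentation": 0.80,
    "NIPS": 0.80,
    "host firewall": 0.80,
    "patching": 0.80,
    "HIPS": 0.80,
    "memory protection": 0.80,
}

def residual_likelihood(initial: float, effectiveness: dict) -> float:
    """Multiply out what each independent layer lets through."""
    residual = initial
    for reduction in effectiveness.values():
        residual *= (1.0 - reduction)
    return residual

# Starting from 100% certainty the threat event occurs:
print(f"{residual_likelihood(1.0, layers):.4%}")  # roughly 0.0013%
```

The same function also shows how quickly weaker layers change the picture: two 50% layers only get you to 25% residual likelihood.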
This is why I have been so interested lately with the risk conversations at RiskAnalys and Episteme.
If we can identify a relationship with the units of risk to controls that would be very valuable.
Final Section Here
24 October, 2006
I first crossed paths with Eric when we were both included in an article by Kevin Poulsen when he was at Security Focus (Kevin is at WIRED now) several years ago. It was one of the first times the mainstream information security press addressed the issues with control systems. I didn't get a chance to meet him face to face until just last year. He is just as affable in person as he is on the phone and as his written work would suggest.
Eric has been at the lead of the SCADA security topic for years now. He and PA Consulting were two of the front runners in providing services for DCS and SCADA security. Along with Darren (one of the other members of Byres Security), they have perhaps the best technical and conceptual understanding of the issues and potential solutions out there.
They have created an appliance - Tofino - that can be used to protect the PCN with minimal or no interference.
It is much more expensive to try to make a control 100% effective. Things have to be designed around it, more manpower has to be dedicated to policing the solution, and the solution becomes as likely or more likely to cause a loss of availability than the threat being protected against.
As an example, a colleague of mine designed a hyper-redundant Ethernet network to "ensure" connectivity to a particularly demanding user group, using spanning tree as the mechanism. Any networking guys reading already know what happened. Constant route reconvergence caused low-level problems, and any time there was a minor change to the network the entire thing would crash. The redundancies caused far more frequent and complete outages than the switches' MTBF would have predicted with only one path to each location. (BTW, spanning tree used properly isn't a problem.)
23 October, 2006
Another security professional has realized what we have been trying to say for a while.
The good news is that there are a lot of controls that a good IT/IS security guy wouldn't know of. If there weren't, you have no idea how much chaos we would already have.
The bad news. What he saw was pretty typical and not unusual. (I know redundant)
Welcome to our world.
Sorry, that sounded a bit snarky. There are SCADA security groups that are doing good work and are well informed. The problem is that, in general, the level and number of both good and bad items in the SCADA world can be compared with the state of info security in the standard IT world around 2000. I discuss this briefly in my myths, facts, and goals post, but it should get more attention.
To protect from worms on a system you have a lot of options. None of them are 100% effective. But many of them are 80% to 90% effective.
I am not an advocate of complete physical separation. The reason is simple. The organization that separates the system usually assumes the solution is 100% effective. The reality is that someone, sometime, is going to connect into them.
An organization I was in contact with several years ago did a great job separating their network. They had loads of documentation, did scans, and had clear policies and standards associated with their requirements. When Blaster broke out, their business systems were pretty much unaffected. A week into the outbreak, a contractor hired to maintain their DCS got his MAC address approved through the proper channels and plugged into one of the isolated networks to monitor settings. Twelve hours later (and after much lost production) they managed to get it cleaned up.
In this scenario the problem was that the separation actually made it more difficult to keep AV and patches up to date.
A quick clarification here for the non-SCADA security folks. The "isolated" networks approach is still heavily advocated in some areas of the DCS world, and many vendors' default approach is "just don't connect it to anything." Like IT and IS in the early '90s, they think they can be safe if they just don't connect. Many haven't realized that it isn't possible to totally isolate anymore. That said, isolation is a control, just not a very realistic one.
Go to Part 2 here
22 October, 2006
Snow Crash - A bit campy, but in a way that makes it more fun to read. It has perhaps my favorite quote from a fiction book: "Condense fact from the vapor of nuance." It is an insightful exploration of the potential of memetic threats in a world where the human-machine interface is tighter and more intertwined. The social commentary is light, but the technical daydreaming is fabulous.
Diamond Age - Has one of the best basic descriptions of the process by which computers work that I have ever seen in a fiction work. Leads to the edge of Singularitarian thinking without really crossing it. Still easy reading, but without the silliness of Snow Crash. It actually manages to explore the concepts of deperimeterization and defense in depth with enough detail and ease of understanding for anyone to get it. Again, this is in a fiction book written seven years ago. Still applicable and in some ways prescient.
Cryptonomicon - Complex intertwined story with significant historical reference and a detailed understanding of the interplay of information uses.
Easily one of my favorite authors. So who are yours?
21 October, 2006
Non - Fiction
Thought provoking and a good example of how to look at conventional wisdom. A light touch on being realistic about statistics.
An awesome baseball book and an awesome business book. Thought provoking on how to look at statistics and what really matters.
The Singularity is Near
If it is right, it changes everything. Even if it is only close to right, it means that automation security will become one of the most significant issues there is. (Yes, I know that sounds outrageous. You will have to read it and think to understand what I mean by that.) It brings "Engines of Creation" into an easier-to-digest format. (Though that should definitely be a read as well. Perhaps I should dig it out again.)
Army of Davids
A phenomenal take on trends that have already emerged over the last several years. Written by Instapundit, one of my daily reads.
Blunt and honest, a light approach to the "Prince" of our time.
The title says it all.
Fiction and more later
20 October, 2006
Yet another great post about risk management over at Riskanalys.
A lot of people out there have been bashing the risk management model of data security. The essence of the attack generally comes down to "that didn't come from real data." In many cases that is true.
In just as many, the fault lies with us looking at the wrong data or, worse, doing the right thing at the wrong time and just not being "lucky".
Moneyball is a great book that I would recommend for anyone.
I got three great peanuts out of it.
1. Make sure you are tracking the right metrics.
2. Even if the metrics correlate with the goals, look for better ones for your objectives. (i.e. a better batting average does make you a better player, but on-base percentage is a better indicator of whether you can generate scores)
3. Probability (luck) means that even if you are looking at the right things sometimes you won't see what is right. The trick is to find out why and what the real desired outcome is.
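Point 2 above can be made concrete with a toy example. The two players and their stat lines are invented for illustration; the point is just that batting average and on-base percentage can rank the same pair differently:

```python
# Illustrative only: two fictional stat lines ranked two ways,
# showing why the metric you pick should match your objective.

def batting_avg(hits: int, at_bats: int) -> float:
    return hits / at_bats

def on_base_pct(hits: int, walks: int, hbp: int,
                at_bats: int, sac_flies: int) -> float:
    # Standard OBP: times on base over the plate appearances counted.
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# Player A: fewer hits but many walks. Player B: more hits, almost no walks.
a_ba = batting_avg(hits=130, at_bats=500)                        # .260
b_ba = batting_avg(hits=150, at_bats=500)                        # .300
a_obp = on_base_pct(130, walks=90, hbp=5, at_bats=500, sac_flies=5)
b_obp = on_base_pct(150, walks=5, hbp=0, at_bats=500, sac_flies=5)

print(b_ba > a_ba)    # True: B looks better by batting average
print(a_obp > b_obp)  # True: A actually reaches base more often
```

The same trap applies to security metrics: a control that "looks better" on one measure may contribute less to the outcome you actually care about.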
I suppose that I fall to a certain extent into the "metrics geek" camp on the security side.
In defense of the metrics people, there is precedent. The insurance and financial industries have identified that the equations really do work. You just have to be measuring the right stuff.
Nuclear physicists do the same things, and the equations are remarkably similar.
19 October, 2006
Detective - Detective controls identify when a defined action or series of actions take place.
• Only provide control functionality if combined with an active control
• Typically feeds data to an active control at a different layer of the controls hierarchy
Active - Active controls directly modify system behavior
• Without a detective control they have no feedback mechanism; they just make things change blindly
• Typically guided by a detective control at a different level of the controls hierarchy
– Adjusts system
– Starts actions
– Stops actions
Reactive/Responsive - Reactive or Responsive controls modify the system based on identification of defined actions or series of actions
• Comprised of two parts - the detective portion and the action (active) portion
• Differentiated from detective and active controls in that both functions occur at the same level of the controls hierarchy
– Sense and Adjust
– Sense and Stop
Inherent - Inherent controls rely on fundamental principles of the system to prevent undesired action
• A control that is inherent at one level may not be at another
– It just can’t happen (within certain assumptions)
– Temperature Coefficient of Reactivity in a Nuclear core
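As a rough illustration of the relationships in the list above, here is a hypothetical sketch (the classes and the NIDS/NIPS pairing are my own example, not a reference implementation): a detective control only observes, an active control only acts, and a reactive control combines both at the same layer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DetectiveControl:
    """Identifies when a defined action takes place; no enforcement."""
    detect: Callable[[str], bool]

@dataclass
class ActiveControl:
    """Directly modifies system behavior; blind without a detective feed."""
    act: Callable[[str], str]

@dataclass
class ReactiveControl:
    """Detective and active portions combined at the same layer."""
    detective: DetectiveControl
    active: ActiveControl

    def handle(self, event: str) -> str:
        # The "Sense and Adjust" / "Sense and Stop" pattern from above.
        if self.detective.detect(event):
            return self.active.act(event)
        return "allowed"

# Example wiring: a NIDS-style detector feeding a NIPS-style blocker.
nids = DetectiveControl(detect=lambda e: "worm" in e)
nips = ActiveControl(act=lambda e: "blocked")
reactive = ReactiveControl(nids, nips)

print(reactive.handle("worm traffic"))    # blocked
print(reactive.handle("normal traffic"))  # allowed
```

The same detector could instead feed an active control at a different layer of the hierarchy, which is what distinguishes the separate detective/active pairing from a reactive control.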
So why is there so little obvious pain?
Some of the arguments are that the virus writers have gone pro. There is certainly some evidence for that but it cannot account for everything.
A few organizations have made spectacular efforts at protecting their environments, but we all know that is the exception, not the norm.
So why are we so lucky? (are we?)
I think that we have been good at layering our defenses either intentionally or unintentionally.
I've been talking about layering in addition to defense in depth for quite some time.
Both from a process and a technical perspective.
If we start doing it from a more methodical perspective, then we can capitalize on it to save some money.
Controls are the key
Control Types – addresses the degree to which a control is effective for a given threat and at a given level of the controls hierarchy
There are two primary control types: preventative and mitigating.
Control types are complementary not exclusive
A given Control might be preventative at one layer of the controls hierarchy and mitigating at another
•Preventative controls stop the unwanted action from occurring.
•Passwords stop unauthorized users from accessing a system.
•Patches prevent exploits from affecting a system.
•One of the weaknesses of preventative controls is that it is impossible to ensure 100% compliance, and in isolation a breach allows full exploitation. (once they have root…)
•One of the strengths of preventative controls is that when they work they completely eliminate the threat they are designed for.
•Mitigating Controls limit the scope of an unwanted action and reduce its impact.
•Access Rights control where one can go once inside
•Approval limits cap the size of potential mistakes and malfeasance
•One of the weaknesses of mitigating controls is that they rarely completely stop a threat
•One of the strengths of mitigating controls is that they can be more easily and broadly implemented because they are less likely to impact business
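To make the split concrete in the impact-times-likelihood frame, here is a minimal sketch (all figures invented for illustration): preventative controls scale down the likelihood a threat succeeds, while mitigating controls scale down the impact once it does.

```python
# Hypothetical sketch of risk = impact x likelihood after controls.
# Preventative effectiveness scales likelihood; mitigating scales impact.
# All numbers below are made up for illustration.

def residual_risk(impact: float, likelihood: float,
                  preventive: list, mitigating: list) -> float:
    """Return expected loss after applying control effectiveness.

    preventive: fractions of attempts stopped outright (scale likelihood)
    mitigating: fractions of damage contained (scale impact)
    """
    for p in preventive:
        likelihood *= (1.0 - p)
    for m in mitigating:
        impact *= (1.0 - m)
    return impact * likelihood

# Worm against one host: $100k impact, 50% annual likelihood.
# Patching (preventative, 90%) plus access rights (mitigating, 60%):
loss = residual_risk(100_000, 0.5, preventive=[0.9], mitigating=[0.6])
print(round(loss))  # about 2000: 100k * 0.4 impact * 0.05 likelihood
```

Note the asymmetry described above: in this model a strong preventative control drives likelihood toward zero, while mitigating controls can never take impact all the way there on their own.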
More detail next post
18 October, 2006
This is just another indicator that SCADA and DCS security issues are going mainstream.
Those of you who have known me over the last eight-odd years also know that I am a firm advocate of the need to enumerate and look for vulnerabilities on even SCADA networks. Within that context I am happy when tools become available. Still, this indicates that these issues are reaching a clear tipping point as it relates to bad-guy target cross section.
I am a big fan of Tenable and a long-time user of Nessus, and my concern is not directed at them.
(I have preferred Core for the last 4 years though. It is much more than a vuln scanner.)
My concern is that security through obscurity is now clearly flying out the window.
It was bound to happen.
Just some thoughts.
Threat Type - Internal Undirected
Single Point Mistakes
Threat Type - Internal Directed
Threat Type - External Undirected
Disasters, Service Failures (power, building, metropolitan)
Info Security Events (Viruses, Worms, Spyware)
Global IT issues (Mass Scans, Naming Attacks)
Threat Type - External Directed
How would you Break it down? What would you add to groupings and to the listings?
The key points for a light touch are:
•Integrated Overlapping Business and IS controls
•Transparent Controls (Where possible)
–Both in ease of Audit
–And as seen (or not seen) from the users
•Leverage other (non-security) standards and controls
•Few or No Exceptions
•Little or No Emergency access
–both because there is no need
•Utilize and Integrate External resources (don’t stand alone)