31 January, 2007
Uuhhh ... NO!
First of all it isn't "your" computer. It belongs to the company unless you bought it with your own money. Even if you did buy it with your own money you probably signed away (or clicked away) your privacy before you logged onto the network.
Look, I am a huge Internet privacy advocate and I can't stand the stupid things that are happening to privacy on the net out there, but the fact is that there is no privacy on a work machine. Even within my company I fight for some level of non-interference and discretion in observing network use, but it is a given that the owner of the hardware also has almost total access rights to the data on it unless specifically identified otherwise. They also have an obligation to protect themselves from liability and to investigate and report possible felonies if they are made aware of them. This isn't like a lot of the other Internet privacy arguments. What the RIAA and MPAA are doing is at least unethical and in some cases probably illegal, but this isn't even the same argument. This was a work machine and it was being used in an illegal act.
This isn't a "Dangerous" precedent. That is stupid. That employer has a legal obligation to protect themselves from liability. Let's go a step further - they have a legal and moral obligation to report the child porn if it was found, even if the FBI hadn't asked for it.
If you want privacy on a work machine then find a job at a company that makes that their priority. Some of them are out there. If they choose to accept the liability risk and clearly communicate that, then more power to them, but if they then become aware of a possible crime and do nothing about it they should be prosecuted as well. I would consider it despicable for them (or any court) to help someone by using a rule like that to protect some pervert.
I love Wired but I am pretty irritated at the spin they gave this article. This isn't a privacy rights issue. If anything, what is at issue is the right of an organization to protect itself from a felonious employee. To present it as a privacy rights issue dilutes the legitimate privacy rights arguments by associating them with faulty logic and moral filth.
Grow up guys.
A good Comment by Scott:
I think you have misunderstood the "rights issue" that Wired points to. The concern is not whether your employer has a right to monitor your computer usage on a work computer, but whether the police have a right to search your work computer without a warrant. The Ninth Circuit's original ruling stated that since the employee in question had no expectation of privacy on a work computer, the FBI had a right to search it with or without a warrant. Surely you can see the privacy rights concerns present for both employer and employee if such a ruling were maintained.
In short they do have the right to search your work computer if the employer lets them. Saying that the decision protects the employer is disingenuous.
In the case, the police (FBI actually) asked the company to provide access to the computer. If the company had chosen to say "no, we want a warrant" then that was their right. Instead they chose to say "fine, here is the computer" (as they should, unless prevented by their own policies). The key here is that (as far as I read) the employer never asked for a warrant. It was their choice and right whether to ask for one or not.
Their choice should not be used to protect an individual who was using their property for a felony.
Let me put it this way.
If I had a store and a cop walked up to me and said "I think that Joe your janitor is storing drugs in your store's bathroom, can I look?" then I have a choice. Do I ask the cop to present a warrant (and possibly implicate myself in Joe's drug use) or just say sure, go ahead and look?
That is a decision for me to make and not some court. By implying that there is an expectation of protection of privacy, the real precedent set by the reversal of the decision is that employers need to be asking for warrants anytime the issue might come up.
More importantly, anytime law enforcement wants access to anything, instead of starting with the least intrusive method and just asking, they would have to immediately get a warrant. That is simply wrong and places a new and prohibitive burden on both the employer and the authorities.
It should be the employers choice as to whether they ask for a warrant or not. They own the system and they own the data.
This decision dilutes the property rights of the employer.
Tell me where I am wrong. If I am wrong on the details of the case then my argument falls.
This stuff is good. He mentions the lack of awareness of security issues in the SCADA world and has a point but it is also nice to see the information security world start to take notice. It will be interesting to see the preconceptions of both sides challenged.
Between Symantec, Determina, Tenable and nCircle the word is getting out that there is a significant market here.
That market is huge (at least the size of the existing IT market, perhaps larger in terms of capital availability) and hungry for solutions that fit. Right now it is almost entirely a security vacuum. It has some real, significant and important distinctions from the casual IT market, but a lot of the existing solutions can be adapted to fit if done properly.
I am looking forward to the merger over the next several years.
Ok, quick question here: when is it a root kit and when is it something else?
A lot of this stuff sounds like Trojans or variants thereof.
I am reluctant to get into this discussion because the biggest flame war I was ever in involved semantics around root kits back in '98, but I am just trying to figure out how to classify some of this stuff.
Is it a "root" kit if it doesn't really touch the root? If it touches drivers that touch the root, is that enough?
Perhaps another name would fit.
Just some non-threatening, don't-hit-me-with-the-heavy-metal-object-or-set-me-on-fire thoughts.
Show Me the SECURITY!!!
Could be there I just want to see it.
30 January, 2007
29 January, 2007
ACLs, Firewalls and the bottom capabilities of NIPS
If you have successfully divided your PCN subnet from the rest of your LANs, you still have to have a way to enforce that separation. Access Control Lists (ACLs), Firewalls, and the bottom layer capabilities of a NIPS provide a method of doing this. Note that I am not getting into ports yet. That's the next layer up.
At layer three they all function in a relatively similar manner and are close to being the same capability. Firewalls (and NIPS using firewalls) of any type are less likely to be susceptible to spoofing or man-in-the-middle attacks from traffic that must traverse the PCN to the business network, but most routers and switches from the last few years have a pretty robust ACL capability. A firewall-capable switch or router gives even more flexibility but isn't always available. The real key here is how the networks are set up.
For smaller organizations a single division point and one network is all that is necessary.
In this environment you would have a PCN connected via a firewall to the business network. If the business network has access to the internet (which they all do) it is essential that that access is also protected by a firewall. This isn't about protecting your business network so I will skip all of the details here, but it is important to remember that if you have connections to your PCN then anything that compromises your business network also puts your PCN at increased risk. This means that a solid DMZ and extranet environment are important for the business network. I am writing all of the rest of this from the presumption that this is the case.
I have never seen an acceptable reason for a PLC to be directly accessible from the business networks, so putting in a "log any any, drop any any" rule (and dumping your logs to a syslog server) for PLC addresses should be the standard. If there is a need to directly access a PLC from a remote point (and there sometimes is) then use a VPN or some other secure authentication and communication method to facilitate the access. Terminate it on a separate subnet that has no direct external access and then route from there.
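As an illustration of the "log it, then drop it" half of that rule, here is a minimal Python sketch using the standard library syslog handler. This is not any particular firewall's syntax; the collector address and logger name are made up for the example.

```python
import logging
import logging.handlers

# Hypothetical central syslog collector; swap in your real one.
SYSLOG_SERVER = ("192.0.2.50", 514)

log = logging.getLogger("pcn-edge-fw")
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.SysLogHandler(address=SYSLOG_SERVER))

def log_and_drop(src, dst):
    """Log any traffic aimed at a PLC address, then drop it."""
    log.info("DROP %s -> %s (PLC range, no direct business-network access)", src, dst)
    return "drop"
```

The point is that every denied packet leaves a record somewhere off the firewall itself, so an attacker probing for PLC addresses shows up in the logs even though nothing got through.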
For larger companies and organizations there will be a need to provide multiple differentiated networks. Many organizations use a PCN DMZ (sometimes called a Process Information Network [PIN]) to house Historians and MES. By doing this you can granularly control access to actual control nodes while greatly simplifying secure access to data from the production nodes.
I have seen a lot of other distinctions:
Utility Networks – used to house servers that pass patches, AV updates, software revisions and other utility software (be careful that it doesn’t just become the easy way around security)
ESD Network – Emergency Shutdown Network – Just like the name implies, these house the systems used to shut down in an emergency. Access is very tightly controlled; often these systems are completely separated from others.
Critical Systems or Red Line Networks – For highly critical valves, pumps, breakers and gauges a critical systems network allows tight granular access and control of access for systems that may have safety or environmental significance or for systems that might have cascading failure modes.
Monitoring Network – A network where PLCs or RTUs are used only for monitoring functions and have no direct control capabilities. Because the risk of inadvertent operation is much lower, a looser set of controls can be applied. You still must be careful that it isn't used as a jumping point to other systems. You also have to be careful if it is used in an open loop control scenario where an operator is making control decisions based on the readings.
Legacy Network – used to separate legacy and unmanaged equipment from the rest. This is a very important network to consider. The fact of the matter is that for many automated control systems there will be hold over systems that have distinct security issues that might be better off separated from other systems.
Vendor Systems Separations – many vendors who have taken up the security hue and cry have started defining their systems within specific subnetting requirements. In general this is a good thing because they can tightly control access and what traffic goes in and out based on their own hardware's needs.
Vendor PCN Extranet – An extranet subnet that houses servers to provide synchronization and control between divergent vendors OR (big OR not and) provide a controlled access drop off point for vendor access to systems for maintenance. I have seen both definitions used for the same term. If someone wants to come up with something better please do. I’ll float it and see if it catches on.
Partner PCN Extranet – Allows a controlled termination point for access either between operating partner networks or for external contractor controls either for troubleshooting or for actual operations.
Site PCN Extranet – Allows for the aggregation of information and data controls from multiple sites. It is distinguished from the PIN extranet in that actual control functions might be necessary such as on pipelines or long distance power transmission lines.
Site PIN Extranet – usually aids in the termination into a centralized control and operations center. Also provides a gathering point for production data into business systems in very large companies.
There are actually a few more but I am stopping now. The key here is to keep it as simple as possible. If adding one of the network subdivisions I mentioned above helps make control of access to those systems simpler and doesn't make the overall design too complicated, then use it. If, on the other hand, you only have a few dozen PLCs and a single historian then the simplest solution is best. One firewall and at most two control networks, a PIN and a PCN, should be fine.
Same catch phrases as always for firewall or ACL configuration. Least rights needed for effective operation. Default at the end of the chain is deny any any and above that is specific permits for the traffic that is absolutely needed. If they don’t demonstrate a defined need to get to an address don’t permit it.
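The "specific permits above, deny any any at the bottom" chain is just first-match evaluation. A minimal Python sketch of how a router or firewall walks such a chain (the addresses and the rule set are hypothetical examples, not a recommended policy):

```python
import ipaddress

# Hypothetical chain: specific permits first, default deny last.
RULES = [
    ("permit", "10.16.1.0/24", "10.16.2.10/32"),  # business subnet -> PIN historian only
    ("deny",   "0.0.0.0/0",    "0.0.0.0/0"),      # deny (and log) everything else
]

def evaluate(src: str, dst: str) -> str:
    """First match wins, exactly as an ACL is processed top-down."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for action, src_net, dst_net in RULES:
        if s in ipaddress.ip_network(src_net) and d in ipaddress.ip_network(dst_net):
            return action
    return "deny"  # implicit deny if the chain somehow runs out

print(evaluate("10.16.1.5", "10.16.2.10"))  # permit
print(evaluate("10.16.1.5", "10.16.3.7"))   # deny
```

Because evaluation stops at the first match, anything you did not explicitly permit above the bottom rule is denied, which is exactly the least-rights behavior you want.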
If you are on a more complicated network, then the business network should access the PIN and vice versa, and the PCN should access the PIN and vice versa, but it should be designed such that the PCN never needs to access the business network or vice versa.
ESD and Redline Networks should be locked tight except during controlled change windows.
26 January, 2007
I can't vouch for the time lines. He/they must be spending a lot of time on this to get that much detail.
As for the facts on the polonium, he is mostly right.
Even if it is in a salt-based aqueous solution, there would be migration into cracks in the tea cup. After that it could stay for quite some time and through many washings. That doesn't mean that his conclusions are wrong; it just means that direct metal-to-cup contact isn't the only way that high a level of contamination could be obtained and maintained over time. If there are burn marks in the cup it would reinforce his hypothesis. It would be interesting to see pictures.
I think it is definitely a possibility that this was a smuggling attempt as I indicated here, here and here.
But it isn't as clear cut as might be indicated in his post.
This whole thing is starting to take on a conspiracy theory feel to me so I am going to drop it after this post.
What I will end with is:
- It is definitely possible that this was a smuggling operation gone bad.
- It is also possible that it is a botched assassination or an assassination that sent a message.
- It required a Nation State level actor but not necessarily the knowledge of that Nation State.
- Most of the stuff I have been seeing presented as Science (both in the MSM and in many blogs) is at best inaccurate and often intentionally sensational.
- It was an interesting if sad and scary topic
Have fun with the Warren report boys. Hopefully some good sleuths are tracking down the real facts because some of the possibilities could be really bad.
25 January, 2007
24 January, 2007
IP is on controllers and control networks.
Of course IP is everywhere. Why wouldn’t it be?
It is so beautifully simple. Some of the best and most elegant engineering I have ever seen.
With 4 bytes of information (likely less than the amount of information required to encode two letters of your name) you can get from any computer in the world to any computer in the world and back again.
Oh, this is a bit oversimplified. There is certainly more information involved in the total train of the data movement, but as far as your computer is concerned only 4 bytes matter. How simple can you get? The fractal complexity that grows from this seed is amazing.
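To make the 4-byte point concrete, here is a small Python sketch (the address is just an example from the TEST-NET-1 documentation range) showing that an IPv4 address really is nothing more than 4 bytes on the wire:

```python
import socket
import struct

addr = "192.0.2.10"  # example address from the TEST-NET-1 documentation range
packed = socket.inet_aton(addr)  # the 4 bytes that actually route the packet
assert len(packed) == 4

# The same 4 bytes viewed as one 32-bit integer, network byte order.
as_int = struct.unpack("!I", packed)[0]
print(packed.hex())  # c000020a
print(as_int)        # 3221225994
```

Those 4 bytes are the entire addressing seed; everything else in the stack (routing tables, ARP, DNS) exists to move packets between pairs of them.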
The consequences of this are what make all of the other security concerns significant. If a PLC or MES is connected to an IP network (even indirectly) then anyone in the world that knows how can access them (though not necessarily easily). With controllers and MES’s the way they are currently designed that means that potentially anyone in the world can operate them. That means that anyone in the world can potentially operate the equipment they are connected to.
Everything else flows from this.
So what are the control mechanisms for layer 3?
VLANs
For the most part, a VLAN's purpose at layer 2 is to logically divide and possibly isolate separate information conduits. The significance at layer three is that it is very easy to route around a VLAN as a divider. This can be done in several ways. The most common is simply using a router, but dual-homed and multi-homed systems are also a threat. Basically what this means is that the control aspects gained using VLANs at layer 2 are useless if there is open routing of any type between the VLANs. Many times I have been told "oh, don't worry, it is on its own VLAN". The engineer thinks that somehow that provides isolation. It doesn't. The point is that a protection that can be quite effective when viewed exclusively from the perspective of its own layer can be easily rendered useless at a higher or lower layer if it is not coupled with additional controls.
Subnetting and Subnet Design
By themselves subnets provide very little control. Done properly they can provide slight advantages to other controls. More importantly, if done improperly, they can actually make it impossible to secure a system by drastically reducing the options of control available.
PCNs should be on their own subnet. There is no technical reason for a PCN to co-reside on a subnet used for other purposes. They often do because it is difficult to get a new network set up specifically for use as a PCN, and there is a cost associated with separating them, but in my opinion the small additional cost and amount of work is trivial compared to how much not separating them increases the threat environment. This is true even for non-significant PCNs.
This one might be a bit contentious, but I am a fan of using private address spaces for PCNs. It provides some control in that it limits the potential external accessibility (ok, not much, but even a little can help), it helps people keep the networks separate in their minds, it doesn't significantly impact connectivity, and it allows some obfuscation of the environment, at least from certain perspectives. The only real drawback is that to access it remotely NAT might be necessary (of course I kinda see this as a plus).
Keep the subnets relatively small while allowing for growth. There is absolutely no reason I can think of for having a 248 or 240 mask. If the PCN is going to be that large, it wouldn't hurt to logically divide it anyway. Increased division can also help from a redundancy and reliability standpoint by facilitating the use of routing protocols for redundant paths vs. spanning tree. Use spanning tree only for close redundancies, one or two hops at most. (In my opinion, not even then. I am really not a fan of spanning tree; I see it as an attempt to inject layer 3 functions into an inherently layer 2 protocol suite, and its only valid function is stopping loops, not providing redundancy, in my mind - sorry, networking religious quirk of mine.) Use routing for anything more significant.
If you have a large enough site to require multiple subnets and you are using private addresses (or are lucky enough to have a huge public range and choose to ignore my advice to use private ranges anyway), choose subnet breakdowns that allow for easy masking for expansions or acquisitions. (Net ranges at 16, 32 or even 64 on a 10.) This is good advice for normal networking as well. I don't know how many organizations I have seen paint themselves into a box with 10.1, 10.2, 10.3 schemes that prevented easy logical aggregation using the octets themselves without sucking up huge ranges.
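The arithmetic behind those boundaries is easy to check with Python's ipaddress module. The sketch below is just an illustration of carving 10/8 at boundaries of 16 on the second octet (i.e. /12 blocks); the site and PCN ranges are hypothetical.

```python
import ipaddress

# Carve the 10/8 private space into /12 blocks: boundaries land on
# 10.0, 10.16, 10.32, ... which summarize cleanly later.
blocks = list(ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=12))
print(blocks[0], blocks[1], blocks[2])  # 10.0.0.0/12 10.16.0.0/12 10.32.0.0/12

# A site PCN carved out of the second block still summarizes to one route
# for the whole region, instead of needing its own routing entry.
pcn = ipaddress.ip_network("10.16.4.0/24")
print(pcn.subnet_of(blocks[1]))  # True
```

Schemes that allocate 10.1, 10.2, 10.3 to unrelated regions lose this property: no mask shorter than /8 covers one region without also sweeping in the others.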
With one exception (the Gulf of Mexico’s Deepwater Rigs) almost all PCN’s I have seen have been small enough that they are end subnets on any routing network. My only real comments on this one are why route it if you don’t need to and if you do route contain the gateways and paths to something you (or at least your organization) have control of.
MPLS hasn't caused any significant problems that I have seen yet, but it can be compromised from the provider side. This compromise is not limited to watching traffic. A friend of mine and I successfully did an injection attack by replacing labels in line using a Perl script. We convinced "customer" network Alice that we were an address on "customer" network Bob and pinged addresses in Alice. This was in a lab environment, so it is easier said than done, but it is possible. The main reason I think this is significant is that in some nations access to the nodes of the provider network might not be as controlled as in others. Of course the same risk holds true for Frame Relay and ATM, but the pool of potential hostiles knowledgeable enough to pull it off for those two is a lot smaller. I also trust the carrier networks less because I know that many of the MPLS networks are growths from the older and uncontrolled MIP days. Frame Relay and ATM networks were never used as direct IP ISPs (though they did carry them at a different layer). Plus MPLS is growing like a weed because it saves the carriers money and they can pass a bit of that on to the customers.
Anyway you’ve been warned.
Enough writing for now. I'll do ACLs, Firewalls, and NIPS/NIDS Thursday or Friday.
This was also a topic of discussion at my Monday night dinner. One of the concerns for me is that as complexity is added the likelihood of unintentional failure increases.
It becomes a balance between the risk due to adding complexity and the risk of impact from either nefarious or mistaken connections.
I tend to think that we need to pursue these types of solutions now for the systems that need very tight controls and for a future environment that might be significantly more hostile. We should, however, be careful of how we deploy them.
If you look at my Ideal PCN post from a few months ago I touch on this.
Another quick comment: The Crypto isn't what matters here it is the control over access that the crypto provides that could add value.
Don't Google it until you give up. Don't blatantly post it in the comments. If you know it, feel free to provide hints in the comments.
"The art of statesmanship is to foresee the inevitable and to expedite its occurrence."
First hint - Napoleonic
23 January, 2007
One lame excuse and one good one but only from my perspective.
The lame excuse is that I had little or no time this weekend to stage any posts so the entire week is likely to be sparse.
The good one (from my perspective anyway) is that I was lucky enough last night to have dinner with two luminaries of the information security world. Although I am into name dropping when it is appropriate, in this case I will hold back (well, a bit).
If you get the chance to have dinner with the CISO of one of the largest companies in the world and one of the founding members of several security firms that either grew on their own to be first rate or successfully got purchased at a good profit for all involved, you don't turn it down.
That goes double if they choose to pick up the bill.
Boston Clam Chowder, Mussels and Scallops, wrapped up with Crème Brûlée.
I passed on the Wine because I had a snow filled drive back home.
Conversation ranged from the Hitchhiker's Guide to the reproductive idiosyncrasies of bees, all in one unworldly way linking back to info security.
Geeky but fun.
With the rush I didn't even realize that I somehow double posted yesterday's link to Alan. I'll try to pick it up a bit over the next few days and should have the layer-3 networking post done either tomorrow or Thursday.
22 January, 2007
You would think they would know their audience better. I am handcuffed on browser choice at work, but at home I block pop-ups.
20 January, 2007
I traced down all of the links and it is quite interesting. The most interesting piece for me was the last section with the questions about how to access the energy produced.
One thing that always tickles me about how the MSM usually leads a story about fusion is that they describe it as a safe "waste free" type of nuclear power.
With the Tokamak designs, they rely on neutron heating of a water (or other medium) tank as the primary external energy transfer mechanism. In order to get enough energy to be efficient using this method you would have to have one heck of a massive neutron flux. Neutron fluxes create active isotopes, so there will be large amounts of radioactive material (RAM) created. Of course this can all be contained in a similar way that RAM from fission reactors is. There is an advantage over fission reactors in that, since transuranic elements are not used, the really long-lived RAM will be very small to nonexistent, but Tokamaks will create a lot of RAM, including every nuke's favorite isotope, Co-60.
Energy capture from boron-proton fusion would have to involve heat collection from the collisions and scatters of the three resulting alphas. The biggest drawback there is that there is no easy mechanism to get them out of the reaction area. Neutrons literally walk right through walls, but the alphas won't go far. The design would probably have to have a high enough operating temperature range at certain locations for standard heat transfer mechanisms to be efficient.
This quote is spot on:
"The fusion is quite real, unlike the cold-fusion fiasco. What seems like the biggest problems are energy break even and durability of the equipment. The conventional fusion reactor has achieved energy break even already, the next step for it is economic break even."
This doesn't seem to be junk science but still wouldn't be easy. In any case full development of it or a similar fusion methodology using different isotopes is certainly worth the effort. I'm not sure overall explorations should be limited to this combination either.
19 January, 2007
Fuzzing was a very popular post but I can tell from some of my emails that there are a lot of people that really, really don't get it.
They seem to think that fuzzing is some sort of new hacking or pen testing method and that you can use it to get into a remote system that you know little or nothing about.
Nothing could be further from the truth.
First of all, fuzzing has been out there for quite some time and is not new. Secondly, it can help you find a weakness, but only if you already have visibility of what is occurring.
Wiki has a pretty good description of what it really is.
Basically, in order to properly fuzz you need to have total access to the target system and application and the ability to do verbose logging (or at least watch the process's failures). All it is is jamming random (well, more often targeted random) garbage at an input interface to the application. That input interface can be a table in a DB, an entry field in a GUI form, a web form, an IPC mechanism, or a TCP or UDP port.
What the fuzzing tools do is make it easy to get to the point where you can most effectively spew the garbage, sometimes help you choose what kind of stink you want that garbage to have, and finally watch what happens to the systems and apps when you do.
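A minimal Python sketch of the idea (the target program is hypothetical; any local program that reads stdin works): generate a blob of random garbage, feed it to the input interface, and watch how the process fails.

```python
import random
import subprocess

def fuzz_once(target_cmd, max_len=256, seed=None):
    """Feed one blob of random garbage to a local program's stdin
    and report what came back. Note the whole point: you need local
    access to the target to see and make sense of the failures."""
    rng = random.Random(seed)
    garbage = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
    proc = subprocess.run(target_cmd, input=garbage,
                          capture_output=True, timeout=5)
    return garbage, proc.returncode

# Hypothetical harness: hammer a local parser and keep the inputs that kill it.
# for i in range(10000):
#     data, rc = fuzz_once(["./my_parser"], seed=i)
#     if rc < 0:  # negative return code = killed by a signal, worth a close look
#         print(f"crash (signal {-rc}): {data!r}")
```

Seeding each run makes crashes reproducible, which matters far more than raw volume: a crash you can't replay is a crash you can't analyze.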
I'll repeat. Fuzzing will not help you (directly at least) break into a system you do not already have access to. With a few exceptions the best you can do with it is cause a fault and even then it is often only likely if you already have a pretty good idea of how to make it happen.
If you know what you are doing, PHP and/or Perl combined with detailed protocol and application interface documentation are the best fuzzing tools out there. Near unlimited versatility is the biggest reason I say this. The tools mentioned in the Computer Defense post are all great at getting you to the point of the data entry and even helping with the random spew, but ultimately you have to be able to analyze the failures (if any) that occur to get any value out of it.
If you are trying to find a completely new less than zero day they can help some but even then it is kind of like the infinite monkeys meme (certainly some will write great books) unless you already have a pretty good idea of what you are looking for.
Some of the tools can also be useful in manipulating systems in other ways but that really isn't fuzzing.
18 January, 2007
"Shortly after the matter of cloth weaving has been disposed of, the button makers guild raises a cry of outrage; the tailors are beginning to make buttons out of cloth, an unheard-of thing. The government, indignant that an innovation should threaten a settled industry, imposes a fine on the cloth-button makers. But the wardens of the button guild are not yet satisfied. They demand the right to search people's homes and wardrobes and fine and even arrest them on the streets if they are seen wearing these subversive goods."
Requiring permission to innovate? Feeling entitled to search others' property? Getting the power to act like law enforcement in order to fine or arrest those who are taking part in activities that challenge your business model? Don't these all sound quite familiar?
I really don't understand why they don't realize the damage they are doing to themselves and to their industry. I suppose it is the frog in a pot being brought to a slow boil.
This would make Control and Security of ACS far, far more important than it currently is.
I am not exaggerating. If it is successful everything changes over the next 15 years.
Anyone who has read my earlier posts will know that I am an on again off again enthusiast then skeptic for most Singularity type topics.
I usually try to be more realistic.
For those who are not into this topic, the "Singularity" (tech, as opposed to astronomical or physics) could probably be summed up as what happens when the exponential trends of Moore's Law merge with nano engineering and biotechnology.
The basic concept is that as these exponential trends continue they will reach a point where the expansion and changes occur so quickly that they are beyond the means of the human mind to comprehend.
I don't know whether current trends will continue uninterrupted or will instead plateau but even if they only continue for a short time there are some very significant changes in the works.
In the realm of info security this could be very important. If microscopic automated machines (bionanomachines might be a better term) run on software and use soon to be discovered communication mechanisms then control system viruses and hackers take on a new significance.
The meme structures, governance, and controls that are used will have to be hybrid solutions, just like the systems to be protected. Just as a hybrid approach is necessary to tackle the difficulties emerging in the automated control world as it further integrates with the Internet technological generation, these control systems will have to include both information security and control-system-like safety and security. They (and we) will also have to adopt biological controls like immunology and other mechanisms to ensure not only their operability but also their safety.
I know this is a flight of fancy for most of the people reading this, but the reality is that even if only a small fraction of what is being (realistically) predicted comes true, there will be phenomenal changes to the human race. Changes just as significant as (if not more significant than) those the technological revolution of the last century brought, but happening over the next 10 to 20 years.
Pretty funny from the land down under
And if you don't get what I am saying please stop reading my blog.
No really, don't come back.
Weird, I wrote this two days ago and posted it this morning before hitting newsgator where I saw this by Alex and followed it to this Dilbert. I am officially freaked out.
17 January, 2007
In the comments on the first half of the post, Dale from Digitalbond mentioned DNP3 as a layer two protocol implemented over Ethernet and correctly pointed out that Modbus IP is an application layer implementation of one of the communication protocols that ran on the older RS-232 links. Ron seconded this. There are a lot of other similar instances by many control vendors. They basically packetize simple direct connection communications, often (always, as far as I can think) without any authentication. I can think of ones from Rockwell/ABB, Siemens and Honeywell off the top of my head. They were proprietary layer 2 communication protocols, and to enable their easy use over IP networks a simple (usually very simple) packet-based communication string was set up. Usually a bunch of checksums and CRCs are used to try to deal with the deviance from a deterministic network.
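To see how thin these wrappers are, here is a sketch in Python of a Modbus/TCP "write single coil" request, built from the public Modbus spec: the old serial PDU gets a 7-byte MBAP header bolted on, and nowhere in the frame is there room for a credential. The transaction, unit, and coil values below are arbitrary examples.

```python
import struct

def modbus_write_single_coil(transaction_id, unit_id, coil_addr, on):
    """Build a Modbus/TCP 'write single coil' request (function code 5).
    MBAP header: transaction id, protocol id (always 0), length, unit id.
    Note what is missing: any authentication field whatsoever."""
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)          # function, coil, on/off
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_single_coil(1, 1, 0x0010, True)
print(frame.hex())  # 00010000000601050010ff00 -- 12 bytes, end to end
```

Twelve bytes to command a physical output; anyone who can get a TCP session to port 502 can send them, which is why the network separation discussed above carries the whole security load.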
When layer two protocols run on layer two point-to-point connections there is rarely much of an issue. Security can be handled as a physical access problem and the realistic threat pool is vanishingly small. Not too many people are willing to separate your wires from thousands of others, climb to the top of a pole, or dig into a ditch to tap into a single link to a single RTU or PLC. Even if they were, it wouldn't net much.
The real risks come from two implementation patterns: wireless deployments, and efforts to run layer two protocols over layer 3+ designs and/or integrate them with multipoint layer two mechanisms.
Now that I think about it wireless could probably be considered just another example of the last point.
A quick comment on DNP3 over IP and Ethernet from a networking standpoint (as opposed to a security standpoint, though this certainly fits with reliability and therefore availability). DNP3 uses a ton of CRCs, so it is pretty chatty from a collision domain standpoint. For smaller implementations this probably won't show up, but for larger sites and for networks with multiple uses you will have a lot of collision storms if you either have older networking equipment (hubs) that isn't switched or have a lot of nodes converging at a single point.
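To give a sense of where all those checksums come from: DNP3's link layer puts a CRC after the frame header and after every 16-octet block of user data, using the CRC-16/DNP polynomial. A minimal sketch of that checksum (this is the published CRC-16/DNP parameter set, not something pulled from a live capture; the function name is mine):

```python
def crc16_dnp(data: bytes) -> int:
    """CRC-16/DNP: reflected polynomial 0x3D65 (0xA6BC bit-reversed),
    init 0x0000, final XOR 0xFFFF."""
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA6BC if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

# A DNP3 frame repeats this after the 10-byte header and after every
# 16-octet data block -- hence the per-frame overhead I'm calling chatty.
```

The per-block CRCs are great for catching serial line noise, but over switched Ethernet they are mostly redundant with the Ethernet FCS.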
This symptom leads to one of the most common security mistakes I see made regarding a misunderstanding of layer 2 and 3 overlap: the "it's OK, you can't sniff it because it is switched" response.
First of all, in most cases I couldn't care less if you can sniff most SCADA traffic (the whole AIC vs. CIA conversation). I do, however, care if you can interrupt traffic or, worse yet, insert invalid traffic (intentionally or not). On Ethernet it is a trivial exercise to do this.
So far I have been spending most of my time talking about wired communications, but wireless has been around for a long, long time in the SCADA world. When used exclusively for telemetry it is mostly harmless. The one thing to be very careful of in a telemetry monitoring mode is that open loop decisions made using suspect data can initiate cascading failures. Decisions made remote from a site based on old or inaccurate data can easily lead to a chain of improper system and people responses.
My biggest concern with the wireless deployments and equipment I have seen recently is that they don't seem to be learning from the mistakes in the IT world. 802.11 equipment is prevalent and it is often used with default settings. There is a huge pool (many thousands) of people and devices specifically looking for openings in 802.11 networks. Even spread spectrum equipment is often deployed with default factory settings. This results in being able to connect to the back end networks without authentication simply by having the right equipment. Admittedly few people have this equipment, but it isn't difficult to get and is sometimes relatively cheap. Since the back-end connection for much of this equipment is an IP network, it is often trivial to get onto the PCN (sometimes from a great distance away).
In summary SCADA controls for layer 2
For RS-232 or 485 the only real protection mechanism is physical line security. There is an inherent risk mitigation for RS-232 in that it is only point to point. Even if you can easily tap (and interfere with) one of the connections, realistically it is very difficult to affect the overall operation of the system because there are usually multiple nodes that provide correlating information and control. Unless all or most of those nodes are interfered with, there is usually little risk of significant impact.
A single point tap into an Ethernet and IP deployment provides access and control functionality to all nodes that are not specifically isolated on that network. This greatly increases the risk. Controls for Ethernet include MAC filters, NAC (not quite ready yet but emerging), VLANs, port disabling/control, node level segmentation and dynamic monitoring and response.
Similar to Ethernet, wireless implementations pose the potential risk of access to multiple nodes from a single access point. It is worse than Ethernet in that physical proximity is not essential for the compromise to take place. The easiest control for wireless is simply not to use it unless necessary. Unfortunately it is necessary (actually essential) in many, many instances. If possible, one of the most effective controls is to limit wireless connections to a point to point model where it is not possible for any of the nodes to access the root communication network. Only the historian or system they need to report to should be accessible. If aggregation points are necessary, use some means of authentication coupled with encryption. Avoid using factory defaults unless those defaults include strong node authentication. For 802.11, controls include WEP (for encryption [it makes it just slightly harder to connect and helps protect at other layers]), EAP (and variants LEAP and PEAP), WPA and MAC filters.
Let me be clear here, I am not saying to not use these technologies. There is an enormous amount of value in using them and in many cases security is actually being improved when they are properly implemented. I am saying to use the controls that are appropriate for the level of safety or risk associated with the system the controls are on.
Stephenson is an awesome writer
one of my favorite quotes
"To condense fact from the vapor of nuance" which is in Snowcrash.
I hope Clooney doesn't mess it up.
16 January, 2007
The Diet For The New Year
The Purina Diet
I was in Wal-Mart buying a large bag of Purina for Lola and was in line to check out. A woman behind me asked if I had a dog........ Duh!

I was feeling a bit crabby so on impulse, I told her no, I was starting The Purina Diet again, although I probably shouldn't because I'd ended up in the hospital last time, but that I'd lost 50 pounds before I awakened in an intensive care unit with tubes coming out of most of my orifices and IVs in both arms. Her eyes about bugged out of her head. I went on and on with the bogus diet story and she was totally buying it. I told her that it was an easy, inexpensive diet and that the way it works is to load your pockets or purse with Purina nuggets and simply eat one or two every time you feel hungry. The package said the food is nutritionally complete so I was going to try it again.

I have to mention here that practically everyone in the line was by now enthralled with my story, particularly a tall guy behind her.

Horrified, she asked if something in the dog food had poisoned me and was that why I ended up in the hospital. I said no..... I'd been sitting in the street licking my butt when a car hit me.

I thought the tall guy was going to have to be carried out the door.
I would describe Achilles as a fuzzer for SCADA and DCS systems.
There was an effort (well, a thought really) to build Achilles testing into SP99 that Jay White at Chevron and a couple of other oil industry guys were thinking of chasing shortly after the first SANS SCADA conference. I am not sure where it went. If I remember right, Michael at INL and Ray at Sandia were also in that conversation briefly.
A standardized way to approach the process would be good for all ACS using industries.
This post at Computerdefense covers quite a few fuzzers not related specifically to SCADA.
More Fuzzing Detail here.
If you get a chance please visit the home page, and mention it on lists or your own blog if you like it. This is a relatively new blog and I am trying to spread the word.
15 January, 2007
OK - Second Layer. In OSI it is the Data Link layer, with collision detection, collision avoidance, Ethernet, Token Ring, TDM, and all of the others. In a nutshell it is how systems talk to each other on a point to point basis. When you are talking Ethernet and switching (especially spanning tree) you get overlap into layer 3.
There are a number of areas where it is significant from an information security standpoint for SCADA systems. In the last 10 years the conversation has become dominated by the Ethernet issues but there are other significant issues occurring as well particularly in the wireless realm.
RS-232 (now EIA-232) was the prominent linking mechanism for quite some time (defined in 1969). A PLC can play the part of either DTE or DCE depending on its function in the design. There are some huge advantages to RS-232. It supports deterministic timing, meaning that actions and responses can be watched in real time and reactions can be based on ladder logic layouts without much concern about a "lost" packet. It supports a sufficiently high data transfer rate for most automation processes and it has been well tested and used. RS-232 is falling somewhat out of favor as a connection mechanism in the automation world and is largely being replaced by Ethernet for local connections. (Boy, that sentence is going to generate some hate mail.) Although IP is really at the next layer, it is part of this shift, and in the rest of the networking world this shift happened over a decade ago. If you look at my older posts this syncs with my stand that the automation world lags the rest of the information systems cycle by two to three generations and 8 to 10 years.
There are some substantial security implications of this shift to Ethernet. First of all, the shift has just started. Less than 20% of PCNs are Ethernet, but most of them (say 80 to 90%) have direct control connections to the Ethernet network via various aggregation tools/methods such as RSLinx. Ethernet, while very reliable if properly deployed, is definitely not deterministic. Multiple nodes exist on the same structure and they work on a modified collision detect scheme. If one node is talking, the others wait random periods of time to start. Switches mitigate a lot of this by separating the collision domains, but when a destination node is receiving traffic from multiple sources there are still lost packets. This is largely overshadowed by significantly greater data transfer rates.
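That "wait random periods of time" behavior is classic CSMA/CD truncated binary exponential backoff: after the nth collision a node waits a random number of slot times in [0, 2^min(n,10) - 1], and gives up after 16 attempts. A toy sketch of the delay calculation (the slot time shown is the 10 Mb/s Ethernet value; the function names are mine):

```python
import random

SLOT_TIME_US = 51.2  # 10 Mb/s Ethernet slot time, for illustration

def backoff_slots(collision_count: int) -> int:
    """Pick k uniformly from [0, 2^min(n,10) - 1] after the nth collision."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    exponent = min(collision_count, 10)  # window stops growing at 1023 slots
    return random.randrange(2 ** exponent)

def backoff_delay_us(collision_count: int) -> float:
    """Delay in microseconds before the next transmit attempt."""
    return backoff_slots(collision_count) * SLOT_TIME_US
```

The random, unbounded-in-practice wait is exactly why Ethernet cannot promise the deterministic timing that ladder logic on a serial link takes for granted.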
There are some very specific weaknesses of Ethernet that I am concerned about in the PCN world. The most prominent is ARP spoofing. Without getting into the details (I'll save that for the follow-on PDFs I am starting to write), ARP spoofing involves taking advantage of the way Ethernet makes connections to allow one node to "pretend" it is another node. Although I have never personally seen, or even heard of, a case of ARP spoofing on a PCN, the entire architecture would be very vulnerable to it. I think the biggest reason it hasn't emerged yet is that there is no real need for it at this point. If there is no authentication to a Modbus IP node anyway, why bother pretending you are from somewhere else? As ACLs and inline firewalls increase in prevalence I think the frequency of ARP attacks will increase. This could have a very significant impact on devices that are so fragile that they croak when a SYN scan hits them.
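You don't need a vendor product to watch for this. The simplest detector just remembers which MAC answered for each IP and alarms when the binding flips. A minimal sketch of that idea (the feed of observations, e.g. periodic ARP table reads or a sniffer, is left out; class and method names are mine):

```python
class ArpWatcher:
    """Flags IP-to-MAC binding changes, the classic signature of ARP spoofing."""

    def __init__(self):
        self.bindings = {}  # ip -> first MAC seen claiming that ip

    def observe(self, ip: str, mac: str) -> bool:
        """Record an observation; return True if it conflicts with a prior binding."""
        known = self.bindings.setdefault(ip, mac)
        return known != mac

w = ArpWatcher()
w.observe("10.0.0.5", "00:11:22:33:44:55")          # a PLC announces itself
alert = w.observe("10.0.0.5", "de:ad:be:ef:00:01")  # another node claims its IP
```

Legitimate events (a replaced NIC, failover) also flip bindings, so in practice this is an alert to investigate rather than an automatic block.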
Controls for the Data Link layer are pretty simple. For Ethernet, MAC filters (recently in the form of NAC) and switch configuration shutdown of ports (which overlaps with physical security) serve as a first layer.
NAC is emerging but still needs some development. What it really comes down to for NAC is that a device needs to talk to an end point to be authenticated in any way (let alone a fancy key exchange followed by certificate verification). Since it needs to talk it has to be given the opportunity to connect to the network. What this eventually evolves into is a means of quarantining a device in an “unauthenticated” VLAN until it is verified by some means. This inserts multiple points of opportunity to overcome the defenses. Any time the layers work against security instead of for it you can almost guarantee that someone will find a hole.
The NAC schemes that seem most likely to succeed involve identification of the MAC as an accepted MAC, with authentication and verification occurring in a quarantine VLAN. A lot of the schemes use DHCP because it already has a means of differentiating based on MAC address, but this has the weakness of not covering static addresses. All of these NAC methods require upgrades or replacement of existing hardware for most implementations. Other NAC schemes involve searching for the bad guys and using some other mechanism to expel them from the network.
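Stripped of the vendor plumbing, the quarantine-VLAN flow reduces to one decision: every unknown or unverified MAC lands in the quarantine VLAN, and only after verification does the port get reassigned. A toy model of that decision (the VLAN numbers and allow-list are invented stand-ins for a real NAC database):

```python
QUARANTINE_VLAN = 999  # invented VLAN IDs, for illustration only
PRODUCTION_VLAN = 10

APPROVED_MACS = {"00:11:22:33:44:55"}  # stand-in for the NAC backend

def assign_vlan(mac: str, verified: bool) -> int:
    """Unknown or unverified devices stay quarantined; the rest join production."""
    if mac in APPROVED_MACS and verified:
        return PRODUCTION_VLAN
    return QUARANTINE_VLAN
```

Note the inherent weakness the post describes: the MAC itself is trivially spoofable, so the `verified` step (whatever performs it) is carrying all the real security weight.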
VLANs are the next major control associated with layer 2 in the Ethernet environment. Basically they are a means of segmenting traffic into separate "networks" on the same devices. They can be set up using different mechanisms as the differentiators for which traffic belongs to which VLAN. The most common I have seen at sites is simple port assignment. With this mechanism ports 1-6 are assigned to VLAN Bob, ports 7-12 to Alice and so on. Since each VLAN is a separate logical network they typically "cannot" talk to each other without a layer 3 connection. VLANs are often associated with a specific IP subnet (sorry, layer three + again here).
The last part is the catch from a security perspective. Network administrators and engineers almost always assign a gateway for each VLAN that has no filter or ACL to prevent Bob from talking to Alice, or worse yet Eve (at a completely different site) from mugging Bob. Just because they are on different subnets does not mean they cannot talk to or interfere with each other. The problems don't occur at layer 2, but when designing, operating or auditing you shouldn't think that being on a different VLAN is by itself a protection. A further complication with VLANs set up via port assignment is that there is often a VLAN used for management or troubleshooting that is assigned the entire port range (or at least overlaps other ranges). Any device that bridges these also serves as an entry point. It also serves to complicate the design.
VLANs can also be based on source MAC addresses, QoS classifications, IP addresses and other means, but I haven't seen much of the more detailed assignment mechanisms in the ACS world. MAC address differentiators are sometimes used but have most of the same pitfalls as port based VLANing. Some realistic NAC implementations try to take advantage of MAC based VLANing to provide the quarantining I mentioned above.
Key point here. Just because it is on a different VLAN does not mean it is segregated. Try something simple. Ping it from one device to the other.
Already too much writing for the weekend. I’ll continue later this week.
First of all slow down!!!
I got cut off this morning twice by morons driving 80+. I did get the smug satisfaction of seeing one of them about 15 feet into the woods with a cop standing by (clearly not hurt) about 15 minutes later.
On the other hand if you have to slow down to 15 MPH on the freeway perhaps you should have stayed at home.
If there is a blinking sign every 5 miles saying 45 MPH perhaps it is good to stay within 10MPH of 45? huh?
12 January, 2007
Coalescence of Beta Irradiated Carbon Nanotubes (sorry, you have to pay for the full article, but the summary is enough to get the mind moving). Or any number of other similar articles I have seen.
How about using electron streams from multiple angles so that the incident beta radiation level is only high enough for covalent bonding at specific controlled points (or lines or planes [planes would be hard])?
Stack the sheets and bond them?
Join Sheets edges?
Encase determined impurities?
Alter electrical characteristics in specific patterns?
How uniform are the sheets?
How much does tube damage degrade the van der Waals forces?
Can that degradation be overcome by increasing interbonding due to covalent interlinks?
Space Ribbon here we come.
A few of them have chased it down via Google.
It is Sanskrit for the Lotus Sutra.
It is a pretty interesting Sutra. My interest in it is that there are a number of parallels between it and items in Christianity.
The meme typing and possible cross pollination of concepts has me intrigued.
- It emerged prominently about the same time as the shift to the common era. (somewhere around 1 to 100 AD).
- It includes one of the first real examples of Buddha being described as divine. (something more than an arahat)
- Buddha is described as having chosen to come to earth and accept the suffering to teach others the path to arahat.
- It includes the concept of sacrifice of the divine to save (teach really) the non divine.
- There are several "parables" that mirror similar stories prominent in the New Testament, most strikingly the prodigal son parable.
- It constantly alludes to a teaching that transcends the other teachings.
There are a number of other parallels. It is also interesting in that a number of its thought exercises are very similar to the mental gymnastics involved in quantum physics (not that that is unusual for Buddhist writings).
Obviously the timing and the similarities in content and context are striking.
Is it an example of cross pollination of meme structures and types? If so in which direction or both?
Could it be concurrent emergence of a meme structure due to either ubiquitous environmental or developmental factors?
Can it and the histories associated with the two religious writings be used to examine how memes interact across divergent cultures and geographies?
Many many interesting questions emerge for me from these writings.
I had an attempt to steal my identity last month. Probably a bad idea for someone to try to steal the ID of an Info Security guy. In any case the card company (with help from us) chased down the perp.
The pattern matched what someone else in the security blogosphere posted previously (sorry, I searched but couldn't find the actual post; if you email it to me I will link it) in that the theft was from my old mailbox, not from any online source.
I was pleasantly surprised with the credit card company's response. They clearly flagged the transaction early, they contacted us and verified information, and they provided us with details on what occurred and what we should expect to see. Although they were thorough in ensuring that it wasn't me or my wife who took the money, they were respectful and quick about it and didn't require (I should probably say try to require) me to do any significant work or divulge any information of my own.
So some companies (in this case Chase) clearly do take it as their responsibility.
11 January, 2007
250 Million USD for a soccer player in the US. Unbelievable.
I have to be honest though I don't know how they will recoup this. Not with the attendance I have seen for MLS. Perhaps he will change that.
Should be entertaining anyway.
Control engineers need to both be aware of the need to patch and update and have an understanding of when it is relatively urgent. There is no real reason not to patch your systems. To go a step further, control vendors need to start developing an organized and controlled mechanism for updating and patching the historians, MES and even PLCs. As cycle time shortens, vendors that have already developed this capability will come out ahead.
Obviously all of this has to be done with proper change management.
On the flip side it is absolutely essential that companies like Microsoft and others realize that as they expand more and more into the automated control world they need to have a greater sensitivity to allowing the customer to control when, how and where any changes of any type occur on systems. If they cannot achieve this with their standard deployments then they need to develop deployments that are able to do it.
I'll go one more step further. If you are an engineer and a new system that a vendor is pushing you towards runs an application or OS that performs updates and changes without your full control, you have an obligation to NOT use that system. This is exactly what D was describing in his post on Vista and DRM.
If Vista performs updates and takes actions without allowing the administrator to control those actions in terms of when, how much and even if they occur then Vista should never be used in any closed or open loop control environments. Period.
With root kits and other driver level attacks becoming more prevalent it is good for MS to protect the drivers and ensure they are not the bad guy, but for process control systems they need to do so in a manner that leaves complete control of the process in the hands of the owner of the system not in the hands of some arbitrary algorithm. I don't believe this is some greedy driver licensing scheme (though I could be wrong).
In the SCADA world the ability of the operator and engineer to fully control the operation of equipment trumps all.
In any case don't read it if you dislike conversations that change the spin of political and religious expectations.
He was born in Pakistan, lived in Saudi Arabia, and has gone to school in both the US and Canada.
I get a fair amount of readers from both Pakistan and Saudi Arabia, If you do choose to read his blog and disagree with it feel free to point out any discrepancies in the comments area and explain where or why you feel he is wrong.
10 January, 2007
09 January, 2007
08 January, 2007
Decent Exploit of ACER machines anyway.
This could be problematic.
I am also not certain about their method of informing ACER of the vuln (or even whether it was approached that way). I doubt ACER meant it the way it can be used.
Any of the items I mentioned might be too extreme or not extreme enough depending on what the relative risk level is for a given system. For one system (on the physical protection level) an electrified fence with concertina wire, an armed guard and a trained guard dog may be appropriate. For another a padlocked plywood door might do.
One of my key points from the post was that certain design factors of the physical system can be considered mitigating factors for information security risks related to the ACS.
Alex at Riskanalys deals very well with mechanisms for determining how significant potential issues can be. Hopefully some time in the future he and I might be able to identify subsections of impact modifiers, threats and controls specific to DCS and SCADA.
07 January, 2007
This might lend additional credence to the argument that this was a mistake in transportation vs. a poisoning attempt.
If he was repeatedly receiving the shipments exposure could have returned multiple times.
One other item on this hypothesis is that polonium has some interesting physical characteristics in that it will "climb" the walls of a container it is in and passes weak seals easily (only in significant quantities).
It does cast some suspicion on the Chechen link though.
Think hyper mercury.
05 January, 2007
As I have said previously I tend to think of Security layers in terms of an expanded OSI model. This might be somewhat simplistic but it does provide an easy structure for a working defense in depth strategy. In many cases it also matches well to the domains, objectives and ISO categories. In areas where it deviates it often fills gaps rather than creating superfluous work.
Strictly speaking layer 1 deals with the standards for physical connections, radio and wireless characteristics, and timing and signaling mechanisms. I am not talking about the actual OSI layer; I am just using it as a conceptual guideline.
Physical Security is one of the fundamental pieces of the information security structure and is essential for proper defense in depth. Physical Security requirements are recognized in ISO 17799 as a category, within CoBiT in multiple control objectives and in ISC2 as a domain. It is often one of the more difficult aspects to deal with. Direct control of Physical Security is often out of the hands of IT or Engineering (typically for good reasons). Wireless mechanisms complicate proper implementation of physical security by bypassing existing mechanisms of control. Finally many Physical Security best practices and needs fall outside of the actual scope of Data Security. All of these are standard complicating factors when dealing with Physical Security.
Within the automated control world, physical security becomes far more complicated in that it also includes aspects of safety. While many of these are issues that properly reside in the responsibility realm of the engineers and operators, it is still essential that the people responsible for managing information security risk understand how they work. Though they are not directly part of the information security realm, proper physical security and physical design parameters can often mitigate much or even all of the risk presented by ties to information systems. There are also some unique challenges to obtaining the typical requirements for physical security of information systems.
Perimeter Security, Controlled access, Manned monitoring and reception, Environmental Controls, Control of access to cables, Public Areas, Secure Disposal methods and Monitoring of support infrastructure fall within this realm in typical Information Security implementations. Within ACS deployments Fail Safes, interlocks, inherent physical characteristics, proper finite element analysis and redundant essential systems (three pumps) greatly reduce risk of issues in critical systems. These should be added to the standard list of physical concerns to understand for information security professionals that deal with SCADA systems. When properly implemented, these design criteria and mechanisms can alleviate many of the concerns that are often cited in information security risk profiles for SCADA or ACS.
Perimeter Security is the establishment of a clearly defined boundary with controls to ensure that only the proper people have access to the equipment and systems within. The typical perimeters are walls, fences, hedges, cages, and separate offices or buildings. To be effective they have to be combined with controlled access and manned monitoring. Wireless systems circumvent perimeter security mechanisms completely and therefore must have a differentiated access control mechanism instead. ACS and SCADA complications to perimeter security mainly deal with scale. Some oil fields span hundreds of square miles, Power Lines are ubiquitous and have many unmanned transformer and switching stations, water systems and pipelines go through towns, cities and neighborhoods and can stretch for thousands of miles. While remote pumping and transformer stations usually have perimeters they are rarely manned. For reasons that have nothing to do with IT security they are usually well monitored in the form of alarm systems and physical access barriers but often the incoming telecommunication systems are accessible outside of this perimeter. A mitigating factor to physical access risk that deviates from a standard IT environment is that many of these systems are so remote that it would be very difficult for someone who is not already "inside" to access them. The North Slope and offshore rigs come to mind. This mitigating factor should be considered but not always relied on.
Controlled access includes locks, gates, key card entries, and reception lobbies. For wireless systems it includes the authentication mechanisms. All of the encryption in the world is useless if you have no means of authenticating access to the root system. This was the entire nature of the misunderstanding of WEP for 802.11 and all the problems that have stemmed from those mistakes. This same gross conceptual error also extends to the spread spectrum systems being deployed currently in many SCADA and PCN environments. Just because I am unable to intercept communications between a base station and a node does not mean that I cannot connect to that base station directly, provided I have the right settings. Without some form of authentication it becomes a function of security by obscurity. All of the devices and networks become accessible (sometimes from up to 100 km away) with one mistake.
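Even a minimal shared-secret challenge-response beats relying on the obscurity of radio settings. A sketch with Python's standard library `hmac` module (key provisioning and the radio transport are omitted; this is an illustration of the concept, not a protocol recommendation, and all names here are mine):

```python
import hashlib
import hmac
import os

# Stand-in for a per-node secret distributed during provisioning
SHARED_KEY = b"provisioned-per-node-secret"

def challenge() -> bytes:
    """Base station sends a fresh random nonce to the node."""
    return os.urandom(16)

def respond(key: bytes, nonce: bytes) -> bytes:
    """Node proves knowledge of the key by MACing the nonce."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    """Base station checks the response in constant time."""
    return hmac.compare_digest(respond(key, nonce), response)
```

The fresh nonce prevents simple replay; a node without the key can connect at the radio layer but never passes `verify`, which is exactly the gate that "right settings" alone should not open.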
Eventually any physical barrier or controlled access mechanism can be bypassed. At this point manned monitoring becomes an essential piece of the physical controls. Typical monitoring mechanisms are direct manning, patrols, cameras, log reviews and equipment monitoring. The last piece is one of the greatest mitigating factors for good ACS security. Almost all operating machinery has an operator somewhere monitoring it or the system attached to it. By properly using/training these individuals a significant reduction of risk can be obtained. The presence of these operators is one of the significant advantages that many SCADA environments have over the typical office environment. In some other post I will discuss Segregation of Duties and how in many cases these operators are one of the most likely risks but for the purposes of enhancing physical security they are one of your best assets.
Interestingly enough, environmental systems are often one of the stealth ACS environments out there that almost every organization is dependent on. HVAC systems are essential for the proper operation of any data center and are more and more likely to be controlled by network accessible interfaces. It is also becoming increasingly common for power distribution panels to have standardized Ethernet accessible PLCs controlling them. Other than the realization that these systems are increasingly likely to be hackable, there is little to differentiate the physical environmental requirements of ACS vs. standard IT systems. Redundant power and proper cooling and heating are all important. One thing for engineers to keep in mind is that many security systems such as firewalls, NIPS and switches are designed for a data center environment. They may not perform well in a shed that reaches 20 below zero. I have seen a firewall implementation mandated by information security have MTBF difficulties for precisely this reason. Note to vendors: if you want to get into the SCADA market, start designing more resilient equipment. A typical Ethernet switch placed 10 feet away from an operating paper machine rarely lasts long.
Control of access to cables can be very problematic in a PCN environment. When a network extends for miles there are any number of points where access can be obtained. Fortunately there is some mitigation in the form of departure from typical Ethernet connections (at least as long as that lasts). Most extended networks require some form of longer range layer two connectivity; I will discuss these somewhat in the layer 2 post. Keeping fiber runs within trenches or other relatively inaccessible paths can help further mitigate risks associated with this control, but for large geographic areas there are definitely challenges. For facilities with defined areas it is worth ensuring that cables that cross public roads or areas are not easily accessible, or are protected at another layer if that is unavoidable. A key problem I have seen with this is RJ-45 outlets to a PCN Ethernet segment without any identification of the network type or any way of controlling who plugs into them. This often occurs when an engineer thinks it is all right to put a PCN connection in a conference room (or office, or even home) that he commonly uses. While not absolutely essential, complete physical separation (including switching infrastructure) of the PCN from all other networks should be considered. If the system is safety essential, critical or "red line" such as ESD systems, then complete physical separation should be considered essential.
For the IT people reading, "fail safe" is the failure mode of specific equipment or systems. As an example, valves fail in three modes on a loss of power: open, shut, or as-is. The engineers who design the system determine which failure mode provides the safest environment for a given system and status. Interlocks ensure that when certain devices or systems are operating in a specific manner, other specific actions cannot happen.
From an information security standpoint an important aspect to consider is the dependence of failure modes and interlocks on programmable controllers. Ideally a fail safe position is a fail safe position and nothing can alter it. It is an inherent part of the system. The same should be true for interlock responses. The problem usually occurs when specific programmable settings are used to enact the fail safe or interlock and those settings can be altered. I have seen some problems with this in some ladder logic deployments (essentially a series of interdependent switch positions). Because controllers are more likely to be remotely configurable, it is more common to see interlock settings and fail safes that can be altered without the knowledge of the operators or engineers. This is one reason that control of physical access to the PCN (and by extension the PLCs) is so important. The flip side of all of this is that if the fail safes, interlocks and other inherent design considerations are done well, it is very difficult for any failure mode to cause significant issues. In a well designed system three or more sequential failures (at least one of which should be a physical property of the system) must occur before safety is compromised.
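The distinction can be sketched in code: a hard-wired fail safe is effectively a constant that downstream logic cannot change, while a "configurable" fail safe is just another writable register. A toy model (valve modes, names, and values are mine, chosen only to illustrate the point):

```python
from enum import Enum

class ValveMode(Enum):
    OPEN = "open"
    SHUT = "shut"
    AS_IS = "as-is"

# Inherent to the physical system: nothing downstream can rewrite this
FAIL_SAFE = ValveMode.SHUT

class Controller:
    def __init__(self):
        # The danger case: the fail safe stored as a writable setting
        self.configured_fail_safe = ValveMode.SHUT

    def on_power_loss(self, hard_wired: bool) -> ValveMode:
        """Hard-wired logic ignores the register; configurable logic trusts it."""
        return FAIL_SAFE if hard_wired else self.configured_fail_safe
```

If an attacker (or a mistake) flips `configured_fail_safe`, only the hard-wired path still lands in the safe state, which is the author's point about keeping at least one failure barrier as a physical property of the system.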
I couldn't tell you how many times I have sat in a room with engineers while an information security professional states that the risks include fires or explosions. The engineers usually just roll their eyes. The fact of the matter is that in a well designed system, even if an operator with complete access to the systems forcibly does things wrong, it is usually very difficult to force a catastrophic failure. Of course I have also seen the reverse happen: if the fail safe is dependent on the proper operation of a PLC and that PLC's configuration becomes suspect, then that fail safe is no longer dependable. When an engineer learns of this the response is often a great deal of concern.
04 January, 2007
IE unsafe for 284 days of last year.
Perhaps that mitigates what I started writing this post on.
What I really wanted to mention was a mild whine: my wife's blog gets hundreds more daily hits than mine, and for some reason her readers have more up-to-date and secure browsers than the ones reading mine.
Ain't statcounter great?
For IE her readers are 5 times as likely to be up to date.
For Firefox they are 3 times as likely.
Netscape, Safari, and Opera the numbers are too small to matter.
I really wonder if this is a difference between home machines and work machines, or some other demographic artifact.
Update your browsers, people. (Or, if it is more appropriate, tell your desktop group to get up to date.)
What are your ideas as to the reason?
I am somewhat more sceptical on the politics, and the science is light, but it is definitely something to worry about. He is right on the money.
There is something wrong with the "assassination" theory. Polonium just isn't a good way to do something like that.
As far as the dirty bomb blow-off goes, he is somewhat right in that death tolls would be very low. On the other hand, for a year or so no one could go within a mile of the site without some risk of pretty nasty polonium poisoning. So even if it isn't a nuke it would be pretty bad and very public.
This is disturbing.
Though as we start messing with genetics more and more perhaps it would be a good idea to know what code sequences are the buffer overflows of life. GPF takes on an entirely different meaning.
Snow Crash dealt with deadly memes. Even more disturbing.
Though I suppose the researchers won't be so keen on it.
The kiddies might get irritated but the real big bad guys won't care too much.
But if we could actually reduce it for a month in the community what would happen?
It might also give lie or proof to the less than zero conversation.
If we did this what would the stats look like the next month and what stats would we care about?
Month to month AV infections?
botnet log information?
Firewall and IPS logs?
Can't say I blame him. I do have the Google adware links on my top and right.
In answer to his question about what they make: my significantly sub-10K page gets almost no income from the ads. I am just hoping that I can get enough to pay for the site (actually the statcounter).
I think I have made about 5 bucks in the last two months.
I don't do some of the fancy stuff I have seen. If it is in a post of mine there is no ad revenue. I have seen some others link to Amazon items with referral codes.
I have also been a bit surprised at what Google has chosen to advertise on my site. Most of the time it is topical, but a few of the ads (especially after my happily married comment last week) have been ... let's just say surprising.
I agree with the need to keep it low key though. Not that I would follow the dancing cowboy anyway.
03 January, 2007
I thought it was pretty funny when they fell into their own trap at the end. It seems they were uncomfortable with the facts about nuclear power (which I read, and which are very accurate and somewhat pedestrian if you know the real science) and felt the need to try to debunk the very site they were plugging at the end.
It is kind of sad when people turn beliefs into pseudo-science and pseudo-religion. Science allows for things to be proven wrong.
I provided a lot of links in my last post on global warming.
I am going to make a statement now that will probably irritate some.
Global warming is a fact.
Now hold on a second. Don't start labeling me as a political operative.
Stating that it is a fact does not mean it is a true fact. It simply means that it is possible to prove it wrong. (it is almost never possible to prove something true)
Facts grow in strength based on surviving attempts to disprove them. There have been many many attempts to disprove global warming. Many of the sub facts have been disproven but many others have stood. The argument is far from decided.
I am not a climatologist or geologist, so I am nowhere near qualified to weigh all of the smaller facts in this.
It is interesting (and sad) that like the nuclear discussion this has reached the dogmatic stage where each side feels as if it must silence the other.
It is irresponsible for politicians to call for an end to the discussion and threaten financial ramifications. It is just as irresponsible for large corporations to obfuscate facts that are contrary to their side of the argument.
Nuclear power has been at this dogmatic stage for a while. Long enough that certain positions are trust cues for inclusion in many political groups.
My advice to politicians, reporters, and pundits: engage in the discussion and make your opinions known, but don't try to use external pressure to change the science, and don't disregard facts (true or not) just because they are counter to your preexisting meme structure.
No judgments just a bunch of links.
Forest Fires Reversal
Typical UN Effectiveness
A letter to Exxon
I found this when reading Volokh.
"Here's what I like about Ebenezer Scrooge: His meager lodgings were dark because darkness is cheap, and barely heated because coal is not free. His dinner was gruel, which he prepared himself. Scrooge paid no man to wait on him.
Scrooge has been called ungenerous. I say that's a bum rap. What could be more generous than keeping your lamps unlit and your plate unfilled, leaving more fuel for others to burn and more food for others to eat? Who is a more benevolent neighbor than the man who employs no servants, freeing them to wait on someone else?
Oh, it might be slightly more complicated than that. Maybe when Scrooge demands less coal for his fire, less coal ends up being mined. But that's fine, too. Instead of digging coal for Scrooge, some would-be miner is now free to perform some other service for himself or someone else."
and best of all
"In this whole world, there is nobody more generous than the miser—the man who could deplete the world's resources but chooses not to. The only difference between miserliness and philanthropy is that the philanthropist serves a favored few while the miser spreads his largess far and wide."
On the other hand there was that whole Tiny Tim thing. Not so great there. Pretty good description of the whole problem, isn't it?
Was Scrooge the first real environmentalist?
02 January, 2007
The Windows Automotive article in the Detroit News brought to mind this old list (or email).
I think this might be an urban legend but it is still funny.
At a recent computer expo (COMDEX), Bill Gates reportedly compared the computer industry with the auto industry and stated, "If GM had kept up with the technology like the computer industry has, we would all be driving $25.00 cars that got 1,000 miles to the gallon."
In response to Bill's comments, General Motors issued a press release stating, "If GM had developed technology like Microsoft, we would all be driving cars with the following characteristics:
- For no reason whatsoever, your car would crash twice a day.
- Every time they painted new lines on the road, you would have to buy a new car.
- Occasionally your car would die on the freeway for no reason. You would have to pull over to the side of the road, close all of the windows, shut off the car, restart it, and reopen the windows before you could continue. For some reason you would simply accept this.
- Occasionally, executing a maneuver such as a left turn would cause your car to shut down and refuse to restart, in which case you would have to reinstall the engine.
- Only one person at a time could use the car unless you bought "CarNT," but then you would have to buy more seats.
- Macintosh would make a car that was powered by the sun, was reliable, five times as fast and twice as easy to drive -- but it would only run on five percent of the roads.
- The oil, water temperature and alternator warning lights would all be replaced by a single "general protection fault" warning light.
- The airbag system would ask, "Are you sure?" before deploying.
- Occasionally, for no reason whatsoever, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key and grabbed hold of the antenna.
- GM would require all car buyers to also purchase a deluxe set of Rand McNally Road maps (now a GM subsidiary), even though they neither need nor want them. Attempting to delete this option would immediately cause the car's performance to diminish by 50 percent or more. Moreover, GM would become a target for investigation by the Justice Department.
- Every time GM introduced a new car, car buyers would have to learn to drive all over again because none of the controls would operate in the same manner as the old car.
- You'd have to press the "start" button to turn the engine off.
OK, enough fun at MS's expense. The reality is that the mechanism of implementation is more important than the OS anyway.
Our little blogging community generally sticks to IT security. If you figure that IT is about 1/10th of the Internet community as a whole, and security about 1% of that, then the market is pretty small. The number of individuals interested in crafts is probably quite a bit larger.
I have quite a few posts cached so I will be picking up the slack over the next few days.
In the meantime, Mike asks: what would be a deal breaker for you?
In my case, three things (each of which has caused me to leave a job in the past):
1. Anything I think would be detrimental to my family.
2. Anything that would require me to lie or compromise my integrity (including legal items).
3. Lack of engagement.