As I have said previously, I tend to think of security layers in terms of an expanded OSI model. This might be somewhat simplistic, but it provides an easy structure for a working defense in depth strategy. In many cases it also maps well to the domains, objectives and ISO categories. In areas where it deviates it often fills gaps rather than creating superfluous work.
Strictly speaking, layer 1 deals with the standards for physical connections, radio and wireless characteristics, and timing and signaling mechanisms. I am not talking about the actual OSI layer here; I am just using it as a conceptual guideline.
Physical Security is one of the fundamental pieces of the information security structure and is essential for proper defense in depth. Physical Security requirements are recognized in ISO 17799 as a category, within CoBiT in multiple control objectives, and in ISC2 as a domain. It is often one of the more difficult aspects to deal with. Direct control of Physical Security is often out of the hands of IT or Engineering (typically for good reasons). Wireless mechanisms complicate proper implementation of physical security by bypassing existing mechanisms of control. Finally, many Physical Security best practices and needs fall outside the actual scope of Data Security. All of these are standard complicating factors when dealing with Physical Security.
Within the Automated Control world, physical security becomes far more complicated in that it also includes aspects of safety. While many of these are issues that properly reside in the responsibility of the engineers and operators, it is still essential that the people responsible for managing information security risk understand how they work. Though they are not directly part of the information security realm, proper physical security and physical design parameters can often mitigate much or even all of the risk presented by ties to information systems. There are also some unique challenges to obtaining the typical requirements for physical security of information systems.
Perimeter Security, Controlled Access, Manned Monitoring and reception, Environmental Controls, Control of Access to Cables, Public Areas, Secure Disposal methods and Monitoring of support infrastructure fall within this realm in typical Information Security implementations. Within ACS deployments, fail safes, interlocks, inherent physical characteristics, proper finite element analysis and redundant essential systems (e.g. three pumps instead of one) greatly reduce the risk of issues in critical systems. These should be added to the standard list of physical concerns for information security professionals who deal with SCADA systems. When properly implemented, these design criteria and mechanisms can alleviate many of the concerns that are often cited in information security risk profiles for SCADA or ACS.
Perimeter Security is the establishment of a clearly defined boundary with controls to ensure that only the proper people have access to the equipment and systems within. The typical perimeters are walls, fences, hedges, cages, and separate offices or buildings. To be effective they have to be combined with controlled access and manned monitoring. Wireless systems circumvent perimeter security mechanisms completely and therefore must have a differentiated access control mechanism instead. ACS and SCADA complications to perimeter security mainly deal with scale. Some oil fields span hundreds of square miles; power lines are ubiquitous and have many unmanned transformer and switching stations; water systems and pipelines run through towns, cities and neighborhoods and can stretch for thousands of miles. While remote pumping and transformer stations usually have perimeters, they are rarely manned. For reasons that have nothing to do with IT security they are usually well monitored, in the form of alarm systems and physical access barriers, but often the incoming telecommunication systems are accessible outside of this perimeter. A mitigating factor to physical access risk that deviates from a standard IT environment is that many of these systems are so remote that it would be very difficult for someone who is not already "inside" to access them. The North Slope and offshore rigs come to mind. This mitigating factor should be considered but not always relied on.
Controlled access includes locks, gates, key card entries and reception lobbies. For wireless systems it includes the authentication mechanisms. All of the encryption in the world is useless if you have no means of authenticating access to the root system. This was the entire nature of the misunderstanding of WEP in 802.11 and all the problems that have stemmed from those mistakes. This same gross conceptual error also extends to the spread spectrum systems currently being deployed in many SCADA and PCN environments. Just because I am unable to intercept communications between a base station and a node does not mean that I cannot connect to that base station directly, provided I have the right settings. Without some form of authentication it becomes a function of security by obscurity. All of the devices and networks become accessible (sometimes from up to 100 km away) with one mistake.
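The distinction matters: matching the radio settings only gets an attacker to the door, and it is authentication that should stop them there. A minimal sketch in Python of what a base station could do instead of trusting anyone with the right settings; the pre-shared per-node key and function names are my own illustration, not part of any particular SCADA product or protocol:

```python
import hashlib
import hmac
import secrets

# Hypothetical secret provisioned to each authorized field node.
# Knowing the radio settings does NOT reveal this key.
PRE_SHARED_KEY = b"per-node secret, distinct from the radio settings"

def issue_challenge() -> bytes:
    """Base station sends a fresh random nonce on every connection attempt."""
    return secrets.token_bytes(16)

def node_response(key: bytes, challenge: bytes) -> bytes:
    """Node proves knowledge of the key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def base_station_accepts(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Accept the node only if the response matches (constant-time compare)."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
# A legitimate node holding the key is accepted...
assert base_station_accepts(PRE_SHARED_KEY, challenge,
                            node_response(PRE_SHARED_KEY, challenge))
# ...while someone who merely guessed the radio settings is not.
assert not base_station_accepts(PRE_SHARED_KEY, challenge,
                                node_response(b"right settings, wrong key", challenge))
```

With something like this in place, the spread spectrum characteristics become what they should be: an obstacle to interception, not the sole gatekeeper for access.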
Eventually any physical barrier or controlled access mechanism can be bypassed. At this point manned monitoring becomes an essential piece of the physical controls. Typical monitoring mechanisms are direct manning, patrols, cameras, log reviews and equipment monitoring. The last piece is one of the greatest mitigating factors for good ACS security. Almost all operating machinery has an operator somewhere monitoring it or the system attached to it. By properly using and training these individuals, a significant reduction in risk can be obtained. The presence of these operators is one of the significant advantages that many SCADA environments have over the typical office environment. In a future post I will discuss Segregation of Duties and how in many cases these operators are one of the most likely risks, but for the purposes of enhancing physical security they are one of your best assets.
Interestingly enough, environmental systems are often one of the stealth ACS environments out there that almost every organization is dependent on. HVAC systems are essential for the proper operation of any data center and are more and more likely to be controlled by network accessible interfaces. It is also becoming increasingly common for power distribution panels to have standardized Ethernet accessible PLCs controlling them. Other than the realization that these systems are increasingly likely to be hacked, there is little to differentiate the physical environmental requirements of ACS vs. standard IT systems. Redundant power and proper cooling and heating are all important. One thing for engineers to keep in mind is that many security systems such as firewalls, NIPS and switches are designed for a data center environment. They may not perform well in a shed that reaches 20 below zero. I have seen a firewall implementation mandated by information security have difficulties with MTBF for precisely this reason. Note to vendors: if you want to get into the SCADA market, start designing more resilient equipment. A typical Ethernet switch placed 10 feet away from an operating paper machine rarely lasts long.
Control of access to cables can be very problematic in a PCN environment. When a network extends for miles there are any number of points where access can be obtained. Fortunately there is some mitigation in the form of departure from typical Ethernet connections (at least as long as that lasts). Most extended networks require some form of longer range layer two connectivity. I will discuss these items somewhat in layer 2. Running fiber within trenches or other relatively inaccessible paths can help further mitigate risks associated with this control, but for large geographic areas there are definitely challenges. For facilities with defined areas it is worth ensuring that cables that cross public roads or areas are not easily accessible, or are protected at another layer if that is unavoidable. A key problem I have seen with this is RJ-45 outlets to a PCN Ethernet segment without any identification of the network type or any way of controlling who plugs into them. This often occurs when an engineer thinks it is acceptable to put a PCN connection in a conference room (or office, or even home) that he commonly uses. While not absolutely essential, complete physical separation (including switching infrastructure) of the PCN from all other networks should be considered. If the system is safety essential, critical or "red line", such as ESD systems, then complete physical separation should be considered essential.
For the IT people reading, a "fail safe" is the failure mode of specific equipment or systems. As an example, valves fail in one of three modes on a loss of power: open, shut, or as-is. The engineers who design the system determine which failure mode provides the safest environment for a given system and status. Interlocks ensure that when certain devices or systems are operating in a specific manner, other specific actions cannot happen.
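For readers who think better in code, the two concepts above can be sketched in a few lines of Python. This is a toy model of my own making (the valve names and the cooling/fuel interlock are invented for illustration), not any real control logic:

```python
from enum import Enum

class FailMode(Enum):
    OPEN = "fail open"      # e.g. a cooling-water valve
    SHUT = "fail shut"      # e.g. a fuel-supply valve
    AS_IS = "fail as-is"    # holds its last position on loss of power

class Valve:
    def __init__(self, name: str, fail_mode: FailMode, position: str = "shut"):
        self.name = name
        self.fail_mode = fail_mode
        self.position = position

    def on_power_loss(self) -> None:
        """Loss of actuation power drives the valve to its designed fail state."""
        if self.fail_mode is FailMode.OPEN:
            self.position = "open"
        elif self.fail_mode is FailMode.SHUT:
            self.position = "shut"
        # FailMode.AS_IS: position is deliberately left unchanged

def interlock_allows_fuel(fuel_cmd: str, cooling_flow_ok: bool) -> bool:
    """Interlock: fuel may only be admitted while cooling flow is confirmed."""
    return fuel_cmd != "open" or cooling_flow_ok

# The engineers chose fail-shut for fuel: a power loss starves the burner.
fuel = Valve("fuel supply", FailMode.SHUT, position="open")
fuel.on_power_loss()
assert fuel.position == "shut"

# The interlock blocks opening fuel whenever cooling flow is lost.
assert not interlock_allows_fuel("open", cooling_flow_ok=False)
assert interlock_allows_fuel("open", cooling_flow_ok=True)
```

The point of the sketch is the design decision, not the code: which `FailMode` each valve gets, and which conditions gate which actions, is determined by the engineers based on what is safest for that system.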
From an information security standpoint, an important aspect to consider is the dependence of failure modes and interlocks on programmable controllers. Ideally, a fail safe position is a fail safe position and nothing can alter it; it is an inherent part of the system. The same should be true for interlock responses. The problem usually occurs when specific programmable settings are used to enact the fail safe or interlock and those settings can be altered. I have seen some problems with this in some ladder logic deployments (essentially a series of interdependent switch positions). Because controllers are more likely to be remotely configurable, it is more common to see interlock settings and fail safes that can be altered without the knowledge of the operators or engineers. This is one reason that control of physical access to the PCN (and by extension the PLCs) is so important. The flip side of all of this is that if the fail safes, interlocks and other inherent design considerations are done well, it is very difficult for any failure mode to cause any significant issues. In a well designed system, three or more sequential failures (at least one of which should be a physical property of the system) must occur before safety is compromised.
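The difference between a fail safe that is an inherent property and one that lives in a writable register can be made concrete. The sketch below is hypothetical (the register name, pressure values and `WritablePLC` class are mine), but it shows why a remotely alterable setpoint quietly stops being a safeguard:

```python
# A high-pressure trip implemented two ways: as a fixed property of the
# logic, and as a setpoint held in a remotely writable register.

HARDWIRED_TRIP_PSI = 150  # compiled into the logic; no network write path

class WritablePLC:
    """Models a controller whose trip setpoint lives in a writable register."""
    def __init__(self):
        self.registers = {"trip_psi": 150}

    def remote_write(self, register: str, value: int) -> None:
        # Anyone with access to the PCN can reach this - no operator is told.
        self.registers[register] = value

    def should_trip(self, pressure_psi: int) -> bool:
        return pressure_psi >= self.registers["trip_psi"]

plc = WritablePLC()
assert plc.should_trip(160)           # trips as designed at 160 psi
plc.remote_write("trip_psi", 500)     # an attacker (or a mistake) raises it
assert not plc.should_trip(160)       # the "fail safe" silently no longer protects
assert 160 >= HARDWIRED_TRIP_PSI      # the hardwired trip is unaffected
```

This is exactly why at least one layer of protection should be a physical property of the system (a relief valve, a mechanical interlock) rather than a value a controller can be talked out of.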
I couldn't tell you how many times I have sat in a room with an information security professional talking with engineers, and the IT guy states that the risks include fires or explosions. The engineers usually just roll their eyes. The fact of the matter is that in a well designed system, even if an operator with complete access forcibly does things wrong, it is usually very difficult to force a catastrophic failure. Of course, I have also seen the reverse of this happen. If the fail safe is dependent on the proper operation of a PLC and that PLC configuration becomes suspect, then that fail safe is no longer dependable. When an engineer learns of this, the response is often a great deal of concern.