… the corrupt DEA agent in Luc Besson’s great movie “Léon (The Professional)”. I’m sure quite a few of you, dear readers, know the plot…
Just before the final shootout, when sending the first men of the NYPD ESU team into Léon’s apartment, he tells them to “Be careful!”. After learning that those men got killed, he just comments: “I told you”.
[btw: before yelling to bring “EEEEEEEVERYONE!!!!”, as those familiar with the piece will certainly remember ;-)].
I’m fully aware that I risk playing “the arrogant scumbag card” today and that it’s generally not very nice to refer to one’s own earlier statements with an “I told you” attitude (especially if harm was caused to some party), but this is exactly how I feel when reading this news. And – pls believe me – it’s an expression of utmost despair.
How often do organizations have to be told that running Adobe Flash might not be the greatest idea in the world, security-wise? How many statistics like this one (see section “Vulnerabilities” in the bottom part of it) have to see the light of day before people realize that (quoting from this blogpost) “running Flash on corporate desktops is simply asking for trouble. Asking for trouble loudly. Very loudly.”?
When we wrote this document on configuring IE8 securely we pointed out that, from our perspective, using Adobe Flash required a risk acceptance. Man, how I was attacked for this very statement afterwards, in the customer environment that document was initially developed for! I’ve since mentioned Flash in this blog here, here and here.
Furthermore we’ll include a talk on Flash in next year’s Troopers line-up, I promise. If only to avoid this post sounding like the crusade of a bitter old man… (yes, this was a wordplay referring to a certain character from the movie ;-).
A few days ago the European Network and Information Security Agency (ENISA) published this quite interesting document bearing exactly that title. Here’s what it covers:
“The booming smartphone industry has a special way of delivering software to end-users: appstores. Popular appstores have hundreds of thousands of apps for anything from online banking to mosquito repellent, and the most popular stores (Apple Appstore, Google Android market) claim billions of app downloads. But appstores have not escaped the attention of cyber attackers. Over the course of 2011 numerous malicious apps were found, across a variety of smartphone models. Using malicious apps, attackers can easily tap into the vast amount of private data processed on smartphones such as confidential business emails, location data, phone calls, SMS messages and so on. Starting from a threat model for appstores, this paper identifies five lines of defence that must be in place to address malware in appstores: app review, reputation, kill-switches, device security and jails.”
Just read through it. And while I’ve never been a big fan of STRIDE (mainly due to the application-centric approach, which simply is not my cup of tea), I have to say it’s applied elegantly to the “app ecosystem” described in the paper.
The doc somewhat accompanies this one titled “Smartphones: Information security risks, opportunities and recommendations for users” (released by ENISA in late 2010), which is a valuable resource in itself.
Overall excellent work from those guys in Heraklion, providing good insight from and for practitioners in the field.
I’m currently involved in a “Remote Access Security Assessment” and you might be wondering what exactly this means. Well, so did we. At least to some degree (btw: last year we provided some notes on types of security assessments here).
It happens quite often that we’re brought into an organization to perform “a security assessment” of “some item” (“our network”, “that new procurement portal”, “the PKI” etc.). It also happens that the customer does not have a very clear idea of the way such an assessment should be carried out (telling us “you are the experts, you should know what to do”). Or the five people from the customer’s side present in the kick-off meeting have five different concepts (ok, four, as one of them only wants “to get that damned assessment done so we can finally go live”) and we end up moderating their arguments on what should be tested, how this should be done, when this is going to happen, which type of report format is needed (obviously, there are different ones, depending on the goal/scope/methodology of the assessment…) and so on.
In such “vague cases” we usually define a set of (~ 10) categories/[security] criteria to be fulfilled by the item in question, based on different sources like the organization’s internal policy framework, potentially the project-to-be-assessed’s security goals, relevant standards and “common security best practices” (often the latter only to a small degree as we have a mostly risk based security approach which does not necessarily fit very well with [assumption-based] “best practices”; more on this philosophical question in yet-another-post ;-))
For example, in the case of a “Firewall & DMZ assessment” these categories/criteria could include stuff like “Dedicated LAN switches” (from $ORG’s internal guidelines), “Logging of denied packets” (same source), “Secure management” (from ISO 18028-3), “Scalability” (based on business needs as for $ITEM) or “Proper vulnerability mgmt” (ISO 27001).
We then perform a combination of assessment methods to obtain the necessary information to evaluate those categories in terms of “fulfilled”, “major/minor non-compliance”, “leads to relevant business risk”, “documentation missing” and the like.
Figuring out the individual categories/criteria suited to the customer, their needs and the assessment’s (time, budget, political) constraints is usually intellectually challenging and thus “hard work”. Still, this approach (hopefully ;-)) ensures we can “answer the customer’s relevant question[s] in a structured way and with reasonable effort”.
So why do I tell you all this (besides the implied shameless self-plug)? I faced this very situation in the mentioned “Remote Access Security Assessment” and made some observations that I think are worth sharing.
Let’s have a look at the potential sources for the approach laid out above:
a) internal policies: if I told you that “no formal document written, say, in the last five years, covering the topic and/or available to the people engaging us” could be identified, you wouldn’t be surprised, would you? 😉
[ok, I didn’t give you any details as for the customer… but does it really matter? how many organizations do you know which have an up-to-date “remote access policy”… I mean, one that actually provides reasonable guidance besides “all remote access must be secured appropriately” …]
b) standards & “best practices”
To discuss these I’d first like to briefly introduce some level of abstraction – regular readers of this blog might remember my constant endeavor to “bring structure into things” – by breaking down “a remote access solution” into the following generic entities involved:
gateway(s)
network transmission incl. protocols & algorithms
“endpoints”
Traditionally there have been 3-4 relevant standard documents in the remote access space, namely ISO 18028-4 Information technology — Security techniques — IT network security — Part 4: Securing remote access (latest version from 2005), NIST SP 800-77 Guide to IPsec VPNs (2005), NIST SP 800-113 Guide to SSL VPNs (2008) and potentially revision 1 of NIST SP 800-46 Guide to Enterprise Telework and Remote Access Security (Rev. 1 published in June 2009).
[Of course, feel free to notify me by PM or comment on this post if you’re aware of others].
Those documents mainly focus on the first two of the above “abstract elements”, that is gateways and network transmission, which results in lengthy discussions of gateway architectures or protocol specifics and ciphers. It becomes clear that back then the main threats were considered to be “attacks against gateways” or “attacks against network traffic in transit” (whereas today the main risk is probably “unauthorized physical access to the endpoint”, after loss or theft of a device/smartphone).
But times have changed, and in 2011 I tend to assume that these two elements (gateways, network transmission) are sufficiently well handled & secured in the vast majority of organizations. Ok, just as I write this I somehow hesitate, given I recently worked in an environment where the deployed gateways (based on the – from my perspective – market-leading SSL gateway solution) were configured to support stuff like this:
SSLv3
———————————————————————-
AES256-SHA – 256 Bits
DES-CBC3-SHA – 168 Bits
AES128-SHA – 128 Bits
RC4-SHA – 128 Bits
RC4-MD5 – 128 Bits
DES-CBC-SHA – 56 Bits
EXP-DES-CBC-SHA – 40 Bits
EXP-RC2-CBC-MD5 – 40 Bits
EXP-RC4-MD5 – 40 Bits
TLSv1
———————————————————————-
AES256-SHA – 256 Bits
DES-CBC3-SHA – 168 Bits
AES128-SHA – 128 Bits
RC4-SHA – 128 Bits
RC4-MD5 – 128 Bits
DES-CBC-SHA – 56 Bits
EXP-DES-CBC-SHA – 40 Bits
EXP-RC2-CBC-MD5 – 40 Bits
EXP-RC4-MD5 – 40 Bits
Not that I regard this as a relevant risk, it’s just interesting to note. I mean, it’s 2011…
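If you want to quickly check your own gateway for such leftovers, a quick-and-dirty sketch along the following lines might do the job (the hostname is a placeholder, the snippet assumes Python 2.7’s ssl module with the ciphers parameter, and the export-grade suites can obviously only be negotiated if your local OpenSSL build still ships them – dedicated tools like sslscan do a much more thorough job):

from __future__ import print_function
import socket, ssl

HOST, PORT = "vpn.example.com", 443   # placeholder: your SSL VPN gateway
CANDIDATES = ["EXP-RC4-MD5", "EXP-DES-CBC-SHA", "DES-CBC-SHA", "RC4-MD5", "AES256-SHA"]

for suite in CANDIDATES:
    try:
        sock = socket.create_connection((HOST, PORT), timeout=5)
        # offer exactly one suite; a completed handshake means the gateway accepts it
        tls = ssl.wrap_socket(sock, ciphers=suite)
        print("accepted:", suite, "->", tls.cipher())
        tls.close()
    except ssl.SSLError:
        print("rejected:", suite)
    except socket.error as e:
        print("connection problem for", suite, ":", e)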
Back on track: so, while gateways and protocols might no longer constitute areas of “deep concern”, nowadays “the endpoint” certainly does. When the mentioned standards were written, endpoints were supposed to be mostly company-managed Windows systems. In the meantime most organizations face an “unmanaged mess”, composed of a growing number of smartphones and tablets of all flavors, some – to some degree – company managed, quite a few predominantly “free floating”.
And unfortunately there is – to the best of my knowledge – no current (industry) standard laying out how to handle these from a security perspective. Which in turn means that once an “authoritative framework” is needed (be it for policy reasons, be it for assessments using the approach I described above) there’s nothing “to rely on”, but this has to be figured out by the individual organization.
To get a clearer picture I went out contacting “some of my ISO colleagues” in the “how do you handle this?” manner. In the following I’ll lay out the results – that is, how mature organizations address the topic – together with some recommendations from our side, based on our usual approach of balancing security and business needs.
At the end of the day the task can be broken down into the question: “which types of endpoints should be allowed which type of access to which (classification level of) data?”.
To answer this one in a comprehensive and consistent way a number of factors have to be taken into account:
different types of endpoints.
different types of “access methods”.
classification of data to be processed.
potential prerequisites for certain use cases (type of authentication, employee signing an acceptable use policy [AUP], additional [technical] security controls on endpoints and the like).
Classifying types of endpoints
I personally think that the “traditional distinction” between (just) company-managed and non-company-managed – which is used for example in ISO 18028-4 and NIST SP 800-113 – no longer works, but that it makes sense to differentiate roughly between three types of endpoints. To characterize those I quote from a policy I wrote some time ago:
“Company managed device
Any device owned and managed by $COMPANY, e.g. corporate laptops.
Private trustworthy device
Any device that is owned by an individual employee, which is used solely by the named individual and which is kept in an appropriate state as for security controls (up-to-date patch level, anti-malware protection and/or firewall software if feasible etc.). It might be (partly) managed by company driven controls (e.g. configuration policies).
If used for local processing of restricted data appropriate encryption technologies protecting said data must be in place.
In no instance shall any competitors, business partners, and/or other business-relevant parties be allowed to physically access such a device, even if such persons are otherwise related to the employee or are personal friends or acquaintances of the employee. Furthermore information classified ‘strictly confidential’ shall never be processed on this type of device.
Untrusted device
All other devices, for example systems in a cyber café or devices in an employee’s household which are shared with family members or friends.”
Pls note that both NIST SP 800-46r1 (where the party responsible for the security of the device is taken into account; this can be the organization, the teleworker or a third party) and Gartner (classifying into “platform”, “application” and “concierge” types of services) use a similar three-fold classification approach.
Access methods
By “access method” I mean the way the actual data processing is performed. Let’s classify as follows:
“full network access” by means of a tunneled IP connection, provided by a network-stack-like piece of code. This is where the traditional full-blown IPsec or SSL VPN clients come into play.
“application based” access. This includes all HTTP(S) based portals (with MS Outlook Web Access [OWA] being the most prominent example) and the portals provided by typical SSL VPN gateways that enable users to run certain applications (MS Outlook, SAP GUI, File Browsing and the like).
“restricted application based”: same as above but usually without file exchange between remote network and local system, e.g. OWA without attachments, or remote desktop access without use of local drives, no copy+paste in TS client etc.
“(just) presentation logic”: here no actual data processing on the endpoint takes place. The best known example is probably Citrix-based stuff, in different flavors (ICA client, Citrix Receiver, xd-agent et al.).
Classification of data to-be-processed
Another aspect to take into account is the classification of the data that is allowed to be processed on a “remote device”. While this may seem a no-brainer at first glance, in reality it is a tricky one, as most organizations do not have a (“widely used in daily practice”) data classification scheme anyway and usually no file (or other information entity) carries meta-information that would allow the easy identification of its classification level, together with a subsequent enforcement of technical controls (which, btw, is one of the main reasons why DLP is so hard to implement correctly – but that could be discussed in more detail in another post 😉).
And, of course, for an employee who has access anyway it’s usually technically easy to process data classified higher “than allowed”. The most common approach addressing this is having the employees sign AUPs prohibiting this type of handling, combined with some liability in case of breaches of such data (this approach, again, might be a minefield in itself, depending on the part of the world and associated legal system you’re [located] in ;-)).
For the following discussion let’s assume that organizations use a scale of four classifications/classes designated SC0 to SC3 (where SC = “security class” or “security classification”) with 3 being the highest (e.g. “strictly confidential” or “top secret”) and 0 being the lowest (e.g. “public”, “unclassified”).
Prerequisites for certain use cases
Frequently additional organizational or technical prerequisites are imposed for certain use cases (e.g. types of devices, potentially in combination with certain access methods). In this space one can find:
Acceptable Use Policies (AUPs) prescribing what can or should be done, which lay out “prohibited practices” (that might still be technically possible, see above), the handling of backups, the separation of business and private use etc.
the level of authentication required for certain usage scenarios.
additional security controls on the endpoint (anti-malware, local firewall, local disk/file/container encryption and the like), with their presence on the endpoint potentially verified by some “host integrity checking” technology, e.g. “UAC” in Juniper space. In this context it should be noted that in most environments UAC or similar approaches from other vendors do not provide an actual security benefit (“enforce that only a trustworthy and secure endpoint can connect”) but simply differentiate between “company managed” and “unmanaged” in the way “look for a certain registry key and if present assume that’s a company managed device” (“no idea if the associated piece of security software is really present, let alone correctly configured, not to mention that it provides any actual security benefit”…).
Putting it all together
Now, putting all this together I came up with the following table. It displays what we feel “is common best security practice” nowadays for handling remote access (endpoint) security, balancing business and security needs:
Type of device: company managed
Classification of data (allowed) to be processed: usually all; some organizations do not allow the highest classification on mobile devices at all
Allowed access methods: all
Prerequisites: –

Type of device: private trustworthy (which means, as per the definition above, disposing of appropriate capabilities/controls)
Classification of data (allowed) to be processed: depends; some organizations do not allow the highest classification in this scenario. We recommend: no SC3 here.
Allowed access methods: usually all, but “application based” preferred over full network connect
Prerequisites: MFA, appropriate capabilities, sometimes AUP

Type of device: all (others), including “private” devices without appropriate capabilities (e.g. apps on iPads that do not use the “data protection” feature)
Classification of data (allowed) to be processed / allowed access methods: SC3 not allowed at all; in case of SC2 no data processing on the device, only presentation logic; for SC0/SC1 “restricted application based” access is recommended
Prerequisites: MFA, employee-signed AUP

Type of device: all others, when gateway-deployed sandbox technologies (e.g. Juniper Secure Virtual Storage for Win32 systems) are used
Classification of data (allowed) to be processed: depends, usually sth in the range SC0–SC2, pretty much never SC3. We recommend: no SC3.
Allowed access methods: “application based”
Prerequisites: MFA, employee-signed AUP
Pls note: evidently, “everything else” (other combinations of devices, access methods or data classification levels to-be-processed) can be implemented as well (and this actually happens in environments I regularly work in). I mean, “infosec follows, supports and enables business”. It’s just that – from our perspective – a risk acceptance is needed then 😉
So, this is what I finally took as the “baseline to evaluate the actual implementation against” (for the “endpoint” category) in the course of the assessment mentioned at the beginning of the post. Hopefully laying out the sources of input, the classifications and the rationale leading to certain restrictions turns out to be helpful for some of you, dear readers.
This advisory describes an interesting attack vector:
“In the period of December 2010 until August 2011, Cisco shipped warranty CDs that contain a reference to a third-party website known to be a malware repository. When the CD is opened with a web browser, it automatically and without warning accesses this third-party website. Additionally, on computers where the operating system is configured to automatically open inserted media, the computer’s default web browser will access the third-party site when the CD is inserted, without requiring any further action by the user.”
The approach is smart as it potentially avoids the malware scanning stage that is presumably part of the preparation and shipping process of those CDs. And as it exploits the trust relationships pertinent to the network equipment supply chain…
We’ll probably see (yet) more such stuff in the next years.
Hi everybody,
eye-catching title of this post, huh?
Actually there is some justification for it ;-), namely bringing this excellent document covering exactly that topic to your attention.
Other than that this post contains some unordered reflections which arose in a recent meeting in a quite large organization on the “common current iPad topic” (executives would like to have/use an iPad, infosec doesn’t like the idea, business – as we all know – wins, so bring external expertise in “to help us find a way of doing this securely” yadda yadda yadda).
Which – given those nifty little boxes are _consumer_ devices which were probably never meant to process sensitive corporate data – might be a next-to-impossible task… at least in a way that satisfies business expectations as for “usability”… [btw: can anybody confirm my observation that there’s a correlation between “rigor of restriction approach” and “number of corporate emails forwarded to private webmail accounts”?]
Anyway, in that meeting – due to my usual endeavor to look at things in a structured way – I started categorizing flavors of data wiping. I came up with
a) device-induced (call it “automatic” if you want) wipe. Here the trigger (to wipe) comes from the device itself, usually after some particular condition is met, which might be
number of failed passcode entries. This is supposed to help against an opportunistic attacker who “has found an iPad somewhere” and then tries to get access. Still, assuming a 4-digit passcode, based on their distribution the attacker might have a one-in-seven chance to succeed when the number of passcodes-to-fail is set to ten (isn’t this the default setting? I don’t use such a device so I really don’t know ;-)).
check some system parameter (“am I jailbroken?”) and then perform a wipe. This somehow raises a – let’s call it – “matrix problem”: “judge the world’s trustworthy state from the own perspective and then delete my memory if found untrustworthy”. But how can I know my decision is a correct one if my own overall (“consciousness”) state might heavily depend on the USB port I’m connected to…
phone home (“Find My iPhone” et al.), find out “I’m lost or stolen”, quickly wipe myself. This one requires a network connection, so a skilled+motivated attacker going after the data on the device will prevent this exact (network) connection. As most of you probably already knew ;-).
b) remote-wipe. That largely overhyped feature going like “if we learn that one of our devices is lost or stolen, we’ll just push the button and, boom, all the data on the device is wiped remotely”.
Unfortunately this one requires that the organization is able to react once the state of the device changed from “trustworthy environment” to “untrustworthy environment”. Which in turn usually relies on processes involving humans, e.g. might require people to call the organization’s service desk to inform them “I just lost my iPad”… which, depending on various circumstances that I leave the reader to imagine, might happen “in close temporal proximity to the event” or not …
And, of course, a skilled+motivated attacker will prevent the network connection needed for this one, as stated above.
So, all these flavors of wiping have their own share of shortcomings or pitfalls. At some point during that discussion I silently asked myself:
“How crazy is this? Why do we spend all these cycles and resources and life hours of smart people on a detective+reactive type of control?”
Why not spend all this energy on avoiding the threat in the first place by just not putting the data on those devices (which lack fundamental security properties and are highly exposed to untrustworthy human behavior and environments)?
Which directly leads to the plea expressed in my Troopers keynote: “Do not process sensitive data on smartphones!” (but use those just as display terminals to applications and data hosted in secure environments).
Yes, I know the “but then we depend on network connectivity and Ms. CxO can’t read her emails while in a plane” argument. And I’m soo tired of it. Spending so much operational effort for those few offline minutes (by pursuing the “we must have the data on the device” approach) seems just a bit of a waste to me [and, btw, I’m a CxO of a “company driven by innovation” myself ;-)]. Which might even be acceptable if it didn’t expose the organization to severe risks at the same time. And if all the effort wasn’t doomed anyway in six months… when your organization’s executives have found yet another fancy gadget they’d like to use…
Think about it & have a great Sunday,
Enno
PS: as we’re a company with quite diverse mindsets and a high degree of freedom to conduct an individual lifestyle and express individual opinions, some of my colleagues actually think data processing on those devices can be done in a reasonably secure way. See for example this workshop or wait for our upcoming newsletter on “Certificate based authentication with iPads”.
This again is going to be a little series of posts. Their main topic – next to the usual deviations & ranting I tend to include in blogposts 😉 – is some discussion of “trust” and putting this discussion into the context of recent events and future developments in the infosec space. The title originates from a conversation between Angus Blitter and me in a nice Thai restaurant in Zurich where we figured out the consequences of the latest RSA revelations. While we both expect that – unfortunately – not much is really going to happen (surprisingly many people, including some CSOs we know, are still trying to somehow downplay this or sweep it under the carpet, shying away from the – obvious – consequences of accepting that, for a number of environments, RSA SecurID is potentially reduced to single-factor auth nowadays…), the long term impact on our understanding of 3rd party (e.g. vendor) trust might be more interesting. Furthermore “Broken Trust” seems a promising title for a talk at the upcoming Day-Con V… 😉
Despite (or maybe due to) being an apparently essential part of human nature and a fundamental factor of our relationships and societies, there is no single, concise definition of “trust”. Quite a few researchers discussing trust-related topics do not define the term at all and presume some “natural language” understanding. This is even more true in papers in the area of computer science (CS) and related fields, the most prominent example probably being Ken Thompson’s “Reflections on Trusting Trust” where no definition is provided at all. Given the character and purpose of RFCs in general and their relevance for computer networks it seems an obvious course of action to look for an RFC providing a clarification. In fact, RFC 2828 defines it as follows:
“Trust […] Information system usage: The extent to which someone who relies on a system can have confidence that the system meets its specifications, i.e., that the system does what it claims to do and does not perform unwanted functions.”
which is usually shortened to statements like “trust = system[s] perform[s] as expected”. We don’t like this definition for a number of reasons (outside the scope of a blogpost) and prefer the definition the Italian sociologist Diego Gambetta published in his paper “Can we trust trust?” in 1988 (and which has – while originating from another discipline – gained quite some adoption in CS) that states
“trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.”
With Falcone & Castelfranchi we define control as
“a (meta) action:
a) Aimed at ascertaining whether another action has been successfully executed or if a given state of the world has been realized or maintained […]
b) aimed at dealing with the possible deviations and unforeseen events in order to positively cope with them (intervention).”
It should be noted that the term “control” is used here with a much broader sense (thus the attribute “meta” in the above definition) than in some information security standard documents (e.g. the ISO 27000 family) where control is defined as a “means of managing risk, including policies, procedures, guidelines, practices or organizational structures, which can be of administrative, technical, management, or legal nature.” [ISO27002, section 2.2]
Following Cofta’s model, both trust and control constitute ways to build confidence, which he defines as
“one’s subjective probability of expectation that a certain desired event will happen (or that the undesired one will not happen), if the course of action is believed to depend on another agent”.
[I know that this sounds quite similar to Gambetta’s definition of trust but I will skip discussing such subtleties for the moment, in this post. ;-).]
Putting the elements together & bringing the whole stuff to the infosec world
Let’s face it: at the end of the day all the efforts we as security practitioners take ultimately serve a simple goal, that is somebody (be it yourself, your boss or some CxO of your organization) reaching a point where she states “considering the relevant risks and our limited resources, we’ve done what is humanly possible”. Or just “it’s ok the way it is. I can sleep well now”.
I’m well aware that this may sound strange to some readers still believing in a “concrete, concise and measurable approach to information security”, but this is the reality in most organizations. And the mentioned point of “it’s ok” reflects fairly precisely the above definition of confidence (with the whole of events threatening the security of an environment being “another agent”).
Now, this state of confidence can be attained on two roads, that of “control” (applying security controls and, having done so, subsequently sleeping well) or that of “trust” (reflecting on some elements of the overall picture and then refraining from the application of certain security controls, still sleeping well).
A simple example might help to understand the difference: imagine you move to a new house in another part of the world. Your family is set to arrive one week later, so you have some days left to create an environment you consider “to be safe enough” for them.
What would you do (to reach the state of confidence)? I regularly ask this question in workshops I give and the most common answers go like “install [or check] the doors & locks”, “buy a dog”, “install an alarm system”. These are typical responses for “technology driven people” and the last one, sorry guys, is – in my humble opinion – a particularly dull one, given this is a detective/reactive type of control requiring lots of the most expensive operational resource, that is human brain (namely for follow-up on alarms = incident response). Which, btw, is the very reason why it pretty much never works in a satisfying manner, in pretty much any organization.
And yes, I understand that naming this regrettable reality is against the current vogue of “you’ll get owned anyway – uh, uh APT is around – so you need elaborate detection capabilities. just sign here for our new fancy SIEM/deep packet inspection appliance/deep inspection packet appliance/revolutionary network monitoring platform” BS.
Back to our initial topic (I announced some ranting, didn’t I? ;-)): all this stuff (doors & locks, the dog, the alarm system) follows the “control approach” and, more importantly and often overlooked, all of it might require quite some operational effort (key management for the doors – don’t underestimate this, especially if you have a household full of kids 😉 –, walking the dog, tuning the alarm system &, as stated above, resolving the incidents once it goes off, etc.).
Another approach – at least for some threats, to some assets – could be to find out which parties are involved in the overall picture (the neighbors, the utility providers, the legal authorities and so on) and then, maybe, deciding “in this environment we have trust in (some of) those and that’s why we don’t need the full set of potential controls”. Living in a hamlet with about 40 inhabitants, in the Bavarian country side, I can tell you that the handling of doors and locks there certainly is a different one than in most European metropolises…
Some of you might argue here “nice for you, Enno, but what’s the application in corporate information security space?”. Actually that’s an easy one. Just think about it:
– Do you encrypt the MPLS links connecting your organization’s main sites? [Most of you don’t, because “we trust our carriers”. Which can be an entirely reasonable security decision, depending on your carriers… and their partners… and the partners of their partners…]
– Do you perform full database encryption for your ERP systems hosted & operated by some outsourcing provider? [Most of you don’t, trusting that provider, which again might be a fully legitimate and reasonable approach].
– Did you ever ask the company providing essential parts of your authentication infrastructure if they kept copies of your key material and, more importantly, if they did so in a sufficiently secure way? [Most of you didn’t, relying on reputation-based trust “everybody uses them, and aren’t they the inventors of that algorithm widely used for banking transactions? and isn’t ‘the military’ using this stuff, too?” or so].
So, in short: trust is a confidence-contributing element and a common security instrument in all environments and – here comes the relevant message – this is fully ok, as efficient information security work (leading to confidence) relies on both approaches: trust (where justified) and control (where needed). Alas, most infosec people still have a control-driven mindset, not recognizing the value of trust. [This will have to change radically in “the age of the cloud”, more on this in a later part of this series.]
Unfortunately, both approaches (trust & control) have their own respective shortcomings:
– following the control road usually leads to increased operational cost & complexity and might have severe business impact.
– trust, by its very nature (see the “Gambetta definition” above), is something “subjective” and thereby might not be suited to base corporate security decisions on 😉
BUT – and I’m finally getting to the main point of this post 😉 – if we could transform “subjective trust” into sth documented and justified, it might become a valuable and accepted element in an organization’s infosec governance process [and, again, this will have to happen anyway, as in the age of the cloud, the control approach is doomed. and pls don’t try to tell me the “we’ll audit our cloud providers then” story. ever tried to negotiate with $YOUR_FAVORITE_IAAS_PROVIDER on data center visits or just very basic security reporting or sth.?].
Still, then the question is: what could those “reasons for trust” be?
Evaluating trust(worthiness) in a structured way
There are various approaches to evaluate factors that contribute to the trustworthiness of another party (called “trustee” in the following) and hence one’s own (the “trustor’s”) trust towards that party. For example, Cofta lists three main elements (the associated questions in brackets are my own paraphrases):
Continuity (“How long will we work together?”)
Competence (“Can $TRUSTEE provide what we expect?”)
Motivation (“What’s $TRUSTEE’s motivation?”)
We, however, usually prefer the avenue ISECOM uses for their Certified Trust Analyst training, which is roughly laid out here. It’s based on ten trust properties, two of which are not aligned with our/Gambetta’s definition of trust and are thereby omitted (these two are “control” and “offsets”, for obvious reasons. Negotiating a compensation to be paid when the trust is broken constitutes the exact opposite of trust… it can contribute to confidence, but not to trust in the above sense). So there are eight left, and these are:
Size (“Who exactly are you going to trust?”. In quite some cases this might be an interesting question. Think of carriers partnering with others in areas of the world called “emerging markets” or just think of RSA and its shareholders. And this is why background checks are performed when you apply for a job in some agency; they want to find out who you interact with in your daily life and who/what might influence your decisions.).
Symmetry (“Do they trust us?”. This again is an interesting, yet often neglected, point. I first stumbled across this when performing an MPLS carrier evaluation back in 2007).
Transparency (“How much do we know about $TRUSTEE?”).
Consistency (“What happened in the past?”. This is the exact reason why to-be-employers ask for criminal records of the to-be-employees.).
Integrity (“[How] Do we notice if $TRUSTEE changes?”).
Value of Reward (“What do we gain by trusting?” If this one has enough weight, all the others might become irrelevant. Which is exactly the mechanism Ponzi schemes are based upon. Or your CIO’s decision “to go to the cloud within the next six months” – overlooking that the departments are already extensively using AWS, “for demo systems” only, of course 😉 – or, for that matter, her (your CIO’s decision) to virtualize highly classified systems by means of VMware products ;-). See also this post of Chris Hoff on “the CxO part in the game”.).
Components (“Which resources does $TRUSTEE rely on?”).
Porosity (“How separated is $TRUSTEE from its environment?”).
Asking all these questions might either help to get a better understanding of whom to trust & why, and thereby contribute to well-informed decision making, or might at least help to identify the areas where additional controls are needed (e.g. asking for enhanced reporting to be put into the contracts).
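To make this a bit more tangible, here’s an admittedly simplistic sketch (my own illustration, not part of the ISECOM methodology) of how such an evaluation could be turned into a small scorecard; the scores and the threshold are placeholders to be adapted per trustee:

PROPERTIES = ["size", "symmetry", "transparency", "consistency",
              "integrity", "value of reward", "components", "porosity"]

# hypothetical scores for some vendor (0 = no basis for trust, 5 = strong basis)
scores = {"size": 2, "symmetry": 1, "transparency": 1, "consistency": 3,
          "integrity": 1, "value of reward": 4, "components": 2, "porosity": 2}

THRESHOLD = 3   # arbitrary cut-off below which we'd ask for additional controls

for prop in PROPERTIES:
    verdict = "ok" if scores[prop] >= THRESHOLD else "candidate for additional controls"
    print("%-16s %d/5  %s" % (prop, scores[prop], verdict))

print("overall trust indication: %.1f/5" % (sum(scores.values()) / float(len(scores))))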
Applying this stuff to the RSA case
So, what does all this mean when reflecting on the RSA break-in? Why exactly is RSA’s trustworthiness potentially so heavily damaged?
As a little exercise, let’s just pick some of the above questions and try to figure the respective responses. Like I did in this post three days after RSA filed the 8-K report I will leave potential conclusions to the valued reader…
Here we go:
“Size”, so who exactly are you trusting when trusting “RSA, The Security Division of EMC”? Honestly, I do not know much about RSA’s share- and stakeholders. Still, even though not regarding myself as particularly prone to conspiracy theories, I think that Sachar Paulus, the ex-CSO of SAP and now a professor of Corporate Security and Risk Management at the University of Applied Sciences Brandenburg, made some interesting observations in this blogpost.
“Symmetry” (do they trust us?): everybody who had the dubious pleasure to participate in one of those – hilarious – conference calls RSA held with large customers after the initial announcement in late March, going like
“no customer data was compromised.”
“what do you mean, do you mean no seed files were compromised?”
“as we stated, no customer data, that is PII was compromised.”
“So what about the seeds?”
“as we just said, no customer data was compromised.”
“and what about the seeds?”
“we can’t comment on this further due to the ongoing investigation. but we can assure you no customer data was compromised.”
might think of an answer on his/her own…
“Transparency”: well, see above. One might add: “did they ever tell you they kept a copy of your seed files?” but, hey, you never asked them, did you? I mean, even the (US …) defense contractors use this stuff, so why should one have asked such silly questions…
“Integrity”, which the ISECOM defines as “the amount and timely notice of change within the target”. Well… do I really have to comment on “the amount [of information]” and the “timely notice” RSA delivered in the last weeks & months? Some people assume we might never have known of the break-in if they hadn’t been obliged to file an 8-K form (as standard breach laws might not have kicked in, given – remember – “no customer data was exposed”…) and there’s speculation we might never have known that the seeds were actually compromised if the Lockheed Martin break-in hadn’t happened. I mean, most of us were pretty sure it _was_ about the seed files, but of course, it’s easy to say so in hindsight 😉
“Components” and “porosity”: see above.
Conclusions
If you have been wondering “why do my guts tell me we shouldn’t trust these guys anymore?” this post might serve as a little contribution to answering this question in a structured way. Furthermore the intent was to provide some introduction to the wonderful world of trust, control and confidence and its application in the infosec world. Stay tuned for more stuff to come in this series.
Have a great Sunday everybody, thanks
This is the third (and last) part of the series (parts 1 & 2 here). We’ll provide the results from some additional tests supported by public cloud services, namely AWS (Amazon Web Services).
Lab Setup
The Amazon Elastic Compute Cloud (short: EC2) provides a flexible environment for the on demand provisioning of virtual machines of different performance levels. For our lab setup, a so-called extra large instance was used. According to Amazon, the technical specs are the following:
15 GB memory
8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
1,690 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.xlarge
Since the I/O performance of single disks had turned out to be the bottleneck in the “local” setup, eight Elastic Block Storage (short: EBS) volumes were created and attached to the instance. Each EBS volume is hosted within a specific availability zone and can be attached to instances running in the same zone. EBS volumes can be created and attached by issuing two commands of the Amazon EC2 command line tools. Therefore the amount of storage can be scaled up very easily. The only requirement (for our tests) is the existence of a sufficient number of EBS volumes which then contain parts of the pcap file to be analyzed.
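For those who prefer scripting this step over the plain command line tools (presumably ec2-create-volume and ec2-attach-volume), a rough sketch using the boto library could look as follows – instance id, region/zone, device names and volume size are placeholders, not the values of our actual setup:

import time
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")   # AWS credentials taken from the environment
INSTANCE_ID = "i-00000000"                        # placeholder: the m1.xlarge analysis instance
ZONE = "us-east-1a"                               # must match the instance's availability zone
DEVICES = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi",
           "/dev/sdj", "/dev/sdk", "/dev/sdl", "/dev/sdm"]

for dev in DEVICES:
    vol = conn.create_volume(100, ZONE)           # 100 GB per volume, adjust as needed
    while vol.update() != "available":            # poll until the volume is ready
        time.sleep(5)
    conn.attach_volume(vol.id, INSTANCE_ID, dev)
    print("attached %s as %s" % (vol.id, dev))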
Results
During the benchmarks, the performance was significantly lower than with the setup described in the previous post, even though eight different EBS volumes were used to avoid the bottleneck of a single storage volume. The overall performance of the test was seemingly limited by the I/O performance restrictions of virtualized instances and virtualized storage systems.

Following the overall cloud computing paradigm, performance limitations of this kind might be circumvented by using multiple resources which do the processing in parallel. This could be done by using multiple instances or by using frameworks like Amazon MapReduce which are designed to process huge sets of data.

When applying this approach to the analysis of pcap files, however, the structure of the pcap format poses some inherent problems. The format consists of a binary representation of the data which is ordered by the capture time of the packets, not by logical packet traces. Therefore each instance would have to process the complete pcap file in order to extract all streams and to identify which of them are to be analyzed by that particular worker instance. This prevents an efficient distribution of the analysis across multiple jobs or input files. If the captured network data were stored as separate streams instead of one big pcap file, processing with a map/reduce algorithm would be possible and could thus increase scalability significantly.
That said, finally here are the results of our testing (test methodology described in earlier post):
So it took much longer to extract the data from a 500 GB file, which can be attributed to the increased latency when accessing centralized storage (from a SAN/over the network) compared to locally connected SSDs.
Hopefully this little series provided some insight for you, dear readers. We’ll publish the full technical report as an ERNW Newsletter in the coming months.
Have a good one, thanks
In the first post I’ve laid out the tools and lab setup, so in this one I’m going to discuss some results.
Description of overall test methodology
To evaluate the performance of the different setups used to analyze capture data, both tcpdump and pcap_extractor (see last post) were used. For the tests, five capture files were created using mergecap: various sample traffic dumps were merged into five large files of different sizes. All of these consisted of several capture files containing a variety of protocols (including iSCSI and FCoE packets). Capture files of ∼40, ∼80, ∼200, ∼500, and ∼800 GB size were created and analyzed with both tools.

For all tests the filtering expressions for tcpdump and pcap_extractor were configured to search for a specific source IP and a specific destination IP matching iSCSI packets contained in the capture file. Additionally pcap_extractor was “instructed” to look for a search string (formatted like a credit card number).

To address the performance bottleneck (again, see last post), that is the I/O throughput, two different setups of the testing environment (see above) were implemented: the first one going with a RAID 0 approach using four SSD hard drives, the second one with four individual SSD hard drives, each of them processing only a fourth of the analyzed capture file. The standard UNIX time command was used to measure the execution time. Additionally the tools analyzing the data were started with the highest possible scheduling priority to ensure execution with the maximum of available resources. This is a sample command line invoking the test:
The most interesting results table is shown below:
Conclusions
So actually extracting a given search string from a 500 GB file could be done in about 21 minutes, employing readily available tools and using COTS hardware for about 3K EUR (as of March 2011). This means that an attacker who has (large) data sets from previous eavesdropping attacks at her disposal will most likely succeed in getting the exact data she’s going after. Furthermore the time needed scales linearly with the file size, so processing a 1 TB data volume presumably would have taken ∼42 minutes, a 2 TB file ∼84 minutes and so on. In addition, SSD prices are constantly declining.
Thus it could be shown that the perception that the sheer volume of data gained from eavesdropping attacks on high-speed links might prohibit an attacker from analyzing this data is, well, simply not correct ;-).
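To give you a feeling for how little code such an extraction actually takes: the following minimal sketch – based on the dpkt library, and emphatically neither our pcap_extractor nor the exact invocation used in the tests; the IPs and the search pattern are mere placeholders – streams a capture file and greps the matching packets for a credit-card-like string:

import re
import socket
import dpkt

SRC = socket.inet_aton("10.0.0.1")                     # placeholder source IP
DST = socket.inet_aton("10.0.0.2")                     # placeholder destination IP
PATTERN = re.compile(br"\d{4}-\d{4}-\d{4}-\d{4}")      # naive credit-card-like pattern

with open("capture.pcap", "rb") as f:
    for ts, buf in dpkt.pcap.Reader(f):                # stream the file, packet by packet
        eth = dpkt.ethernet.Ethernet(buf)
        ip = eth.data
        if not isinstance(ip, dpkt.ip.IP):
            continue
        if ip.src != SRC or ip.dst != DST:             # keep only the traffic we care about
            continue
        if PATTERN.search(bytes(ip.data)):             # search the (serialized) payload
            print("hit at timestamp %f" % ts)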
Risk Assessment & Mitigating Controls
Several factors come into play when trying to assess the actual risk of this type of attack. Let’s put it like this: once an attacker either has physical access to a fibre at some point or is able to get into the transport path by means of certain network based attacks – which are going to be covered in another, future post – collecting and analyzing the data is an easy task. If you have sufficient reasons for trusting the party actually implementing the connection (e.g. a carrier offering Metro Ethernet services) and “the overall circumstances”, you might rely on the isolation properties provided by the service and topology. In case you either don’t have sufficient reasons to trust (some discussion on approaches to “evaluate trustworthiness” can be found here or here) or in highly regulated environments, using encryption technologies on layer 2 (like these or these) might be the safer approach.
In the next post we’ll discuss the cloud based test setup, together with its results. Stay tuned &
have a great day,
Hi,
didn’t find the time so far to post a short blog about HITB Amsterdam… but here we go.
Unfortunately I couldn’t arrive in AMS earlier than Thursday evening so I missed the first day (and – from what I heard – some great talks). However we went out for dinner that night with the likes of Andreas (Wiegenstein), Jim (Geovedi), Raoul (Chiesa), Travis (Goodspeed), Claudio (Criscione) and some more guys and I had some quite good conversations, both on technical matters and on intra-European cultural differences ;-). Btw: thanks again to Martijn for taking care of the restaurant.
On Friday I listened to Travis’ talk on “Building a Promiscuous nRF24L01+ Packet Sniffer” (cool & scary stuff) and part of this talk on iPhone data protection (well delivered as well). In the afternoon Daniel and I gave an updated version of the “Attacking 3G and 4G Telecommunication Networks” presentation (the HITB version can be found here). Overall I can say that HITB was an excellently organized event with a great speaker line-up (not sure if we contributed to that one ;-)) and some innovative ideas (inviting a bunch of local hacker spaces, among other things). Dhillon is a fabulous host and I already regard HITB as one of the major European security events (next to Troopers, of course ;-)).
Have a great weekend everybody