Everybody who is interested in our newest tool ‘Loki’ is welcome to head over to ERNW’s tool section and download it. Take this monster for a spin and let us know in the comments how you like it. Loki’s coding father Daniel is more than happy to answer your questions and criticism.
You don’t even know what Loki is?
In short: An advanced security testing tool for layer 3 protocols.
In long: Have a read of the Black Hat 2010 presentation slides and mark TROOPERS11 in your calendar to meet the guys behind the research and, for sure, get a live demo of the capabilities – development is still ongoing, so prepare yourself for even more supported protocols and attack types.
And again: Talking about TROOPERS11… we’ve already selected the first round of speakers. Details to be published soon :-)
One of the biggest pains in the ass of most ISOs – and subsequently the subject of fierce debates between business and infosec – is the topic of “Browser Security”, i.e. essentially the question “How to protect the organization from malicious code brought into the environment by users surfing the Internet?”.
Commonly the chain of events of a typical malware infection can be broken down into the following steps:
1.) Some code – no matter whether binary or script code – gets transferred (mostly: downloaded) to some system “from the Internet”, that is, “over the network”.
2.) This code is executed by some local piece of software (where “execution” might just mean “parse a PDF” ;-)).
[btw, if you missed it: after Black Hat Adobe announced an out-of-band patch scheduled for 08/16, so stay tuned for another Adobe Reader patch cycle next week…]
3.) This code causes harm (either on its own or by means of reloaded payloads) to the local system, to the network the system resides in or to other networks.
A discussion of potential security controls can be centered around these steps, so we have:
a) The area of network based controls, that is, all sorts of “malicious content protection” devices like proxies filtering (mainly HTTP and FTP) traffic based on signatures, URL blacklists etc., and/or network based intrusion prevention systems (IPSs).
Practically all organizations use some of this stuff (however quite a number of them – unfortunately – merely bank on these pieces). Let me state this clearly: overall, using network based (filtering) controls contributes significantly to “overall protection from browser based threats”, and we won’t discuss the advantages/disadvantages of this approach right here+now.
Still it should be noted that this is what we call a “detective/reactive control”, as it relies on somehow detecting the threat and scrubbing it after the detection act.
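Just to illustrate that detective/reactive character: at its core, such a filtering device performs checks along the lines of the following minimal sketch. The blacklist entries and the function name are purely made up for illustration.

```python
# Minimal sketch of a URL blacklist check as performed by a filtering proxy.
# Entries and names are purely illustrative.
from urllib.parse import urlparse

URL_BLACKLIST = {"malware.example.org", "exploit-kit.example.net"}  # hypothetical entries

def is_request_allowed(url: str) -> bool:
    """Return False (i.e. block) if the requested host is on the blacklist."""
    host = urlparse(url).hostname or ""
    return host not in URL_BLACKLIST

print(is_request_allowed("http://malware.example.org/payload.exe"))  # False -> block
print(is_request_allowed("http://www.ernw.de/"))                     # True  -> pass
```

The point, of course, is that such a check can only catch what it already knows about (or what a signature matches) – hence “detective/reactive”.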
b) Controls in the “limit the capability to execute potentially harmful code” space, which can be broken down into things like
– minimizing the attack surface (e.g. by not running Flash, iTunes etc. at all). The regular readers of this blog certainly know our stance on this approach ;-).
– configuration tweaks to limit the script execution capabilities of some components involved, like all the stuff to be found in IE’s zone model and associated configuration options (see this document for a detailed discussion of this approach; a small sketch of such a tweak follows this list).
– patching (the OS, the browser, the “multimedia extensions” like Flash and Quicktime, the PDF reader etc.) to prevent some “programmatic abuse” of the respective components.
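As an example of what such a configuration tweak can look like in practice, here’s a hedged sketch that disables Active Scripting for IE’s “Internet” zone via the registry. The zone number (3 = Internet) and action value (1400 = Active scripting, data 3 = disable) reflect Microsoft’s zone layout as far as we know – verify them against the documentation for your IE version before rolling anything out.

```python
# Sketch: disable Active Scripting in IE's "Internet" zone (zone 3) for the current user.
# Value name "1400" = Active scripting; data 3 = disable, 0 = enable, 1 = prompt.
# Verify these IDs against Microsoft's documentation for your IE version.
import winreg

ZONE3_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

def disable_active_scripting_internet_zone() -> None:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE3_KEY, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "1400", 0, winreg.REG_DWORD, 3)

if __name__ == "__main__":
    disable_active_scripting_internet_zone()
```

In a managed environment you would obviously push such settings via GPO rather than a script; the sketch just shows which knob is being turned.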
Again, we won’t dive into an exhaustive discussion of the advantages/disadvantages of this approach right here+now.
c) Procedures or technologies striving to limit the harm in case an exploit happens “in browser space” (which, as per our definition, encompasses all add-ons like Flash, Quicktime etc.). This includes DEP, IE protected mode, sandboxing browsers etc.
Given the weaknesses the network based control approach might have (in particular in times of targeted attacks… oops, sorry, of course I mean: in times of the Advanced Persistent Threat [TM] ;-)) and the inability (or reluctance?) to tackle the problem on the “code execution” front line in some environments, in the interim another potential control has gained momentum in “progressive infosec circles”: using virtualization technologies to isolate the browser from the (“core”) OS, other applications or just the filesystem.
Three main variants come to mind here: full OS virtualization techniques (represented, for example, by Oracle VirtualBox or VMware Workstation), application virtualization solutions (like Microsoft App-V or VMware ThinApp) and, thirdly, what I call “hosted browsing” (where some MS Terminal Server farm potentially located in a DMZ, or even “the cloud” may serve as “[browser] hosting infrastructure”).
In general, on an architecture level this is a simple application of the principle of “isolation” – and I really promise to discuss that set of architectural security principles we use at ERNW at some point in this blog ;-).
While I know that some of you, dear readers, use virtualization technologies to “browse safely” on a daily (but individual) basis, there are still some obstacles to large scale use of this approach, like how to store/transfer or print documents, how to integrate client certificates – in particular when on smart cards – into these scenarios, how to handle “aspects of persistence” (keeping cookies and bookmarks vs. not keeping potentially infected “browser session state”) etc.
And, even if all these problems can be solved, the big question would be: does it help, security-wise? Or, in infosec terms: to what degree is the risk landscape changed if such an approach would be used to tackle the “Browser Security Problem”?
To contribute to this discussion we’ve performed some tests with an application virtualization solution (VMware ThinApp) recently. The goal of the tests was to determine if exploits can be stopped from causing harm if they happened within a virtualized deployment, which modes of deployment to use, which additional tweaks to apply etc.
The results can be found in our next newsletter, to be published at the end of this week. This post’s purpose was to provide some structure for “securing the browser” approaches, and to remind you that – at the end of the day – each potential security control must be evaluated from two main angles: “What’s the associated business impact and operational effort?” and “How much does it mitigate risk[s]?”.
Have a great day,
Enno
… recently published here.
While I certainly agree with those comments stating that there’s a fishy element in the – conspiracy theory nurturing – story itself, this reminds me that Graeme Neilson (who gave the “Netscreen of the Dead” talk at Troopers, discussing modified firmware on Juniper and Fortinet devices) and I plan to give a talk on “Supply Chain (In-)Security” at this year’s Day-Con event. We still have to figure out with Angus if it fits into the agenda (and if we have enough material for an interesting 45 min storyline ;-)) though. Stay tuned for news on this here.
On a related note: for family reasons I won’t be able to make it to Vegas for Black Hat. Rene will take over my part in our talk “Burning Asgard – What happens when Loki breaks free” which covers some cool attacks on routing protocols and the release of an awesome tool Daniel wrote to implement those in a nice clicky-clicky way.
To all of you guys going to Vegas: have fun and take care :-)
Today was an interesting day, for a number of reasons. Amongst those, it stuck out that we were approached by two very large environments (both > 50K employees) to provide security review/advice, as they want to “virtualize their DMZs, by means of VMware ESX”.
[yes, more correctly I could/should have written: “virtualize some of their DMZ segments”. But this essentially means “mostly all of their DMZs” in 6-12 months, “their DMZ backend systems together with some internal servers” in 12-24 months, and “all of this” in 24-36 months. So it’s the same discussion anyway, just on a shifted timescale ;-)]
On a whim, I’d like to give a spontaneous response here (to the underlying question, which is: “Is it a good idea to do this?”).
First of all, for those of you who are working as ISOs, a word of warning. Some of you, dear readers, might recall the slide of my Day-Con3 keynote titled “Don’t go into fights you can’t win”.
[I’m just informed that those slides are not yet online. they will be soon… in the interim, to get an idea: the keynote’s title was “Tools of the Trade for Modern (C)ISOs” and it had a section “Communication & Tools” in it, with that mentioned slide].
This is one of the fights you (as an ISO) can’t win. Business/IT infrastructure/whoever_brought_this_on_the_table will. Get over it. The only thing you can do is “limit your losses” (more on this in a second, or in another post).
But before that, you are certainly eager to know: “Now, what’s your answer to the question [good idea or not]?”.
I’m tempted to give a simple one here: “it’s all about risk [=> so perform risk analysis]”. This is the one we like to give in most situations (e.g. at conferences) when people expect a simple answer to a complex problem ;-).
However, it’s not that easy here. In our daily practice, when calculating risk, we usually work with three parameters (each on a scale from 1 [“very low”] to 5 [“very high”]), namely: the likelihood of some event (threat) occurring, the vulnerability (which the environment has with regard to that event) and the impact (if the threat “successfully” happens).
Let’s assume the threat is “Compromise of [ESX] host, from attacker on guest”.
Looking at “our scenario” – that is, “a number of DMZ systems is virtualized by means of VMware ESX” – the latter one (impact) might be the easiest one: let’s put in a “5” here. Under the assumption that at least one of the DMZ systems can get compromised by a skilled+motivated attacker at some point in time (if you did not expect this yourselves, why did you place those systems in a DMZ then? ;-))… under that assumption, one might put in a “2” for the probability/likelihood. Furthermore, _we_ think that, in the light of stuff like this and the horrible security history VMware has for mostly all of their main products, it is fair to go with a “3” for the vulnerability.
This, in turn, gives a “2 * 3 * 5 = 30” for the risk associated with the threat “Compromise of [ESX] host, from attacker on guest” (for a virtualized DMZ scenario, that is, running guests with a high exposure to attacks).
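For illustration, here’s the same calculation in a few lines of code; the treatment threshold of 20 is a made-up example value, not a fixed rule of our methodology.

```python
# The risk calculation from above: risk = likelihood * vulnerability * impact,
# each factor on a 1 ("very low") to 5 ("very high") scale.
def risk_score(likelihood: int, vulnerability: int, impact: int) -> int:
    return likelihood * vulnerability * impact

score = risk_score(likelihood=2, vulnerability=3, impact=5)
print(score)                               # 30
print("treatment needed:", score >= 20)    # threshold chosen for illustration only
```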
In practically all environments performing risk analysis similar to the one described above (in some other post we might explain our approach – used by many other risk assessment practitioners as well – in more detail), a risk score of “30” would require some “risk treatment” other than “risk retention” (see ISO 27005, 9.3 for our understanding of this term).
Still following the risk treatment options outlined in ISO 27005, this leaves:
a) risk avoidance (staying away from the risk-inducing action altogether). Well, this is probably not what the above mentioned “project initiator” will like to hear :-) … and, remember: this is a fight you can’t win.
b) risk transfer (hmm… handing your DMZ systems over to some 3rd party to run them virtualized might actually not really decrease the risk of the threat “Compromise of [ESX] host, from attacker on guest” ;-)).
c) risk reduction. But… how? There are not many options or additional/mitigating controls you can bring into this picture. The most important technical recommendation to be given here is the one of binding a dedicated NIC to every virtualized system (you already hear them yelling “why can’t we bring more than ~14 systems onto a physical platform?”, don’t you? ;-)). Some minor, additional advice will be provided in another post, as will some discussion of the management side/aspects of “DMZ virtualization”. (Notice how we’re cleverly trapping you into coming back here? ;-))
So, if you are sent back and asked to “provide some mitigating controls”… you simply can’t. There’s not much that can be done. You’re mostly thrown back to that well-known (but not widely accepted) “instrument of security governance”, that is: trust.
At the end of the day you have to trust VMware, or not.
We don’t. We, for our part, do not think that VMware ESX is a platform suited for “highly secure isolation” (at least not at the moment).
The jury is still out on that one… but presumably you all know the truth deep down anyway :-)
For completeness’ sake, here’s the general advice we give when we only have 60 seconds to answer the question “What do you think about the security aspects of moving systems to VMware ESX?”. It’s split into “MUST (NOT)” parts and “SHOULD (NOT)” parts; see RFC 2119 for more on their meaning. Here we go (a small scripted sketch of these rules follows the list):
1.) Assuming that you have a data/system/network classification scheme with four levels (like “1 = public” to “4 = strictly confidential/high secure”), you SHOULD NOT virtualize “level 4”. And think twice before virtualizing SOX-relevant systems ;-)
2.) If you still do this (virtualizing 4s), you MUST NOT mix those with other levels on the same physical platform.
3.) If you mix the other levels, then you SHOULD only mix two levels next to each other (2 & 3 or 1 & 2).
4.) DMZ systems SHOULD NOT be virtualized (on VMware ESX as of the current security state).
5.) If you still do this (virtualizing DMZ systems), you MUST NOT mix those with Non-DMZ systems.
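For those who prefer code over prose, here’s a small sketch of the rules above as a check routine. The data model (a host represented as a list of (classification_level, is_dmz) tuples) is made up for illustration.

```python
# Sketch of the five rules above as a simple check; the data model (a host as a
# list of (classification_level, is_dmz) tuples) is made up for illustration.
def check_host(guests):
    """guests: non-empty list of (level, is_dmz) tuples, level in 1..4."""
    findings = []
    levels = {level for level, _ in guests}
    dmz_flags = {is_dmz for _, is_dmz in guests}
    if 4 in levels:
        findings.append("rule 1 (SHOULD NOT): level-4 systems are virtualized")
        if len(levels) > 1:
            findings.append("rule 2 (MUST NOT): level-4 systems mixed with other levels")
    if max(levels) - min(levels) > 1:
        findings.append("rule 3 (SHOULD): only adjacent classification levels should be mixed")
    if True in dmz_flags:
        findings.append("rule 4 (SHOULD NOT): DMZ systems are virtualized")
        if False in dmz_flags:
            findings.append("rule 5 (MUST NOT): DMZ and non-DMZ systems on the same host")
    return findings

print(check_host([(2, True), (3, False)]))   # violates rules 4 and 5
```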
For those of you who have already violated advice no. 4 but – reading this – settle back mumbling “at least we’re following advice no. 5”… wait, my friends, the same people who forced your hand before will soon knock at your door… and tell you about all those “significant cost savings” again… and again…
A number of customers have approached us with questions like “Those new MiTM attacks against SSL/TLS – what’s their impact on the security of our SSL VPNs with client certificates?”
In the following we give our estimation, based on the information publicly available as of today.
On 11/04/09 two security researchers (Marsh Ray and Steve Dispensa) published a paper describing some previously (presumably/hopefully) unknown MiTM attacks against SSL/TLS. CVE-2009-3555 was assigned to the underlying vulnerabilities within SSL/TLS.
The attacks described might potentially allow an attacker to hijack an authenticated user’s (SSL/TLS) session. In an IETF draft published 11/09/09, describing a potential protocol extension intended to mitigate the attacks, the following is stated:
“SSL and TLS renegotiation are vulnerable to an attack in which the attacker forms a TLS connection with the target server, injects content of his choice, and then splices in a new TLS connection from a client. The server treats the client’s initial TLS handshake as a renegotiation and thus believes that the initial data transmitted by the attacker is from the same entity as the subsequent client data.”
Obviously this _could_ have dramatic security impact on most SSL VPN deployments. Still, within the research community the impact on SSL VPNs seems unclear at the very moment.
On Monday, Cisco issued a somewhat nebulous security advisory classifying the Cisco AnyConnect VPN Client as _not_ vulnerable (but quite a number of other products as vulnerable). OpenVPN has, for quite some time already, provided a particular directive (tls-auth) which is regarded as an effective way to mitigate the vulnerabilities. Other vendors (e.g. Juniper) have not yet issued any statements or advisories at all (see also here for an overview of different vendors’ patch status).
This might be a good sign (“no problems there”); it might just be that they’re still researching the pieces.
After discussing the problems with other researchers we expect more concrete attack scenarios to emerge in the near future, and we expect some of those future attacks to work against _some_ SSL VPN products as well. In case you have some SSL VPN technology in use, please contact your vendor _immediately_ asking for an official statement on this stuff.
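If you want to get a first impression of a given server yourself (e.g. your SSL VPN gateway), the following sketch asks a local OpenSSL build whether the server advertises the secure renegotiation extension. It assumes an openssl binary in the PATH that is recent enough to print a “Secure Renegotiation IS (NOT) supported” line in its s_client output – treat it as a quick indicator, not as an authoritative test.

```python
# Sketch: check whether a server advertises the secure renegotiation extension.
# Assumes a sufficiently recent "openssl" binary in the PATH whose s_client
# output contains a "Secure Renegotiation IS (NOT) supported" line.
import subprocess

def supports_secure_renegotiation(host: str, port: int = 443) -> bool:
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", f"{host}:{port}"],
        input=b"", capture_output=True, timeout=30,
    )
    output = proc.stdout.decode(errors="replace")
    return "Secure Renegotiation IS supported" in output

print(supports_secure_renegotiation("vpn.example.org"))  # hypothetical host
```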
Sorry for not having better advice for you right now. We do not want to spread FUD. On the other hand, it might be a cautious approach to “expect the worst” in this case. There’s vast consensus amongst researchers that this _is a big thing_ that most people did not expect in this protocol. A first public break-in based on the vulnerabilities has already been reported.
… this would not have happened. At least this is what $SOME_DLP_VENDOR might tell you.
Maybe, maybe not. It wouldn’t have happened if they’d followed “common security best practices” either. Like “not to process sensitive data on (presumably) private laptops” or “not to run file sharing apps on organizational ones” or “not to connect to organizational VPNs and home networks simultaneously”. Yadda yadda yadda.
Don’t get us wrong here. We’re well aware that these practices are not consistently followed in most organizations anyway. That’s part of human (and corporate) reality. And part of our daily challenge as infosec practitioners.
This incident just proves once more that quite some security problems have their origins in “inappropriate processes” which in turn are the results of “business needs”.
(All of which, of course, is a well-known platitude to you, dear reader ;-)).
The problem of data leakage by file sharing apps is not new (e.g. see this paper), nor is the (at least our) criticism of DLP.
Did you notice how quiet it has become around DLP, recently? Even Rich Mogull – whom we still regard as _the authority_ on the subject – seems not to blog extensively about it anymore.
Possibly (hopefully), we can observe the silent death of another overhyped, unneeded “security technology”…
Yesterday I took a long run (actually I did the full distance here) and usually such exercises are good opportunities to “reflect on the world in general and the infosec dimension of it in particular”… at least as long as your blood sugar is still at a level that supports somewhat reasonable brain activity ;-)
Anyhow, one of the outcomes of the various strange mental stages I went through was the idea of a series of blog posts on architectural or technological approaches that are widely regarded as “good security practice” but may – when looked at with a bit more scrutiny – turn out to be based on what I’d call “outdated threat models”.
This series is intended to be a quite provocative one but, hey, that’s what blogs are for: provide food for thought…
First part is a rant on “Multi-factor authentication”.
In practically all large organizations’ policies, sections mandating MFA/2FA in different scenarios can be found (not always formulated very precisely, but that’s another story). Common examples include “for remote access” – I’m going to tackle this one in a future post – or “for access to high value servers” [most organizations do not follow this one too consistently anyway, to say the least ;-)] or “for privileged access to infrastructure devices”.
Let’s think about the latter one for a second. What’s the rationale behind the mandate for 2FA here?
It’s, as so often, risk reduction. Remember that risk = likelihood * vulnerability * impact, and remember that quite frequently, for infosec professionals, the “vulnerability factor” is the one to touch (as likelihood and impact might not be modifiable too much, depending on the threat in question and the environment).
At the time most organizations’ initial “information security policy” documents were written (at least 5-7 years ago), many companies had mostly large, flat networks, password schemes for network devices were not aligned with “other corporate password schemes”, and management access to devices was often performed via Telnet.
As “simple password authentication” (very understandably) was not regarded sufficiently vulnerability-reducing then, people saw a need for “a second layer of control”… which happened to be “another layer of authentication”… leading to the aforementioned policy mandate.
So, at the end of the day, the demand for 2-factor auth here is essentially a demand for “2 layers of control”.
Now, if – in the interim – there are other layers of controls like “encrypted connections” [eliminating the threat of eavesdropping, but not the one of password bruteforcing] or “ACLs restricting which endpoints can connect at all” [very common practice in many networks nowadays and heavily reducing the vulnerability to password bruteforcing attacks], using those, combined with single-factor auth, might achieve the same level of vulnerability reduction, and thus the same level of overall risk.
Which in turn would make the need for 2FA (in this specific scenario) obsolete. Which shows that some security controls needed at some point in time might no longer be reasonable once threat models have changed (e.g. once the threat of “eavesdropping on unencrypted mgmt traffic from a network segment populated by desktop computers” has mostly disappeared).
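To make the argument a bit more tangible, here’s a purely illustrative comparison using the risk formula from above; all scores (and the threat description) are made-up example values, not measurements.

```python
# Illustrative only: made-up vulnerability scores showing how different control
# stacks can end up at a comparable risk level (likelihood and impact held constant).
LIKELIHOOD, IMPACT = 3, 4   # assumed constants for a threat like "device credential compromise"

scenarios = {
    "Telnet + password only, flat network": 4,
    "Telnet + password + 2FA, flat network": 2,
    "SSH + mgmt ACLs + password (single factor)": 2,
}

for name, vulnerability in scenarios.items():
    print(f"{name}: risk = {LIKELIHOOD * vulnerability * IMPACT}")
```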
Still, you might ask: what’s so bad about this? Why does this “additional layer of authentication” hurt? Simple answer: added complexity and operational cost. Why do you think that 2-factor auth for network devices can _rarely_ be found in large carrier/service provider networks? For exactly these reasons… and those organizations have a _large_ interest in protecting the integrity of their network devices. Think about it…