Breaking

Application Virtualization as Browser Security Control?

One of the biggest pains in the ass of most ISOs – and subsequently the subject of fierce debates between business and infosec – is the topic of “Browser Security”, i.e. essentially the question “How to protect the organization from malicious code brought into the environment by users surfing the Internet?”.

Commonly, the chain of events of a typical malware infection can be broken down into the following steps:

1.) Some code – whether binary or script code – gets transferred (mostly: downloaded) to some system “from the Internet”, that is “over the network”.

2.) This code is executed by some local piece of software (where “execution” might just mean “parse a PDF” ;-).
[btw, if you missed it: after Black Hat Adobe announced an out-of-band patch scheduled for 08/16, so stay tuned for another Adobe Reader patch cycle next week…]

3.) This code causes harm (either on its own or by reloaded payloads) to the local system, to the network the system resides in or to other networks.

Discussion of potential security controls can be centered on these steps, so we have

a) The area of network based controls, that means all sorts of “malicious content protection” devices like proxies filtering (mainly HTTP and FTP) traffic based on signatures, URL blacklists etc., and/or network based intrusion prevention systems (IPSs).
Practically all organizations use some of this stuff (however quite a number of them – unfortunately – merely bank on these pieces). Let me state this clearly: overall, using network based (filtering) controls contributes significantly to “overall protection from browser based threats” and we won’t discuss the advantages/disadvantages of this approach right here+now.
Still it should be noted that this is what we call a “detective/reactive control”, as it relies on somehow detecting the threat and scrubbing it after the detection act.

b) Controls in the “limit the capability to execute potentially harmful code” space. Which can be broken down to things like
– minimizing the attack surface (e.g. by not running Flash, iTunes etc. at all). The regular readers of this blog certainly know our stance on this approach ;-).
– configuration tweaks to limit the script execution capabilities of some components involved, like all the stuff to be found in IE’s zone model and associated configuration options (see this document for a detailed discussion of this approach, and the small registry sketch right after this list).
– patching (the OS, the browser, the “multimedia extensions” like Flash and Quicktime, the PDF reader etc.) to prevent some “programmatic abuse” of the respective components.
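As promised above, here’s a minimal sketch of such a configuration tweak – disabling Active Scripting in IE’s Internet zone via the documented registry encoding (zone 3 = Internet, action value 1400 = “Active scripting”, data 3 = disable). Python’s winreg is used purely for illustration; in managed environments you’d push this via GPO anyway:

```python
# Hedged sketch: disable Active Scripting in IE's Internet zone for the
# current user. Zone 3 = Internet; action 1400 = "Active scripting";
# data 3 = disable (0 = enable, 1 = prompt).
import winreg

ZONE_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "1400", 0, winreg.REG_DWORD, 3)
```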

Again, we won’t dive into an exhaustive discussion of the advantages/disadvantages of this approach right here+now.

c) Procedures or technologies striving to limit the harm in case an exploit happens “in browser space” (which, as per our definition, encompasses all add-ons like Flash, Quicktime etc.). This includes DEP, IE protected mode, sandboxing browsers etc.

Given the weaknesses the network based control approach might have (in particular in times of targeted attacks. Oops, sorry, of course I mean: in times of the Advanced Persistent Threat [TM] ;-)) and the inability (or reluctance?) to tackle the problem on the “code execution” front-line in some environments, in the interim another potential control has gained momentum in “progressive infosec circles”: using virtualization technologies to isolate the browser from the (“core”) OS, from other applications or just from the filesystem.
Three main variants come to mind here: full OS virtualization techniques (represented, for example, by Oracle VirtualBox or VMware Workstation), application virtualization solutions (like Microsoft App-V or VMware ThinApp) and, thirdly, what I call “hosted browsing” (where some MS Terminal Server farm potentially located in a DMZ, or even “the cloud” may serve as “[browser] hosting infrastructure”).
In general, on an architecture level this is a simple application of the principle of “isolation” – and I really promise to discuss that set of architectural security principles we use at ERNW at some point in this blog ;-).

While I know that some of you, dear readers, use virtualization technologies to “browse safely” on a daily (but individual use) basis, there are still some obstacles to large scale use of this approach, like how to store/transfer or print documents, how to integrate client certificates – in particular when on smart cards – into these scenarios, how to handle “aspects of persistence” (keeping cookies and bookmarks vs. not keeping potentially infected “browser session state”) etc.
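To illustrate the “persistence” point for the full OS virtualization variant: a disposable browser VM can simply be reverted to a known-clean snapshot after each session, discarding any infection along with the session state. A minimal sketch, assuming VirtualBox’s VBoxManage CLI; the VM name “BrowserVM” and snapshot name “clean” are placeholders:

```python
# Hedged sketch: run a throwaway browsing session in a VirtualBox VM and
# revert it to a clean snapshot afterwards, discarding cookies, bookmarks
# and any malware the session may have picked up.
import subprocess

VM, SNAPSHOT = "BrowserVM", "clean"  # placeholder names

subprocess.run(["VBoxManage", "startvm", VM, "--type", "gui"], check=True)
input("Browse away; press Enter when done to discard the session...")
subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=True)
subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
```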
And, even if all these problems can be solved, the big question remains: does it help, security-wise? Or, in infosec terms: to what degree is the risk landscape changed if such an approach is used to tackle the “Browser Security Problem”?

To contribute to this discussion we’ve performed some tests with an application virtualization solution (VMware ThinApp) recently. The goal of the tests was to determine if exploits can be stopped from causing harm if they happened within a virtualized deployment, which modes of deployment to use, which additional tweaks to apply etc.
The results can be found in our next newsletter, to be published at the end of this week. This post’s purpose was to provide some structure for “securing the browser” approaches, and to remind you that – at the end of the day – each potential security control must be evaluated from two main angles: “What’s the associated business impact and operational effort?” and “How much does it mitigate risk[s]?”.
Have a great day,
Enno

Breaking

Spooky Story about Break-In in Military Contractor Facility

… recently published here.
While I certainly agree with those comments stating that there’s a fishy element in the – conspiracy theory nurturing – story itself, this reminds me that Graeme Neilson (who gave the “Netscreen of the Dead” talk at Troopers, discussing modified firmware on Juniper and Fortinet devices) and I plan to give a talk on “Supply Chain (In-)Security” at this year’s Day-Con event. We still have to figure out with Angus if it fits into the agenda (and if we have enough material for an interesting 45 min storyline ;-)) though. Stay tuned for news on this here.

On a related note: for family reasons I won’t be able to make it to Vegas for Black Hat. Rene will take over my part in our talk “Burning Asgard – What happens when Loki breaks free” which covers some cool attacks on routing protocols and the release of an awesome tool Daniel wrote to implement those in a nice clicky-clicky way.

To all of you guys going to Vegas: have fun and take care 😉

Enno

Building

Software Developers Don’t Use Available Security Features

According to SANS NewsBites Vol. XII, Issue 53, recently published, there’s a lack of 3rd party developer support for some security features Microsoft introduced years ago. We at ERNW have made similar observations when performing security assessments of COTS [commercial off-the-shelf] software. We therefore created a methodology, a proof of concept tool and a metric to test and rate closed source software, where (amongst other approaches) these security features are checked and their (non-)presence contributes to an overall evaluation of the trustworthiness of the applications in question. The concept “How to rate the security in closed source software” was presented to the public at Troopers10 and at Hack in the Box 2010 in Amsterdam. The slides can be found here.
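For a taste of what such a check boils down to technically: the opt-in to DEP and ASLR is just a pair of flag bits in a PE file’s optional header. Here’s a minimal sketch – emphatically not our PoC tool, and assuming the third-party pefile module – that inspects those two bits:

```python
# Minimal sketch: check whether a PE file (EXE/DLL) opts in to DEP and
# ASLR via its DllCharacteristics flags. Requires "pip install pefile".
import sys
import pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in
IMAGE_DLLCHARACTERISTICS_NX_COMPAT    = 0x0100  # DEP opt-in

def check_security_features(path):
    pe = pefile.PE(path, fast_load=True)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    return {
        "ASLR (DYNAMICBASE)": bool(flags & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE),
        "DEP (NXCOMPAT)": bool(flags & IMAGE_DLLCHARACTERISTICS_NX_COMPAT),
    }

if __name__ == "__main__":
    for name, present in check_security_features(sys.argv[1]).items():
        print(f"{name}: {'yes' if present else 'NO'}")
```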

Building

News from the Desktop, Edition 2010/07/21

Back on track as for one of our favorite rant subjects: desktop security. This stuff, commonly called the “LNK vulnerability”, has gained quite some momentum in the last days, including the release of a Metasploit module and a temporary rise of the SANS Internet Storm Center’s Infocon level to yellow (it’s back to green in the interim).

CVE-2010-2568 has been assigned and some technical details can be found here and here.

To give you a rough idea of how this piece works, here’s a quote from the US-CERT advisory:

“Microsoft Windows fails to safely obtain icons for LNK files. When Windows displays Control Panel items, it will initialize each object for the purpose of providing dynamic icon functionality. This means that a Control Panel applet will execute code when the icon is displayed in Windows. Through use of an LNK file, an attacker can specify a malicious DLL that is to be processed within the context of the Windows Control Panel, which will result in arbitrary code execution. The specified code may reside on a USB drive, local or remote filesystem, a CD-ROM, or other locations. Viewing the location of a LNK file with Windows Explorer is sufficient to trigger the vulnerability.”

In short, as the Microsoft Malware Protection Center puts it: “[S]imply browsing to the removable media drive using an application that displays shortcut icons (like Windows Explorer) runs the malware without any additional user interaction.”

So actually, there is no exploitation in the sense of a buffer overflow in the LNK handling routines. The flaw just triggers the download (and execution) of some piece of binary code – to be run on the local “compromised” box – from some location.
Realizing that brings us to an immediate discussion of potential mitigation strategies, which is – next to “some usual rant” 😉 – the main intent of this post.

As so often, the US-CERT advisory provides good guidance on mitigating controls. It lists the following (most with a comment from my side):

a) Disable the displaying of icons for shortcuts.

Do not do this! It will most probably break your (users’) desktop experience in a horrible way.
Chester Wisniewski has a nice image of a desktop treated this way in yesterday’s post on the subject.

b) Disable AutoRun

This is pretty much always a good idea (security-wise), but it – partially – limits only one attack vector (removable media).
Interestingly enough, the MS KB article on disabling AutoRun functionality in Windows was last reviewed on 2010/07/01, and the CVE number for the LNK vulnerability was assigned on 2010/06/30. There are strange coincidences out there… 😉

And, of course, not allowing untrusted USB devices to be connected to the organization’s machines usually is a good thing as well.
[for the record: yes, I know, there are organizations out there where business tells you this is not possible for some reason or another. Which might be true; I will not go into this debate right here and now. I just want to remind you, dear reader, of some good ole basics ;-)].

In the meantime people expect (and see) the vulnerability being exploited over network shares, so focusing on removable media/USB devices alone might be too narrow a focus anyway.
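If you prefer scripting the KB article’s registry change over clicking through it, here’s a hedged sketch (assuming Python’s winreg, run elevated on the box in question; in managed environments a GPO is of course the natural vehicle):

```python
# Hedged sketch: disable AutoRun for all drive types machine-wide by
# setting NoDriveTypeAutoRun to 0xFF, as described in the MS KB article.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
```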

c) Use least privilege

Aka: do not work as admin. I don’t really have to comment on this here, do I? It’s a no-brainer. Well, let’s say: it should be a no-brainer… when gathering with some infosec people from a >100K user environment recently I learned they still have about 30% of users with local admin privs…
[and don’t get me wrong here: there might be good reasons for this. and those guys are not the only ones with such a landscape.]
To be discussed in more detail at another occasion.

d) Disable the WebClient service.

While I like this very much – given it’s a preventative control [“minimal machine approach”] and, btw, addresses other (past and potential future) vulnerabilities as well, e.g. those of MS10-045 published one week ago – it should be noted that this potentially breaks MS SharePoint (and other stuff as well).
So, unfortunately, again this will not be feasible in quite some environments. Still, it might remind people that WebDAV is a technology that can be (ab)used to access “network drives” in completely untrusted/untrustworthy locations.
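For completeness, disabling the service boils down to two sc.exe calls; a hedged sketch (run elevated, and test SharePoint & friends first):

```python
# Hedged sketch: stop and disable the WebClient (WebDAV redirector) service.
import subprocess

subprocess.run(["sc", "stop", "WebClient"], check=False)  # may already be stopped
subprocess.run(["sc", "config", "WebClient", "start=", "disabled"], check=True)
```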

e) Block outgoing SMB traffic

Well, yes…
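For those who want to at least try it on the local Windows Firewall (Vista and later; a perimeter ACL achieves the same for whole network segments), a hedged sketch of such a rule:

```python
# Hedged sketch: add a Windows Firewall rule blocking outbound SMB/NetBIOS
# (TCP 139 and 445), so LNK files can't pull payloads from remote shares.
import subprocess

subprocess.run([
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=Block outbound SMB", "dir=out", "action=block",
    "protocol=TCP", "remoteport=139,445",
], check=True)
```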

The MS advisory, updated today, provides another one:

f) Blocking the download of LNK and PIF files

Again, this is soo obvious that I refrain from any comment.

Those of you regularly following this blog or our public statements on desktop security (like this one) might have noticed that – as so often – the two main controls quite some organizations rely on are not mentioned here:

Patching: for the simple reason that there’s no patch as of today (and the genie has been out of the bottle, to public knowledge, for five days now).

Antimalware “protections”: not sure how this could prevent downloading and executing arbitrary binary code. The announcements of major AV vendors “we now have a signature for this” mostly address the Stuxnet stuff found in the initial exploit, nothing else. So this is mostly window-dressing.

So far so good (or bad), the main point of this post is – and yes, I’m aware I needed a long warm-up today 😉 – the following: the security problem discussed here can be broken down to: “there’s a vulnerability that triggers an exploit that goes somewhere, downloads some code and executes it”.

Which, in turn, raises the fundamental question: why should some average corporate desktop computer be allowed to go to some arbitrary location, download code and – above all – run this code?
Restricting where executable code may be fetched from or, even better, just allowing a specified set of applications to run could (more or less) easily solve the kind of problems vulnerabilities like this one impose. And the technologies needed – in MS space – like Software Restriction Policies, AppLocker or just (as part of SRPs) “path rules” restricting where executables may be run from, have been available for a long time.
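To illustrate the principle (and only the principle – SRPs/AppLocker enforce this inside the OS, far more robustly than any userland script ever could), here’s a conceptual sketch of hash-based execution whitelisting:

```python
# Conceptual sketch of execution whitelisting: only binaries whose SHA-256
# hash is on an approved list may run. SRPs/AppLocker do exactly this kind
# of check (by hash, path or publisher) inside the OS itself.
import hashlib

ALLOWLIST = {
    # SHA-256 hashes of approved executables, e.g. harvested from a golden
    # image -- the entry below is just a placeholder (hash of empty input).
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWLIST
```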

As Marcus Ranum is much more eloquent than me – especially when it comes to ranting, where he’s nearly unbeatable 😉 – I allow myself to quote him literally, from the “Schneier-Ranum Face-Off” on “Is antivirus dead?”:

“Of course, most organizations don’t know (or haven’t got the courage to discover) what programs they allow–and, ultimately, isn’t that the root of their security problems? When I read the security news and hear that thus-and-such government agency is trying to decide if Facebook is a necessary application, it makes my head spin. In Marcus-land, where I come from, you decide what is a necessary application first, not after you have 40,000 employees who have gotten so used to it that they now think Twitter is a constitutionally protected right. Isn’t a virus or malware just unauthorized execution that someone managed to sneak onto your machine? If we adopt a model whereby there are programs that are authorized (i.e., on a whitelist) and the operating system should terminate everything else, then malware and viruses are history, unless their authors can somehow fool the administrator into authorizing them to run.
[…]
Whenever I talk about execution control/whitelisting with corporate types, someone says, ‘But we don’t really have a way of determining all the applications that we use!’ Really? Wow. That sounds like a policy that’s basically, ‘We have no idea what our computers are for.’ In other words: ‘We’ve given up, and as far as we’re concerned, our computers are an unmanaged mess.’ Or to put it another way, malware heaven. Can anyone even calculate the cost of malware and viruses (as well as the occasional office time spent playing online games) to businesses? That cost, ultimately, is paid solely in order to avoid the difficulty of determining what programs are authorized — what’s the purpose of the computer an employee is provided to use?

Here’s why I keep talking about execution control: it’s actually ridiculously easy compared to dealing with antivirus and antimalware. So why isn’t everyone doing it? Because it’d dramatically cut down on our ability to goof off. If executives knew how easy it was to cut back on productivity-wasting goof-off-ware, don’t you think it would be happening all over the place by now? If, instead, we tell them it’s hard to know what executables we use in the office…well, what nobody knows won’t hurt anyone.”

Well said, Marcus. Nothing to add here.

So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops. This might contribute to a bit more sustainable security in your environment, one day in the future.

thanks for your time,

Enno

Misc

The Emperor’s New Security Indicators

Interesting research from Stuart Schechter et al. here.
They evaluated the effect that the removal or modification of online banking sites’ security features had on users’ behavior (as for entering or withholding their passwords). Maybe not too surprising for some of you, it turned out that the vast majority of users entered their passwords even if obviously alarming clues were present on the websites.
This, again, shows how important it is to understand how users behave, what their motives and incentives are and how to build environments that help them act securely. This applies even more in corporate space. At times, bringing an industrial/organizational psychologist in might be a much better investment than writing yet-another-ignored-piece-of-policy.

have a great sunny Sunday everybody,

Enno

Building

Our Favorite Subject: [It’s all about] Risk

Some days ago my old friend Pete Herzog from ISECOM posted a blog entry titled “Hackers May Be Giants with Sharp Teeth” here which – along with some quite insightful reflections on the way kids perceive “bad people” – contains his usual rant on (the uselessness of) risk assessment.
Given that this debate (whether taking a risk-based infosec approach is a wise thing or not) is a constant element of our – Pete’s and my – long-lasting relationship, I somehow feel enticed to respond 😉

Pete writes about a 9-yr-old girl who – when asked about “bad people” – stated that those “look like everybody else” and, referring to risk assessment, he concludes “that you can’t predict the threat reliably and therefore you can’t determine it.”
I fully agree here. I mean, if you _could_ predict the threat reliably, why perform risk assessment at all? Taking decisions would simply be following a path of math then. Unfortunately most infosec practitioners have neither a crystal ball – at least not a dependable one – nor the future prediction capabilities this entity seems to have …
So… as long as we can’t “determine reliably”, we have to use … risk assessments. “Risk” deals with uncertainty. Otherwise it would be called “matter of fact” 😉
Here’s how the “official vocabulary of risk management” (ISO Guide 73:2009) defines risk: “effect of uncertainty on objectives”.
Note that central term, “uncertainty”? That’s what risk is about. And that’s, again, what risk assessments deal with: uncertainty. In situations where – despite that uncertainty – well-informed decisions have to be taken. Which is a quite common situation in an ISO’s professional life 😉
Effective risk assessment helps answer questions in scenarios characterized by some degree of uncertainty. Questions like “In which areas do we have to improve in the next 12 months?” in (risk assessment) inventory mode, or “Regarding some technology change in our organization, which are the main risks in this context and how do they change?” in governance mode. [See this presentation for an initial discussion of these terms/RA modes.]
So asking for “reliable threat prediction/determination” from risk assessments is just not the right expectation. In contrast, structured RA can certainly be regarded as the best way to “take the right decisions in complex environments and thereby get the optimal [increase of] security posture, while being limited by time/resource/political constraints and, at the same time, facing some uncertainty”.

Btw: the definition from ISO 73:2009 – which is used by the recently published ISO 31000 (Risk management – Principles and guidelines) too – nicely shows the transformation the term “risk” has undergone in the last decade. From “risk = combination of probability and consequence of an event [threat]” in ISO 73:2002, through ISO 27005:2008’s inclusion of a “vulnerability element” (called “ease of exploitation” or “level of vulnerability” in the appendix), to the one in ISO 73:2009 cited above (which, for the first time, does not only focus on negative outcomes of events but considers positive outcomes as well – which in turn reflects the concept of “risk & reward” increasingly used in some advanced/innovative infosec circles, to be discussed in this blog on another occasion).
Most (mature) approaches used amongst infosec professionals currently follow the “risk = likelihood * vulnerability * impact” line. We, at ERNW, use this one as well.
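For illustration, a minimal sketch of that scoring scheme – each factor rated on a 1 (“very low”) to 5 (“very high”) scale, giving scores between 1 and 125:

```python
# Minimal sketch of the scoring line "risk = likelihood * vulnerability *
# impact", each factor rated on a 1 ("very low") to 5 ("very high") scale.
def risk_score(likelihood, vulnerability, impact):
    for factor in (likelihood, vulnerability, impact):
        if not 1 <= factor <= 5:
            raise ValueError("factors are rated on a 1-5 scale")
    return likelihood * vulnerability * impact

# e.g. a threat rated likelihood 2, vulnerability 3, impact 5:
print(risk_score(2, 3, 5))  # -> 30
```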

Which brings me to the next inaccuracy in Pete’s blog entry. He writes: “Threats are not the same for everyone nor do they actually effect us all the same. So why do we put up with risk assessments?”.

Indeed, (most) threats _are_ the same for everyone. Malware is around, hardware fails from time to time and humans make errors. Point is, all this does _not_ affect everyone “the same way” (those with the right controls will not get hit hard by malware, those with server clusters will survive failing hardware more easily, those with evolved change control processes might have a better posture when it comes to the consequences of human error). And all this is reflected by the – context-specific – “vulnerability factor” in risk assessments (and, for that matter, sometimes by the “impact factor” as well). So while the threats might be the same, the _associated risks_ might/will not be the same.
Which, again, is the exact reason for performing risk assessments ;-))
If they _were_ the same, one would just have to look up some table distributed by $SOME_INDUSTRY_ASSOCIATION.

So, overall, I’m not sure that I can follow Pete’s line of argument here. Maybe we should have a panel discussion on this at next year’s Troopers 😉

have a good one everybody,

thanks

Enno

Events

ERNW at NinjaCon (fka PlumberCon)

Yesterday we made our way to Vienna to participate in and contribute to NinjaCon (formerly known as PlumberCon, before Nintendo Inc. claimed their rights ;)).

After our arrival Oliver held a five hour workshop on Penetration Testing and put the finishing touches on his slides about ‘Attacking Cisco Enterprise WLANs’, which he will deliver later today together with Daniel. And last but not least, Daniel will be the Packet Master of PacketWars™ Vienna taking place in the evening.

As a sponsor of this young and vibrant conference, we’re proud to share our equipment and know-how to support the networks on site.

Talking about young and vibrant: last week we held one of our beloved internal workshops at ERNW to discuss the latest in ITSec and teamwork – but also to chat with colleagues or listen to one of Enno’s rants on $some-broken-technology. When having dinner on Tuesday we went crazy planning for TROOPERS11. I don’t like to talk too much about ‘good energy in the room’, but there was something really enthusiastic and insanely creative about it – and whatever it was, we’re gonna use it to make it even more enjoyable, educational and unforgettable than this year.

As we progress in Vienna I’m going to update this blog post. So stay tuned!
Cheers, Florian & the team

UPDATE: NinjaCon is over. Besides the usual small hiccups at such an event it was a really great conference for all of us. Excellent speakers, an exciting location and the overall perfect atmosphere to interact, chat and learn really sealed the deal here. Big applause to the host @astera and her team!

Events

TROOPERS10 presentation materials finally online

I’m happy to announce that the presentations and a majority of the videos from TROOPERS10 are finally available to you.

You’ll find the slides at the conference’s website troopers.de, more precisely here. Plenty of videos were uploaded and are now ready for streaming at viddler.com/TROOPERS. Enjoy!

Please excuse the long waiting time – this is a big item on our ‘improvements for upcoming events’ list. Talking about improvements: if you have any suggestions, criticism or even praise for past or upcoming events – let us know in the comment section.

Thanks,
Florian

PS: At the moment I’m putting the finishing touches on some really nice photos from TROOPERS10. To stay up-to-date please subscribe to our RSS feed or, if you’re into Twitter, follow @WEareTROOPERS 😉

Building

News from Old Friends, Edition 2010/06/09

This is the first post of a – potential – series of rants on ubiquitous pieces of crap (security-wise), bothering pretty much every ISO I know.
I’m talking about “common desktop applications” and today’s topic is going to be the beloved Adobe Flash Player. Some of you who had the opportunity (or imposition 😉) to listen to one of my talks covering the “modern enterprise security space” (e.g. this one) might remember me saying sth like “If a fairy godmother turned up and asked me for three things to get rid of in order to enhance overall corporate information security in a sustainable way, my answers would be…” and then giving Adobe Flash as the first mention (before you ask: amongst the other candidates are Apple Quicktime, Windows GDI and “Javascript in Acrobat Reader”).

And, yes, I can already hear all the yelling: “But we absolutely need Flash on our corporate desktops.” Maybe that’s really really the case. Maybe not. I’ve fought that fight in many environments, and usually lost it. Kind-of been there, done that. I’d just like to point out that – from a security point of view – this is a risky thing.
On a personal level I still do not get why Flash is needed. I can certainly be regarded as a “typical executive user”, being online most parts of the day and performing all sorts of (what I think are) “typical actions” like travel booking, online financial services etc. All this can be done with my 64-bit browser that just has no associated Flash player. Seems my mileage as for “corporate browser use” still varies from that in many of those – “we absolutely need Flash on our corporate desktops” – organizations…
And even if your company’s marketing dept is powerful enough to push for large scale deployment of that fancy technology (some of you certainly know the “We have our own Youtube channel” argument), I have yet to understand why it’s needed on the desktops in the engineering or R+D departments. But oh well…

Still, all this ranting is a bit outside the intended scope of this post. Actually the trigger for the post was this advisory titled “Security Advisory for Flash Player, Adobe Reader and Acrobat” and released by Adobe some days ago.
Here’s a little quote from the summary:

“A critical vulnerability exists in the current versions of Flash Player […] for Windows, Macintosh, Linux and Solaris operating systems, and the authplay.dll component that ships with Adobe Reader and Acrobat v9.x for Windows, Macintosh and UNIX operating systems. This vulnerability […] could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild via limited, targeted attacks against Adobe Reader v9 on Windows.”

Oops, sorry, in fact the quote above was from this advisory, initially released on July 22, 2009.
The current one (from 06/04/2010) goes like this (again quoting the summary):

“A critical vulnerability exists in Adobe Flash Player 10.0.45.2 and earlier versions for Windows, Macintosh, Linux and Solaris operating systems, and the authplay.dll component that ships with Adobe Reader and Acrobat 9.x for Windows, Macintosh and UNIX operating systems. This vulnerability (CVE-2010-1297) could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild against both Adobe Flash Player, and Adobe Reader and Acrobat.”

Note the difference?
There’s practically none: same products affected, same component to blame, same workaround [deactivating authplay.dll], same “Adobe quality assurance element” [discovery of the stuff being exploited in the wild] responsible for the public statement.
In short: SSDD.

Mitigation Approaches

Given I try to be a responsible citizen [and, for that matter, a responsible security practitioner too ;-)], I’d like to discuss potential approaches for efficiently mitigating the risk of being attacked “actively in the wild” due to (not only) this vulnerability.
At ERNW, for many years we’ve been using sth we called “The small catechism of IT security”, which was essentially a set of simple fundamental rules for securing complex systems. This piece included, amongst others, these ones:

Minimal Machine.
Least Privilege.
Patching.

Following these lines some approaches come to mind and I’ll discuss some of those.

a) Do not run Flash at all. Yes, we had this discussion already. And, no: I do not live in an ivory tower. And I mainly consult to very large organizations.
Sure, this might be one of the fights you (as an ISO) just can’t win. But, heck, I still dare to post this on our very personal and ranty blog: Running Flash on corporate desktops is simply asking for trouble. Asking for trouble loudly. Very loudly.

It should be noted that, according to this, removing Adobe Flash (e.g. in the way described here) will not remove the instances of Flash Player that are installed with Adobe Reader 9 or other Adobe products.

There are always a lot of trade-offs in managing complex IT environments. There are business requirements – and, as we security folks know, business pretty much always wins (and this is fully ok, as security is not the most important thing in corporate life) –, there are “cost considerations”, all sorts of politics, and at the end of the day there’s our mission of getting the best possible security stance given all these considerations and trade-offs. Running vulnerable software to provide some business function (while at the same time inducing the risk of getting owned) obviously is such a trade-off, and it’s a common one.

As for Flash, one should just be aware that – in most environments – there’s only little business value in running it, but – in all environments – there’s quite some associated risk.

b) Do not run Flash embedded in PDFs (by deactivating authplay.dll as described in the advisories).

I think this is – security-wise – a very feasible approach (following that good ole security principle called “minimal machine”). The only problem might be that the stuff gets re-deployed/re-enabled the next time you patch Adobe Reader. So operational processes might have to be adjusted to ensure it does not re-appear.
And, of course, this is an ugly one (deleting a dll), which might not be “aligned with your sw management and deployment processes” 😉
This document mentions that deleting another dll as well avoids the crashes when invoking a file with SWF code in it. Haven’t tested this, though.
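If you want to script the workaround, here’s a hedged sketch; the Reader install path below is an assumption, so adjust it to your deployment (and remember the re-deployment caveat above – re-check after every Reader patch):

```python
# Hedged sketch: deactivate Flash-in-PDF by renaming authplay.dll out of
# the way, per the workaround in the Adobe advisories. Updates may restore
# the file, so this needs to be re-applied after patching.
import os

AUTHPLAY = r"C:\Program Files\Adobe\Reader 9.0\Reader\authplay.dll"  # assumed path

if os.path.exists(AUTHPLAY):
    os.rename(AUTHPLAY, AUTHPLAY + ".disabled")
```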

Btw: this is a preventative control, whereas patching is a reactive one. Most probably I don’t have to tell any reader of this blog that preventative controls tend to have a better cost-impact ratio than reactive ones, do I? 😉

c) Patching. Hmm, unfortunately there is no patch as of today. And the stuff is “exploited in the wild” (Adobe, thank you for letting us know, once more! What about just adding a checkbox somewhere in “Preferences” that allows disabling the playing of embedded SWF stuff at all?).
Furthermore, patch cycles for Adobe products are quite long in most environments (due to the number of integration aspects and side effects).

So, dear reader who’s still sympathetic to patching (as for Adobe stuff): do not pass go, do not collect $200, but maybe re-read the last sentences of the two former points.

d) Use of an alternate PDF reader, like Foxit Reader. Looking at this I’m not sure if this is really better (security-wise) and most probably it’s not an option for most corporate environments anyway (for reasons outside the security realm).

e) Security measures/approaches from the “least privilege” space, like “running Adobe stuff at a low integrity level” (on Windows systems that have integrity levels, that is Vista or Windows 7). While this can certainly help and can be regarded as a nice preventative control, it has the big disadvantage that taking the route of “least privilege” usually has: added complexity and high operational cost… (which is, btw, why it practically never works out to a satisfactory degree).

f) Gateway-based controls. In a number of environments there will be quite some praying that “our malicious content protection saves us”. This may happen. Or not. Taking the “detective/reactive way” (which is what most anti-malware controls do) has well-known weaknesses…
Sanitizing Flash (like Blitzableiter does) could be a much better approach. Hopefully technologies like this will gain some deployment in the near future.

And hopefully in the upcoming world of HTML5 we won’t see that high risk software piece called Flash Player anymore (alas, experience tells us there will be other similarly awful stuff. But that’s another story…)

have a great day,

Enno

Breaking

Some reflections on virtualization security, part 1

Today was an interesting day, for a number of reasons. Amongst those, it stuck out that we were approached by two very large environments (both > 50K employees) to provide security review/advice, as they want to “virtualize their DMZs, by means of VMware ESX”.
[yes, more correctly I could/should have written: “virtualize some of their DMZ segments”. But this essentially means “mostly all of their DMZs” in 6-12 months, “their DMZ backend systems together with some internal servers” in 12-24 months, and “all of this” in 24-36 months. So it’s the same discussion anyway, just on a shifted timescale ;-)]

On a whim, I’d like to give a spontaneous response here (to the underlying question, which is: “is it a good idea to do this?”).

First, for those of you working as ISOs, a word of warning. Some of you, dear readers, might recall the slide of my Day-Con3 keynote titled “Don’t go into fights you can’t win”.
[I’ve just been informed that those slides are not yet online. They will be soon… in the interim, to get an idea: the keynote’s title was “Tools of the Trade for Modern (C)ISOs” and it had a section “Communication & Tools” in it, with that mentioned slide.]
This is one of the fights you (as an ISO) can’t win. Business/IT infrastructure/whoever_brought_this_on_the_table will. Get over it. The only thing you can do is “limit your losses” (more on this in a second, or in another post).
But before that, you are certainly eager to know: “now, what’s your answer to the question [good idea or not]?”.

I’m tempted to give a simple one here: “it’s all about risk [=> so perform risk analysis]”. This is the one we like to give in most situations (e.g. at conferences) when people expect a simple answer to a complex problem ;-).
However it’s not that easy here. In our daily practice, when calculating risk, we usually work with three parameters (each on a 1 [“very low”] to 5 [“very high”] scale), that are: the likelihood of some event (threat) occurring, the vulnerability (that the environment has with regard to that event) and the impact (if the threat “successfully” happens).
Let’s assume the threat is “Compromise of [ESX] host, from attacker on guest”.
Looking at “our scenario” – that is, “a number of DMZ systems is virtualized by means of VMware ESX” – the latter one (impact) might be the easiest one: let’s put in a “5” here. Under the assumption that at least one of the DMZ systems can get compromised by a skilled+motivated attacker at some point in time (if you did not expect this yourselves, why have you placed those systems in a DMZ then? 😉) … under that assumption, one might put in a “2” for the probability/likelihood. Furthermore _we_ think that, in the light of stuff like this and the horrible security history VMware has for mostly all of their main products, it is fair to go with a “3” for the vulnerability.
This, in turn, gives a “2 * 3 * 5 = 30” for the risk associated with the threat “Compromise of [ESX] host, from attacker on guest” (for a virtualized DMZ scenario, that is, running guests with a high exposure to attacks).

In practically all environments performing risk analysis similar to the one described above (in some other post we might explain our approach – used by many other risk assessment practitioners as well – in more detail), a risk score of “30” would require some “risk treatment” other than “risk retention” (see ISO 27005 9.3 for our understanding of this term).
Still following the risk treatment options outlined in ISO 27005, these are left:

a) risk avoidance (staying away from the risk-inducing action altogether). Well, this is probably not what the above mentioned “project initiator” will like to hear 😉 … and, remember: this is a fight you can’t win.

b) risk transfer (hmm… handing your DMZ systems over to some 3rd party to run them virtualized might actually not really decrease the risk of the threat “Compromise of [ESX] host, from attacker on guest” ;-))

c) risk reduction. But… how? There are not many options or additional/mitigating controls you can bring into this picture. The most important technical recommendation to be given here is that of binding a dedicated NIC to every virtualized system (you already hear them yelling “why can’t we put more than ~ 14 systems on a physical platform?”, don’t you? 😉). Some minor additional advice will be provided in another post, as will some discussion of the management side/aspects of “DMZ virtualization” (notice how we’re cleverly trapping you into coming back here? ;-))
So, if you are sent back and asked to “provide some mitigating controls”… you simply can’t. There’s not much that can be done. You’re mostly thrown back to that well-known (but not widely accepted) “instrument of security governance”, that is: trust.

At the end of the day you have to trust VMware, or not.
We don’t. We – for us – do not think that VMware ESX is a platform suited for “high secure isolation” (at least not at the moment).
The jury is still out on that one… but presumably you all know the truth, deep down inside 😉
For completeness’ sake, here’s the general advice we give when we only have 60 seconds to answer the question “What do you think about the security aspects of moving systems to VMware ESX?”. It’s split into “MUST” (or “MUST NOT”) parts and “SHOULD” parts. See RFC 2119 for more on their meaning. Here we go (a small sketch encoding the “mixing” rules follows the list):

1.) Assuming that you have a data/system/network classification scheme with four levels (like “1 = public” to “4 = strictly confidential/high secure”), you SHOULD NOT virtualize “level 4” systems. And think twice before virtualizing SOX-relevant systems 😉
2.) If you still do this (virtualizing 4s), you MUST NOT mix those with other levels on the same physical platform.
3.) If you mix the other levels, then you SHOULD only mix two levels next to each other (2 & 3 or 1 & 2).
4.) DMZ systems SHOULD NOT be virtualized (on VMware ESX as of the current security state).
5.) If you still do this (virtualizing DMZ systems), you MUST NOT mix those with Non-DMZ systems.
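And here’s the promised sketch, encoding the “mixing” rules (2, 3 and 5) as a simple placement check – a conceptual illustration of the advice above, not a substitute for your own risk analysis:

```python
# Conceptual sketch: check whether the set of guests proposed for one
# physical ESX host violates the "mixing" rules above.
def placement_ok(guests):
    """guests: list of (classification_level, is_dmz) tuples, levels 1-4."""
    levels = {lvl for lvl, _ in guests}
    dmz_flags = {dmz for _, dmz in guests}
    if 4 in levels and len(levels) > 1:
        return False  # rule 2: never mix level 4 with other levels
    if max(levels) - min(levels) > 1:
        return False  # rule 3: only mix two adjacent levels (1&2 or 2&3)
    if len(dmz_flags) > 1:
        return False  # rule 5: never mix DMZ and non-DMZ systems
    return True       # rules 1 & 4 are SHOULD NOTs, left to your judgment

print(placement_ok([(2, False), (3, False)]))  # True: adjacent levels
print(placement_ok([(3, True), (3, False)]))   # False: DMZ mixed with non-DMZ
```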

For those of you who have already violated advice no. 4 but – reading this – settle back mumbling “at least we’re following advice no. 5”… wait, my friends, the same people who forced you before will soon knock at your door… and tell you about all those “significant cost savings” again… and again…
