One of the biggest pains in the ass of most ISOs (information security officers) – and consequently the subject of fierce debates between business and infosec – is the topic of “Browser Security”, i.e. essentially the question “How to protect the organization from malicious code brought into the environment by users surfing the Internet?”.
Commonly, the chain of events of a typical malware infection can be broken down into the following steps:
1.) Some code – no matter whether binary or script code – gets transferred (mostly: downloaded) to some system “from the Internet”, that is, “over the network”.
2.) This code is executed by some local piece of software (where “execution” might just mean “parsing a PDF” ;-)).
[btw, in case you missed it: after Black Hat, Adobe announced an out-of-band patch scheduled for 08/16, so stay tuned for another Adobe Reader patch cycle next week…]
3.) This code causes harm – either on its own or by means of additionally downloaded payloads – to the local system, to the network the system resides in, or to other networks.
A discussion of potential security controls can be centered around these steps, so we have:
a) The area of network-based controls, that is all sorts of “malicious content protection” devices like proxies filtering (mainly HTTP and FTP) traffic based on signatures, URL blacklists etc., and/or network-based intrusion prevention systems (IPSs).
Practically all organizations use some of this stuff (however, quite a number of them – unfortunately – merely bank on these pieces). Let me state this clearly: using network-based (filtering) controls contributes significantly to “overall protection from browser-based threats”, and we won’t discuss the advantages/disadvantages of this approach right here+now.
Still, it should be noted that this is what we call a “detective/reactive control”, as it relies on somehow detecting the threat and scrubbing it after detection – as the little sketch below may illustrate.
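Just to illustrate the nature of this control, here’s a minimal Python sketch of the core decision logic such a filtering device applies. Purely illustrative: the blacklist entries and signature patterns are made-up placeholders; real products obviously work with huge, continuously updated data sets.

```python
# Illustrative only: core decision logic of a URL-blacklist / signature filter.
# The blacklist entries and signatures below are made-up placeholders.
from urllib.parse import urlparse

URL_BLACKLIST = {"malware.example.com", "evil.example.net"}  # hypothetical entries
CONTENT_SIGNATURES = [b"MZ\x90\x00", b"eval(unescape("]      # hypothetical patterns

def is_url_blocked(url: str) -> bool:
    """Deny the request if the host appears on the blacklist."""
    host = urlparse(url).hostname or ""
    return host in URL_BLACKLIST

def is_content_blocked(body: bytes) -> bool:
    """Deny the response if any known-bad byte pattern matches."""
    return any(sig in body for sig in CONTENT_SIGNATURES)

# A proxy would run these checks on every request/response pair:
print(is_url_blocked("http://malware.example.com/dropper.exe"))        # True
print(is_content_blocked(b"<script>eval(unescape('%61'))</script>"))   # True
```

The point being: both checks only catch what is already on a list or matches a known pattern – which is exactly the detective/reactive property just mentioned.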
b) Controls in the “limit the capability to execute potentially harmful code” space. This can be broken down into things like:
– minimizing the attack surface (e.g. by not running Flash, iTunes etc. at all). The regular readers of this blog certainly know our stance as for this approach ;-).
– configuration tweaks to limit the script execution capabilities of some components involved, like all the stuff to be found in IE’s zone model and associated configuration options (see this document for a detailed discussion of this approach; a small sketch of one such tweak follows right after this list).
– patching (the OS, the browser, the “multimedia extensions” like Flash and QuickTime, the PDF reader etc.) to prevent “programmatic abuse” of the respective components.
Again, we won’t dive into an exhaustive discussion of the advantages/disadvantages of this approach right here+now.
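To give you an idea what one of the configuration tweaks mentioned above might look like under the hood, here’s a minimal Python sketch that disables Active Scripting in IE’s Internet zone by setting the corresponding registry value. Windows-only and per-user; the action value 1400 (“Active scripting”) and the 0/1/3 semantics are documented by Microsoft (KB 182569). In a managed environment you’d of course deploy this via Group Policy rather than a script.

```python
# Windows-only sketch: disable Active Scripting in IE's Internet zone (zone 3).
# Action value "1400" = Active scripting; data: 0 = enable, 1 = prompt, 3 = disable.
import winreg

ZONE3 = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE3, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "1400", 0, winreg.REG_DWORD, 3)  # 3 = disable
```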
c) Procedures or technologies striving to limit the harm in case an exploit happens “in browser space” (which, per our definition, encompasses all add-ons like Flash, QuickTime etc.). This includes DEP, IE Protected Mode, sandboxing browsers etc.
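For the DEP part, here’s a tiny ctypes sketch to check which system-wide DEP policy a given (Windows) box actually runs with – obviously just a check, not a control in itself:

```python
# Windows-only sketch: query the system-wide DEP policy via the Win32 API.
import ctypes

# Return values of GetSystemDEPPolicy():
# 0 = AlwaysOff, 1 = AlwaysOn, 2 = OptIn (default), 3 = OptOut
DEP_POLICIES = {0: "AlwaysOff", 1: "AlwaysOn", 2: "OptIn", 3: "OptOut"}

policy = ctypes.windll.kernel32.GetSystemDEPPolicy()
print("System DEP policy:", DEP_POLICIES.get(policy, "unknown"))
```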
Given the weaknesses the network-based control approach might have (in particular in times of targeted attacks… oops, sorry, of course I mean: in times of the Advanced Persistent Threat [TM] ;-)) and the inability (or reluctance?) to tackle the problem on the “code execution” front line in some environments, in the meantime another potential control has gained momentum in “progressive infosec circles”: using virtualization technologies to isolate the browser from the (“core”) OS, from other applications or just from the filesystem.
Three main variants come to mind here: full OS virtualization (represented, for example, by Oracle VirtualBox or VMware Workstation), application virtualization solutions (like Microsoft App-V or VMware ThinApp) and, thirdly, what I call “hosted browsing” (where some MS Terminal Server farm, potentially located in a DMZ, or even “the cloud” may serve as “[browser] hosting infrastructure”).
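To make the first variant a bit more tangible, here’s a small Python sketch of what “disposable browsing” on top of full OS virtualization could look like operationally. It assumes a prepared VirtualBox VM named “browser-vm” with a clean snapshot named “clean” – both names are hypothetical – and simply wraps the VBoxManage command line tool:

```python
# Sketch of "disposable browsing" with full OS virtualization: run the browser
# in a throwaway VM and roll back to a known-good snapshot after each session.
# Assumes an existing VirtualBox VM "browser-vm" with a snapshot "clean"
# (hypothetical names).
import subprocess

VM, SNAPSHOT = "browser-vm", "clean"

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

vbox("snapshot", VM, "restore", SNAPSHOT)   # start from a known-good state
vbox("startvm", VM, "--type", "gui")        # launch the VM (returns immediately)
input("Browse inside the VM; press Enter here when the session is over... ")
vbox("controlvm", VM, "poweroff")           # end the session...
vbox("snapshot", VM, "restore", SNAPSHOT)   # ...and discard whatever happened in it
```

Whether an infection inside the VM can survive such a rollback (or break out of the VM altogether) is, of course, part of exactly the risk question discussed below.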
In general, on an architecture level this is a simple application of the principle of “isolation” – and I really promise to discuss that set of architectural security principles we use at ERNW at some point in this blog ;-).
While I know that some of you, dear readers, use virtualization technologies to “browse safely” on a daily (but individual) basis, there are still some obstacles to large-scale use of this approach, like how to store/transfer or print documents, how to integrate client certificates – in particular when residing on smart cards – into these scenarios, how to handle “aspects of persistence” (keeping cookies and bookmarks vs. not keeping potentially infected “browser session state”) etc.
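As for the “aspects of persistence” point, one conceivable way of handling it is an allow-list approach: salvage only explicitly wanted files (say, the bookmarks) from the throwaway browser profile and discard everything else. A small sketch, with hypothetical paths, for a Firefox-style profile:

```python
# Sketch of "selective persistence": before a throwaway browser profile is
# discarded, keep only an allow-listed set of files (here: the bookmarks DB)
# and drop everything else (cookies, cache, potentially infected session state).
# The paths below are hypothetical examples.
import shutil
from pathlib import Path

THROWAWAY_PROFILE = Path("/tmp/browser-profile")   # discarded after each session
PERSISTENT_STORE = Path.home() / "browser-keep"    # survives across sessions
KEEP = ["places.sqlite"]                           # Firefox's bookmarks/history DB

PERSISTENT_STORE.mkdir(exist_ok=True)
for name in KEEP:
    src = THROWAWAY_PROFILE / name
    if src.exists():
        shutil.copy2(src, PERSISTENT_STORE / name)

shutil.rmtree(THROWAWAY_PROFILE, ignore_errors=True)  # everything else is gone
```

Note that even such an allow-list requires some thought: the kept files themselves could have been tampered with during the (potentially infected) session.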
And, even if all these problems can be solved, the big question remains: does it help, security-wise? Or, in infosec terms: to what degree is the risk landscape changed if such an approach is used to tackle the “Browser Security Problem”?
To contribute to this discussion we’ve recently performed some tests with an application virtualization solution (VMware ThinApp). The goal of the tests was to determine whether exploits can be stopped from causing harm if they happen within a virtualized deployment, which modes of deployment to use, which additional tweaks to apply etc.
The results can be found in our next newsletter, to be published at the end of this week. This post’s purpose was to provide some structure for the various “securing the browser” approaches, and to remind you that – at the end of the day – each potential security control must be evaluated from two main angles: “What’s the associated business impact and operational effort?” and “How much does it mitigate risk[s]?”.
Have a great day,
Enno