Misc

That “new worm”…

Recently I noticed this news item titled “New email worm on the move”. At roughly the same time I received an email from a senior security officer at a large customer asking for mitigation advice, as they got “hit pretty hard” by this exact piece of malware.
Given that I’m mainly an infrastructure and architecture guy, I’m usually not too involved in malware protection stuff (besides my continuous ranting that – from an architectural point of view – endpoint-based antivirus has a bad security benefit vs. capex/opex ratio). So I’m by no means an expert in this field. Still, I keep scratching my head when I read the associated announcements (like this, this or this) from major “antivirus”, “malware protection” or “endpoint security” vendors – to save typing, in the remainder of the post I’ll call them SNAKE vendors (where “SNAKE” stands for “Smart Nimble APT Kombat Execution”… or something equally ingenious of the valued reader’s choice… 😉).

The following (not too) heretical questions come to mind:

a) What’s the corporate need to allow downloading .scr files at all? Maybe I’m missing something here, or I’m just not creative enough, but I (still) don’t get it. Why not simply block .scr at the network boundaries?
[yes, I know, there’s no such thing as “well-defined network boundaries” any more, but here we’re talking about HTTP-based downloads, which happen to pass through – a few – centralized points in quite some environments].

a1) So, maybe blocking downloads of .scr files (as this document recommends, funnily enough together with the recommendation to “filter the URL” on gateways… which really seems an operationally feasible thing for complex environments… and a very effective one, for future malware, too ;-)) might be a viable mitigation path.
In my naïve world, the approach of just allowing a certain (“positive”) set of file/MIME types for download would be even better, wouldn’t it?
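To make this less abstract, here’s what such filtering might look like on a Squid proxy – a rough sketch under the assumption that a central Squid instance is your HTTP gateway; the MIME whitelist is just an example set, and of course both file extensions and Content-Type headers can be spoofed by a malicious server:

```
# Rough sketch for a Squid gateway -- illustrative, not a tested policy.

# Variant 1: simply block .scr downloads by URL path:
acl scr_files urlpath_regex -i \.scr$
http_access deny scr_files

# Variant 2 (the "positive" approach): only allow replies whose declared
# MIME type is on a whitelist; everything else gets dropped.
acl ok_mime rep_mime_type -i ^text/ ^image/ ^application/pdf
http_reply_access allow ok_mime
http_reply_access deny all
```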

This reminds me of a consulting project we did for a mid-sized bank (20K users) some years ago. They brought us in to evaluate options for increasing their “malware protection stance”, and we finally recommended a set of policy and gateway configuration adjustments (instead of buying a third commercial antimalware product, which they had initially planned). Part of our recommendations was to restrict the file types accepted as email attachments. For a certain file type (from the MS Office family and known as a common malware spread vector at the time) they strongly resisted, stating: “We need to allow this, our customers regularly send us documents of this type”. We then suggested monitoring the use of the various file types in question for some time, and it turned out that for this specific type they had received three (in numbers: 3) legitimate emails within a six-month period…

b) In their aforementioned announcements, all major vendors boast of having “updated signatures providing total protection” against this piece of malware.
Hmm… again, very naïvely, I might ask: so why did our customer get “hit pretty hard” (and, following the press, other organizations as well)? They are not a small shop (actually they’re one of the 50 largest corporations in the world), there are a lot of smart people working in the infosec space over there and – of course! – they run one of the main “best of breed” antimalware solutions on their desktops.
So why did they get hit? I leave the answer to the reader… just a hint: operational aspects might play a role, as always.

This brings me directly to the next question:

c) Trend Micro write in their blog:

“Upon further investigation, we found that the malware used for this attack was just an unpacked version of a file that we already detected as WORM_AUTORUN.NAD. It is possible that the cybercriminals behind this attack got hold of the code for WORM_AUTORUN.NAD and modified it for their usage.”

Indeed, looking at this entry in Microsoft’s malware encyclopedia from August 19th, there are remarkable similarities.

So, dear SNAKE vendors: do I get it correctly that (most of) you need a new signature when there’s an unpacked version of some malicious piece of code, as opposed to a packed version (of the same code)?
Seems quite a difficult exercise for all those super-smart heuristic adaptive engines… in 2010…
Sorry, guys, how crazy is this? And it seems the stuff was initially observed back in July.
[did you note that they don’t even feel embarrassed by admitting this, but proudly display it as a result of their research, which of course takes place in the best interest of their valued customers?]

For completeness’ sake it should be mentioned that this piece of malware (no, I won’t rant on the fact that – still, in 2010 – it seems impossible to have a common naming scheme amongst vendors) performs, amongst others, the following actions on an infected machine:
– turning off security services.
– modifying some security-relevant registry keys.
– sharing system folders.

On most Windows systems all those actions can only be performed by users… with administrative privileges…
Overall, this “classic piece of worm” might remind us that effective desktop protection should maybe be achieved by

– controlling/restricting which types of code and data may be brought into a given environment.
– or, at least, _where_ executable (types of) code/data may be obtained from.
– controlling which executables may run on a corporate machine at all (yes, I’m talking about application whitelisting here ;-)).
– reflecting on the need for administrative privileges.

and _not_ by spending still more money on SNAKE oil.

I renew my plea from this post:
So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops. This might contribute to a bit more sustainable security in your environment, one day in the future.

Have a great day,

Enno

Breaking

“blackberry api to record phone calls”

This is currently the most frequent search term leading Internet users to the Troopers website.
Probably Sheran Gunasekera’s great presentation “Bugs & Kisses – Spying on BlackBerry users for fun” is the piece they are after. Whatever they’re looking for, this search term may help shed light on an aspect that seems a bit overlooked in the ongoing debate about governments (U.A.E., Saudi Arabia, India) trying to get their hands on communication performed with BlackBerries in their countries.
[For those interested in that discussion, this blog entry by Bruce Schneier may serve as a starting point.]

Given that most readers of this blog who use a BlackBerry will most likely do so within a BlackBerry Enterprise Server (BES) installation, I’ll focus on those deployments in this post and subsequently won’t cover BlackBerry Internet Service (BIS) scenarios.

RIM, very understandably, stresses the fact that in the current BES architecture – presumably – they [RIM] can only process (thus “see”) the data stream encrypted (by symmetric ciphers regarded as sufficiently secure) between the BES servers, usually placed on corporate soil, and the endpoint devices (the BlackBerries themselves).

What they don’t mention is the simple fact that cryptographic techniques quite often only secure a data stream’s transport path, but not the endpoints (where the data is decrypted and further processed). What if either the (BlackBerry) devices themselves provide means to eavesdrop on the traffic – Sheran discussed the relevant APIs in his talk, referring to the “Etisalat case”, where in 2009 the major U.A.E. telecommunications provider distributed a software application for BlackBerries that essentially allowed somebody [who?] to eavesdrop on emails by sending a copy of each email to a certain server – or the BES itself is “somehow interfered with”? This article from Indiatimes at least suggests the latter possibility for BES servers located in India. Here’s a quoted excerpt:
“Significantly, the only time an enterprise email sent from a BlackBerry device remains in an un-encrypted or ‘readable’ format is when it resides in the enterprise server. ‘Feeding the email from the enterprise server to the ISP’s monitoring systems can, accordingly, help security agencies access the communication in pure text form’, DoT [India’s Department of Telecommunications] proposal said.”

So, in short, just discussing whether BlackBerry-based communication can be intercepted in transit may be a bit short-sighted. Thinking about the devices and the code they run (and who’s allowed to install applications, by what means/from which sources, yadda yadda yadda) or considering “some countries’ regulatory requirements” when deploying BES servers might be helpful, too.

It should be noted that we do not impute any dishonest motives to RIM whatsoever (actually we have a quite positive stance on the overall security posture of their products; if nothing else, see for example this newsletter analysing the over-the-air generation of master encryption keys between the BES and the devices).
We just want to raise some awareness to the mentioned “blind spots” in the current debate.

Have a great day,

Enno

Breaking

Just a Quick Note on the Library Loading / Binary Planting Stuff

For those of you who missed it: Microsoft released the associated advisory yesterday, together with a hotfix introducing a new registry key that allows users to control the DLL search path algorithm. For a detailed explanation of the problem, we refer to the excellent article on Ars Technica.
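If I read the accompanying KB article (2264107) correctly, the machine-wide variant of that key boils down to something like the following sketch – 0xffffffff removes the current working directory from the DLL search path entirely, while smaller values (1, 2) only block loads from WebDAV or remote shares; test carefully, as some applications may rely on the old behavior:

```
Windows Registry Editor Version 5.00

; Sketch per MS KB 2264107 -- verify against the advisory before deploying.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
"CWDIllegalInDllSearch"=dword:ffffffff
```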

For the record: no, AV (anti-virus software) will – in most cases – not protect you from security problems related to this one. And, no, there is no easy patch for this one either.

Carefully reading the “Mitigating Factors” and “Workarounds” sections in the MS advisory or this entry from our blog might provide ideas on how to address this or similar stuff (in the future).

Wishing you all some sunny summer days,

Enno

Update: this article gives some more technical details and this one describes some real attack paths against popular applications. Sorry, guys, good luck with fighting this one with traditional AV…

Breaking

Research on “Application Virtualization” – Results online now

Just wanted to let you know that we sent out ERNW Newsletter 32 at the end of last week. As promised, it includes the results of our research regarding the question “Is browser virtualization a valid security control in order to mitigate browser-based security risks?”.

Simon did a great job writing the latest newsletter. It’s a 30-page document which should give you a basis for well-informed decisions when it comes to the deployment of an application virtualization technology.

Download a signed version of the PDF here, or visit the archive to browse other issues of our highly technical newsletters.

Best wishes,
Florian

Breaking

Try Loki!

Loki is set free!

Everybody who is interested in our newest tool ‘Loki’ is welcome to head over to ERNW’s tool section and download it. Take this monster for a spin and let us know in the comments how you like it. Loki’s coding father Daniel is more than happy to answer your questions and criticism.

You don’t even know what Loki is?

In short: An advanced security testing tool for layer 3 protocols.

In long: Have a look at the Black Hat 2010 presentation slides and mark TROOPERS11 in your calendar to meet the guys behind the research and, for sure, get a live demo of the capabilities – development is still ongoing, so prepare yourself for even more supported protocols and attack types.

And again: Talking about TROOPERS11… we’ve already selected the first round of speakers. Details to be published soon 🙂

Have a great day!
Florian

Breaking

Application Virtualization as Browser Security Control?

One of the biggest pains in the ass of most ISOs [information security officers] – and subsequently the subject of fierce debates between business and infosec – is the topic of “browser security”, i.e. essentially the question: “How to protect the organization from malicious code brought into the environment by users surfing the Internet?”.

Commonly, the chain of events of a typical malware infection can be broken down into the following steps:

1.) Some code – no matter whether binary or script code – gets transferred (mostly: downloaded) to some system “from the Internet”, that is, “over the network”.

2.) This code is executed by some local piece of software (where “execution” might just mean “parsing a PDF” ;-).
[btw, if you missed it: after Black Hat, Adobe announced an out-of-band patch scheduled for 08/16, so stay tuned for another Adobe Reader patch cycle next week…]

3.) This code causes harm (either on its own or via reloaded payloads) to the local system, to the network the system resides in, or to other networks.

Discussing potential security controls can be centered around these steps, so we have:

a) The area of network-based controls, that is all sorts of “malicious content protection” devices like proxies filtering (mainly HTTP and FTP) traffic based on signatures, URL blacklists etc., and/or network-based intrusion prevention systems (IPSs).
Practically all organizations use some of this stuff (however quite a number of them – unfortunately – merely bank on these pieces). Let me state this clearly: overall, using network-based (filtering) controls contributes significantly to “overall protection from browser based threats”, and we won’t discuss the advantages/disadvantages of this approach right here+now.
Still, it should be noted that this is what we call a “detective/reactive control”, as it relies on somehow detecting the threat and scrubbing it after detection.

b) Controls in the “limit the capability to execute potentially harmful code” space, which can be broken down into things like
– minimizing the attack surface (e.g. by not running Flash, iTunes etc. at all). The regular readers of this blog certainly know our stance on this approach ;-).
– configuration tweaks to limit the script execution capabilities of some of the components involved, like all the stuff to be found in IE’s zone model and associated configuration options (see this document for a detailed discussion, and the registry sketch after this list).
– patching (the OS, the browser, the “multimedia extensions” like Flash and QuickTime, the PDF reader etc.) to prevent “programmatic abuse” of the respective components.
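To make the “configuration tweaks” bullet a bit more tangible, here’s a minimal registry sketch (illustration only; in managed environments you’d push this via GPO): in IE’s zone model, zone 3 is the Internet zone and value 1400 controls “Active scripting” (0 = enable, 1 = prompt, 3 = disable).

```
Windows Registry Editor Version 5.00

; Sketch: disable Active Scripting in the Internet zone for the current user.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3]
"1400"=dword:00000003
```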

Again, we won’t dive into an exhaustive discussion of the advantages/disadvantages of this approach right here+now.

c) Procedures or technologies striving to limit the harm in case an exploit happens “in browser space” (which, as per our definition, encompasses all add-ons like Flash, QuickTime etc.). This includes DEP, IE protected mode, sandboxing browsers etc.

Given the weaknesses the network-based control approach might have (in particular in times of targeted attacks… oops, sorry, of course I mean: in times of the Advanced Persistent Threat [TM] ;-)) and the inability (or reluctance?) to tackle the problem on the “code execution” front line in some environments, in the interim another potential control has gained momentum in “progressive infosec circles”: using virtualization technologies to isolate the browser from the (“core”) OS, other applications or just the filesystem.
Three main variants come to mind here: full OS virtualization (represented, for example, by Oracle VirtualBox or VMware Workstation), application virtualization solutions (like Microsoft App-V or VMware ThinApp) and, thirdly, what I call “hosted browsing” (where some MS Terminal Server farm potentially located in a DMZ, or even “the cloud”, may serve as the “[browser] hosting infrastructure”).
In general, on an architecture level this is a simple application of the principle of “isolation” – and I really promise to discuss that set of architectural security principles we use at ERNW at some point in this blog ;-).

While I know that some of you, dear readers, use virtualization technologies to “browse safely” on a daily (but individual) basis, there are still some obstacles to large-scale use of this approach, like how to store/transfer or print documents, how to integrate client certificates – in particular when on smart cards – into these scenarios, how to handle “aspects of persistence” (keeping cookies and bookmarks vs. not keeping potentially infected “browser session state”) etc.
And even if all these problems can be solved, the big question remains: does it help, security-wise? Or, in infosec terms: to what degree is the risk landscape changed if such an approach is used to tackle the “browser security problem”?

To contribute to this discussion, we recently performed some tests with an application virtualization solution (VMware ThinApp). The goal of the tests was to determine whether exploits can be stopped from causing harm if they happen within a virtualized deployment, which modes of deployment to use, which additional tweaks to apply etc.
The results can be found in our next newsletter, to be published at the end of this week. This post’s purpose was to provide some structure for “securing the browser” approaches, and to remind you that – at the end of the day – each potential security control must be evaluated from two main angles: “What’s the associated business impact and operational effort?” and “How much does it mitigate risk[s]?”.
Have a great day,
Enno

Breaking

Spooky Story about Break-In in Military Contractor Facility

… recently published here.
While I certainly agree with those comments stating that there’s a fishy element to the – conspiracy-theory-nurturing – story itself, it reminds me that Graeme Neilson (who gave the “Netscreen of the Dead” talk at Troopers, discussing modified firmware on Juniper and Fortinet devices) and I plan to give a talk on “Supply Chain (In-)Security” at this year’s Day-Con event. We still have to figure out with Angus whether it fits into the agenda (and whether we have enough material for an interesting 45-minute storyline ;-)), though. Stay tuned for news on this here.

On a related note: for family reasons I won’t be able to make it to Vegas for Black Hat. Rene will take over my part in our talk “Burning Asgard – What happens when Loki breaks free”, which covers some cool attacks on routing protocols and the release of an awesome tool Daniel wrote to implement them in a nice clicky-clicky way.

To all of you guys going to Vegas: have fun! and take care 😉

Enno

Building

Software Developers Don’t Use Available Security Features

According to SANS NewsBites Vol. XII, Issue 53, recently published, there’s a lack of third-party developer support for some security features Microsoft introduced years ago. We at ERNW have made similar observations when performing security assessments of COTS [commercial off-the-shelf] software. We therefore created a methodology, a proof-of-concept tool and a metric to test and rate closed-source software, where (amongst other approaches) these security features are checked and their (non-)presence contributes to an overall evaluation of the trustworthiness of the application in question. The concept “How to rate the security in closed source software” was presented to the public at Troopers10 and at Hack in the Box 2010 in Amsterdam. The slides can be found here.
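To give you an idea of what such a check looks like in practice – a minimal sketch only, our actual tool and metric cover considerably more – the DllCharacteristics field in a PE binary’s optional header reveals whether it opts in to ASLR and DEP; the path below is just a placeholder:

```python
# Minimal sketch: check whether a PE binary opts in to ASLR and DEP.
# Requires the third-party 'pefile' module (pip install pefile).
import pefile

DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
NX_COMPAT    = 0x0100  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

pe = pefile.PE(r"C:\path\to\some_app.exe")  # placeholder path
chars = pe.OPTIONAL_HEADER.DllCharacteristics
print("ASLR (DYNAMICBASE):", bool(chars & DYNAMIC_BASE))
print("DEP  (NXCOMPAT)   :", bool(chars & NX_COMPAT))
```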

Building

News from the Desktop, Edition 2010/07/21

Back on track with one of our favorite rant subjects: desktop security. This stuff, commonly called the “LNK vulnerability”, has gained quite some momentum in the last days, including the release of a Metasploit module and a temporary raise of the SANS Internet Storm Center’s Infocon level to yellow (it’s back to green in the interim).

CVE-2010-2568 has been assigned and some technical details can be found here and here.

To give you a rough idea how this piece works, here’s a quote from the US-CERT advisory:

“Microsoft Windows fails to safely obtain icons for LNK files. When Windows displays Control Panel items, it will initialize each object for the purpose of providing dynamic icon functionality. This means that a Control Panel applet will execute code when the icon is displayed in Windows. Through use of an LNK file, an attacker can specify a malicious DLL that is to be processed within the context of the Windows Control Panel, which will result in arbitrary code execution. The specified code may reside on a USB drive, local or remote filesystem, a CD-ROM, or other locations. Viewing the location of a LNK file with Windows Explorer is sufficient to trigger the vulnerability.”

In short, as the Microsoft Malware Protection Center puts it: “[S]imply browsing to the removable media drive using an application that displays shortcut icons (like Windows Explorer) runs the malware without any additional user interaction.”

So actually, there is no exploitation in the sense of a buffer overflow in the LNK handling routines. The flaw just triggers downloading (and executing) some binary code – it’s basically about fetching a piece of code from some location and executing it on the local “compromised” box.
Realizing this brings us directly to a discussion of potential mitigation strategies, which is – next to “some usual rant” 😉 – the main intent of this post.

As so often, the US-CERT advisory provides good guidance on mitigating controls. It lists the following (most annotated with a comment from my side):

a) Disable the displaying of icons for shortcuts.

Do not do this! It will most probably break your (users’) desktop experience in a horrible way.
Chester Wisniewski has a nice image of a desktop treated this way in yesterday’s post on the subject.

b) Disable AutoRun

This is pretty much always a good idea (security-wise), but it – partially – mitigates only one attack vector (removable media).
Interestingly enough, the MS KB article on disabling AutoRun functionality in Windows was last reviewed on 2010/07/01, while the CVE number for the LNK vulnerability was assigned on 2010/06/30. There are strange coincidences out there… 😉
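For reference, what that KB article describes essentially boils down to a single registry value; a sketch (0xFF disables AutoRun for all drive types, the article documents more granular bit masks):

```
Windows Registry Editor Version 5.00

; Sketch: disable AutoRun for all drive types (bit mask 0xFF).
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```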

And, of course, not allowing untrusted USB devices to be connected to the organization’s machines usually is a good thing as well.
[for the record: yes, I know, there are organizations out there where business tells you this is not possible for some reason or another, which might even be true; I will not go into this debate right here and now. I just want to remind you, dear reader, of some good ole basics ;-)].

In the meantime, people expect (and see) the vulnerability being exploited over network shares, so focusing on removable media/USB devices alone might be too narrow anyway.

c) Use least privilege

Aka: do not work as admin. I don’t really have to comment on this here, do I? It’s a no-brainer. Well, let’s say: it should be a no-brainer… When gathering with some infosec people from a >100K-user environment recently, I learned they still have about 30% of users with local admin privs…
[and don’t get me wrong here: there might be good reasons for this. and those guys are not the only ones with such a landscape.]
To be discussed in more detail at another occasion.

d) Disable the WebClient service.

While I like this very much – given it’s a preventative control [“minimal machine approach”] and, btw, addresses other (past and potential future) vulnerabilities as well, e.g. those of MS10-045 published one week ago – it should be noted that this potentially breaks MS SharePoint (and other stuff as well).
So, unfortunately, this again will not be feasible in quite some environments. Still, it might remind people that WebDAV is a technology that can be (ab)used to access “network drives” in completely untrusted/untrustworthy locations.
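Where it is feasible, the change itself is a two-liner (sketch; run from an elevated prompt, and note that the space after “start=” is mandatory):

```
sc stop WebClient
sc config WebClient start= disabled
```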

e) Block outgoing SMB traffic

Well, yes…
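…though if you want to enforce this on the endpoints themselves (the perimeter firewall being the more usual place), a sketch for the built-in firewall of Vista/Windows 7 might look like the following – TCP 139/445 being the SMB ports, and obviously this also breaks access to any legitimate external file shares:

```
netsh advfirewall firewall add rule name="Block outbound SMB" dir=out action=block protocol=TCP remoteport=139,445
```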

The MS advisory, updated today, provides another one:

f) Blocking the download of LNK and PIF files

Again, this is soo obvious that I refrain from any comment.

Those of you regularly following this blog or our public statements on desktop security (like this one) might have noticed that – as so often – the two main controls quite some organizations rely on are not mentioned here:

Patching: for the simple reason that there’s no patch as of today (and the imp has been out of the bottle, to public knowledge, for five days now).

Antimalware “protections”: not sure how these could prevent downloading and executing arbitrary binary code. The announcements of major AV vendors (“we now have a signature for this”) mostly address the Stuxnet components found in the initial exploit, nothing else. So this is mostly window dressing.

So far so good (or bad). The main point of this post is – and yes, I’m aware I needed a long warm-up today 😉 – the following: the security problem discussed here can be broken down to “there’s a vulnerability that triggers an exploit that goes somewhere, downloads some code and executes it”.

Which, in turn, raises the fundamental question: why should some average corporate desktop computer be allowed to go to some arbitrary location, download code and – above all – run this code?
Restricting where executable code may be obtained from or, even better, just allowing a specified set of applications to run would (more or less) easily solve the kind of problems vulnerabilities like this one impose. And the technologies needed – in MS space: Software Restriction Policies, AppLocker or just (as part of SRPs) “path rules” restricting where executables may be loaded from – have been available for a long time.
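Just to illustrate how little magic is involved, here’s what the core of an SRP “default deny” looks like at the registry level – a sketch only; in real life you’d configure this via GPO/secpol.msc, which also creates the accompanying “Unrestricted” path rules for the OS and application directories (without those, the box locks up):

```
Windows Registry Editor Version 5.00

; Sketch of an SRP default-deny. DefaultLevel 0 = "Disallowed";
; TransparentEnabled 1 checks EXEs only (2 would include DLLs).
; Do NOT apply without Unrestricted path rules for %WINDIR% etc.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers]
"DefaultLevel"=dword:00000000
"TransparentEnabled"=dword:00000001
```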

As Marcus Ranum is much more eloquent than me – especially when it comes to ranting, where he’s nearly unbeatable 😉 – I allow myself to quote him literally, from the “Schneier-Ranum Face-Off” on “Is antivirus dead?”:

“Of course, most organizations don’t know (or haven’t got the courage to discover) what programs they allow–and, ultimately, isn’t that the root of their security problems? When I read the security news and hear that thus-and-such government agency is trying to decide if Facebook is a necessary application, it makes my head spin. In Marcus-land, where I come from, you decide what is a necessary application first, not after you have 40,000 employees who have gotten so used to it that they now think Twitter is a constitutionally protected right. Isn’t a virus or malware just unauthorized execution that someone managed to sneak onto your machine? If we adopt a model whereby there are programs that are authorized (i.e., on a whitelist) and the operating system should terminate everything else, then malware and viruses are history, unless their authors can somehow fool the administrator into authorizing them to run.
[…]
Whenever I talk about execution control/whitelisting with corporate types, someone says, ‘But we don’t really have a way of determining all the applications that we use!’ Really? Wow. That sounds like a policy that’s basically, ‘We have no idea what our computers are for.’ In other words: ‘We’ve given up, and as far as we’re concerned, our computers are an unmanaged mess.’ Or to put it another way, malware heaven. Can anyone even calculate the cost of malware and viruses (as well as the occasional office time spent playing online games) to businesses? That cost, ultimately, is paid solely in order to avoid the difficulty of determining what programs are authorized — what’s the purpose of the computer an employee is provided to use?

Here’s why I keep talking about execution control: it’s actually ridiculously easy compared to dealing with antivirus and antimalware. So why isn’t everyone doing it? Because it’d dramatically cut down on our ability to goof off. If executives knew how easy it was to cut back on productivity-wasting goof-off-ware, don’t you think it would be happening all over the place by now? If, instead, we tell them it’s hard to know what executables we use in the office…well, what nobody knows won’t hurt anyone.”

Well said, Marcus. Nothing to add here.

So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops. This might contribute to a bit more sustainable security in your environment, one day in the future.

Thanks for your time,

Enno

Misc

The Emperor’s New Security Indicators

Interesting research from Stuart Schechter et al. here.
They evaluated the effect that the removal or modification of online banking sites’ security features had on users’ behavior (i.e. entering or withholding their passwords). Maybe not too surprisingly for some of you, it turned out that the vast majority of users entered their passwords even when obviously alarming clues were present on the websites.
This, again, shows how important it is to understand how users behave, what their motives and incentives are, and how to build environments that help them act securely. This applies even more to corporate space. At times, bringing in an industrial/organizational psychologist might be a much better investment than writing yet another ignored piece of policy.

Have a great sunny Sunday everybody,

Enno
