Building

The Case For/Against Split Tunneling

Once again, in a customer environment the question of allowing or prohibiting split tunneling for (in this case: IPsec) VPN connections popped up today. Given our strict stance when it comes to “fundamental architectural security principles”, the valued reader might easily imagine that we’re no big fans of allowing split tunneling (abbreviated in the following as “ST”), as it usually constitutes a severe violation of the “isolation principle”, further aggravated by the fact that this violation takes place on a “trust boundary” (between trusted and untrusted networks).
Still, we’re security practitioners (and not everybody has such a firm belief in the value of “fundamental architectural security principles” as we do), so we had to deal with the proponents’ arguments. In particular, one of them put the additional cost of disallowing ST (i.e. forcing all 80K VPN users’ web browsing through some centralized corporate infrastructure) at US$ 40,000,000.
[yes, you read that correctly: 40 million. I still have no idea where this – in my perception: crazy – number comes from]. Anyhow, how to deal with this?
Internally we performed a rapid risk assessment (RRA) focused on two main threats:

a) A targeted attack against $CORP, performed by means of network backdoor access through a piece of malware acting as a relay point.

b) A client gets infected by drive-by malware because it is not protected by (corporate) infrastructure-level controls.
[for obvious reasons I’ll not provide the values derived from the exercise here]
While the former seems to be the main reason why NIST document SP 800-77 “Guide to IPsec VPNs” recommends _against_ allowing ST, the RRA showed that the latter constitutes the bigger risk overall. To illustrate this I’ll give some numbers from the excellent Google Research paper “All Your iFRAMEs Point to Us”, which is probably the best source for data on the distribution and propagation of drive-by malware.

Here’s the paper’s abstract:

“As the web continues to play an ever increasing role in information exchange, so too is it
becoming the prevailing platform for infecting vulnerable hosts. In this paper, we provide a
detailed study of the pervasiveness of so-called drive-by downloads on the Internet. Drive-by
downloads are caused by URLs that attempt to exploit their visitors and cause malware to be
installed and run automatically. Our analysis of billions of URLs over a 10 month period shows
that a non-trivial amount, of over 3 million malicious URLs, initiate drive-by downloads. An even
more troubling finding is that approximately 1.3% of the incoming search queries to Google’s
search engine returned at least one URL labeled as malicious in the results page. We also explore
several aspects of the drive-by downloads problem. We study the relationship between the user
browsing habits and exposure to malware, the different techniques used to lure the user into the
malware distribution networks, and the different properties of these networks.”

and I’m going to quote some parts of it in the following to underpin some (at the end of the day: not so) small calculations.

The authors observed that from “the top one million URLs appearing in the search engine results, about 6,000 belong to sites that have been verified as malicious at some point during our data collection. Upon closer inspection, we found that these sites appear at uniformly distributed ranks within the top million web sites—with the most popular landing page having a rank of 1,588.”

Now, how many of those (top million websites, and in particular of the 0.6% flagged as malicious) will be visited _every day_ by an 80,000-user population? And how many of those visits will presumably happen over “the split tunnel”, thus without corporate infrastructure-level security controls?
Looking at the post infection impact they noticed that the “number of Windows executables downloaded after visiting a malicious URL […] [was] 8 on average, but as large as 60 in the extreme case”.
To be honest, I mostly cite this opportunistically to refer to this previous post or this one 😉

And I will easily refrain from any comment on this observation ;-):
“We subject each binary for each of the anti-virus scanners using the latest virus definitions on that day.  […] The graph reveals […] an average detection rate of 70% for the best engine.”

Part of their conclusion is that the “in-depth analysis of over 66 million URLs (spanning a 10 month period) reveals that the scope of the problem is significant. For instance, we find that 1.3% of the incoming search queries to Google’s search engine return at least one link to a malicious site.”

Let’s use this for some simple math: assume 16K users are online on a given day (20% of those 80K overall VPN users), performing only 10 Google queries per user. That’s 160K queries/day. 1.3% of that, i.e. about 2K queries, will return a link to at least one malicious site. If one out of ten of those results gets clicked, that means 200 attempted infections _per day_. If local AV catches 70% of those, the remaining 30%, i.e. 60 attempts, succeed. That gives 60 successful infections _each day_.
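For those who like to play with the assumptions, here is a minimal back-of-the-envelope sketch of that calculation. All input numbers are simply the assumptions from the paragraph above (not measured data), so adjust them to your own environment:

```python
# Back-of-the-envelope estimate of daily drive-by infections over a split tunnel.
# Every input below is an assumption taken from the example above, not a
# measurement; the post rounds the intermediate results to 2K / 200 / 60.

users_online = 80_000 * 0.20      # 16K VPN users online on a given day
queries_per_user = 10             # Google queries per user per day
malicious_rate = 0.013            # 1.3% of queries return at least one malicious link
click_rate = 0.10                 # one out of ten malicious results gets clicked
av_detection_rate = 0.70          # best-case detection rate of the local AV engine

queries_per_day = users_online * queries_per_user
malicious_results = queries_per_day * malicious_rate
attempted_infections = malicious_results * click_rate
successful_infections = attempted_infections * (1 - av_detection_rate)

print(f"queries/day:           {queries_per_day:,.0f}")
print(f"malicious results/day: {malicious_results:,.0f}")
print(f"attempted infections:  {attempted_infections:,.0f}")
print(f"successful infections: {successful_infections:,.0f}")
```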
So, from our perspective, deliberately allowing a large user base to circumvent corporate infrastructure security controls (only relying on some local anti-malware piece) might simply… not be a good idea…

thanks

Enno

Events

Back from Day-Con

… which was, as in the years before, an awesome event. Great talks, great people, great fun.
Bruce Potter gave a keynote which did exactly what a good keynote should do: make the audience think and entertain it at the same time.
[Those readers familiar with ERNW’s security model will certainly notice that we do not necessarily agree with everything he said. We still think that – in particular in times where infosec resources are scarce anyway – putting your bets on prevention provides a better cost/[security] benefit ratio than going for extensive detection capabilities.
Fix the doors first, then think about installing a CCTV.
Still, human nature tends to exchange “good security with low visibility” for “poor security with potentially good visibility” quite easily… as can be noted every day in many environments.]

Sergey provided an excellent & insightful piece on security in times of very large numbers of embedded devices (like smart meters).
And, last but not least: football is coming home. The “ERNW Troopers” team consisting of Rene Graf and Michael “Bob the Builder” Schaefer managed to win the event’s PacketWars contest. Congrats! Great job, guys.

have a great weekend everybody,

Enno

For the record: Graeme’s and my presentation on Supply Chain Security can be found here.

Misc

ERNW to contribute to government-sponsored research project on telco security

Today we dare to (mis-)use the blog for some shameless self-promotion 😉
We’re happy to announce that ERNW will contribute to a government-sponsored research project called ASMONIA, which is an acronym of the project’s German title: Angriffsanalyse und Schutzkonzepte für MObilfunkbasierte Netzinfrastrukturen unterstützt durch kooperativen InformationsAustausch [Attack analysis and Security concepts for MObile Network infrastructures, supported by collaborative Information exchAnge]. Those readers familiar with this kind of project will have an idea of the importance of such acronyms ;-).

Our contribution to the project will be in the areas of threat and risk analysis in 4G mobile telecommunication networks and, of course, we will “carefully evaluate practical attacks” in some parts of those networks ;-).
We just got a bunch of devices for lab testing over the coming months, and you can expect some presentations on results from the project; for ShmooCon, for example, we plan to submit a talk on “Attacking and Securing Juniper Backbone Routers”.

Stay tuned & have a great day,

Enno

Events

Some recent presentations

Just a short note today on some recent presentations from our team. As some of you might know, we regularly give talks at conferences. These are not only highly technical security events like Black Hat or Troopers; additionally – on our mission for a safer world – we try to spread the (security) word at various industry events that usually focus on some aspect of the large and ramified IT world and do not necessarily have a strong focus on information security.
A number of such events took place in the last few weeks, and here are some links to the presentations given there. While they are not as technically deep as the average Black Hat or Troopers attendee might expect, we still hope that one or another valued reader finds them useful (pls note that some parts are in German).

This one is a talk given by myself on “Compliance in the Cloud” in the course of the “Azure Day” at BASTA, which is one of the largest and most important developer events here in Germany. The presentation discusses what to keep in mind if compliance with certain “regulatory frameworks” is sought when going to “the [public] cloud”.

Here‘s a piece on virtualization security, namely the architectural impact of (server) virtualization on basic security principles. It was presented at the “IIR Admin Tech Talk 2010” and, again, I myself was the speaker.

Rene Graf, who’s a member of the “Architecture and Risk Team” at ERNW and a long-time large-environment security guy, gave this overview talk on “Industrial Firewalls” at the LANline TechForum “Industrial Ethernet” which took place in Stuttgart.

Last but not least, Matthias Luft (being another member of the same team and pursuing his academic career in parallel) delivered this talk on DLP at ISSE in Berlin, together with Thorsten Holz.

Have a great day everybody,

Enno

Btw: our next stop will be at fabulous Day-Con. If any of our readers from the US – very appropriately – is worried about missing it, pls shoot me an email. Given our long term friendship with Angus we might be able to provide you a ticket.

Breaking

Back to the roots

Finding exploitable vulnerabilities is getting harder. This statement by Dennis Fisher, published on Kaspersky’s Threatpost blog, summarizes a trend in the software development lifecycle. The last published vulnerabilities that gained some public attention all had one thing in common: they were quite hard to exploit. The so-called jailbreakme vulnerability was based on several different vulnerabilities that had to be chained together to break out of the iPhone sandbox, escalate privileges and run arbitrary code.

Modern software and especially modern operating systems are more secure: they contain fewer software flaws and more protection features, which makes reliable exploitation a big problem that can only be solved by very skilled hackers. Decades ago it was just like this, but then intelligent tools and the sharing of the needed knowledge enabled even low-skilled people to develop working exploits and attack vulnerable systems. Nowadays we are going back to the roots, where only a few very knowledgeable people are able to circumvent modern security controls. That doesn’t mean all problems are gone: attackers are moving to design flaws like the DLL hijacking problem, so the class of attacks is merely shifting from old-school memory corruption vulnerabilities to logical flaws that can still be exploited easily.

But the number of exploitable vulnerabilities is decreasing, so this might be a sign that we are on the right track towards developing reliable and secure systems, and that software companies are adopting Microsoft’s Security Development Lifecycle (SDL) to produce more secure software. As stated in my previous blog post, the protection features are available but not used very often. If they are used, and if developers strictly follow the recommendations of the SDL, this trend of “harder to exploit vulnerabilities” shows that doing so can be a success story.

Have a bug-free day 😉
Michael

Misc

Intel’s Known Good Approach — Chances for a Paradigm Shift?

During the keynote of the Intel Developer Forum, Intel’s CEO Paul Otellini explained their motivation for the acquisition of McAfee. Basically, Intel wants to provide a way to shift computer security from a “known bad” model to a “known good” model.

Coming back to some of our recent blog posts, we think that a reliable and working approach to implementing application whitelisting would increase security in corporate environments — especially when thinking of the latest vulnerabilities with exploit code in the wild that could not be caught by any AV solution. As covered by this article, the chance that such an approach succeeds depends heavily on the critical mass that would use it. The widespread x86 architecture therefore is the perfect platform for establishing a widely used known good model.

Breaking

MS10-063, Prevention

One of the four vulnerabilities rated “critical” in yesterday’s MS patch day, namely MS10-063, has an interesting “Workarounds” section regarding MS Internet Explorer. There it’s stated:

“Disabling the support for the parsing of embedded fonts in Internet Explorer prevents this application from being used as an attack vector.”

which, according to the advisory, should/can be done by setting the “Font Downloading” parameter to “Disable”.

Which is exactly what this document suggests. So taking a preventive approach, once more, might have saved some concerns (“Will we be targeted by this one?”) and some patch/testing time…
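For illustration, here’s a minimal sketch of how such a setting could be automated on a per-user basis. The zone number and the URL action value used below (3 = Internet zone, 1604 = font download, data 3 = disable) are our assumptions about the IE zone settings layout, not something taken from the advisory, so verify them against Microsoft’s documentation before rolling anything out:

```python
# Hedged sketch: disable "Font Downloading" for the Internet zone in the
# per-user IE settings. Zone number, action value and data are assumptions
# (3 = Internet zone, 1604 = font download URL action, 3 = disable); verify
# against the official documentation -- GPOs are the saner way to deploy this.
import winreg

INTERNET_ZONE = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"
FONT_DOWNLOAD_ACTION = "1604"   # assumed URL action value for font downloads
DISABLE = 3                     # assumed: 0 = allow, 1 = prompt, 3 = disable

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, INTERNET_ZONE, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, FONT_DOWNLOAD_ACTION, 0, winreg.REG_DWORD, DISABLE)
```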

Have a great day,

Enno

Misc

That “new worm”…

Recently I noticed this news titled “New email worm on the move”. At roughly the same time I received an email from a senior security responsible from a large customer asking for mitigation advice as they got “hit pretty hard” (by this exact piece of malware).
Given that I’m mainly an infrastructure and architecture guy, I’m usually not too involved in malware protection stuff (besides my continuous ranting that – from an architectural point of view – endpoint-based antivirus has a bad security benefit vs. capex/opex ratio), so I’m by no means an expert in this field. Still, I keep scratching my head when I read the associated announcements (like this, this or this) from major “antivirus”, “malware protection” or “endpoint security” vendors – to save typing, in the remainder of the post I’ll call them SNAKE vendors (where “SNAKE” stands for “Smart Nimble APT Kombat Execution”… or sth equally ingenious of the valued reader’s choice… 😉).

The following (not too) heretical questions come to mind:

a) What’s the corporate need to allow downloading .scr files at all? Maybe I’m missing sth here or I’m just not creative enough, but I (still) don’t get it. Why not simply block .scr at the network boundaries?
[yes, I know, there’s no such thing like “well-defined network boundaries” any more, but here we’re talking about “HTTP based downloads” which happen to pass through – a few – centralized points in quite some environments].

a1) So, maybe blocking downloads of .scr files (as this document recommends, funnily enough together with the recommendation to “filter the URL” on gateways… which really seems an operationally feasible thing for complex environments… and a very effective one, for future malware, too ;-)) might be a viable mitigation path.
In my naïve world the approach of just allowing a certain (“positive”) set of file/MIME types for download would be even better, wouldn’t it?
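Just to make the idea tangible, here is a minimal sketch of such a positive-list check as it could run on a web gateway or content filter. The allowed types and the helper function are made up for illustration only; a real deployment would of course hook into whatever proxy infrastructure you already operate:

```python
# Minimal sketch of a "positive list" download filter: only explicitly allowed
# MIME types pass, everything else (including .scr and friends, which usually
# arrive as application/octet-stream) gets rejected. The list below is purely
# illustrative -- derive yours from actual business needs, ideally after a
# monitoring phase like the one described in the bank anecdote that follows.

ALLOWED_MIME_TYPES = {
    "text/html",
    "text/plain",
    "image/jpeg",
    "image/png",
    "application/pdf",
}

def download_permitted(content_type: str) -> bool:
    """Return True only if the response's Content-Type is on the positive list."""
    # Strip parameters such as "; charset=utf-8" before comparing.
    mime_type = content_type.split(";")[0].strip().lower()
    return mime_type in ALLOWED_MIME_TYPES

print(download_permitted("application/pdf; charset=binary"))  # True
print(download_permitted("application/octet-stream"))         # False
```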

This reminds me of a consulting project we did for a mid-sized bank (20K users) some years ago. They brought us in to evaluate options to increase their “malware protection stance” and we finally recommended a set of policy and gateway configuration adjustments (instead of buying a third commercial antimalware software which they had initially planned). Part of our recommendations was to restrict the file types to be accepted as email attachments. For a certain file type (from the MS Office family and known as a common malware spread vector at the time) they strongly resisted, stating “We need to allow this, our customers regularly send us documents of this type”. We then suggested monitoring the use of various filetypes-in-question for some time and it turned out that for this specific type they received three (in numbers: 3) legitimate emails within a six month period…

b) In the announcements mentioned above, all the major vendors boast that they have “updated signatures providing total protection” for this piece of malware.
Hmm… again, very naïvely, I might ask: so why did our customer get “hit pretty hard” (and, following the press, other organizations as well)? They are not a small shop (actually they’re one of the 50 largest corporations in the world), there are a lot of smart people working in the infosec space over there and – of course! – they run one of the main “best of breed” antimalware solutions on their desktops.
So why did they get hit? I leave the answer to the reader… just a hint: operational aspects might play a role, as always.

This brings me directly to the next question:

c) Trend Micro write in their blog:

“Upon further investigation, we found that the malware used for this attack was just an unpacked version of a file that we already detected as WORM_AUTORUN.NAD. It is possible that the cybercriminals behind this attack got hold of the code for WORM_AUTORUN.NAD and modified it for their usage.”

Indeed, looking at this entry in Microsoft’s malware encyclopedia from August 19th, there are remarkable similarities.

So, dear SNAKE vendors: do I get it correctly that (most of) you need a new signature when there’s an unpacked version of some malicious piece of code, as opposed to a packed version (of the same code)?
Seems quite a difficult exercise for all those super-smart heuristic adaptive engines … in 2010…
Sorry, guys, how crazy is this? And it seems the stuff was initially observed back in July.
[did you note that they don’t even feel embarrassed by admitting this, but proudly display it as a result of their research, which of course takes place in the best interest of their valued customers?]

For completeness’ sake it should be mentioned that this piece of malware (no, I won’t rant on the fact that – still, in 2010 – it seems not possible to have a common naming scheme amongst vendors) performs, amongst other things, the following actions on an infected machine:
– turning off security services.
– modification of some security-relevant registry keys.
– sharing system folders.

On most Windows systems all those actions can only be performed by users… with administrative privileges…
Overall, this “classic piece of worm” might remind us that maybe effective desktop protection should be achieved by

– controlling/restricting which types of code and data to bring into a given environment.
– or, at least, _where_ to get executable (types of) code/data from.
– which executables to run on a corporate machine at all (yes, I’m talking about application whitelisting here ;-).
– reflecting on the need for administrative privileges.

and _not_ by still spending even more money for SNAKE oil.

I renew my plea from this post:
So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops. This might contribute to a bit more sustainable security in your environment, one day in the future.
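To give that intern a starting point, here is a minimal sketch of the core idea behind hash-based application whitelisting. Mature solutions do far more (publisher rules, path rules, central management); this just illustrates the principle, and the allowlist entries are placeholders:

```python
# Core idea of hash-based application whitelisting, reduced to its essence:
# an executable may only run if its cryptographic hash is on a known-good list.
# The (empty) allowlist below is a placeholder -- a real one would be generated
# from a golden image and maintained through change management.
import hashlib
from pathlib import Path

KNOWN_GOOD_SHA256 = {
    # "d2c5...": "notepad.exe (golden image, build 2010-09)",  # hypothetical entry
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def execution_allowed(path: Path) -> bool:
    """Allow execution only for binaries whose hash is on the known-good list."""
    return sha256_of(path) in KNOWN_GOOD_SHA256
```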
 

Have a great day,

Enno

Breaking

“blackberry api to record phone calls”

This is currently the most frequent search term leading Internet users to the Troopers website.
Probably Sheran Gunasekera’s great presentation “Bugs & Kisses – Spying on BlackBerry users for fun” is the piece they are after. Whatever they are looking for, this search term may help to shed light on an aspect that seems a bit overlooked in the ongoing debate about governments (U.A.E., Saudi Arabia, India) trying to get their hands on communication performed with BlackBerries in their countries.
[For those interested in that discussion this blog entry of Bruce Schneier may serve as a starting point.]

Given that most readers of this blog using a BlackBerry will most likely do so with a BlackBerry Enterprise Server (BES) installation, in this post I’ll focus on those deployments and will subsequently not cover BlackBerry Internet Service (BIS) scenarios.

RIM, very understandably, stresses the fact that in the current BES architecture – presumably – they [RIM] can only process (thus “see”) the data stream encrypted (by symmetric ciphers regarded as sufficiently secure) between the BES servers, usually placed on corporate soil, and the endpoint devices (the BlackBerries themselves).

What they don’t mention is the simple fact that cryptographic techniques quite often only secure some data’s transport path, but not the endpoints (where the data is decrypted and further processed). What if either the (BlackBerry) devices provide means to eavesdrop on the traffic – and Sheran discussed the relevant APIs in his talk, referring to the “Etisalat case”, where in 2009 the major U.A.E. telecommunications provider distributed a software application for BlackBerries that essentially allowed somebody [who?] to eavesdrop on emails by sending a copy of each email to a certain server – or even the BES itself is “somehow interfered with”? This article from Indiatimes at least suggests the latter possibility for BES servers located in India. Here’s a quoted excerpt:
“Significantly, the only time an enterprise email sent from a BlackBerry device remains in an un-encrypted or ‘readable’ format is when it resides in the enterprise server. ‘Feeding the email from the enterprise server to the ISP’s monitoring systems can, accordingly, help security agencies access the communication in pure text form’, DoT [India’s Department of Telecommunications] proposal said.”

So, in short, just discussing whether BlackBerry-based communication can be intercepted in transit may be a bit short-sighted. Thinking about the devices and the code they run (and who’s allowed to install applications, by what means/from which sources, yadda yadda yadda) or considering “some countries’ regulatory requirements” when deploying BES servers might be helpful, too.

It should be noted that we do not impute any dishonest motives to RIM whatsoever (actually we have a quite positive stance as to the overall security posture of their products; if nothing else, see for example this newsletter analysing the over-the-air generation of master encryption keys between the BES and the devices).
We just want to raise some awareness of the mentioned “blind spots” in the current debate.

Have a great day,

Enno

Breaking

Just a Quick Note on the Library Loading / Binary Planting Stuff

For those of you who missed it: Microsoft released the associated advisory yesterday, together with a hotfix introducing a new registry key that allows users to control the DLL search path algorithm. For a detailed explanation of the problem we refer to the excellent article on Ars Technica.
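For reference, here is a minimal sketch of what enabling the machine-wide variant of that control could look like. The key location, the value name (CWDIllegalInDllSearch) and the meaning of the data value are assumptions based on our reading of the hotfix documentation, not something stated in the advisory above, so double-check them before touching production systems:

```python
# Sketch (assumptions, verify against Microsoft's hotfix documentation):
# set the machine-wide CWDIllegalInDllSearch value so that DLLs are no longer
# loaded from the current working directory. Requires administrative rights.
import winreg

SESSION_MANAGER = r"SYSTEM\CurrentControlSet\Control\Session Manager"
BLOCK_CWD_DLL_LOADS = 0xFFFFFFFF   # assumed: "never load DLLs from the CWD"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SESSION_MANAGER, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "CWDIllegalInDllSearch", 0, winreg.REG_DWORD, BLOCK_CWD_DLL_LOADS)
```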

For the record: no, AV (anti-virus software) will – in most cases – not protect you from security problems related to this one. And, no, there is no easy patch for this one either.

Carefully reading the “Mitigating Factors” and “Workarounds” section in the MS advisory or this entry from our blog might provide ideas how to address this or similar stuff (in the future).

Wishing you all some sunny summer days,

Enno

Update: this article gives some more technical details and this one describes some real attack paths against popular applications. Sorry, guys, good luck with fighting this one with traditional AV…
