… the corrupt DEA agent in Luc Besson’s great movie “Léon (The Professional)”. I’m sure quite a few of you, dear readers, know the plot…
Just before the final shootout, when sending the first men of the NYPD ESU team into Léon’s apartment, he tells them to “Be careful!”. After learning that those men have been killed, he just comments: “I told you”.
[btw: before yelling to bring “EEEEEEEVERYONE!!!!”, as those familiar with the piece will certainly remember ;-)].
I’m fully aware that I risk playing “the arrogant scumbag card” today, and that it’s generally not very nice to refer to one’s own earlier statements with an “I told you” attitude (especially if harm was caused to some party), but this is exactly how I feel when reading this news. And – please believe me – it’s an expression of utmost despair.
How often do organizations have to be told that running Adobe Flash might not be the greatest idea in the world, security-wise? How many statistics like this one (see section “Vulnerabilities” in the bottom part of it) have to see the light of day until people realize that (quoting from this blogpost) “running Flash on corporate desktops is simply asking for trouble. Asking for trouble loudly. Very loudly.”?
When we wrote this document on configuring IE8 securely, we pointed out that, from our perspective, using Adobe Flash required a risk acceptance. Man, how I was attacked for this very statement afterwards, in the customer environment the document was initially developed for! I’ve since mentioned Flash in this blog here, here and here.
Furthermore, we’ll include a talk on Flash in next year’s Troopers line-up, I promise. If only to keep this post from sounding like the crusade of a bitter old man… (yes, this was a wordplay referring to some character from the movie ;-).
A few days ago the European Network and Information Security Agency (ENISA) published this quite interesting document of exactly that title. Here’s what it covers:
“The booming smartphone industry has a special way of delivering software to end-users: appstores. Popular appstores have hundreds of thousands of apps for anything from online banking to mosquito repellent, and the most popular stores (Apple Appstore, Google Android market) claim billions of app downloads. But appstores have not escaped the attention of cyber attackers. Over the course of 2011 numerous malicious apps were found, across a variety of smartphone models. Using malicious apps, attackers can easily tap into the vast amount of private data processed on smartphones such as confidential business emails, location data, phone calls, SMS messages and so on. Starting from a threat model for appstores, this paper identifies five lines of defence that must be in place to address malware in appstores: app review, reputation, kill-switches, device security and jails.”
Just read through it. While I’ve never been a big fan of STRIDE (mainly due to its application-centric approach, which simply is not my cup of tea), I have to say it’s applied elegantly to the “app ecosystem” described in the paper.
The doc complements this one, titled “Smartphones: Information security risks, opportunities and recommendations for users” (released by ENISA in late 2010), which is a valuable resource in itself.
Overall excellent work from those guys in Heraklion, providing good insight from and for practitioners in the field.
Just a short, somewhat non-technical post today: I really like this response Ross Anderson gave to the “UK Cards Association” after it asked Cambridge University to take offline the thesis of one of its students. The letter pretty much summarizes how security research should be treated, and backed, by everyone interested in a more secure world.
On a personal note I’d like to add that Ross’ main volume “Security Engineering: A Guide to Building Dependable Distributed Systems”, initially published in 2001 and updated with a second edition in 2008, has been the most influential security book on my long journey through the infosec space (which started back in 1997, with some workshops on firewalls I gave for IT auditors). If I could take only one infosec book to a desert island, it would be this one.
[not sure which one to take if I could only take one book at all 😉 … maybe Thomas Mann’s “Doktor Faustus”… will get back to this once I’ve figured an answer ;-)]
Back in a few days with the next part on IPv6, have a good one everybody
The British Standards Institution recently published “Cloud Computing. A Practical Introduction to the Legal Issues”. I ordered an electronic copy yesterday (I did that here, for GBP 30) and after a first glance can say there’s lots of valuable information in it.
Merry Christmas to everybody, have some peaceful and relaxing days
Given the upcoming public release of ISECOM‘s Open Source Security Testing Methodology Manual (OSSTMM) version 3, I took the opportunity to have a closer look at it. We at ERNW never adopted the OSSTMM for our own way of performing security assessments, mostly because performing assessments has been our main business since 2001 and our approach has been developed and constantly honed ever since, so we’re simply used to doing it “our way”. Still, I’ve followed parts of ISECOM’s work quite closely, as some of the brightest minds in the security space contribute to it and they regularly come up with innovative ideas.
So I was eager to get an early copy and spend some weekend time going through it (where I live we currently have about 40 cm of snow, so there’s “plenty of occasion for a cosy reading session” ;-)).
One can read the OSSTMM (at least) two ways: as a manual for performing security testing or as a “whole philosophy of approaching [information] security”. I did the latter and will comment on it in a two-part post, covering the things I liked first and taking a more critical perspective on some portions in the second. Here we go with the first, in an unordered manner:
a) The OSSTMM (way of performing tests) is structured. There are not many disciplines out there where a heavily structured approach is so needed and desirable (and, depending on “the circumstances”, so rarely found), so this absolutely is a good thing.
b) The OSSTMM has a metrics-based approach. We think that reasonable decision making in the infosec space is greatly facilitated by “reducing complexity to meaningful numbers”, so this again is quite valuable (see the toy sketch after this list).
c) One of the core numbers makes it possible to display “waste” (see this post on why this is helpful).
d) It makes you think (which, btw, is exactly why I invited Pete to give the keynote at this year’s Troopers). Reading it will certainly advance your infosec understanding. There’s lots of wisdom in it…
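To make the “meaningful numbers” idea from b) a bit more tangible, here’s a toy sketch in Python. To be clear: this is not the OSSTMM’s actual metric (the “rav”), just an illustration of the principle of condensing visibility, controls and limitations into one comparable figure.

```python
# Toy illustration of a metrics-based security score. This is NOT the
# OSSTMM's actual metric; it only demonstrates the principle of
# condensing visibility, controls and limitations into one number.

def toy_security_score(visibility: int, controls: int, limitations: int) -> float:
    """Condense counts of exposed targets, verified controls and found
    limitations (vulnerabilities, weaknesses) into a single percentage.
    100.0 would mean 'fully balanced', lower means under-protected."""
    if visibility == 0:
        return 100.0  # nothing exposed, nothing to protect
    # Each visible target should ideally be matched by a control;
    # every limitation eats into the protection actually achieved.
    coverage = min(controls / visibility, 1.0)
    penalty = limitations / (visibility + controls)
    return max(0.0, (coverage - penalty) * 100.0)

if __name__ == "__main__":
    # e.g. 40 reachable services, 25 verified controls, 12 findings
    print(f"score: {toy_security_score(40, 25, 12):.1f}")
```

The point of such a number is not precision but comparability: the same formula applied before and after a change (or across business units) supports exactly the kind of decision making mentioned above.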
In many aspects, the OSSTMM is another “step in the right direction” provided by ISECOM. Stay tuned for another post on the parts where we think it could be sharpened.
Two days ago I gave the keynote at an industry event, reflecting on the changing role of traditional security controls in the age of virtualization and the cloud. As this was an updated version of the stuff distributed in the conference proceedings, some people have asked for it. Voilà, here we go.
Today we dare to (mis-) use the blog for a shameless self promotion 😉
We’re happy to announce that ERNW will contribute to a government-sponsored research project called ASMONIA, which stands for the German title of the project: Angriffsanalyse und Schutzkonzepte für MObilfunkbasierte Netzinfrastrukturen unterstützt durch kooperativen InformationsAustausch [Attack Analysis and Security Concepts for MObile Network Infrastructures, Supported by Collaborative Information ExchAnge]. Those readers familiar with this kind of project will have an idea of the importance of such acronyms ;-).
Our input in the project will happen in the areas of threat and risk analysis in 4G mobile telecommunication networks and, of course, we will “carefully evaluate practical attacks” in some parts of those networks ;-).
We just got a bunch of devices to undergo some lab testing over the coming months. And you may expect some presentations on results from the project; for ShmooCon, for example, we plan to submit a talk on “Attacking and Securing Juniper Backbone Routers”.
During the keynote of the Intel Developer Forum, Intel’s CEO Paul Otellini explained the motivation for the acquisition of McAfee: basically, Intel wants to provide a way to shift computer security from a known-bad model to a known-good model.
Coming back to some of our recent blog posts, we think that a reliable and working approach to implement application whitelisting would increase security in corporate environments, especially when thinking of the latest vulnerabilities with exploit code in the wild that could not be caught by any AV solution. As covered by this article, the chance that such an approach succeeds depends heavily on the critical mass that would use it. The widespread x86 architecture therefore is the perfect platform for accomplishing a widely used known-good model.
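To illustrate what a known-good model boils down to at its core, here’s a minimal sketch. The hash value and path below are purely hypothetical; real products (e.g. AppLocker) add signer rules, path rules and central management on top of this idea.

```python
# Minimal sketch of the "known good" model: only executables whose
# SHA-256 hash appears on an allowlist may run. Hash and path are
# placeholders, not real values.

import hashlib
from pathlib import Path

# In practice this set would be generated from a known-clean reference
# system and maintained/distributed centrally.
KNOWN_GOOD_HASHES = {
    "3f786850e387550fdab836ed7e6dc881de23001b6ae04f1a4095d5f4be041e6f",  # placeholder
}

def is_known_good(executable: Path) -> bool:
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in KNOWN_GOOD_HASHES

if __name__ == "__main__":
    candidate = Path("C:/Windows/System32/notepad.exe")  # example path
    if candidate.exists():
        print("allow" if is_known_good(candidate) else "deny")
```

The charm of the model: the allowlist needs no update for every new piece of malware, only when the legitimate software set changes.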
Recently I noticed this news titled “New email worm on the move”. At roughly the same time I received an email from a senior security responsible from a large customer asking for mitigation advice as they got “hit pretty hard” (by this exact piece of malware).
Given that I’m mainly an infrastructure and architecture guy, I’m usually not too involved in malware protection stuff (besides my continuous ranting that, from an architectural point of view, endpoint-based antivirus has a bad ratio of security benefit to capex/opex). So I’m by no means an expert in this field. Still, I keep scratching my head when I read the associated announcements (like this, this or this) from major “antivirus”, “malware protection” or “endpoint security” vendors. To save typing, in the remainder of this post I’ll call them SNAKE vendors (where “SNAKE” stands for “Smart Nimble APT Kombat Execution”… or something equally ingenious of the valued reader’s choice 😉).
The following (not too) heretical questions come to mind:
a) What’s the corporate need to allow downloading .scr files at all? Maybe I’m missing something here, or I’m just not creative enough, but I (still) don’t get it. Why not block .scr at the network boundaries altogether?
[yes, I know, there’s no such thing like “well-defined network boundaries” any more, but here we’re talking about “HTTP based downloads” which happen to pass through – a few – centralized points in quite some environments].
a1) So maybe blocking downloads of .scr files (as this document recommends, funnily enough together with the recommendation to “filter the URL” on gateways… which really seems an operationally feasible thing for complex environments… and a very effective one against future malware, too ;-)) might be a viable mitigation path.
In my naïve world the approach of just allowing a certain (“positive”) set of file/MIME types for download would be even better, wouldn’t it?
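To make the contrast between the two philosophies concrete, here’s a minimal sketch of both, written as hypothetical gateway logic (the extension and MIME sets are illustrative only, not a recommendation):

```python
# Sketch of the two filtering philosophies, as they might run on an
# HTTP gateway. The extension and MIME sets are illustrative only.

from urllib.parse import urlparse
from pathlib import PurePosixPath

# Negative model: enumerate badness; must be extended for every new trick.
BLOCKED_EXTENSIONS = {".scr", ".pif", ".cpl"}

# Positive model: enumerate goodness; everything else is denied by default.
ALLOWED_MIME_TYPES = {"text/html", "text/plain", "image/png",
                      "image/jpeg", "application/pdf"}

def blocklist_allows(url: str) -> bool:
    """Blocks only the extensions we happen to know about."""
    ext = PurePosixPath(urlparse(url).path).suffix.lower()
    return ext not in BLOCKED_EXTENSIONS

def allowlist_allows(content_type: str) -> bool:
    """Permits only an explicit positive set of MIME types."""
    return content_type.split(";")[0].strip().lower() in ALLOWED_MIME_TYPES

if __name__ == "__main__":
    print(blocklist_allows("http://example.com/funny.scr"))  # False: caught
    print(blocklist_allows("http://example.com/funny.xyz"))  # True: slips through
    print(allowlist_allows("application/x-msdownload"))      # False: denied by default
```

The second function never needs to learn about new attack file types; the first one does, endlessly.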
This reminds me of a consulting project we did for a mid-sized bank (20K users) some years ago. They brought us in to evaluate options to increase their “malware protection stance”, and we finally recommended a set of policy and gateway configuration adjustments (instead of the purchase of a third commercial antimalware product, which they had initially planned). Part of our recommendations was to restrict the file types accepted as email attachments. For a certain file type (from the MS Office family and known as a common malware spread vector at the time) they strongly resisted, stating: “We need to allow this, our customers regularly send us documents of this type”. We then suggested monitoring the use of the various file types in question for some time, and it turned out that for this specific type they had received three (in numbers: 3) legitimate emails within a six-month period…
b) In their mentioned announcements, all major vendors boast of having “updated signatures providing total protection” against this piece of malware.
Hmm… again very naïvely, I might ask: so why did our customer get “hit pretty hard” (and, according to the press, other organizations as well)? They are not a small shop (actually they’re one of the 50 largest corporations in the world), there are a lot of smart people working in the infosec space over there, and – of course! – they run one of the leading “best of breed” antimalware solutions on their desktops.
So why did they get hit? I leave the answer to the reader… just a hint: operational aspects might play a role, as always.
One of the vendors’ write-ups states: “Upon further investigation, we found that the malware used for this attack was just an unpacked version of a file that we already detected as WORM_AUTORUN.NAD. It is possible that the cybercriminals behind this attack got hold of the code for WORM_AUTORUN.NAD and modified it for their usage.”
Indeed, looking at this entry in Microsoft’s malware encyclopedia from August 19th, there are remarkable similarities.
So, dear SNAKE vendors: do I get it correctly that (most of) you need a new signature when there’s an unpacked version of some malicious piece of code, as opposed to a packed version (of the same code)?
Seems quite a difficult exercise for all those super-smart heuristic adaptive engines … in 2010…
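For readers wondering why unpacking defeats a signature: byte-pattern signatures (hashes being the extreme case) match a particular representation of the code, not its behaviour. A toy demonstration, with zlib standing in for a real executable packer:

```python
# Toy demonstration of why byte-level signatures break across packing:
# the very same code yields completely different byte patterns (and
# hashes) once packed. zlib stands in here for a real packer.

import hashlib
import zlib

payload = b"if day == 13: delete_all_files()  # stand-in 'malware' body"

packed = zlib.compress(payload)
unpacked = zlib.decompress(packed)  # identical to payload again

print("packed  :", hashlib.sha256(packed).hexdigest())
print("unpacked:", hashlib.sha256(unpacked).hexdigest())
# A signature derived from the packed sample matches nothing in the
# unpacked variant, hence the need for yet another signature.
```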
Sorry, guys, how crazy is this? And it seems the stuff was initially observed back in July.
[Did you note that they don’t even feel embarrassed admitting this, but proudly display it as a result of their research, which of course takes place in the best interest of their valued customers?]
For completeness’ sake it should be mentioned that this piece of malware (no, I won’t rant on the fact that, still, in 2010, it seems impossible to have a common naming scheme among vendors) performs, among other things, the following actions on an infected machine (a spot-check sketch follows after the list):
– turning off security services.
– modification of some security-relevant registry keys.
– sharing system folders.
On most Windows systems all those actions can only be performed by users… with administrative privileges…
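For the curious, here’s a hedged sketch of how one might spot-check such registry tampering on a Windows box. The key and value names below are common examples of security-relevant settings, not indicators taken from this specific worm’s write-up; adapt them to the vendor’s actual indicators of compromise.

```python
# Hedged sketch: spot-checking a few security-relevant Windows registry
# values of the kind such worms typically tamper with. Key/value names
# are common examples, NOT taken from this specific worm's write-up.

import winreg  # Windows only

CHECKS = [
    # (hive, key path, value name, expected value)
    (winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Microsoft\Security Center", "AntiVirusDisableNotify", 0),
    (winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
     "EnableLUA", 1),
]

def audit() -> None:
    for hive, path, name, expected in CHECKS:
        try:
            with winreg.OpenKey(hive, path) as key:
                value, _ = winreg.QueryValueEx(key, name)
        except OSError:
            print(f"{path}\\{name}: not present")
            continue
        verdict = "OK" if value == expected else f"SUSPICIOUS (got {value!r})"
        print(f"{path}\\{name}: {verdict}")

if __name__ == "__main__":
    audit()
```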
Overall, this “classic piece of worm” might remind us that effective desktop protection should perhaps be achieved by
– controlling/restricting which types of code and data to bring into a given environment.
– or, at least, _where_ to get executable (types of) code/data from.
– which executables to run on a corporate machine at all (yes, I’m talking about application whitelisting here ;-).
– reflecting on the need of administrative privileges.
and _not_ by still spending even more money for SNAKE oil.
I renew my plea from this post:
So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops. This might contribute to a bit more sustainable security in your environment, one day in the future.
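And if that intern needs a starting point: a reasonable first step is simply to inventory what actually lives on a reference desktop and dump a candidate allowlist. A sketch (paths and extension set are illustrative):

```python
# Hypothetical starting point for that intern: walk a reference desktop,
# hash every executable-type file and dump a candidate allowlist.
# Running this on a few typical corporate machines tends to show how
# small and stable the set of legitimately needed binaries really is.

import csv
import hashlib
from pathlib import Path

EXECUTABLE_SUFFIXES = {".exe", ".dll", ".scr", ".cpl"}  # illustrative set

def build_baseline(root: Path, outfile: Path) -> None:
    with outfile.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["sha256", "path"])
        for p in root.rglob("*"):
            if p.suffix.lower() in EXECUTABLE_SUFFIXES and p.is_file():
                try:
                    digest = hashlib.sha256(p.read_bytes()).hexdigest()
                except OSError:
                    continue  # locked or unreadable files are skipped
                writer.writerow([digest, str(p)])

if __name__ == "__main__":
    build_baseline(Path("C:/Program Files"), Path("baseline.csv"))
```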
Interesting research from Stuart Schechter et al. here.
They evaluated the effect that removing or modifying online banking sites’ security features had on users’ behavior (i.e., whether they entered or withheld their passwords). Perhaps not too surprisingly for some of you, it turned out that the vast majority of users entered their passwords even when obviously alarming clues were present on the websites.
This, again, shows how important it is to understand how users behave, what their motives and incentives are, and how to build environments that help them act securely. This applies even more to the corporate space. At times, bringing in an industrial/organizational psychologist might be a much better investment than writing yet another ignored piece of policy.