
Reflections on the vulnerability factor (notes on RRA, part 3)

Today I’m going to discuss the (presumably) most complex and difficult-to-handle of the three parameters contributing to a risk (as per the RRA), that is the “vulnerability [factor]”.
First it should be noted that “likelihood” and “vulnerability” must be clearly separated (at least mentally): “likelihood” denotes the likelihood of a threat showing up _without_ consideration of existing controls. Security controls already in place will affect the vulnerability factor (in particular if they are effective ;-), but _not_ the likelihood.
First reflect on “how often will somebody stand at the door of our data center with the will to enter?” or “how often will a piece of malware show up at our perimeter?” or “how often will it happen that an operator makes a mistake?” and assign an associated value to the likelihood.
Then, _in a separate_ step, think about: “will that person be able to enter my data center?” (maybe it’s an external support engineer and, given their high workload, your admins are willing to violate the external_people_only_allowed_to_access_dc_when_attended policy – which, of course, is purely fictional and will never happen in your organization ;-)) or “how effective are our perimeter controls against malware?” (are they? ;-)) or “hmm… what’s the maturity of our change management processes?” and assign an associated value to the vulnerability factor.
As stated in an earlier post: this will allow for identifying areas where to act and thus allow for efficient overall steering of infosec resources.
Mixing up likelihood and vulnerability might lead to self-complacent conclusions like “oh, evidently the likelihood of unauthorized access to the data center is ‘1’ as we have that brand-new shiny access control system”…

Now, how to rate the “vulnerability” (on a scale from 1 to 5)?
In my experience a “rough descriptive scale” like

1: Extensive controls in place; the threat can only materialize if multiple failures coincide.
2: Multiple controls, but a highly skilled and motivated attacker might overcome them.
3: Some control(s) in place, but a highly skilled and motivated attacker will overcome them. Overall exposure might play a role.
4: Controls in place, but with limitations. High exposure and/or only a medium-skilled attacker required.
5: Controls, if any, with severe limitations. High exposure and/or only low skills required.
works quite well for exercises with participants somewhat experienced with the approach. If there are people in the room (or on the call) who’ve already performed RRAs (or, for that matter, similar exercises), a joint understanding of what a “2” or a “4” means in the context of figuring out the vulnerability [factor] can be attained quickly. This might even work when most of the participants are complete newcomers to risk assessments; in this case discussing some examples is the key to gaining that joint understanding.
That’s why in RRAs – where getting results in a timely manner is crucial (see the introductory notes in part 1 on the complexity vs. efficiency trade-off) – we usually use a scale like the one outlined above.

Still, this – perfectly legitimately – may seem a bit “unscientific” to some people. For example, I currently do a lot of work in an organization where relevant people involved in the (risk assessment) process struggle heavily with the above scale, feeling it does not permit “a justified evaluation of the vulnerability factor due to being too vague”. In such environments, going with a “weighted summation method” is generally a good idea. More or less it works as follows:

a) identify some factors contributing to “vulnerability” (or “attack potential”), like the overall (network connectivity) exposure of the system or the skills and time needed by an attacker, and so on.
b) assign individual values to those factors, perform some mathematical operation on them (usually simply adding them up), map the result to a 1-5 scale and voilà, here’s the – “justified and calculated” – vulnerability factor.

Again, an example might help. Let’s assume the threats to be discussed are different types of attacks (as opposed to all the other classes of threats like acts of god, hardware failures etc.). One might look at three vulnerability-contributing factors – “type of attacker”, “exposure of system” and “extent of current controls” – and assign values to each of the three as shown in the following sample table:

Attacker (knowledge) | Exposure | Extent of controls | Value
Script kid | Internet facing | None | 5
Bot | Business partners | Some, but insufficient | 4
Skilled + motivated attacker | Only own organization | Some, but not resistant to a skilled and motivated attacker | 3
Organized crime | Own organization, restricted | Multiple controls, but a single failure might lead to the attack succeeding | 2
Nation state/agency | Very restricted or mgmt access only | Multiple controls; multiple failures needed at the same time for the attack to succeed | 1


with the following “mapping scale”:
Sum of values 1-3 gives an overall value of 1;
4-6 gives 2;
7-9 gives 3;
10-12 gives 4;
13-15 gives 5.

Now, suppose we discuss the vulnerability of a certain type of attack against a certain asset (here: some system). In case an attacker shows up (the likelihood of this event would be expressed by the respective likelihood value, not included in this example), a skilled and motivated attacker is assumed to perform the attack (leading to a “3” in the first column). Furthermore the system is exposed to business partners (=> “4” in the second column) and, due to the high sensitivity of the data processed, it has multiple layers of security controls (=> “1” for “extent of controls”). Adding these values gives an overall “8”, which in turn means a vulnerability [factor] of “3”.

For an internet-facing system (“5”) which we expect to be attacked/attackable by script kids (“5”) and which does not have good controls (thus “4”), adding the respective values gives a 14 and subsequently an overall vulnerability of “5”.
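To make the mechanics concrete, here’s a minimal sketch of the computation in Python (the value tables and the mapping mirror the sample scales above; the names are mine, not part of any standard):

```
# Minimal sketch of the "weighted summation method" described above.
# The value tables mirror the sample table; all names are illustrative.

ATTACKER = {"script kid": 5, "bot": 4, "skilled+motivated": 3,
            "organized crime": 2, "nation state/agency": 1}
EXPOSURE = {"internet facing": 5, "business partners": 4,
            "own organization only": 3, "own organization, restricted": 2,
            "very restricted/mgmt only": 1}
CONTROLS = {"none": 5, "some, insufficient": 4,
            "some, not resistant to skilled attacker": 3,
            "multiple, single failure suffices": 2,
            "multiple, multiple failures needed": 1}

def vulnerability_factor(attacker: str, exposure: str, controls: str) -> int:
    """Sum the three contributing values (3..15) and map the sum to 1-5."""
    total = ATTACKER[attacker] + EXPOSURE[exposure] + CONTROLS[controls]
    # Mapping scale from the post: 1-3 -> 1, 4-6 -> 2, ..., 13-15 -> 5
    return (total + 2) // 3

# First example from the text: 3 + 4 + 1 = 8  =>  vulnerability factor 3
assert vulnerability_factor("skilled+motivated", "business partners",
                            "multiple, multiple failures needed") == 3
# Second example: 5 + 5 + 4 = 14  =>  5
assert vulnerability_factor("script kid", "internet facing",
                            "some, insufficient") == 5
```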

An extensive presentation of this approach can be found in clause B.4 of the Common Criteria Evaluation Methodology (CEM). There the respective values “characterising [the] attack potential” are:

a) Time taken to identify and exploit (Elapsed Time);
b) Specialist technical expertise required (Specialist Expertise);
c) Knowledge of the TOE design and operation (Knowledge of the TOE);
d) Window of opportunity;
e) IT hardware/software or other equipment required for exploitation.

with a fairly advanced description of the different variants of these values and an elaborate point scale.
To the best of my knowledge (and I might be biased given I’m a “BSI certified Common Criteria evaluator”) this is the best description of the “weighted summation method” in the infosec space. Feel free to let me know if there’s a better (or older) source for this.

Back to the topic: while I certainly have quite some sympathy for this approach, using it for risk assessments – obviously – requires (depending on the type of people involved and their “discussion culture” 😉 potentially much) more time for the actual exercise. This in turn might endanger the overall goal of delivering timely results, which is why this way of rating the vulnerability is not used in the RRA.

====

Stay tuned for the next part to follow soon,

thanks

Enno


ERNW Rapid Risk Assessment (RRA), Some Additional Notes, Part 2

This is the second part of the series (part 1 here) providing some background on the way we perform risk assessments. It can be seen as a direct continuation of the last post; today I cover the method of estimation and the scale & calculation formula used.

1.1 Method of Estimation

Again, two main approaches exist[1]:

  • Qualitative estimation, which uses a scale of qualifying attributes (e.g. Low, Medium, High) to describe the magnitude of each of the contributing factors listed above. [ISO 27005, p. 14] states that qualitative estimation may be used:
    • As an initial screening activity to identify risks that require more detailed analysis.
    • Where this kind of analysis is appropriate for decisions.
    • Where the numerical data or resources are inadequate for a quantitative estimation.

As the latter is pretty much always the case for information security risks, qualitative estimation is what one usually finds in the infosec space. A sample qualitative scale (1–5, mapping to “very low” through “very high”) for the vulnerability factor will be provided in the next part of this series.

  • Quantitative estimation, which uses a scale with numerical values (rather than the descriptive scales used in qualitative estimation) for impact and likelihood[2], using data from a variety of sources. [ISO 27005, p. 14] states that “quantitative estimation in most cases uses historical incident data, providing the advantage that it can be related directly to the information security objectives and concerns of the organization. A disadvantage is the lack of such data on new risks or information security weaknesses. A disadvantage of the quantitative approach may occur where factual, auditable data is not available thus creating an illusion of worth and accuracy of the risk assessment.”
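As a classic illustration of quantitative estimation (my example, not from the ISO text), think of the annualized loss expectancy (ALE), which derives a yearly figure from historical incident data:

```
# Classic quantitative estimation example (ALE); the numbers are invented.
single_loss_expectancy = 50_000     # estimated loss per incident, in EUR
annual_rate_of_occurrence = 0.2     # one incident expected every five years
ale = single_loss_expectancy * annual_rate_of_occurrence
# => 10,000 EUR/year -- only as good as the historical data behind the inputs
```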


1.2 Scale & Calculation Formula Used

Each of the contributing factors (that is, likelihood, vulnerability [factor] and impact) will be rated on a scale from 1 (“very low”) to 5 (“very high”). Experience shows that other scales are either not granular enough (as is the case for a 1–3 scale) or lead to endless discussions when too granular (as is the case for a 1–10 scale).

Most (qualitative) approaches use a “1–5” scale[3].

It should be noted that usually all values (for likelihood, vulnerability and impact) are mapped to concrete definitions; examples will be provided in the next part. Furthermore, the “impact value” will not be split into “subvalues” for different security objectives (like individual values for “impact on availability”, “impact on confidentiality” and so on), in order to preserve the efficiency of the overall approach.

To get the resulting risk, all values will be multiplied (which is the most common way of calculating risks anyway). It should be noted that there are some objections to this approach which, again, will be discussed in the next part.
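A quick worked example (values invented): with all three factors on the 1–5 scale, the resulting product lands between 1 and 125.

```
# Risk as the product of the three factors, each rated on a 1-5 scale.
likelihood, vulnerability, impact = 3, 3, 4   # sample values
risk = likelihood * vulnerability * impact    # => 36 (of a possible 125)
```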

The values themselves should be discussed by a group of experts (appropriate to the exercise to be performed) and should not be assigned by a single person. The usefulness and credibility of the whole approach depend heavily on the credibility and expertise of the people participating in an exercise!

Wherever possible some lines of reasoning should be given for each value assigned, for documentation and future use purposes.


[1] [ISO 27000], section 8.2.2.1 provides a good overview.

[2] Models with quantitative estimation don’t use a “vulnerability factor” (as this one usually can’t be expressed in a quantitative way).

[3] And so does example 2 (section E.2.2) of [ISO 27005], which can be compared to the methodology described here.

====

Feel free to get back to us with any comments, criticism, case studies, whatever. Thanks,

Enno


ERNW Rapid Risk Assessment (RRA), Some Additional Notes, Part 1

On several occasions we’ve been asked to provide some background on the Rapid Risk Assessment (RRA) methodology we frequently use for a transparent (and documented) understanding of risks in certain situations and to deliver structured input for subsequent decision making. As I recently had to write down (in another context) some notes on risk assessments and – from our perspective – practical, reasonable ways of performing them, I take the opportunity to lay out a bit the underlying ideas of the RRA approach. Which, btw, is no rocket science at all. Honestly, I sometimes wonder why stuff like this isn’t practiced everywhere, on a daily basis 😉

Here we go, second part to follow shortly. Feel free to get back to us with any comments, criticism, case studies, whatever. Thanks,

Enno

============

1. Introduction

There are a number of heterogeneous definitions of the term “risk”, quite a few of them with an inconsistent or ambiguous meaning and use[1]. In the following we will rely on the definitions furnished by the standard documents ISO 31000 Risk management — Principles and guidelines (providing a widely recognized paradigm for risk management practitioners from different backgrounds and industry sectors[2]) and ISO/IEC 27005 Information technology — Security techniques — Information security risk management (with a dedicated focus on the information security context).

ISO 31000 defines risk simply as

             “effect of uncertainty on objectives”

where uncertainty is “the state, even partial, of deficiency of information related to, understanding or knowledge of an event, its consequence, or likelihood”[3] [ISO31000, p. 2].

Accepting uncertainty as the main constituent of risk is a fundamental prerequisite for our approach outlined below. It must be well understood, first, that a certain degree of uncertainty is intrinsic to dealing with risks[4] and, second, that there’s always a trade-off between the reduction of complexity that is necessary (given resource constraints and human bounded rationality[5]) and the (presumed) accuracy of a risk assessment exercise.

[ISO31000, p. 8] emphasizes that “the success of risk management will depend on the effectiveness of the […] framework” and [ISO31010[6], p. 18] concludes that “a simple method, well done, may provide better results than a more sophisticated procedure poorly done.”

Or, to express it “the blog way”: going with a simple method and thereby preserving the ability to perform exercises in a time-efficient manner, while accepting some fuzziness, will usually provide better results (e.g. for well-informed decision making) than striving for the big hit of a comprehensive risk enlightenment considering numerous potential dependencies and illuminating various dimensions of security objectives (which usually gets finished at the very moment Godot arrives).

2. Sources of Threats

In general two main possible approaches can be identified here:

  1. Use of a well-defined threat catalogue (usually one and the same at different points of execution) which might be provided by an industry association, a standards body or a government agency regulating a certain industry sector. While this may serve the common advantages of a standards-based approach (accelerated setup of the overall procedure, easy acceptance within the peer community etc.), [ISO31010, p. 31] lists some major drawbacks of this course of action, laying out that check-lists
    • tend to inhibit imagination in the identification of risks;
    • address the ‘known knowns’, not the ‘known unknowns’ or the ‘unknown unknowns’;
    • encourage ‘tick the box’ type behavior;
    • tend to be observation-based, so miss problems that are not readily seen.


  2. Adoption of (mostly) individual threats for each individual risk assessment, depending on the amount of available resources, the context and “the question to be answered by means of the exercise”. This certainly requires more creativity and, most notably, experience on the participating contributors’ side, but will generally produce better and more holistic results in a more time-efficient way.

One essential element of the RRA is to (only + strictly) follow the latter approach (individual threats, depending on context). This means that some key players of the exercise-to-be-performed have to figure out the main threats before the actual risk assessment takes place (usually by email) and that additional threats are not allowed later on. Again, in our perception, this is one of the critical success factors of the approach!

3. Contributing Factors

ISO 27005 (currently, that is as of 2008) defines information security risk as the

                    “potential that a given threat will exploit vulnerabilities of an asset […]
                     and thereby cause harm to the organization”.

Following this, three main factors contribute to the risk associated with a given threat:

  • the threat’s potential
  • the vulnerabilities to-be-exploited
  • the harm caused once the threat successfully materializes.

There’s a broad consensus amongst infosec risk assessment practitioners – and this is reflected by the way the Wikipedia article on “risk” explains information security risk – that it hence makes sense to work with an explicit “vulnerability factor” expressing how vulnerable an asset is in case a threat shows up, for two main reasons:

  • When thinking about threats, this allows differentiating between “external phenomena” (malware is around, hardware fails occasionally, humans make errors) and “internal conditions” (“our malware controls might be insufficient”, “we don’t have clustering for some important servers”, “our change control procedures are circumvented too often”).
  • This differentiation allows for governance and steering in the phase of risk treatment (“we can’t change [the badness of] the world, but we can mitigate our vulnerability”), which is then expressed by a diminished vulnerability factor and subsequently a reduced overall risk.

This furthermore facilitates looking at some asset’s (e.g. a product’s) intrinsic properties (leading to vulnerabilities) without knowing too many details about the environment the asset is operated in.


[1] The Wikipedia article on the subject might serve as a starting point.

[2] Based on AS/NZS 4360 which in turn is regarded as a major contribution to the mainstream concept of risk in the 20th century.
The definition of the term “risk” within ISO 31000 is taken from ISO GUIDE 73:2009 and it can be expected that future versions of ISO 27005 will incorporate this definition (and the underlying idea) as well.

[3] It should be noted that the terms (and concepts) of “risk” and “uncertainty” might dispose of some duality on their own (see [COFTA07, p. 54ff.] for a detailed discussion on this). Still, we strictly follow the ISO 73 approach here.

[4] Where “assessing them” is one step in “dealing with risks”.

[5] [COFTA07, p.29] employs the concept of a “transactional horizon” to express the inherent limitations. Furthermore see [KAHNEMANN] or [SIMON] on bounded rationality.

[6] ISO 31010 gives an overview of risk assessment techniques.


The Case For/Against Split Tunneling

Once again, in some customer environment the question of allowing/prohibiting split tunneling for (in this case: IPsec) VPN connections popped up today. Given our strict stance when it comes to “fundamental architectural security principles” the valued reader might easily imagine we’re no big fans of allowing split tunneling (term abbreviated in the following by “ST”), as this usually constitutes a severe violation of the “isolation principle”, further aggravated by the fact that this (violation) takes place on a “trust boundary” (of trusted/untrusted networks).
Still, we’re security practitioners (and not everybody has such a firm belief in the value of “fundamental architectural security principles” as we have), so we had to deal with the proponents’ arguments. In particular as one of them mentioned additional costs (in case of disallowed ST forcing all 80K VPN users’ web browsing through some centralized corporate infrastructure) of US$ 40,000,000.
[yes, you read that correctly: 40 million. I’ve still no idea where this – in my perception: crazy – number comes from]. Anyhow, how to deal with this?
Internally we performed a rapid risk assessment (RRA) focused on two main threats, which were:

a) Targeted attack against $CORP, performed by means of network backdoor access by piece of malware acting as relay point.

b) Client gets infected by drive-by malware as it is not protected by (corporate) infrastructure-level controls.
[for obvious reasons I’ll not provide the values derived from the exercise here]
While the former seems to be the main reason why the NIST document SP 800-77 “Guide to IPsec VPNs” recommends _against_ allowing ST, the RRA showed that the latter constitutes the bigger risk overall. To illustrate this I’ll give some numbers from the excellent Google research paper “All Your iFRAMEs Point to Us”, which is probably the best source for data on the distribution and propagation of drive-by malware.

Here’s the paper’s abstract:

“As the web continues to play an ever increasing role in information exchange, so too is it becoming the prevailing platform for infecting vulnerable hosts. In this paper, we provide a detailed study of the pervasiveness of so-called drive-by downloads on the Internet. Drive-by downloads are caused by URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. Our analysis of billions of URLs over a 10 month period shows that a non-trivial amount, of over 3 million malicious URLs, initiate drive-by downloads. An even more troubling finding is that approximately 1.3% of the incoming search queries to Google’s search engine returned at least one URL labeled as malicious in the results page. We also explore several aspects of the drive-by downloads problem. We study the relationship between the user browsing habits and exposure to malware, the different techniques used to lure the user into the malware distribution networks, and the different properties of these networks.”

and I’m going to quote some parts of it in the following, to underpin some (at the end of the day: not so) small calculations.

The authors observed that from “the top one million URLs appearing in the search engine results, about 6,000 belong to sites that have been verified as malicious at some point during our data collection. Upon closer inspection, we found that these sites appear at uniformly distributed ranks within the top million web sites—with the most popular landing page having a rank of 1,588.”

Now, how many of those (top million websites and in particular of the 0.6%) will be visited _every day_ by a 80,000 user population? And how many of those visits will presumably happen over “the split tunnel”, thus without corporate infrastructure level security controls?
Looking at the post-infection impact they noticed that the “number of Windows executables downloaded after visiting a malicious URL […] [was] 8 on average, but as large as 60 in the extreme case”.
To be honest, I mostly cite this opportunistically to refer to this previous post or this one 😉

And I will easily refrain from any comment on this observation ;-):
“We subject each binary for each of the anti-virus scanners using the latest virus definitions on that day.  […] The graph reveals […] an average detection rate of 70% for the best engine.”

Part of their conclusion is that the “in-depth analysis of over 66 million URLs (spanning a 10 month period) reveals that the scope of the problem is significant. For instance, we find that 1.3% of the incoming search queries to Google’s search engine return at least one link to a malicious site.”

Let’s use this for some simple math: assume 16K users being online (20% of those 80K overall VPN users) on a given day, performing only 10 Google queries per user. That’s 160K queries per day. 1.3% of those, i.e. about 2K queries, will return a link to at least one malicious site. If one out of 10 users clicks on such a link, that means about 200 attempted infections _per day_. If local AV catches 70% of those, the remaining 30% succeed. This gives about 60 successful infections _each day_.
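For those who want to play with the assumptions, here’s the back-of-the-envelope calculation as a small Python sketch (every input is an assumption from the text, not measured data):

```
# Back-of-the-envelope reproduction of the estimate above; all inputs
# are assumptions from the text, not measured data.
vpn_users         = 80_000
online_share      = 0.20    # 20% of users online on a given day
queries_per_user  = 10
malicious_rate    = 0.013   # 1.3% of queries return >= 1 malicious link
click_through     = 0.10    # 1 in 10 users clicks such a link
av_detection_rate = 0.70    # best-engine detection rate from the paper

queries  = vpn_users * online_share * queries_per_user  # 160,000/day
flagged  = queries * malicious_rate                     # ~2,080 queries
attempts = flagged * click_through                      # ~208 attempts/day
infected = attempts * (1 - av_detection_rate)           # ~62 infections/day
# (the text rounds these to 200 attempts and 60 infections)
```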
So, from our perspective, deliberately allowing a large user base to circumvent corporate infrastructure security controls (only relying on some local anti-malware piece) might simply… not be a good idea…

thanks

Enno


Software Developers Don’t Use Available Security Features

According to the recently published SANS NewsBites Vol. XII, Issue 53, there’s a lack of third-party developer support for some security features Microsoft introduced years ago. We at ERNW have made similar observations when performing security assessments of COTS [commercial off-the-shelf] software. We therefore created a methodology, a proof-of-concept tool and a metric to test and rate closed-source software, where (amongst other approaches) these security features are checked and their (non-)presence contributes to an overall evaluation of the trustworthiness of the application in question. The concept “How to rate the security in closed source software” was presented to the public at Troopers10 and at Hack in the Box 2010 in Amsterdam. The slides can be found here.
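Assuming the features in question are the DEP/ASLR opt-in flags (my assumption; the NewsBites item isn’t quoted here), checking a given binary for them takes only a few lines, e.g. with the third-party pefile module:

```
# Sketch: check a PE binary's opt-in to ASLR and DEP via its
# DllCharacteristics flags. Requires the third-party "pefile" module.
import sys
import pefile

DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
NX_COMPAT    = 0x0100  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

pe = pefile.PE(sys.argv[1])
flags = pe.OPTIONAL_HEADER.DllCharacteristics
print("ASLR opt-in:", bool(flags & DYNAMIC_BASE))
print("DEP opt-in: ", bool(flags & NX_COMPAT))
```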


News from the Desktop, Edition 2010/07/21

Back on track with one of our favorite rant subjects: desktop security. This stuff, commonly called the “LNK vulnerability”, has gained quite some momentum in the last days, including the release of a Metasploit module and a temporary raising of the SANS Internet Storm Center‘s Infocon level to yellow (it’s back to green in the interim).

CVE-2010-2568 has been assigned and some technical details can be found here and here.

To give you a rough idea how this piece works, here’s a quote from the US-CERT advisory:

“Microsoft Windows fails to safely obtain icons for LNK files. When Windows displays Control Panel items, it will initialize each object for the purpose of providing dynamic icon functionality. This means that a Control Panel applet will execute code when the icon is displayed in Windows. Through use of an LNK file, an attacker can specify a malicious DLL that is to be processed within the context of the Windows Control Panel, which will result in arbitrary code execution. The specified code may reside on a USB drive, local or remote filesystem, a CD-ROM, or other locations. Viewing the location of a LNK file with Windows Explorer is sufficient to trigger the vulnerability.”

In short, as the Microsoft Malware Protection Center puts it: “[S]imply browsing to the removable media drive using an application that displays shortcut icons (like Windows Explorer) runs the malware without any additional user interaction.”

So actually, there is no exploitation in the sense of a buffer overflow in the LNK handling routines. The flaw just triggers the download and execution of some binary code – it’s basically about fetching a piece of code from some location and running it on the local “compromised” box.
Figuring that brings us to an immediate discussion of potential mitigation strategies, which is – next to “some usual rant” 😉 – the main intent of this post.

As so often the US-CERT’s advisory provides good guidance on mitigating controls. It lists the following (most with a comment from my side):

a) Disable the displaying of icons for shortcuts.

Do not do this! It will most probably break your (users’) desktop experience in a horrible way.
Chester Wisniewski has a nice image of a treated desktop in yesterday’s post on the subject.

b) Disable AutoRun

This is pretty much always a good idea (security-wise), but it – partially – limits only one attack vector (removable media).
Interestingly enough, the MS KB article on disabling Autorun functionality in Windows was last reviewed on 2010/07/01, while the CVE number for the LNK vulnerability was assigned on 2010/06/30. There are strange coincidences out there… 😉
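If you want to script the change documented in that KB article, here’s a minimal sketch (Python 3’s winreg module; requires admin rights on the box in question, and as always: test before any rollout):

```
# Sketch: disable AutoRun for all drive types by setting NoDriveTypeAutoRun
# to 0xFF, as documented in the MS KB article on disabling Autorun.
import winreg

KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
```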

And, of course, not allowing untrusted USB devices to be connected to the organization’s machines usually is a good thing as well.
[for the record: yes, I know, there are organizations out there where business tells you this is not possible for some reason or another. Which might be true; I will not go into this debate right here and now. I just want to remind you, dear reader, of some good ole basics ;-)].

In the meantime people expect (and see) the vulnerability being exploited over network shares, so focusing on removable media/USB devices alone might be too narrow anyway.

c) Use least privilege

Aka: do not work as admin. I don’t really have to comment on this here, do I? It’s a no-brainer. Well, let’s say: it should be a no-brainer… when gathering with some infosec people from a >100K user environment recently I learned they still have about 30% of users with local admin privs…
[and don’t get me wrong here: there might be good reasons for this. and those guys are not the only ones with such a landscape.]
To be discussed in more detail at another occasion.

d) Disable the WebClient service.

While I like this very much – given it’s a preventative control [“minimal machine approach”] and, btw, addresses other (past and potential future) vulnerabilities as well, e.g. those of MS10-045 published one week ago – it should be noted that this potentially breaks MS SharePoint (and other stuff as well).
So, unfortunately, again this will not be feasible in quite some environments. Still, it might remind people that WebDAV is a technology that can be (ab)used to access “network drives” in completely untrusted/untrustworthy locations.

e) Block outgoing SMB traffic

Well, yes…

The MS advisory, updated today, provides another one:

f) Blocking the download of LNK and PIF files

Again, this is soo obvious that I refrain from any comment.

Those of you regularly following this blog or our public statements on desktop security (like this one) might have noticed that – as so often – the two controls quite some organizations mainly rely on are not mentioned here:

Patching: for the simple reason that there’s no patch as of today (and the imp has been out of the bottle, to public knowledge, for five days now).

Antimalware “protections”: not sure how these could prevent downloading and executing arbitrary binary code. The announcements of major AV vendors (“we now have a signature for this”) mostly address the Stuxnet stuff found in the initial exploit, nothing else. So this is mostly window dressing.

So far so good (or bad), the main point of this post is – and yes, I’m aware I needed a long warm-up today 😉 – the following: the security problem discussed here can be broken down to: “there’s a vulnerability that triggers an exploit that goes somewhere, downloads some code and executes it”.

Which, in turn, raises the fundamental question: why should some average corporate desktop computer be allowed to go to some arbitrary location, download code and – above all – run this code?
Restricting where executable code may come from or, even better, just allowing a specified set of applications to run could (more or less) easily solve the kind of problems vulnerabilities like this one impose. And the technologies needed – in MS space: Software Restriction Policies, AppLocker or just (as part of SRPs) “path rules” restricting where executables may be loaded from – have been available for a long time.
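To illustrate the concept only (real enforcement belongs in the OS, via SRP/AppLocker, not in user-land scripts), execution control via whitelisting boils down to something like this toy example:

```
# Toy illustration of execution control via a whitelist of binary hashes.
# Real-world enforcement happens in the OS (SRP/AppLocker); this merely
# demonstrates the idea: authorized binaries run, everything else doesn't.
import hashlib
import subprocess
import sys

ALLOWED_SHA256 = {
    # hashes of binaries the organization has explicitly authorized
    # (placeholder value: the SHA-256 of an empty file)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def run_if_whitelisted(path: str) -> None:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in ALLOWED_SHA256:
        sys.exit(f"{path}: not on the whitelist, refusing to execute")
    subprocess.run([path], check=False)
```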

As Marcus Ranum is much more eloquent than me – especially when it comes to ranting where he’s nearly unbeatable 😉 – I allow myself to quote him literally, from the “Schneier-Ranum Face-Off” on “Is antivirus dead?”:

“Of course, most organizations don’t know (or haven’t got the courage to discover) what programs they allow–and, ultimately, isn’t that the root of their security problems? When I read the security news and hear that thus-and-such government agency is trying to decide if Facebook is a necessary application, it makes my head spin. In Marcus-land, where I come from, you decide what is a necessary application first, not after you have 40,000 employees who have gotten so used to it that they now think Twitter is a constitutionally protected right. Isn’t a virus or malware just unauthorized execution that someone managed to sneak onto your machine? If we adopt a model whereby there are programs that are authorized (i.e., on a whitelist) and the operating system should terminate everything else, then malware and viruses are history, unless their authors can somehow fool the administrator into authorizing them to run.
[…]
Whenever I talk about execution control/whitelisting with corporate types, someone says, ‘But we don’t really have a way of determining all the applications that we use!’ Really? Wow. That sounds like a policy that’s basically, ‘We have no idea what our computers are for.’ In other words: ‘We’ve given up, and as far as we’re concerned, our computers are an unmanaged mess.’ Or to put it another way, malware heaven. Can anyone even calculate the cost of malware and viruses (as well as the occasional office time spent playing online games) to businesses? That cost, ultimately, is paid solely in order to avoid the difficulty of determining what programs are authorized — what’s the purpose of the computer an employee is provided to use?

Here’s why I keep talking about execution control: it’s actually ridiculously easy compared to dealing with antivirus and antimalware. So why isn’t everyone doing it? Because it’d dramatically cut down on our ability to goof off. If executives knew how easy it was to cut back on productivity-wasting goof-off-ware, don’t you think it would be happening all over the place by now? If, instead, we tell them it’s hard to know what executables we use in the office…well, what nobody knows won’t hurt anyone.”

Well said, Marcus. Nothing to add here.

So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops. This might contribute to a bit more sustainable security in your environment, one day in the future.

thanks for your time,

Enno


Our Favorite Subject: [It’s all about] Risk

Some days ago my old friend Pete Herzog from ISECOM posted a blog entry titled “Hackers May Be Giants with Sharp Teeth” here which – along with some quite insightful reflections on the way kids perceive “bad people” – contains his usual rant on (the uselessness of) risk assessment.
Given that this debate (whether taking a risk-based infosec approach is a wise thing or not) is a constant element of our – Pete’s and mine – long lasting relationship I somehow feel enticed to respond 😉

Pete writes about a 9-yr-old girl who – when asked about “bad people” – stated that those “look like everybody else” and, referring to risk assessment, he concludes “that you can’t predict the threat reliably and therefore you can’t determine it.”
I fully agree here. I mean, if you _could_ predict the threat reliably, why perform risk assessment at all? Taking decisions would simply be a matter of following a path of math. Unfortunately most infosec practitioners have neither a crystal ball – at least not a dependable one – nor the future-prediction capabilities this entity seems to have…
So… as long as we can’t “determine reliably” we have to use … risk assessments. “Risk” deals with uncertainty. Otherwise it would be called “matter of fact” 😉
Here’s how the “official vocabulary of risk management” (ISO Guide 73:2009) defines risk: “effect of uncertainty on objectives”.
Note that central term “uncertainty”? That’s what risk is about. And that’s, again, what risk assessments deal with. Deal with uncertainty. In situations where – despite that uncertainty – well-informed decisions have to be taken. Which is a quite common situation in an ISO’s professional life 😉
Effective risk assessment helps answering questions in scenarios characterized by some degree of uncertainty. Questions like “In which areas do we have to improve in the next 12 months?” in (risk assessment) inventory mode or “Regarding some technology change in our organization, which are the main risks in this context and how do they change?” in governance mode. [See this presentation for an initial discussion of these terms/RA modes].
So asking for “reliable threat prediction/determination” from risk assessments is just not the right expectation. In contrast, structured RA can certainly be regarded as the best way to “take the right decisions in complex environments and thereby get the optimal [increase of] security posture, while being limited by time/resource/political constraints and, at the same time, facing some uncertainty”.

Btw: the definition from ISO 73:2009 – which is also used by the recently published ISO 31000 (Risk management — Principles and guidelines) – nicely shows the transformation the term “risk” has undergone in the last decade: from “risk = combination of probability and consequence of an event [threat]” in ISO 73:2002, through ISO 27005:2008’s inclusion of a “vulnerability element” (called “ease of exploitation” or “level of vulnerability” in the appendix), to the one in ISO 73:2009 cited above (which, for the first time, does not only focus on negative outcomes of events but considers positive outcomes as well – which in turn reflects the concept of “risk & reward” increasingly used in some advanced/innovative infosec circles, to be discussed in this blog at another occasion).
Most (mature) approaches used amongst infosec professionals currently follow the “risk = likelihood * vulnerability * impact” line. We, at ERNW, use this one as well.

Which brings me to the next inaccuracy in Pete’s blog entry. He writes: “Threats are not the same for everyone nor do they actually effect us all the same. So why do we put up with risk assessments?”.

Indeed, (most) threats _are_ the same for everyone. Malware is around, hardware fails from time to time and humans make errors. The point is that all this does _not_ affect everyone “the same way” (those with the right controls will not get hit hard by malware, those with server clusters will survive failing hardware more easily, those with evolved change control processes might fare better when it comes to the consequences of human error). And all this is reflected by the – context-specific – “vulnerability factor” in risk assessments (and, for that matter, sometimes by the “impact factor” as well). So while the threats might be the same, the _associated risks_ might/will not be.
which, again, is the exact reason for performing risk assessments ;-))
If they _were_ the same, one would just have to look up some table distributed by $SOME_INDUSTRY_ASSOCIATION.

So, overall, I’m not sure that I can follow Pete’s line of arguments here. Maybe we should have a panel discussion on this at next year’s Troopers 😉

have a good one everybody,

thanks

Enno


News from Old Friends, Edition 2010/06/09

This is the first post of a – potential – series of rants on ubiquitous pieces of crap (security-wise), bothering pretty much every ISO I know.
I’m talking about “common desktop applications” and today’s topic is going to be the beloved Adobe Flash Player. Some of you who had the opportunity (or imposition 😉) to listen to one of my talks covering the “modern enterprise security space” (e.g. this one) might remember me saying sth like “If a fairy godmother turned up and asked me for three things to get rid of in order to enhance overall corporate information security in a sustainable way, my answers would be…” and then giving Adobe Flash as the first mention. (Before you ask: amongst the other candidates are Apple QuickTime, Windows GDI and “JavaScript in Acrobat Reader”.)

And, yes, I can already hear all the yelling: “But we absolutely need Flash on our corporate desktops.” Maybe that’s really, really the case. Maybe not. I’ve fought that fight in many environments, and usually lost it. Kind-of been there, done that. I’d just like to point out that – from a security point of view – this is a risky thing.
On a personal level I still do not get why Flash is needed. I can certainly be regarded as a “typical executive user”, being online most parts of the day and performing all sorts of (what I think are) “typical actions” like travel booking, online financial services etc. All this can be done with my 64-bit browser which just has no associated Flash player. Seems my mileage as for “corporate browser use” still varies from the one in many of those – “we absolutely need Flash on our corporate desktops” – organizations…
And even if your company’s marketing dept is powerful enough to ask for a large-scale deployment of that fancy technology (some of you certainly know the “We have our own YouTube channel” argument), I still have to understand why it’s needed on the desktops in the engineering or R+D departments. But oh well…

Still, all this ranting is a bit outside the intended scope of this post. Actually the trigger for the post was the advisory titled “Security Advisory for Flash Player, Adobe Reader and Acrobat”, released by Adobe some days ago.
Here’s a little quote from the summary:

“A critical vulnerability exists in the current versions of Flash Player […] for Windows, Macintosh, Linux and Solaris operating systems, and the authplay.dll component that ships with Adobe Reader and Acrobat v9.x for Windows, Macintosh and UNIX operating systems. This vulnerability […] could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild via limited, targeted attacks against Adobe Reader v9 on Windows.”

Oops, sorry, in fact the quote above was from this advisory, initially released on July 22, 2009.
The current one (from 06/04/2010) goes like this (as for the summary):

“A critical vulnerability exists in Adobe Flash Player 10.0.45.2 and earlier versions for Windows, Macintosh, Linux and Solaris operating systems, and the authplay.dll component that ships with Adobe Reader and Acrobat 9.x for Windows, Macintosh and UNIX operating systems. This vulnerability (CVE-2010-1297) could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild against both Adobe Flash Player, and Adobe Reader and Acrobat.”

Note the difference?
There’s practically none: same products affected, same component to blame, same workaround [deactivating authplay.dll], same “Adobe quality assurance element” [discovery of the stuff being exploited in the wild] responsible for the public statement.
In short: SSDD.

Mitigation Approaches

Given I try to be a responsible citizen [and, for that matter, a responsible security practitioner too ;-)] I’d like to discuss potential approaches for efficiently mitigating the risk of being attacked (“actively in the wild”) due to (not only) this vulnerability.
At ERNW, for many years we’ve been using sth we call “The small catechism of IT security”, essentially a set of simple fundamental rules for securing complex systems. This piece includes, amongst others, these ones:

Minimal Machine.
Least Privilege.
Patching.

Following these lines, some approaches come to mind and I’ll discuss some of them.

a) Do not run Flash at all. Yes, we had this discussion already. And, no: I do not live in an ivory tower. And I mainly consult to very large organizations.
Sure, this might be one of the fights you (as an ISO) just can’t win. But, heck, I still dare to post this on our very personal and ranty blog: Running Flash on corporate desktops is simply asking for trouble. Asking for trouble loudly. Very loudly.

It should be noted that, according to this, removing Adobe Flash (e.g. in the way described here) will not remove the instances of Flash Player that are installed with Adobe Reader 9 or other Adobe products.

There are always a lot of trade-offs in managing complex IT environments. There are business requirements – and, as we security folks know, business pretty much always wins (which is fully ok, as security is not the most important thing in corporate life) – there are “cost considerations”, all sorts of politics, and at the end of the day there’s our mission of getting the best possible security stance given all these considerations and trade-offs. Running vulnerable software to provide some business function (while at the same time inducing the risk of getting owned) obviously is such a trade-off, and a common one.

As for Flash one should just be aware that – in most environments – there’s only little business value in running it, but – in all environments – there’s quite some associated risk.

b) Do not run Flash embedded in PDFs (by deactivating authplay.dll as described in the advisories).

I think this is – security-wise – a very feasible approach (following that good ole security principle called “minimal machine”). The only problem might be that the stuff gets re-deployed/re-enabled the next time you patch Adobe Reader, so operational processes might have to be adjusted to ensure it does not re-appear.
And, of course, this is an ugly one (deleting a dll), which might not be “aligned with your sw management and deployment processes” 😉
This document mentions that deleting another dll as well avoids the crashes when invoking a file with SWF code in it. Haven’t tested this though.
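For completeness, a small sketch of scripting the workaround (the install path below is an assumption for a default Adobe Reader 9 setup on Windows – verify it in your environment before deploying anything):

```
# Sketch: neutralize authplay.dll by renaming it, per the advisory's
# workaround. The path assumes a default Adobe Reader 9 install; note
# that the file may reappear with the next Reader update (see above).
from pathlib import Path

authplay = Path(r"C:\Program Files\Adobe\Reader 9.0\Reader\authplay.dll")
if authplay.exists():
    authplay.rename(authplay.with_name("authplay.dll.disabled"))
```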

Btw: this is a preventative control. Whereas patching is a reactive one. Most probably I don’t have to tell any reader of this blog that preventative controls tend to have a better cost-impact ratio than reactive ones, do I? 😉

c) Patching. Hmm, unfortunately there is no patch as of today. And the stuff is “exploited in the wild” (Adobe, thank you for letting us know, once more. What about just adding a checkbox somewhere in “Preferences” that allows disabling embedded SWF content altogether?).
Furthermore patch cycles for Adobe products are quite long in most environments (due to the number of integration aspects and side effects).

So, dear reader who’s still sympathetic to patching (as for Adobe stuff): do not pass go, do not collect $200, but maybe re-read the last sentences of the two former points.

d) Use of an alternative PDF reader, like Foxit Reader. Looking at this, I’m not sure it’s really better (security-wise), and most probably it’s not an option for most corporate environments anyway (for reasons outside the security realm).

e) Security measures/approaches from the “least privilege” space, like running Adobe stuff at a low integrity level (on Windows systems that have integrity levels, that is Vista or Windows 7). While this can certainly help and can be regarded as a nice preventative control, it has the big disadvantage that taking the route of “least privilege” usually has: added complexity and high operational cost… (which is, btw, why it practically never works out to a satisfactory degree).

f) Gateway-based controls. In a number of environments there will be quite some praying that “our malicious content protection saves us”. This may happen. Or not. Taking the “detective/reactive way” (which is what most anti-malware controls do) has well-known weaknesses…
Sanitizing Flash (like Blitzableiter does) could be a much better approach. Hopefully technologies like this will gain some deployment in the near future.

And hopefully in the upcoming world of HTML5 we won’t see that high-risk software piece called Flash Player anymore (alas, experience tells us there will be other, similarly awful stuff. But that’s another story…)

have a great day,

Enno
