Last week we had the opportunity and pleasure to present some of our research results at BlackHat US 2014 (besides meeting a lot of old friends and having a great researchers’ dinner).
Enno and Antonios gave their presentation on IDPS evasion by IPv6 Extension Headers, described here.
The material can be found here: Slides, tools (the main tool used was Chiron, authored by Antonios) & whitepaper.
Ayhan and I presented our results of the security analysis of Cisco’s EnergyWise protocol. The protocol enables network-wide power monitoring and control (i.e. turning servers on or off, putting phones into standby; basically controlling the power state of all EnergyWise-enabled or PoE devices). The main problem (besides a DoS vulnerability we found in IOS, see the official Cisco advisory) is its PSK-based authentication model, which enables an attacker to cause large-scale blackouts in data centers if the deployment lacks certain controls (for example our good old favorite, segmentation…). There will be a longer blog post/newsletter on this topic soon.
The material can be found here: Slides & tools
In the course of our virtualization research, we came across a technical issue for which we couldn’t find an easy solution in knowledge bases and the like. However, as we found the question asked several times on the web, the following post gives a short hint on this technical detail.
If you want to connect two virtual machines in VMware Fusion using a serial port (e.g. for debugging purposes), Fusion doesn’t provide a GUI option to configure that. However, you can simply add the corresponding serial port configuration to the debugger system’s VMX file:
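A minimal sketch of what such an entry could look like, assuming the first free serial device and a named pipe under /private/tmp (device index, pipe path, and endpoint role are assumptions and need to be adapted to your setup; the comment lines are for illustration only and can be removed if Fusion complains):

    # hypothetical sketch for the debugger VM's .vmx file
    serial0.present = "TRUE"
    serial0.fileType = "pipe"
    # assumed pipe path; any path both VMs can reach works
    serial0.fileName = "/private/tmp/com1"
    serial0.startConnected = "TRUE"
    # the target/debuggee VM would use "server" here instead
    serial0.pipe.endPoint = "client"
    serial0.tryNoRxLoss = "TRUE"
    serial0.yieldOnMsrRead = "TRUE"

Both VMs should be powered off while editing their VMX files; after powering them on again, the guests should see a standard serial port that can then be used, for example, as a kernel debugging channel.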
During a recent research project we performed an in-depth security assessment of Microsoft’s virtualization technologies, including Hyper-V and Azure. While we already had experience in discovering security vulnerabilities in other virtual environments (e.g. here and here), this was our first research project on the Microsoft virtualization stack and we took care to use a structured evaluation strategy to cover all potential attack vectors.
Part of our research concentrated on the Hyper-V hypervisor itself and we discovered a critical vulnerability which can be exploited by an unprivileged virtual machine to crash the hypervisor and potentially compromise other virtual machines on the same physical host. This bug was recently patched, see MS13-092 and our corresponding post.
Continuing our tradition from last year (see here and here), we have summarized more of our hardening recommendations for you. This guide covers Tomcat 7 and is intended to provide a solid base of hardening measures. It includes configuration examples and all necessary commands for each control, specifically for the most recent branch of Tomcat, as there have been some significant changes. Download: ERNW_Checklist_Tomcat7_Hardening.pdf
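To give an idea of the type of control covered, here is a sketch of one common Tomcat hardening measure (an illustrative example, not necessarily quoted from the checklist): disabling the TCP shutdown port in conf/server.xml so that a local attacker cannot stop the container by sending the shutdown command to port 8005.

    <!-- illustrative sketch: a port value of -1 disables the shutdown listener entirely -->
    <Server port="-1" shutdown="SomeNonDefaultValue">
      <!-- services, connectors, etc. remain unchanged -->
    </Server>

If the shutdown port is actually needed, at least the default shutdown string should be changed to a non-guessable value.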
First of all, I hope you all had a good start to 2014. Having some time off “between the years” (which is a German saying for the time between Christmas and NYE), I caught up on several virtualization security topics.
While virtualization is, as of 2014, widely accepted as a sufficiently secure technology in many areas of IT operations (even for sensitive applications or exposed systems such as DMZs), there are several recent vulnerabilities and incidents that are worth mentioning.
First of all, a rather old vulnerability (codename “VMDK Has Left the Building”) was eventually patched by VMware, the day before Christmas Eve (honi soit qui mal y pense… 😉 ). While the initially described file inclusion vulnerability cannot be exploited anymore, first tests in our lab show that attempts to exploit it lead to a complete freeze of the shared ESXi host. We still need to dig deeper into the patch and will keep you posted.
On November’s Patch Tuesday, an important vulnerability in Hyper-V was patched by Microsoft. The bulletin does not provide a lot of detail about the vulnerability, but the relevant sentence is this one: “An attacker who successfully exploited this vulnerability could execute arbitrary code as System in another virtual machine (VM) on the shared Hyper-V host.” This does not allow code execution in the hypervisor itself. However, Hyper-V’s architecture comprises the so-called root partition, a privileged virtual machine used for all kinds of management functionality, so code execution in this particular virtual machine will most probably still give an attacker complete control over the hypervisor. Even without the root partition aspect, this would be one of the worst-case vulnerabilities in the age of cloud computing, provided that MS Azure employs Hyper-V (which can be considered a fair assumption, even though we have no definite knowledge here). Again, we’ll have a closer look at this one in the near future.
At the end of December, OpenSSL suffered from a virtualization-related incident: the shared hypervisor was compromised using a weak password at the hosting provider. While password-related attacks are not specific to virtualized environments, the incident emphasizes the need for secure management practices for virtualization components. This sounds like a very basic recommendation, but many security assessments we conducted in this space led us to include “attacks against management interfaces” in the top ERNW virtualization risks, which we cover in our virtualization and cloud security workshops and have also mentioned in several presentations and research results.
As the described events show, virtualization security will remain an important topic in 2014 (even though marketing material suggests simply adopting virtualization – I won’t give any links here, you’ve probably already seen plenty 😉 ). We will cover several aspects during this year’s Troopers edition. While our workshop on “Exploiting Hypervisors” is already online (for the detailed description, see here), one talk is still missing: due to some rather strict NDAs, we can’t provide any details so far (but if you’ve read the MS13-092 credits carefully, it shouldn’t be too hard to guess 😉 ).
I hope you’re looking forward to 2014 as much as I am. Stay tuned,
I’m currently catching up on a lot of papers and presentations from the USENIX Security Symposium in order to finish the blog post series I started last week (summarizing WOOT and LEET). One presentation, which unfortunately is not available online [edit: see also the update below, videos are available now], included several particularly relevant messages that I want to share in this dedicated post. Chris Evans, the head of the Google Chrome security team (herein short: GCST), described some new approaches they employ in their security team operations, some lessons learned, and how others can benefit from them as well (actually, the potential of these messages to make the world a safer place was my motivation to write this post, even though I got teased for supposedly being a Google fanboy 😉 ):
Fix it yourself
The efficient security work carried out by the GCST would not be possible if the members of the team did not also have a background as software developers/engineers/architects or in operations. This changes the character of the GCST’s work from “consulting” to “engineering” and enables the team to commit actual code changes instead of just advising the developers on how to fix open issues (refer also to the next item). For the consulting work I do (and for assessments anyway 😉 ), I follow the same approach: when facing a certain problem set, have a look at the technological basics. Reading {code|ACLs|the stack|packets} helps in most cases to get a better understanding of the big picture as well.
Remove the middle man
The Google security work is carried out in a very direct way: all interaction is performed in a (publicly accessible, see the later items) bug tracking system. This reduces management overhead and ensures direct interaction with the community as well. The associated process is very streamlined: each reported bug is assigned to a member of the GCST who is then responsible for fixing it ASAP.
Be transparent
The bug tracking system is used for externally reported bugs as well as for internally discovered ones. This ensures a high level of transparency of Google’s security work and increases the level of trust users put in Chrome (transparency is also an important factor in the trust model we use). In addition, the practice of keeping found vulnerabilities secret and patching silently should be outdated anyway…
Go the extra mile
The subtext of this item basically was “live your marketing statements”. As ERNW is a highly spirit-driven environment, we can fully relate to this point. Without our spirit (a big thx to the whole team at this point!), the “extra mile” (or push-up, pound, exploit, PoC, …) would not be possible. Yet this spirit must be supported and lived by the whole company: starting at the management level, which supports and approves it, down to every single employee who loves her/his profession (and can truly believe in making the world a safer place). As for Google, Chris described a rather impressive war story on how they combined some very subtle details from a PoC YouTube video of a Chrome exploit, published without further information, in order to find the relevant bug. (Nice quote in that context: “Release the Tavis” 😉 )
Celebrate the community
… and don’t sue them 😉 I don’t think there’s much to say on this item as you apparently are a reader of this blog. However, you now have an official resource when it comes to discussions about whether you may disclose certain details about a security topic.
I think the messages listed above are worth incorporating into daily security management and operations, and there is even some evidence that they work for Google and hence may also improve your work.
Have a good one,
Matthias
Quick Update: All videos, including this talk, are now available.
Truncating TLS Connections to Violate Beliefs in Web Applications (Ben Smyth and Alfredo Pironti, INRIA Paris-Rocquencourt)
This presentation was also given at BlackHat some weeks ago. It outlines a very interesting class of attacks against web applications abusing the TLS specification, which states that “failure to properly close a connection no longer requires that a session not be resumed […] to conform with widespread implementation practice”. This characteristic enables new attack vectors on shared systems where certain outgoing (TLS-encrypted) packets can be dropped in order to prevent applications from, e.g., correctly finishing transactions (such as log-out procedures), or even to modify request bodies by dropping their last parts.
I have the pleasure of attending this year’s USENIX Security Symposium in Washington, DC. Besides the nice venue close to the National Mall, there are also several co-located workshops. Every night I will try to provide a summary of the presentations I regard as most interesting; I just hope I manage to keep up, as there are a lot of interesting events, people to meet, and still some projects to work on. The short summaries below are from the 6th USENIX Workshop on Large-Scale Exploits and Emergent Threats.
A recent post describing some nasty vulnerabilities in HP multifunction devices (MFDs) brings back memories of a presentation Micele and I gave at Troopers11 on MFD security. The published vulnerabilities are highly relevant (such as unauthenticated retrieval of administrative credentials) and reminded me of some of the basic recommendations we gave. MFD vulnerabilities are regularly discovered, and it is often basic stuff such as hardcoded $SECRET_INFORMATION (don’t get me wrong here, I fully appreciate the quality of the published research, but it is just surprising — let’s go with this attribute 😉 — that those types of vulnerabilities still occur that often). Yet many environments do not patch their MFDs or implement other controls. As it is not an option to not use MFDs (they are already present in pretty much every environment, and the vast majority of vendors periodically suffer from vulnerabilities), let’s recall some of our recommendations as those would have mitigated the risk resulting from the published vulnerability:
Isolation & Filtering: Think about a dedicated MFD segment, where only ports required for printing are allowed incoming. I suppose 80/8080 would not have been in that list.
Patching: Yes, MFDs need to be patched, too. Sounds trivial, yet it does not happen in many environments.
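To illustrate the filtering recommendation, here is a minimal sketch of an inbound ACL for such an MFD segment (Cisco IOS syntax; the segment addresses, port selection, and ACL name are assumptions and must be adapted to the actual environment and print protocols in use):

    ip access-list extended MFD-SEGMENT-IN
     remark hypothetical example: 10.10.10.0/24 is the assumed MFD segment
     remark TCP 9100 = raw/JetDirect, TCP 515 = LPD, TCP 631 = IPP
     permit tcp any 10.10.10.0 0.0.0.255 eq 9100
     permit tcp any 10.10.10.0 0.0.0.255 eq 515
     permit tcp any 10.10.10.0 0.0.0.255 eq 631
     remark everything else, including the web interfaces on 80/8080, is dropped
     deny ip any 10.10.10.0 0.0.0.255 log

The ACL would then be applied on the routing instance in the direction toward the MFD segment; whether additional ports (e.g. SNMP for monitoring or an administrative jump host for the web interface) are needed depends on the environment.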
One recommendation we did not come up with initially is dedicated VIP MFDs, but this is something we have actually observed in the interim. As the MFDs process a good part of the information in your environment, hence also sensitive information, some environments have dedicated VIP MFDs which are only used by/exposed to board members or the like. (As a side note, many MFDs also save all print jobs on the internal hard drive and do not handle them in a secure way. For example, we also mentioned in our presentation that the main MFD once used in our office kept copies of everything ever printed/scanned/faxed on it.)
In the course of a recent endpoint assessment, we also had an OS X 10.8 client system as a target. While we still rely on the Firewire “capability” of unlocking systems on a regular basis (using this great tool), we noticed that Apple released a patch to disable Firewire DMA access whenever the system is in a locked state (e.g. with an active screensaver or no user logged in). As we test the Firewire DMA access vulnerability quite often (at least we thought so 😉 ) to prepare for demonstrations in the board room or client assessments, we were quite surprised that we had actually missed that nice update. In order to verify the effectiveness of the patch, we ran our typical test bed and can quite happily confirm that the update successfully mitigates Firewire DMA access in locked system states.
Besides breaking into unpatched OS X clients using Firewire DMA access ;-), we also noticed a lack of hardening guides for Apple’s current OS X version 10.8, so we compiled a basic checklist of OS X hardening measures which we want to share with you: ERNW_Checklist_OSX_Hardening.pdf