Events

Security Team 2.0

I’m currently catching up on a lot of papers and presentations from the USENIX Security Symposium in order to finish the blog post series I started last week (summarizing WOOT and LEET). One presentation, which unfortunately is not available online [edit: see also the update below, videos are available now], included several particularly relevant messages that I want to share in this dedicated post. Chris Evans, the head of the Google Chrome security team (herein short: GCST), described some new approaches they employed for their security team operations, some lessons learned, and how others can benefit from them as well (actually, the potential of these messages to make the world a safer place was my motivation to write this post, even though I got teased for supposedly being a Google fanboy 😉 ):

Fix it yourself

The efficient security work carried out by the GCST could not be achieved if the members of the team did not also have backgrounds as software developers/engineers/architects or in operations. This shifts the character of the GCST’s work from “consulting” to “engineering” and enables the team to commit actual code changes instead of just advising the developers on how to fix open issues (see also the next item). For the consulting work I do (and for assessments anyway 😉 ), I follow the same approach: when facing a certain problem set, have a look at the technological basics. Reading {code|ACLs|the stack|packets} helps in most cases to get a better understanding of the big picture as well.

Remove the middle man

The Google security work is carried out in a very straightforward way: all interaction is performed directly in a (publicly accessible, see later items) bug tracking system. This reduces management overhead and ensures direct interaction with the community as well. The associated process is very streamlined: each reported bug is assigned to a member of the GCST, who is then responsible for fixing it ASAP.

Be transparent

The bug tracking system is used for externally reported bugs as well as for internally discovered ones. This ensures a high level of transparency of Google’s security work and increases the level of trust users put into Chrome (transparency is also an important factor in the trust model we use). In addition, the practice of keeping found vulnerabilities secret and patching silently should be outdated anyway…

Go the extra mile

The subtext of this item basically was “live your marketing statements”. As ERNW is a highly spirit-driven environment, we can fully endorse this point. Without our spirit (a big thx to the whole team at this point!), the “extra mile” (or push-up, pound, exploit, PoC, …) would not be possible. Yet this spirit must be supported and lived by the whole company: starting at the management level, which supports and approves it, down to every single employee who loves her/his profession (and can truly believe in making the world a safer place). As for Google, Chris described a rather impressive war story on how they combined some very subtle details from a PoC YouTube video of a Chrome exploit, published without further details, in order to find the relevant bug. (Nice quote in that context: “Release the Tavis” 😉 )

Celebrate the community

… and don’t sue them 😉 I don’t think there’s much to say on this item, as you apparently are a reader of this blog. However, you now have an official resource when it comes to discussions about whether you can disclose certain details about a security topic.
I think the messages listed above are worth incorporating into daily security management and operations, and there is even some proof that they worked for Google, so they may improve your work as well.

Have a good one,
Matthias


Quick Update: All videos, including this talk, are now available.

Continue reading
Breaking

Vulnerabilities & attack vectors of VPNs (Pt 1)

This is the first part of an article that will give an overview of known vulnerabilities and potential attack vectors against commonly used Virtual Private Network (VPN) protocols and technologies. This post will cover vulnerabilities and mitigating controls of the Point-to-Point Tunneling Protocol (PPTP) and IPsec; the second post will cover SSL-based VPNs like OpenVPN and the Secure Socket Tunneling Protocol (SSTP). As surveillance of Internet communications has become an important issue, another security goal has become explicitly desirable besides the traditional goals of information security (typically referred to as confidentiality, integrity, and authenticity): Perfect Forward Secrecy (PFS). PFS is achieved if the initial session-key agreement generates unique keys for each session. This ensures that even if the long-term private key were compromised, older sessions (which an attacker may have captured) can’t be decrypted. The concept of PFS will be covered in the second post.
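The mechanism behind PFS can be illustrated with ephemeral Diffie-Hellman: both peers generate fresh key pairs per session and discard them afterwards. Below is a toy sketch of the idea; the tiny prime and generator are for demonstration only (real deployments use standardized 2048-bit+ groups, e.g. from RFC 3526, or elliptic curves).

```python
import secrets

# Toy illustration of ephemeral Diffie-Hellman, the mechanism behind PFS.
P = 0xFFFFFFFB  # small prime for demonstration -- NOT secure
G = 5

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for a single session."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def session_key(own_priv, peer_pub):
    """Derive the shared session key from our secret and the peer's public value."""
    return pow(peer_pub, own_priv, P)

# Two sessions between the same peers yield independently generated keys:
a1, A1 = ephemeral_keypair(); b1, B1 = ephemeral_keypair()  # session 1
a2, A2 = ephemeral_keypair(); b2, B2 = ephemeral_keypair()  # session 2

assert session_key(a1, B1) == session_key(b1, A1)  # both peers agree (session 1)
assert session_key(a2, B2) == session_key(b2, A2)  # both peers agree (session 2)
# The ephemeral secrets are discarded after each session, so a later
# compromise of a long-term key reveals nothing about old session keys.
```

The point is that no long-term secret ever enters the key derivation, so recorded traffic stays undecryptable even after a key compromise.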
Continue reading “Vulnerabilities & attack vectors of VPNs (Pt 1)”

Continue reading
Building

SLES 11 Hardening Guide

SUSE Linux Enterprise Server (SLES) has been around since 2000. As it is designed to be used in an enterprise environment, the security of these systems must be kept at a high level. SLES implements a lot of basic security measures that are common in most Linux systems, but are these enough to protect your business? We think that with a little effort you can raise the security of your SLES installation considerably.

We have compiled the most relevant security settings in a SLES 11 hardening guide for you. The guide is supposed to provide a solid base of hardening measures. It includes configuration examples and all necessary commands for each measure. We have split the measures into three categories: Authentication, System Security and Network Security. These are the relevant parts to look for when hardening a system. The hardening guide also includes lists of default services that will help to decide which services to turn off, which is an essential step to minimize the attack surface of your system.
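After turning off unneeded services, it helps to verify which TCP ports still accept connections. A minimal sketch (the host and port list are illustrative, not taken from the guide):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Example: check a few services that are often enabled by default.
# print(open_ports("127.0.0.1", [21, 22, 23, 25, 80, 111, 631]))
```

Anything the sketch reports that is not on your required-services list is a candidate for disabling or firewalling.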

See all of the steps that we compiled for you in our hardening guide for SLES 11: ERNW_Checklist_SLES11_Hardening.pdf

Continue reading
Events

WOOT’13

Continuing yesterday’s post, I compiled a short summary of relevant WOOT’13 presentations.


Truncating TLS Connections to Violate Beliefs in Web Applications
Ben Smyth and Alfredo Pironti, INRIA Paris-Rocquencourt

This presentation was also given at BlackHat some weeks ago. It outlines a very interesting class of attacks against web applications abusing the TLS specification, which states that “failure to properly close a connection no longer requires that a session not be resumed […] to conform with widespread implementation practice”. This characteristic enables new attack vectors on shared systems where certain outgoing (TLS-encrypted) packets can be dropped in order to prevent applications from correctly finishing transactions (such as log-out procedures) or even to modify request bodies by dropping their last parts.
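The application-layer effect of such a truncation can be sketched in a few lines: if the records carrying the tail of a request body are dropped and the endpoint processes what it received, trailing parameters silently vanish. The parameter names below are made up for illustration:

```python
from urllib.parse import parse_qs

# A form body whose last parameter acts as a safety check.
body = "action=unsubscribe&account=alice&confirm=true"

full = parse_qs(body)
truncated = parse_qs(body[: body.rfind("&")])  # tail "lost" in transit

assert full.get("confirm") == ["true"]   # intact request carries the check
assert "confirm" not in truncated        # truncated request silently lacks it
```

An application that treats a missing `confirm` as implicit consent, or that never sees the log-out request at all, ends up in a state the user did not intend.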

Continue reading “WOOT’13”

Continue reading
Events

LEET’13

I have the pleasure of visiting this year’s USENIX Security Symposium in Washington, DC. Besides the nice venue close to the National Mall, there are also several co-located workshops. Every night I will try to provide a summary of the presentations I regard as most interesting. However, I hope I manage to keep up with it, as there are a lot of interesting events, people to meet, and still some projects to work on. The short summaries below are from the 6th USENIX Workshop on Large-Scale Exploits and Emergent Threats.

Continue reading “LEET’13”

Continue reading
Breaking

MFD Vulnerabilities

A recent post describing some nasty vulnerabilities in HP multifunction devices (MFDs) brings back memories of a presentation Micele and I gave at Troopers11 on MFD security. The published vulnerabilities are highly relevant (such as unauthenticated retrieval of administrative credentials) and reminded me of some of the basic recommendations we gave. MFD vulnerabilities are regularly discovered, and it is often basic stuff such as hardcoded $SECRET_INFORMATION (don’t get me wrong here, I fully appreciate the quality of the published research, but it is just surprising — let’s go with this attribute 😉 — that those types of vulnerabilities still occur that often). Yet many environments do not patch their MFDs or implement other controls. As it is not an option to simply not use MFDs (they are already present in pretty much every environment, and the vast majority of vendors periodically suffer from vulnerabilities), let’s recall some of our recommendations, as those would have mitigated the risk resulting from the published vulnerabilities:

  • Isolation & Filtering: Think about a dedicated MFD segment, where only ports required for printing are allowed incoming. I suppose 80/8080 would not have been in that list.
  • Patching: Yes, also MFDs need to be patched. Sounds trivial, yet it does not happen in many environments.

One recommendation we did not initially come up with is dedicated VIP MFDs, but this is something we have actually observed in the interim. As MFDs process a good part of the information in your environment — hence also sensitive information — some environments operate dedicated VIP MFDs which are only used by/exposed to board members or the like. (As a side note, many MFDs also save all print jobs on the internal hard drive and do not delete them in a secure way. For example, we also mentioned in our presentation that the main MFD once used in our office kept copies of everything ever printed/scanned/faxed on it.)

Referring to some other posts: We told ya 😉

Have a good one,

Matthias

Continue reading
Breaking

Cross-Site Request Forgery with Cross-Origin Resource Sharing

During one of our last projects in a large environment, we encountered an interesting flaw. Although it was not possible to exploit it in this particular context, it’s worth mentioning here. The finding was about Cross-Site Request Forgery (CSRF), a quite well-known attack that forces a user to execute unintended actions within the authenticated context of a web application. With a little help from social engineering (like sending a link via email or chat, embedding code in documents, etc.), an attacker may force the user to execute actions of the attacker’s choice.
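The classic mitigation is a synchronizer token: the server ties an unguessable token to the user’s session and rejects state-changing requests that do not echo it back. A minimal sketch of the idea (function names and the key handling are simplified for illustration, not taken from the project mentioned above):

```python
import hmac
import hashlib
import secrets

# Per-deployment secret used to derive per-session CSRF tokens.
SERVER_KEY = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    """Derive the CSRF token bound to a given session."""
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, token: str) -> bool:
    """Constant-time check of a submitted token against the session's token."""
    return hmac.compare_digest(csrf_token(session_id), token)

tok = csrf_token("session-1234")
assert is_valid("session-1234", tok)           # legitimate request passes
assert not is_valid("session-1234", "forged")  # a forged request is rejected
```

Since the attacker’s page cannot read the token out of the victim’s session, a cross-site request arrives without a valid token and is refused.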
Continue reading “Cross-Site Request Forgery with Cross-Origin Resource Sharing”

Continue reading
Events

IPv6 Hackers Meeting @ IETF 87 in Berlin / Slides

That meeting was actually a great event. Once more, big thanks! to Fernando for organizing it and to EANTC for providing the logistics.
A couple of unordered notes to follow:

a) The slides of our contribution can be found here. Again, pls note that this is work in progress and we’re happy to receive any kind of feedback.
[given Fernando explicitly mentioned Troopers, we’ve allowed ourselves to put some reference to it into this version of the slide deck…]

b) the scripts Stefan is currently putting together will be released here once they’ve undergone more testing ;-).

c) Sander Steffann mentioned that Juniper SRX models do have IPv6 support for management protocols. According to this link this seems somewhat correct.

d) we had that discussion about which ASA inspects work with IPv6.
Here’s a link providing some info for the 8.4 software releases; this is the respective one for 9.0.

e) I was really impressed by the work performed by these guys and I think that ft6 (“Firewalltester for IPv6”) is a great contribution to the IPv6 security (testing) space.
And, of course, Marc’s latest additions to THC-IPV6 shouldn’t go unnoticed ;-). And I learned he can not only code, but cook as well.

===

Eric Vyncke commented “To be repeated”. We fully second that ;-).

thanks

Enno

Continue reading
Building

Basic OS X Hardening & DMA

In the course of a recent endpoint assessment, we also had an OS X 10.8 client system as a target. While we still rely on the Firewire “capability” of unlocking systems on a regular basis (using this great tool), we noticed that Apple released a patch to disable Firewire DMA access whenever the system is in a locked state (e.g. with an active screensaver or no user logged in). As we test for the Firewire DMA access vulnerability quite often (at least we thought so 😉 ) to prepare for demonstrations in the board room or client assessments, we were quite surprised that we must have missed that nice update. In order to verify the effectiveness of the patch, we ran our typical test bed and can quite happily confirm that the update successfully mitigates Firewire DMA access in locked system states.

Besides breaking into unpatched OS X clients using Firewire DMA access ;-), we also noticed a lack of hardening guides for Apple’s current OS X version 10.8, so we compiled a basic checklist of OS X hardening measures which we want to share with you:
ERNW_Checklist_OSX_Hardening.pdf

Enjoy,
Matthias

Continue reading