Building

IPAM Requirements in IPv6 Networks

I recently had a discussion with some practitioners about requirements for IP Address Management (IPAM) solutions that are specific to IPv6 networks. We came up with the following:

Mandatory: Track all dynamic IPv6 assignments (SLAAC + privacy extensions, DHCPv6, etc.) by polling the neighbor caches of network devices. Support SNMPv3 for this task.
Optional (read: nice-to-have): support methods other than SNMP to gather this information (e.g. SSH-ing into devices and executing the appropriate “show” commands).
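As a rough illustration of the “show” command approach: gathering the neighbor cache essentially boils down to parsing tabular CLI output. The sketch below parses a hypothetical, Cisco-style “show ipv6 neighbors” output into (address, MAC, state, interface) tuples; the sample output and field layout are assumptions, and real formats vary by vendor and OS version.

```python
import re

# Hypothetical, Cisco-style sample output; real formats vary by vendor.
SAMPLE = """\
IPv6 Address                              Age Link-layer Addr State Interface
2001:db8:1::10                              0 0050.56aa.0001  REACH Gi0/1
fe80::250:56ff:feaa:1                       0 0050.56aa.0001  REACH Gi0/1
2001:db8:1:0:89ab:cdef:1234:5678           12 0050.56aa.0002  STALE Gi0/2
"""

def parse_neighbors(output):
    """Parse neighbor-cache entries into (ipv6, mac, state, interface) tuples."""
    entries = []
    for line in output.splitlines():
        # Address, age, link-layer address, NDP state, interface
        m = re.match(r"^([0-9a-fA-F:]+)\s+\d+\s+(\S+)\s+(\S+)\s+(\S+)$", line)
        if m:
            entries.append(m.groups())
    return entries
```

An IPAM backend would run this against each device's output and correlate the MAC addresses with switch-port data to meet the requirements above.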

Mandatory: Display connected switch port (incl. device name or CDP-type info) for all addresses.

Mandatory: Be able to sort addresses according to their categories, e.g. “show all SLAAC systems vs. all systems with DHCPv6 addresses”.
Optional: Be able to easily identify systems which have several types _simultaneously_ (e.g. “static + SLAAC address”, “SLAAC + DHCP managed address”).

Mandatory: Full support for RFC 5952 notation in all UIs (both entry and display of addresses).
Optional: be able to display addresses in other formats in reports or exported files (e.g. CSV files).
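As a side note for implementers: Python's standard ipaddress module already emits the RFC 5952 canonical text representation (lowercase hex digits, longest zero run compressed, leftmost run on a tie), and the fully expanded form is available for reports or exports:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0DB8:0000:0000:0001:0000:0000:0001")

canonical = str(addr)   # RFC 5952 canonical form for UI display
full = addr.exploded    # fully expanded form, e.g. for CSV exports
```

Here `canonical` is "2001:db8::1:0:0:1" and `full` is "2001:0db8:0000:0000:0001:0000:0000:0001".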

===

Hope that some of you might find this useful when reflecting on the topic; have a great day, everybody!

Enno

Continue reading
Events

DayCon VII

Some of us had the pleasure to participate in this year’s DayCon VII, three days of Real Hacking and Relevant Content, in Dayton, OH. The event began on September 16th with the Packetwars bootcamp. We had the chance to teach some really promising young students and to prepare them for the Packetwars battle that was scheduled four days later. The students had to go through topics like Windows security, network security and web application security, both in theory and in practice.

Continue reading “DayCon VII”

Building

HackRF – A Must-Have Gadget

Dear readers,

today we welcomed Michael Ossmann at the ERNW headquarters for an exclusive workshop on his HackRF gadget. Everybody was quite excited to get hands-on with this shiny piece of hardware, which is currently being crowd-funded on Kickstarter. For everybody who’s not familiar with Software Defined Radio (SDR): let’s regard it as the ultimate tool for working with radio signals.

Michael Ossmann in the house.

Let’s quote Michael’s campaign website:

Transmit or receive any radio signal from 30 MHz to 6000 MHz on USB power with HackRF. HackRF is an open source hardware project to build a Software Defined Radio (SDR) peripheral.

SDR is the application of Digital Signal Processing to radio waveforms. It is similar to the software-based digital audio techniques that became popular a couple of decades ago. Just as a sound card in a computer digitizes audio waveforms, a software radio peripheral digitizes radio waveforms. It’s like a very fast sound card with the speaker and microphone replaced by an antenna. A single software radio platform can be used to implement virtually any wireless technology (Bluetooth, ZigBee, cellular technologies, FM radio, etc.).

Continue reading “HackRF – A Must-Have Gadget”

Breaking

Some Security Impacts of HTML5 CORS or How to use a Browser as a Proxy

With HTML5, current web development is moving from server-side generated content and layout to client-side generation. Most so-called HTML5-powered websites use JavaScript and CSS to create beautiful and responsive user experiences. This ultimately leads to the point where developers want to include or request third-party resources. However, all current browsers prevent scripts from requesting external resources through a security feature called the Same-Origin Policy. This policy specifies that client-side code may only request resources from the domain it is executed from. This means that a script from example.com cannot load a resource from google.com via AJAX (XHR/XMLHttpRequest).

As a consequence, the W3C extended the specification with a set of special headers that allow cross-domain access via AJAX: the so-called Cross-Origin Resource Sharing (CORS). These headers belong to the Access-Control-Allow-* header group. A simple example of a server response allowing access to the resource data.xml from the example.com domain is shown below:

CORS Response

The most important header for CORS is the Access-Control-Allow-Origin header. It specifies from which domains access to the requested resource is allowed (in the previous example, only scripts from the example.com domain may access the resource). If the script is executed from another domain, the browser raises a security exception and drops the response from the server; otherwise, JavaScript routines can read the response body as usual. In both cases the request is sent and reaches the server. Consequently, server-side actions take place in both situations.

To prevent this behavior, the specification includes an additional step. Before sending the actual request, the browser has to send an OPTIONS request to the resource (the so-called preflight request). If the browser detects (from the response to the preflight) that the actual request would conflict with the policy, the security exception is raised immediately and the original request is never transmitted.
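For comparison, a safer server-side policy echoes only known-good origins back in Access-Control-Allow-Origin instead of using a wildcard. A minimal sketch (the allowlist contents and the function name are hypothetical, not part of the CORS specification):

```python
# Hypothetical allowlist of origins that may read this resource.
ALLOWED_ORIGINS = {"http://example.com"}

def cors_headers(origin):
    """Return CORS response headers for an allowed origin, or {} to deny.

    Echoing only known-good origins (instead of "*") keeps the resource
    unreadable to scripts running on arbitrary domains.
    """
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
            "Access-Control-Allow-Credentials": "true",
        }
    return {}
```

A server would call this with the value of the incoming Origin request header, both for the preflight response and for the actual response.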

Additionally, the OPTIONS response can include a second important header for CORS: Access-Control-Allow-Credentials.
This header allows the browser to send authentication/identification data with the desired request (HTTP authentication or cookies). And the best part: this also works with HttpOnly cookies :).

As you may notice, the whole security model rests on the Access-Control-Allow-Origin header, which specifies from which domains client-side code is allowed to access the resource’s content. The whole problem arises when developers (either due to laziness or simply due to unawareness) set a wildcard value:

CORS Response Wildcard

This value allows all client-side scripts to get the content of the resource. (Note that browsers refuse to combine the literal wildcard with Access-Control-Allow-Credentials; the equivalent risk for restricted resources arises when a server blindly reflects the request’s Origin header into Access-Control-Allow-Origin.)

That’s why I decided to create a simple proof-of-concept tool that turns the victim’s browser into a proxy for CORS-enabled sites. The tool consists of two parts: first, server.py, which serves as the attacker’s administrative console for managing victims; second, jstunnel.js, which contains the client-side code for connecting to the attacker’s server and turning the browser into a proxy.

After starting server.py you can access the administrative console via http://localhost:8888. If no victims are connected you will see an almost empty page. Immediately after a victim executes the jstunnel.js file (maybe through an existing XSS vulnerability, or because he is visiting a website controlled by you…), he will be displayed in a list on the left side. If you select a connected victim in the list, several options become available in the middle of the page:

JS Proxy

 

  1. Some information about the victim
  2. Create an alert popup
  3. Create a prompt
  4. Try to get the client’s cookies from the site where jstunnel.js is executed
  5. Use the victim as a proxy
  6. Execute JS in the victim’s browser
  7. View the currently visible page (like a screenshot, but rendered in your browser)

If you select the proxy option and specify a URL to proxy to, an additional port on the control server will be opened. From now on, all requests that you send to this port will be transferred to the victim and re-requested from his browser. The content of the response will be transferred back to the control server and displayed in your browser.
As an additional feature, all images accessible to JavaScript will be encoded in Base64 and transferred as well.
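Conceptually (the actual server.py implementation is not reproduced here), the hand-off can be reduced to a queue: the attacker's console enqueues URLs, and the victim's browser polls for the next job and posts the fetched body back. A minimal in-memory sketch with hypothetical names:

```python
from collections import deque

class RelayQueue:
    """Sketch of the proxy hand-off between attacker console and victim browser."""

    def __init__(self):
        self.pending = deque()  # URLs waiting to be fetched by the victim
        self.responses = {}     # URL -> body fetched inside the victim's browser

    def enqueue(self, url):
        """Attacker side: request a URL via the victim."""
        self.pending.append(url)

    def next_job(self):
        """Victim side: poll for the next URL to fetch, or None if idle."""
        return self.pending.popleft() if self.pending else None

    def deliver(self, url, body):
        """Victim side: hand the fetched response body back to the server."""
        self.responses[url] = body
```

In the real tool this exchange happens over HTTP between server.py and jstunnel.js; the sketch only shows the data flow, not the transport.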

If you’d like to test it yourself:

  1. You need Python 3.x and tornado
  2. You will find a vulnerable server in the test directory

This kind of functionality is also implemented in the Browser Exploitation Framework (BeEF), but I wanted to have a lightweight and simple proof of concept to demonstrate the issue.

Download:

UPDATE:

I’ve published a video at https://www.ernw.de/download/cors_blured.webm (6.2MB) and the most current source code on GitHub. Just to be clear: the application used in the video isn’t vulnerable. I patched it to produce an example.

Regards,
Timo

Continue reading
Events

Security Team 2.0

I’m currently catching up on a lot of papers and presentations from the USENIX Security Symposium in order to finish the blog post series I started last week (summarizing WOOT and LEET). One presentation, which unfortunately is not available online [edit: see also the update below; videos are available now], included several particularly relevant messages that I want to share in this dedicated post. Chris Evans, the head of the Google Chrome security team (herein short: GCST), described some new approaches they employed for their security team operations, some lessons learned, and how others can benefit from them as well. (Actually, the potential of these messages to make the world a safer place was my motivation to write this post, even though I got teased for supposedly being a Google fanboy 😉 )

Fix it yourself

The efficient security work carried out by the GCST could not be achieved if the members of the team did not also have backgrounds as software developers/engineers/architects or in operations. This changes the character of the GCST’s work from “consulting” to “engineering” and enables the team to commit actual code changes instead of just advising the developers on how to fix open issues (refer also to the next item). For the consulting work I do (and for assessments anyways 😉 ), I also follow this approach: when facing a certain problem set, have a look at the technological basics. Reading {code|ACLs|the stack|packets} helps in most cases to get a better understanding of the big picture as well.

Remove the middle man

Google’s security work is carried out in a very straightforward way: all interaction is performed directly in a (publicly accessible, see later items) bug tracking system. This reduces management overhead and ensures direct interaction with the community as well. The associated process is very streamlined: each reported bug is assigned to a member of the GCST, who is then responsible for fixing it ASAP.

Be transparent

The bug tracking system is used for externally reported bugs as well as for internally discovered ones. This ensures a high level of transparency in Google’s security work and increases the level of trust users put in Chrome (transparency is also an important factor in the trust model we use). In addition, the practice of keeping found vulnerabilities secret and patching them silently should be outdated anyway…

Go the extra mile

The subtext of this item was basically “live your marketing statements”. As ERNW is a highly spirit-driven environment, we can fully endorse this point. Without our spirit (a big thx to the whole team at this point!), the “extra mile” (or push-up, pound, exploit, PoC, …) would not be possible. Yet this spirit must be supported and lived by the whole company: starting at the management level, which supports and approves this spirit, down to every single employee who loves her/his profession (and can truly believe in making the world a safer place). As for Google, Chris described a rather impressive war story of how they pieced together some very sophisticated details from a PoC YouTube video of a Chrome exploit, published without further details, in order to find the relevant bug. (Nice quote in that context: “Release the Tavis” 😉 )

Celebrate the community

… and don’t sue them 😉 I don’t think there’s much to say on this item, as you apparently are a reader of this blog. However, you now have an official resource when it comes to discussions about whether you can disclose certain details about a security topic.
I think the messages listed above are worth incorporating into daily security management and operations, and there is even some proof that they worked for Google and hence may also improve your work.

Have a good one,
Matthias

 

Quick Update: All videos, including this talk, are now available.

Continue reading
Breaking

Vulnerabilities & attack vectors of VPNs (Pt 1)

This is the first part of an article that will give an overview of known vulnerabilities and potential attack vectors against commonly used Virtual Private Network (VPN) protocols and technologies. This post will cover vulnerabilities and mitigation controls of the Point-to-Point Tunneling Protocol (PPTP) and IPsec. The second post will cover SSL-based VPNs like OpenVPN and the Secure Socket Tunneling Protocol (SSTP). As surveillance of Internet communications has become an important issue, another security goal has become explicitly desirable besides the traditional goals of information security (typically referred to as confidentiality, integrity and authenticity): Perfect Forward Secrecy (PFS). PFS is achieved if the initial session-key agreement generates unique keys for each session. This ensures that even if the private key were compromised, older sessions (that one may have captured) can’t be decrypted. The concept of PFS will be covered in the second post.
Continue reading “Vulnerabilities & attack vectors of VPNs (Pt 1)”

Building

SLES 11 Hardening Guide

SUSE Linux Enterprise Server (SLES) has been around since 2000. As it is designed to be used in an enterprise environment the security of these systems must be kept at a high level. SLES implements a lot of basic security measures that are common in most Linux systems, but are these enough to protect your business? We think that with a little effort you can raise the security of your SLES installation a lot.

We have compiled the most relevant security settings in a SLES 11 hardening guide for you. The guide is supposed to provide a solid base of hardening measures. It includes configuration examples and all necessary commands for each measure. We have split the measures into three categories: Authentication, System Security and Network Security. These are the relevant parts to look for when hardening a system. The hardening guide also includes lists of default services that will help to decide which services to turn off, which is an essential step to minimize the attack surface of your system.

See all of the steps that we compiled for you in our hardening guide for SLES 11: ERNW_Checklist_SLES11_Hardening.pdf

Continue reading
Events

WOOT’13

Continuing yesterday’s post, I compiled a short summary of relevant WOOT’13 presentations.


Truncating TLS Connections to Violate Beliefs in Web Applications
Ben Smyth and Alfredo Pironti, INRIA Paris-Rocquencourt

This presentation was also given at Black Hat some weeks ago. It outlines a very interesting class of attacks against web applications that abuses the TLS specification, which states that “failure to properly close a connection no longer requires that a session not be resumed […] to conform with widespread implementation practice”. This characteristic enables new attack vectors on shared systems where certain outgoing (TLS-encrypted) packets can be dropped in order to prevent applications from, e.g., correctly finishing transactions (such as log-out procedures), or even to modify request bodies by dropping their last parts.

Continue reading “WOOT’13”

Events

LEET’13

I have the pleasure of visiting this year’s USENIX Security Symposium in Washington, DC. Besides the nice venue close to the National Mall, there are also several co-located workshops. Every night I will try to provide a summary of those presentations I regard as most interesting. However, I can’t promise to keep up, as there are a lot of interesting events, people to meet, and still some projects to work on. The short summaries below are from the 6th USENIX Workshop on Large-Scale Exploits and Emergent Threats.

Continue reading “LEET’13”
