
Extracting Data from Very Large Pcap Files – Part 1: Tools and Hardware

There is a common misconception that the sheer amount of data, coupled with multiplexed channels (e.g. WDM technology), makes successful eavesdropping attacks on high speed Ethernet links – like those connecting data centers – highly unlikely. This is mainly based on the assumption that the amount of resources (e.g. RAM, [sufficiently fast] storage or CPU power) needed to process large files of captured data is a limiting factor. However, to the best of our knowledge, no practical evaluation of these assumptions has been performed so far.

Therefore we conducted some research and started writing a paper (to be released as a technical report shortly) that aims to answer the following questions:

– Can the processing of large amounts of captured data be done “in a feasible way”?

– How much time and which type of hardware is needed to perform this task?

– Can this be done with readily available tools or is custom code helpful or even required? If so, how should that code operate?

– Can this task be facilitated by means of public cloud services?

We performed a number of tests with files of different sizes and entropy. Tests were carried out both with different sets of dedicated hardware and by means of public cloud services. The paper describes the tools used, the various test setups and, of course, the results. A final section includes some conclusions derived from the insights provided by the test sets.

It is assumed that an attacker has already gained access enabling her to eavesdrop on the high speed data link. A detailed description of how this can be done can be found e.g. here or here. The focus of our paper is on the subsequent extraction of useful data from the resulting dump file. It is further assumed that the collected data is available in standard pcap format.
We’ll summarize some of the stuff in a series of three blog posts, each discussing certain aspects of the overall research task. In the first one we’ll describe the tools and hardware used. In the second we’ll present the results from the test lab with our own hardware, while the third part describes the tests performed in the (AWS) cloud and provides the conclusions. Furthermore we’ll give a presentation of the results, including a demo (probably the extraction of credit card information from a file of 500 GB in size, which roughly equates to a live migration of 16 virtual machines with 32 GB RAM each), at the Infoguard Security Lounge taking place on the 8th of June in Zug/Switzerland.
Last but not least, before it gets technical: the majority of the work was performed by Daniel, Hendrik and Matthias. I myself had mostly a “supervisor role” 😉 So kudos to them!

COTS packet analysis tools
A number of tests utilizing available command-line tools (tethereal, tshark, tcpdump and the like) were performed. It turned out that, performance-wise, “classic” tcpdump showed the most promising results. During the following in-depth testing phase, two problems with tcpdump showed up:

– It’s single-threaded, so it can’t use a system’s multiple processors for parallel processing. Given that the actual bottlenecks turned out to be I/O related anyway (see below), this was not regarded as a major problem.

– Standard pcap filters do not allow for “keyword search”, which somewhat limits the attack scenarios (an attacker might not be able to search directly for credit card numbers, user names etc. but would have to perform an IP-parameter-based search first and then hand over to another tool, which might cause an unacceptable delay in the overall analysis process). To address this limitation Daniel wrote a small piece of code that we – not having found an elegant name like Loki so far 😉 – called pcap_extractor.

Pcap_extractor
This is basically the fastest possible implementation of a pcap file reader. It opens a libpcap file handle for the designated input file, applies a libpcap filter to it and loops through all the filter-matching packets, writing them to an output pcap file. Contrary to tcpdump and most other libpcap-based analysis tools, it provides the possibility to search for a given string inside the matching packets, for example a credit card number or a username. If such a search string is supplied, only packets matching the libpcap filter and containing the search string are written to the output file.

A call to search a pcap file for iSCSI packets which contain a certain credit card number and write them to the output file would then look like:

# pcap_extractor -i input-file.pcap -o output-file.pcap -f "tcp port 3260" -s "5486123456789012"

The source code of pcap_extractor can be downloaded here.
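To illustrate the idea (and explicitly not as the tool’s actual source, which is linked above), a minimal sketch of such a filter-and-search loop with the libpcap C API could look roughly like this – option handling and error paths are simplified, and the naive byte-wise search is just for demonstration:

#include <pcap/pcap.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified sketch of a "filter + string search" pcap copier.
 * Not the actual pcap_extractor source (see link above). */
int main(int argc, char **argv)
{
    if (argc != 5) {
        fprintf(stderr, "usage: %s <in.pcap> <out.pcap> <bpf filter> <search string>\n", argv[0]);
        return 1;
    }

    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *in = pcap_open_offline(argv[1], errbuf);
    if (in == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

    /* compile and apply the BPF filter (evaluated in userspace for savefiles) */
    struct bpf_program prog;
    if (pcap_compile(in, &prog, argv[3], 1, PCAP_NETMASK_UNKNOWN) == -1 ||
        pcap_setfilter(in, &prog) == -1) {
        fprintf(stderr, "filter error: %s\n", pcap_geterr(in));
        return 1;
    }

    pcap_dumper_t *out = pcap_dump_open(in, argv[2]);
    if (out == NULL) { fprintf(stderr, "%s\n", pcap_geterr(in)); return 1; }

    const char *needle = argv[4];
    size_t nlen = strlen(needle);
    struct pcap_pkthdr *hdr;
    const u_char *data;
    int rc;

    while ((rc = pcap_next_ex(in, &hdr, &data)) == 1) {
        /* naive byte-wise search for the string inside the captured bytes */
        int hit = 0;
        for (size_t i = 0; i + nlen <= hdr->caplen && !hit; i++)
            if (memcmp(data + i, needle, nlen) == 0)
                hit = 1;
        if (hit)
            pcap_dump((u_char *)out, hdr, data);
    }

    pcap_dump_close(out);
    pcap_close(in);
    return 0;
}

Built with something like gcc -O2 -o pcap_sketch pcap_sketch.c -lpcap, this reads the input capture, lets libpcap apply the BPF filter and only dumps those packets that additionally contain the search string.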

Identifying the bottleneck(s)

While measuring the performance of multiple pcap analysis tools, profiling of system calls indicated that the tools spend between 85% and 98% of the search time waiting for I/O. In the case of the fastest tool this means that 98% of the time the tool does nothing but wait for dump data. So the I/O bandwidth turned out to be the major bottleneck in the initial test setups.

Actual lab setup

The final test system was designed to provide as much I/O bandwidth as possible and was composed of:

Intel Core i7-990X Extreme Edition, 6x 3.46 GHz

12GB (3 * 4GB) DDR3 1600MHz, PC3-12800

ASRock X58 Extreme6 S1366 mainboard

4 * Intel 510 Series Elm Crest SSD 250GB

The mainboard and the SSDs were chosen to support SATA3 with a theoretical maximum I/O bandwidth of 6 Gbit/s. FreeBSD was used as the operating system.

===
In this post we’ve “prepared the battle ground” (as for the tools and hardware to be used) for the actual testing; in the next one we’ll discuss the results. Stay tuned & have a great day,

 
Enno

 


Sisters’ Act of MFD Security

Recently Micele and I were researching for our talk about the current state of security of Multifunction Devices (MFDs). Since we’re both seasoned pentesters who are quite familiar with MFDs, we were really surprised that very little new research is going on in the area of MFD security. While diving deeper into the topic, we found a very simple explanation for this: as in 2002, it is still possible to download print or scan jobs using PJL, many devices still offer default FTP or Telnet access, and, of course, stored files can be recovered from MFD hard drives — on an enterprise-wide scale. To strengthen our impression of the current state of MFD security even further, most devices crashed or went wild while we performed some scans — and we’re not talking about fuzzing here.

This devastating result led to the question of how MFDs can be secured. Since there are a lot of MFD hardening resources out there, even from vendors, we decided to put together a comprehensive hardening guide for MFDs. To raise the level of awareness, we put together a lot of examples of attacks on MFDs and then focused on the development of our own MFD security guide, which is based on the seven sisters. The result of this approach can be found here. And of course, soon there will be an ERNW newsletter to cover this topic in a more academic and structured way 😉
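To give an impression of how low the bar is in practice: most MFDs expose a raw printing service on 9100/tcp that happily talks PJL to anyone who connects. The following is a small, hypothetical C sketch (not one of our assessment tools) that simply asks a device for its identification via the standard “@PJL INFO ID” query – the same channel over which PJL file system commands travel:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical sketch: connect to a printer's raw port (9100/tcp) and send
 * the standard PJL query "@PJL INFO ID", framed by UEL escape sequences. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <printer-ip>\n", argv[0]); return 1; }

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9100);
    inet_pton(AF_INET, argv[1], &dst.sin_addr);

    if (connect(s, (struct sockaddr *)&dst, sizeof(dst)) != 0) {
        perror("connect");
        return 1;
    }

    /* UEL ("\x1b%-12345X") + PJL command + UEL */
    const char *query = "\x1b%-12345X@PJL INFO ID\r\n\x1b%-12345X";
    send(s, query, strlen(query), 0);

    char buf[2048];
    ssize_t n = recv(s, buf, sizeof(buf) - 1, 0);
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }

    close(s);
    return 0;
}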


RSA: Anatomy of an Attack

Lots of stuff has been written about this blog post from RSA describing the (potential) details of the attack, so I will refrain from detailed comments on this piece that Marsh Ray nicely called “some of the most egregious hyperbole I’ve read in infosec”.

Just one short note. Presumably the attack, in an early stage, used a “spreadsheet [that] contained a zero-day exploit that installs a backdoor through an Adobe Flash vulnerability (CVE-2011-0609)”.

I’ve written about Flash here.

nuff said, thanks

 

Enno


Reflections on the RSA Break-in

Some of you may have heard of the break-in at RSA and may now be wondering “what does this mean to us?” and “what can be done?”. Not being an expert on RSA SecurID at all – I’ve been involved in some projects, however not on the technical implementation side but on the architecture or overall [risk] management side – I’ll still try to contribute to the debate 😉

Feel free to correct me either by comment or by personal email in case the following contains factual errors.

Fundamentals

My understanding of the way RSA SecurID tokens work is roughly this:

a) The authentication capabilities provided by the system (as part of an overall infrastructure where authentication plays a role) are based on two factors: a one-time password (OTP) generated at regular intervals by both a token and some (backend) authentication server, and a PIN known by the user.

c) the OTP generation process takes some initialization value called the “seed” and the current time as input and calculates – by means of some algorithm at whose core probably sits a hash function – the OTP itself.

d) the algorithm seems publicly known (there are some cryptanalytic papers listed in the Wikipedia article on RSA SecurID, and a generator – needing the seed as input – has been available for some time now). Even if it wasn’t public we should assume that Kerckhoffs’ principle exists for some reason 😉

e) So, at the end of the day, an OTP of a given token at a given point in time can be calculated once the seed of this specific token is known.

This means: to some (large) degree, the whole security of the OTP relies on the secrecy of the seed, which obviously must be maintained. [For the overall authentication process there’s still the PIN, but this one can be assumed to be the “weaker part” of the whole thing.]
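To make that dependency on the seed tangible, here is a hedged sketch of a generic time-based OTP computation in the style of RFC 4226/6238 (HOTP/TOTP), using OpenSSL’s HMAC. To be clear: this is not RSA’s proprietary SecurID algorithm – it merely illustrates that OTP = f(seed, time) is fully reproducible by anyone holding the seed:

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Generic time-based OTP in the spirit of RFC 4226/6238 (HOTP/TOTP).
 * Explicitly NOT RSA's proprietary SecurID algorithm -- just an illustration
 * that knowing the seed means being able to reproduce the OTP for any time. */
static uint32_t totp(const unsigned char *seed, size_t seed_len,
                     time_t now, unsigned step, unsigned digits)
{
    /* the "current time" input: number of completed time steps */
    uint64_t counter = (uint64_t)now / step;
    unsigned char msg[8];
    for (int i = 7; i >= 0; i--) { msg[i] = counter & 0xff; counter >>= 8; }

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha1(), seed, (int)seed_len, msg, sizeof(msg), mac, &mac_len);

    /* dynamic truncation as defined in RFC 4226 */
    unsigned off = mac[mac_len - 1] & 0x0f;
    uint32_t bin = ((uint32_t)(mac[off] & 0x7f) << 24) |
                   ((uint32_t)mac[off + 1] << 16) |
                   ((uint32_t)mac[off + 2] << 8)  |
                    (uint32_t)mac[off + 3];

    uint32_t mod = 1;
    for (unsigned i = 0; i < digits; i++) mod *= 10;
    return bin % mod;
}

int main(void)
{
    const unsigned char seed[] = "12345678901234567890";  /* example seed only */
    printf("%06u\n", totp(seed, sizeof(seed) - 1, time(NULL), 30, 6));
    return 0;
}

(Build with -lcrypto; the seed value is, of course, just a placeholder.)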

Flavors

RSA SecurID tokens, and those of other vendors as well, are sold in two main variants:

– as hardware devices (in different sizes, colors etc.). Here the seed is encoded as part of the manufacturing process, and there must be some import process of token serial numbers and their associated seeds into the authentication server (located at the organization using the product for authentication), and some subsequent mapping of a user + PIN to a certain token (identified by serial number, I assume). The seeds are thus generated on the product vendor’s (e.g. RSA’s) side at an early stage of the manufacturing process and distributed as part of the product delivery process. Not sure why a vendor (like RSA) should keep those associations of (token) serial numbers and their seeds once the product delivery process is completed (as I said, I’m not an expert in this area so I might overlook sth here, even sth fairly obvious ;-)), but I assume this nevertheless happens to some extent. And I assume this is part of the potential impact of the current incident, see below.
– as so-called “soft tokens”, i.e. software instances running on a PC or mobile device that generate the OTP. For this purpose, again, the seed is needed, and to the best of my knowledge there are, in the RSA space at least, two ways the seed can get onto the device:

  • generate it as part of the “user creation” process on the authentication server and subsequently distribute it to users (by email or download link) for import. For obvious reasons not all people like this, security-wise.
  • generate it, by means of an RSA-proprietary scheme called Cryptographic Token Key Initialization Protocol (CT-KIP), in parallel on the token and the server, thereby avoiding the seed’s transmission over the network.

Btw: In both cases importing the seed into a TPM would be nice, but – as of mid 2010 when I did some research – this was still in a quite immature state. So not sure if this currently is a viable option.

Attacks

For an attacker going after the seed I see three main vectors:
  • compromise of an organization’s authentication server. From audits in the past I know these systems often reside in network segments not-too-easily accessible and they are – sometimes – reasonably well protected (hardening etc.). Furthermore I have no idea how easy it would be to extract the seeds from such a system once compromised. Getting them might allow for subsequent attacks on remote users (logging into VPN gateways, OWA servers etc.), but only against this specific organization. And if the attacker already managed to compromise the organization’s authentication server this effort might not even be necessary anymore.
  • compromise of the (mobile) devices of some users of a given organization using soft tokens and copying/stealing their seeds. This could potentially be done by a piece of malware (provided it manages to access the seed at all, which might be difficult – protected storage and stuff comes to mind – or not; I just don’t know 😉).
    This is the one that infosec people opposing the replacement of hard tokens by soft tokens (e.g. for usability reasons) usually warn about. There are people who do not regard this as a very relevant risk, as it requires an initial compromise of the device in question. Which, of course, can happen. But why “spend energy” on getting the seed then, as the box is compromised anyway (along with any data processed on it)? I’m well aware of the “attacker can use the seed for future attacks from other endpoints” argument. One might just wonder about the incentive for an attacker to go after the seeds…
    It should be noted that binding the (soft) token to a specific device, identified by serial number, unique device identifier (like in the case of iPhones) or harddisk ID or sth – which has been possible in the RSA SecurID space for some time, I believe since Authentication Manager 7.1 – might to some degree serve as a mitigating control against this type of attack.
  • attack the vendor (RSA) and hope to get access to the seeds of many organizations, which can then be used in subsequent targeted attacks. I have the vague impression that this is exactly what happened here. Art Coviello writes in his letter:
    “[the information gained by the attackers] could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.”
    I interpret this as follows: “dear customers, face the fact that some attackers might dispose of your seeds and the OTPs calculated on those so you’re left with the PIN as the last resort for the security of the overall authentication process”.
I leave the conclusions to the valued reader (and, evidently, the estimation of whether my interpretation holds or not) and proceed with the next section.
Mitigating Controls & Steps
First, let’s have a quick look at the recommendations RSA gives (in this document). There we find stuff like “We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.” – yes, thanks, RSA, for reminding us, this is always a good idea 😉 – and equally conventional wisdom including pieces like “We recommend customers update their security products and the operating systems hosting them with the latest patches”. And, of course, it’s pure coincidence that they mention the use of SIEM systems twice… being a SIEM vendor themselves ;-))
For my part, I’d like to add:
– in case you use RSA SecurID soft tokens, binding individual tokens to specific devices seems a good idea to me. (Yes, this might mean that users using several devices then have several different instances. And, yes, I understand that in the shiny new age of user-owned funky smartphone gadgets used for corporate information processing, this might be a heavy burden to ask of your users 😉)
– some of you might re-think your (sceptical) position when it comes to soft vs. hard tokens: in the RSA SecurID space soft tokens can be “seeded” by means of CT-KIP, so no 3rd party is involved or disposes of the seeds. I’m not aware of such a feature for hard tokens.
– whatever you do, think about the supply chain of security components, which parties are involved and which knowledge they might accumulate.
– replacing proprietary stuff by standards-based approaches (like X.509 certificates) is always worth considering.
– whatever you do, authentication-wise, you should always have a plan for revocation and credential replacement. This should be one of the overall lessons learned from this incident and the current trend that well-organized attackers will go after authentication providers and infrastructures (see, for example, this presentation from the recent NSA Information Assurance Symposium).
Last but not least I’d like to draw your attention to this upcoming presentation on the current state of authentication at Troopers. I’d be surprised if Steve and Marsh would not include the RSA incident in their talk 😉
thanks,
Enno

VMSA-2011-0005: VMware vCenter Orchestrator remote code execution vulnerability

Reading this advisory I’m quite tempted to emit another rant on the relationship between heavy use of 3rd party components, lack of (security) quality assurance, and services running at times when they’re not needed (see the second workaround here). I’ll refrain from that for today. Just wanted to let you know that the underlying vulnerability in Struts2 was initially discovered by Meder Kydyraliev, who gives this talk at Troopers in two weeks. He’ll certainly describe the inner workings of this one, and others… 😉

Have a good one,

Enno


GTP_SCAN released

gtp_scan is a small python script that scans for GTP (GPRS Tunneling Protocol) speaking hosts. To discover those hosts it uses GTP’s built-in ping mechanism: it sends a GTP packet of type ECHO_REQUEST and listens for an incoming GTP ECHO_REPLY. It’s capable of generating ECHO_REQUESTs for GTP version 1 and GTP version 2. The script can also scan for both GTP-C and GTP-U (the control channel and the user data channel); only the port differs here.

In the output the received packet is displayed and the basic GTP header is dissected, so one can see e.g. a GTP version 1 host answering a GTP version 2 ECHO_REQUEST with a ‘version not supported’ message.

Tests have shown that there are some strange services around which answer a GTP ECHO_REQUEST with a lot of weird data, which leads to ‘kind of’ false positive results, but these can easily be spotted by checking the output data with your brain 😉 (e.g. there is no GTP version 12).
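For illustration, here’s a minimal C sketch (not taken from gtp_scan itself, which is written in python and does considerably more) of what such an echo probe boils down to – the header bytes follow the GTPv1 specification, everything else is stripped down:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical, stripped-down GTPv1 echo probe. Assumed target port:
 * 2123/udp (GTP-C); GTP-U would use 2152/udp instead. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <target-ip>\n", argv[0]); return 1; }

    /* GTPv1 Echo Request: flags 0x32 (version 1, protocol type GTP, S bit set),
     * message type 0x01, length 0x0004 (seq + N-PDU + next ext. header),
     * TEID 0, sequence number 1, N-PDU number 0, next extension header 0 */
    unsigned char echo_req[12] = { 0x32, 0x01, 0x00, 0x04,
                                   0x00, 0x00, 0x00, 0x00,
                                   0x00, 0x01, 0x00, 0x00 };

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(2123);
    inet_pton(AF_INET, argv[1], &dst.sin_addr);

    struct timeval tv = { 2, 0 };   /* give up after 2 seconds */
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    sendto(s, echo_req, sizeof(echo_req), 0, (struct sockaddr *)&dst, sizeof(dst));

    unsigned char buf[1500];
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
    if (n >= 2 && buf[1] == 0x02)          /* message type 2 = Echo Reply */
        printf("GTP Echo Reply received (GTP version %u)\n", buf[0] >> 5);
    else if (n > 0)
        printf("got %zd bytes, GTP message type %u\n", n, buf[1]);
    else
        printf("no reply\n");

    close(s);
    return 0;
}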

Download it here: gtp_scan-0.5.tar.gz

enjoy

/daniel


Some More Security Research on The nPA AusweisApp

After the initial quick shot (see this post) we decided to have a closer look. And some more stuff turned up.

After decompiling the integrated Java stuff we stumbled upon hard-coded server credentials:

package Idonttell;

public abstract interface Idonttell
{
    public static final boolean debug = false;
    public static final boolean auth = true;
    public static final String SMTP_SERVER = "Idonttell.openlimit.com";
    public static final String SMTP_USER = "Idonttell@Idonttell.openlimit.com";
    public static final String SMTP_PASSWORD = "Idonttell";
    public static final String SEND_FROM = "Idonttell@Idonttell.openlimit.com";
    public static final String[] SEND_TO = { "buergerclient.it-solutions@Idonttell.com" };
    public static final String MAIL_HEADER_FIELD = "OpenLimitErrorMessage";
    public static final String MAIL_HEADER_FIELD_PROP = "yes";
}

The AusweisApp uses these credentials to authenticate against a mail server and send error reports to a dedicated email address. The server was accessible from the internet and services like SMTP, FTP and SSH were running. Following the principles of responsible disclosure, the vendor was contacted and responded within a few hours, so the servers are already protected against any kind of misuse. So save your time and keep the German hacking laws in mind ;-).

Nevertheless, having done code reviews for years, one point on the checklist is secure storage of data. Secrets of any kind should never be included directly in the source code, and never ever in cleartext as was done in the AusweisApp. These secrets can be accessed quite easily, so how useful is an authentication feature if everybody knows the password ;-)?

This leaves me scratching my head. Maybe I was a bit overhasty when writing this 😉

Ok, getting serious again: this finding is further proof that the concept of rating closed source software based on well-chosen metrics can help to determine the trustworthiness of software, because building secure software means that you develop with security in mind, and this is what these metrics are measuring.

Have a nice week and watch out for more updates as our research continues. We’re investigating another interesting possible flaw …

Michael


Our contribution to the public discussion about the German new ID card (nPA)

Currently there’s quite some discussion about the security properties and posture of the new German ID card (“Neuer Personalausweis”, “nPA”); some technically reasonable security discussion can be found e.g. here.

While – as of our current knowledge – we do not expect major security flaws on the architecture level, the problems discussed so far (like Evilgrade-style attacks against one of the main applications or keylogging the PIN in scenarios with pinpad-less readers) certainly show that security best practices must be followed by all parties involved in the development, deployment and use of the nPA and its associated applications. From our perspective this may be expected from the applications’ developers as well.
Looking at this:

TTICheck 32/64 Bit - (c) 2010 Michael Thumann
[i] Scanning .

.\ePALib_Client.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\mozilla\AusweisApp_FF3x_Win\components\siqeCardClientFFExt.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\npeCC30.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\pdcjk.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\PDFParser.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\PdfSecureAPI.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\PdfValidatorAPI.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\PdfViewerAPI.dll; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqApp.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqBootLoader.exe; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqBootLoaderAC.exe; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqCertMgr.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqCIFRepository1_1.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqCipher.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqCryptoAPI.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecCert.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecCertAttr.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecCertCV.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecCRL.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecCTL.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecMgr.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecOCSP.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecOCSPRequest.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecP12.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecP7.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecTSP.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecTSPRequest.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqDecTypeMatcher.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqeCardAPI_svr.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqeCardClient.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqEncP7.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqEPAProfile.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqHash.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqISO7816EPA.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqOIDManager.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqP1Verifier.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqP7Encoder.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqRNG.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqSEMk.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqSEMk_srv.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqSEMkApp.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqSSLClient.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqTerminalPCSC.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\siqTiffTxtParser.ols; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09
.\toolKillProcess.exe; Linker Version 8.0; ASLR NOT supported; DEP NOT supported; No SEH found; TTI = 26.09

we’re not sure if that’s the case ;-) when looking at the new AusweisApp with our closed source security metric.
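For readers wondering what such a check boils down to technically: ASLR and DEP opt-in are signalled via bits in the DllCharacteristics field of a PE file’s optional header (IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040, IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x0100). The following is a rough, hypothetical C sketch of reading that field – not TTICheck’s actual code; offsets follow the PE/COFF specification and error handling is minimal:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of the kind of check a metric tool performs: read the
 * DllCharacteristics field of a PE file's optional header and report whether
 * the ASLR (DYNAMIC_BASE) and DEP (NX_COMPAT) flags are set. */
static uint32_t rd32(FILE *f, long off)
{
    unsigned char b[4] = { 0, 0, 0, 0 };
    fseek(f, off, SEEK_SET);
    if (fread(b, 1, 4, f) != 4) return 0;
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <pe-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) { perror("fopen"); return 1; }

    long pe_off  = (long)rd32(f, 0x3c);        /* e_lfanew: offset of "PE\0\0" */
    long opt_off = pe_off + 4 + 20;            /* skip PE signature + COFF header */
    uint32_t dllchar = rd32(f, opt_off + 70) & 0xffff; /* DllCharacteristics */

    printf("%s: ASLR %s, DEP %s\n", argv[1],
           (dllchar & 0x0040) ? "supported" : "NOT supported",   /* DYNAMIC_BASE */
           (dllchar & 0x0100) ? "supported" : "NOT supported");  /* NX_COMPAT */

    fclose(f);
    return 0;
}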

So much for our little contribution to the mentioned debate,

have a great day everybody,

Michael

PS: At Troopers 11 there will be a presentation from Friedwart Kuhn on using the nPA for authentication purposes in corporate environments.


Back to the roots

Finding exploitable vulnerabilities is getting harder. This statement by Dennis Fisher, published on Kaspersky’s Threatpost blog, summarizes a trend in the development lifecycle of software. The last published vulnerabilities that gained some public attention all had one thing in common: they were quite hard to exploit. The so-called jailbreakme vulnerability was based on several different vulnerabilities that had to be chained together to break out of the iPhone sandbox, escalate privileges and run arbitrary code. Modern software and especially modern operating systems are more secure; they contain fewer software flaws and more protection features that make reliable exploitation a big problem which can only be solved by very skilled hackers.

Decades ago it was just like this, but intelligent tools and sharing of the needed knowledge enabled even low-skilled people to develop working exploits and attack vulnerable systems. Nowadays we are going back to the roots, where only a few very knowledgeable people are able to circumvent modern security controls, but that doesn’t mean that all problems are gone. Attackers are moving to design flaws like the DLL hijacking problem, so only the class of attacks is changing, from the old school memory corruption vulnerabilities to logical flaws that can still be exploited easily.

But the number of exploitable vulnerabilities is decreasing, so this might be a sign that we are on the right way to develop reliable and secure systems, and that software companies are adopting Microsoft’s Security Development Lifecycle (SDL) to produce more secure software. As stated in my previous blogpost, the protection features are available, but not used very often. But if they are used, and if the developers strictly follow the recommendations of the SDL, this trend of “harder to exploit vulnerabilities” proves that it can be a success story to do so.

Have a bug-free day 😉
Michael


MS10-063, Prevention

One of the four vulnerabilities rated “critical” in yesterday’s MS patchday, namely MS10-063, has an interesting “Workarounds” section regarding MS Internet Explorer. There it’s stated:

“Disabling the support for the parsing of embedded fonts in Internet Explorer prevents this application from being used as an attack vector.”

which, according to the advisory, should/can be done by setting the “Font Downloading” parameter to “Disable”.

Which is exactly what this document suggests. So taking a preventive approach, once more, might have saved some concerns (“Will we be targeted by this one?”) and patch/testing time…

Have a great day,

Enno
