Once again, the question of allowing or prohibiting split tunneling for (in this case: IPsec) VPN connections popped up in a customer environment today. Given our strict stance when it comes to “fundamental architectural security principles”, the valued reader might easily imagine we’re no big fans of allowing split tunneling (abbreviated in the following as “ST”), as it usually constitutes a severe violation of the “isolation principle”, further aggravated by the fact that the violation takes place on a “trust boundary” (between trusted and untrusted networks).
Still, we’re security practitioners (and not everybody has such a firm belief in the value of “fundamental architectural security principles” as we have), so we had to deal with the proponents’ arguments, in particular because one of them mentioned additional costs of US$ 40,000,000 should disallowing ST force all 80K VPN users’ web browsing through some centralized corporate infrastructure.
[Yes, you read that correctly: 40 million. I still have no idea where this (in my perception: crazy) number comes from.] Anyhow, how to deal with this?
Internally we performed a rapid risk assessment (RRA) focused on two main threats:
a) A targeted attack against $CORP, carried out by means of network backdoor access through a piece of malware acting as a relay point.
b) A client gets infected by drive-by malware because it is not protected by (corporate) infrastructure-level controls.
[for obvious reasons I’ll not provide the values derived from the exercise here]
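To illustrate the approach (not our actual results), here’s a minimal sketch of how such a likelihood-times-impact scoring might look; the scale, the structure and all values are placeholder assumptions, not the ones from our exercise:

```python
# Purely illustrative rapid-risk-assessment scoring (likelihood x impact, 1-5 scale).
# All values are placeholder assumptions, NOT the figures from our internal exercise.

threats = {
    "a) targeted attack via malware relay over the split tunnel": {"likelihood": 2, "impact": 5},
    "b) drive-by infection without infrastructure-level controls": {"likelihood": 4, "impact": 3},
}

for name, t in threats.items():
    print(f"{name}: risk = {t['likelihood'] * t['impact']}")
```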
While the former seems the main reason why NIST document SP 800-77 “Guide to IPsec VPNs” recommends _against_ allowing ST, the RRA showed that the latter overall constitutes the bigger risk. To illustrate this I’ll give some numbers from the excellent Google Research paper “All Your iFRAMEs Point to Us” which is probably the best source for data on the distribution and propagation of drive-by malware.
Here’s the paper’s abstract:
“As the web continues to play an ever increasing role in information exchange, so too is it becoming the prevailing platform for infecting vulnerable hosts. In this paper, we provide a detailed study of the pervasiveness of so-called drive-by downloads on the Internet. Drive-by downloads are caused by URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. Our analysis of billions of URLs over a 10 month period shows that a non-trivial amount, of over 3 million malicious URLs, initiate drive-by downloads. An even more troubling finding is that approximately 1.3% of the incoming search queries to Google’s search engine returned at least one URL labeled as malicious in the results page. We also explore several aspects of the drive-by downloads problem. We study the relationship between the user browsing habits and exposure to malware, the different techniques used to lure the user into the malware distribution networks, and the different properties of these networks.”
and I’m going to quote some parts of it in the following to back up some (at the end of the day: not so) small calculations.
The authors observed that from “the top one million URLs appearing in the search engine results, about 6,000 belong to sites that have been verified as malicious at some point during our data collection. Upon closer inspection, we found that these sites appear at uniformly distributed ranks within the top million web sites—with the most popular landing page having a rank of 1,588.”
Now, how many of those (of the top million websites, and in particular of the 0.6% verified as malicious) will be visited _every day_ by an 80,000 user population? And how many of those visits will presumably happen over “the split tunnel”, thus without corporate infrastructure-level security controls?
Looking at the post-infection impact, they noticed that the “number of Windows executables downloaded after visiting a malicious URL […] [was] 8 on average, but as large as 60 in the extreme case”.
To be honest, I mostly cite this opportunistically to refer to this previous post or this one 😉
And I will easily refrain from any comment on this observation ;-):
“We subject each binary to each of the anti-virus scanners using the latest virus definitions on that day. […] The graph reveals […] an average detection rate of 70% for the best engine.”
Part of their conclusion is that the “in-depth analysis of over 66 million URLs (spanning a 10 month period) reveals that the scope of the problem is significant. For instance, we find that 1.3% of the incoming search queries to Google’s search engine return at least one link to a malicious site.”
Let’s use this for some simple math. Assume 16K users are online on a given day (20% of the 80K overall VPN users), each performing only 10 Google queries. That’s 160K queries per day. 1.3% of these, i.e. about 2K queries, will return a link to at least one malicious site. If one out of ten users clicks on such a link, that means about 200 attempted infections _per day_. If local AV catches 70% of those, the remaining 30% succeed. This gives roughly 60 successful infections _each day_.
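For readers who want to play with the assumptions, here’s a minimal sketch of that calculation; the parameters are exactly the assumptions stated above (the figures in the text are rounded, the script keeps the exact products):

```python
# Back-of-envelope estimate of daily drive-by infections over the split tunnel.
# All parameters are the assumptions from the text above, not measured values.

vpn_users            = 80_000   # overall VPN user population
online_share         = 0.20     # assumed fraction online on a given day
queries_per_user     = 10       # assumed Google queries per user per day
malicious_result_pct = 0.013    # 1.3% of queries return >= 1 malicious link (Google paper)
click_through        = 0.10     # assumed: one in ten users clicks such a link
av_detection_rate    = 0.70     # best-engine detection rate cited from the paper

online_users       = vpn_users * online_share                     # 16,000
daily_queries      = online_users * queries_per_user              # 160,000
malicious_results  = daily_queries * malicious_result_pct         # ~2,080
attempted_infects  = malicious_results * click_through            # ~208
successful_infects = attempted_infects * (1 - av_detection_rate)  # ~62

print(f"online users:           {online_users:,.0f}")
print(f"queries per day:        {daily_queries:,.0f}")
print(f"malicious results/day:  {malicious_results:,.0f}")
print(f"attempted infections:   {attempted_infects:,.0f}")
print(f"successful infections:  {successful_infects:,.0f}")
```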
So, from our perspective, deliberately allowing a large user base to circumvent corporate infrastructure security controls (only relying on some local anti-malware piece) might simply… not be a good idea…
thanks
Enno
Hi Enno,
thanks for the interesting post which (as usual) has many valid points.
In my opinion you are not going far enough; basically there is something missing in the discussion at hand: the “corporate infrastructure security sphere”. If it consists of “Web-Proxy + AV-scanning”, then you don’t gain anything on the security side by disallowing ST. You would gain a lot on the Ops-side if, as a consequence, you could remove local AV scanning from the VPN-Clients (but simply routing web traffic through the central proxy is not a sufficient mitigating/preventive control to do that).
So you should add a “central preventive measure” to the equation, which could, for example, be “URL filtering of known malicious sites”. But then again I ask myself how good the coverage of current URL-filter devices is. The target (malicious websites) is moving, but how fast? And can the filter-update provider keep up with that speed? So what would the impact of URL filtering be in your rapid risk assessment? Or what other central preventive measures would be more effective?
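To make that question concrete, one could extend your estimate with an assumed filter-coverage parameter (purely illustrative, the coverage values are made up):

```python
# Hypothetical extension of the estimate from the post: how does a central
# URL filter of a given coverage change the daily infection count?
# The coverage values are assumptions for illustration, not vendor figures.

attempted_infections = 200    # per day, from the estimate in the post
av_detection_rate    = 0.70   # best-engine rate cited from the Google paper

for filter_coverage in (0.0, 0.5, 0.8, 0.95):
    reach_client = attempted_infections * (1 - filter_coverage)
    succeed      = reach_client * (1 - av_detection_rate)
    print(f"URL filter coverage {filter_coverage:4.0%}: ~{succeed:.0f} successful infections/day")
```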
Best wishes,
Dror