Yesterday I took a long run (actually I did the full distance here) and usually such exercises are good opportunities to “reflect on the world in general and the infosec dimension of it in particular”… at least as long as your blood sugar is still at a level that supports somewhat reasonable brain activity 😉
Anyhow, one of the outcomes of the various strange mental stages I went through was the idea of a series of blog posts on architectural or technological approaches that are widely regarded as “good security practice” but may – when looked at with a bit more scrutiny – turn out to be based on what I’d call “outdated threat models”.
This series is intended to be quite a provocative one but, hey, that’s what blogs are for: providing food for thought…
The first part is a rant on “Multi-factor authentication”.
In practically all large organizations’ policies you can find sections mandating MFA/2FA in various scenarios (not always formulated very precisely, but that’s another story). Common examples include “for remote access” – I’m going to tackle this one in a future post – or “for access to high-value servers” [which most organizations do not follow too consistently anyway, to say the least ;-)] or “for privileged access to infrastructure devices”.
Let’s think about the last one for a second. What’s the rationale behind the mandate for 2FA here?
It’s, as so often, risk reduction. Remember that risk = likelihood * vulnerability * impact, and remember that quite frequently, for infosec professionals, the “vulnerability factor” is the one to work on (as likelihood and impact often can’t be influenced much, depending on the threat in question and the environment).
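To make that “vulnerability is the lever” point a bit more tangible, here’s a minimal back-of-the-envelope sketch in Python; every number in it is made up purely for illustration and not taken from any real assessment.

```python
# Toy illustration of risk = likelihood * vulnerability * impact.
# Likelihood and vulnerability are hypothetical 0..1 scores, impact an
# arbitrary monetary figure -- invented values, for illustration only.

def risk(likelihood, vulnerability, impact):
    return likelihood * vulnerability * impact

likelihood = 0.5    # assumed; driven by the threat landscape, hard to change
impact = 100_000    # assumed damage if a core device gets compromised

print(risk(likelihood, 0.8, impact))  # weak control set     -> 40000.0
print(risk(likelihood, 0.2, impact))  # stronger control set -> 10000.0
```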
At the time most organizations’ initial “information security policy” documents were written (at least 5-7 years ago), many companies ran large, flat networks, password schemes for network devices were not aligned with “other corporate password schemes”, and management access to devices was often performed via Telnet.
As “simple password authentication” (very understandably) was not regarded as sufficiently vulnerability-reducing back then, people saw a need for “a second layer of control”… which happened to be “another layer of authentication”… leading to the aforementioned policy mandate.
So, at the end of the day, the demand for 2-factor auth here is essentially a demand for “2 layers of control”.
Now, if – in the interim – there are other layers of control like “encrypted connections” [eliminating the threat of eavesdropping, but not the one of password brute-forcing] or “ACLs restricting which endpoints can connect at all” [very common practice in many networks nowadays, and heavily reducing the vulnerability to password brute-forcing attacks], then using those, combined with single-factor authentication, might achieve the same level of vulnerability reduction, and thus the same level of overall risk.
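For the sake of the argument, here’s a rough sketch (same caveat as above: all multipliers are invented) that models each control as a factor reducing the vulnerability term; the point is not the concrete figures but that a stack of “encrypted management + ACLs” with single-factor auth can land in the same ballpark of residual risk as “password + second factor”.

```python
# Rough comparison of two control stacks for privileged device access.
# Each control is modeled as a multiplier on the vulnerability factor;
# all multipliers below are invented, for illustration only.

LIKELIHOOD = 0.5
IMPACT = 100_000
BASE_VULNERABILITY = 1.0   # plain password auth over Telnet in a flat network

CONTROL_FACTORS = {
    "second_factor":  0.2,   # additional authentication factor
    "encrypted_mgmt": 0.5,   # SSH/TLS instead of Telnet (no eavesdropping)
    "mgmt_acl":       0.3,   # only dedicated management stations may connect
}

def residual_risk(controls):
    vulnerability = BASE_VULNERABILITY
    for name in controls:
        vulnerability *= CONTROL_FACTORS[name]
    return LIKELIHOOD * vulnerability * IMPACT

# "Classic" stack: password plus a second factor.
print(residual_risk(["second_factor"]))               # 10000.0
# "Modern" stack: single-factor auth, but encrypted transport plus ACLs.
print(residual_risk(["encrypted_mgmt", "mgmt_acl"]))  # 7500.0
```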
Which in turn would make the need for 2FA (in this specific scenario) obsolete. Which shows that some security controls needed at some point in time might no longer be reasonable once threat models have changed (e.g. once the threat of “eavesdropping on unencrypted mgmt traffic from a network segment populated by desktop computers” has mostly disappeared).
Still, you might ask: what’s so bad about this? Why does this “additional layer of authentication” hurt? Simple answer: added complexity and operational cost. Why do you think that 2-factor auth for network devices can _rarely_ be found in large carrier/service provider networks? For exactly these reasons… and those organizations have a _large_ interest in protecting the integrity of their network devices. Think about it…