In this post I’ll discuss some aspects of vulnerability disclosure. I don’t want to delve into an abstract & general discussion of the topic (for those interested, here’s some discussion in the context of Google’s Project Zero, this is the well-known CERT/CC approach, this is a paper from WEIS 2006 laying out some variants, and finally a statement by Bruce Schneier back in 2007). Instead I will lay out which approach we have followed in the past (and why), and which developments make us consider it necessary to re-think the way we handle disclosure. The post is not meant to provide definitive answers; it was written not least to provide clarity for ourselves (“write down a problem in order to better penetrate it”) and, maybe, to serve as a starting point for a discussion that helps the community (and us) find a position on some of the inherent challenges.
Since our first days as a company (back in 2001 – time flies…) we have always performed substantial security research at ERNW, for several reasons:
- to develop our capabilities/skills and our methodology when tackling certain tasks.
- to contribute to public security knowledge & discussion (or, as we put it at Troopers: “to make the world a safer place”). The people running ERNW, including myself, consider it an important duty of any company to contribute to society in general (by paying taxes, providing employment, developing talent etc.) and, in particular, to contribute to the field of expertise one works in.
- to increase the visibility of our expertise, which in turn helps to achieve “economic sustainability of the company as a whole” (which, let me be clear here, is probably amongst the objectives of any reasonable company).
- simply because security research is fun ;-).
From a “project sponsor perspective”, the research activities we undertake can be grouped into three main categories:
- research we “just do on our own” (read: without a specific customer context), because we think it’s important to look at the security properties of some class of devices or because we’re curious about the real-life implementation of protocols etc. Actually, every ERNW member (except for the back office staff) is expected to participate in at least one ongoing activity of this type.
- research that is somewhat related to/sponsored by a customer security assessment project. Here we often come up with an agreement along the lines of “while you [customer] pay n man-days for the assessment, we’re willing to spend much more effort on a certain component if you’re ok with us sharing the results with the public afterwards (of course, without any reference to the specific environment)”. Examples include this talk or this post.
- research projects we are specifically engaged for. The main property here (at least in the context of this post) is that the engaging party fully owns the intellectual property from the project, so we’re not necessarily involved in how the results are disseminated at the end of the day.
This post only covers the first two categories (for the simple reason that we have some decision power there) and the inherent question: What to do with vulnerabilities found in the course of those?
For the last ten years we have pretty much followed a responsible disclosure approach inspired by the classic “Rain Forest Puppy policy” (RFPolicy).
The RFPolicy should give an idea of the intent and spirit of this approach. In the following we will use a slightly different terminology though, namely the one from ISO/IEC 29147:2014 “Information technology — Security techniques — Vulnerability disclosure”, which defines:
- “finder: individual or organization that identifies a potential vulnerability in a product or online service”.
- “vendor: individual or organization that developed the product or service or is responsible for maintaining it”.
- “remediation: patch, fix, upgrade, configuration, or documentation change to either remove or mitigate a vulnerability”.
Using this terminology, in the simplest case the actors involved can be broken down to:
- the finder, who has discovered a vulnerability which she now reports to
- the vendor, who receives the information in order to provide remediation, which in turn benefits all users of the software in question.
The overall objective of this approach can hence be summarized as follows: “contribute to the education of the parties involved/affected and thereby help to achieve an overall higher state of security for everybody” (let’s designate this objective as [OBJ_L_PUBLIC_CULTURE], where “L” stands for “long-term”).
There are some further assumptions, together with an expected course of action:
- At the time of reporting no patch is available.
- The vendor actually takes care of remediation.
- Once this remediation is public (e.g. a security advisory and/or a patch is released), it can be deployed everywhere where needed, without too much delay.
- The people involved/users affected (let’s call them the “stakeholders”) are well-informed, willing to deploy the remediation and enabled (with regard to their technical skills and the environmental conditions) to do so.
You might already note that this is a quite greenfield scenario with an accompanying set of assumptions. Still, going with a fairly standard responsible disclosure approach (with a 30-day or a 90-day period, depending on the specific conditions and overall picture) has worked for us in the vast majority of cases
[one notable exception being this talk at HITB Amsterdam 2012, after the vendor in question had repeatedly discredited us in their/our customer space and even insulted us personally; based on those/previous interactions we could not expect any reasonable outcome from going through a responsible disclosure process (they actually patched the issue 1.5 years later)].
Taking such a path also implied:
- we always got into direct contact with the vendors. We never went through brokering organizations like HP/TippingPoint’s Zero Day Initiative or similar channels.
- we never asked for or received any financial compensation, neither in a direct way (“here’s some money”) nor in an indirect way (“what about bringing you guys in as a new pentesting partner? [of course we need to establish a good trust-based relationship first and you, ERNW, should start building this by refraining from any publication]…”).
- we have never sold any vulnerability information to a 3rd party.
As I said, overall this just worked for us, and we considered it a way of acting that is consistent with our values and objectives. Now let’s have a look at some developments we observe that make us reconsider whether this is still the right approach.
Legal Blur
Here, for a moment, let’s keep in mind that my duties as managing director of ERNW include:
- taking care that our actions comply with the law.
- taking care that our actions are consistent with our values (there will be a future post on those) and our derived overall objectives.
- taking care that our resources, including my own, are spent in a way that contributes to our objectives.
In the context of (responsible) vulnerability disclosure the above becomes increasingly difficult, mainly for these reasons:
- I have the impression that there is – compared with, say, 3-5 years ago – a growing number of vendors out there who operate with open or veiled legal threats in the course of the procedure (which is a bit surprising, as for some time it seemed that responsible disclosure had become a well-established practice, not least due to the efforts people like Katie Moussouris spent on developing ISO 29147, ISO 30111 and the like). Personally I have to say I’m increasingly tired of this. We have responsibly reported numerous bugs which, due to their nature, would have fetched significant sums of money if reported through a broker or sold “to interested 3rd parties”. I hence feel less and less inclined to go through a series of conference calls to hear cryptic mentions of the “our legal department is working on this, to protect our customers” type, and to constantly remind myself: “it’s the right thing to do, for the long-term sake of the community of stakeholders”.
- The Wassenaar Arrangement (WA). I won’t go into a detailed discussion here, as many people smarter than me have expressed their views (e.g. Sergey Bratus in this document) and there are extensive and enlightening discussions on the “Regs — Discussions on Wassenaar” mailing list established by Arrigo Triulzi. Suffice it to say here that it is my understanding that the Wassenaar Arrangement has severe implications for the way vulnerability disclosure takes place once it is performed “across borders”. For example, we’re right now involved in a disclosure procedure (on a nasty chain of bugs identified by a group of researchers) with a vendor of security appliances. Let’s assume that the vendor is located in a country other than Germany. How are we supposed to provide PoC code to the vendor in a timely manner if such code could be considered to fall into the second, controlled class of software under the WA? We don’t think it is covered, but we’re not keen on going into lengthy legal battles over this assessment either.
Btw, it will be interesting to see how the case of Grant Willcox, a British researcher who removed some parts of the public version of his final-year dissertation (“An Evaluation of the Effectiveness of EMET 5.1”) due to the WA, develops.
Besides this type of concern, there’s another main aspect which keeps us reflecting on the suitability of responsible disclosure. From a high-level perspective, with regard to the overall objective of responsible disclosure, this one could be even more important.
New Stakeholders in (Vulnerability Disclosure) Town
As outlined above, the RFPolicy-inspired approach included two main classes of actors (the finder and the vendor), with an additional vague mention of “the community”, and it worked on the basis of some (inherent) assumptions. However, nowadays a number of cases we’re involved in are quite different in one way or another. The main differences that we can identify are the following:
- there’s another group of stakeholders involved who are not part of the above, “traditional” picture, but who are heavily affected (when driving their cars, when being treated by means of network-connected medical devices, when using some piece of technology in their household, or even when using technology to protect this very sphere etc.).
- the vulnerabilities might have a direct impact on their health or on their personal property (as opposed to the somewhat anonymous assets of enterprise organizations or vendors depicted in the classic RFPolicy).
- at the same time the affected users might be completely unaware of the vulnerabilities.
- even if they knew, due to the specific nature of certain components/devices, it might just not be technically possible or feasible to apply the remediation.
These aspects induce another main objective (of vulnerability handling), to be designated as follows:
[OBJ_S_PUBLIC_PREV_HARM]: “protect the public from harm to their lives, health or economic situation” (where the “S” marks that this is usually a – somewhat – short-term goal).
Identifying this objective evidently brings up an interesting question: how to proceed once the two main objectives (of vulnerability [non-]disclosure), that is [OBJ_L_PUBLIC_CULTURE] and [OBJ_S_PUBLIC_PREV_HARM], clash? Or, to put it less abstractly: what if pursuing the long-term goal of (vendor/community) education conflicts with the short-term goal of not contributing to people getting harmed?
Simple example: are we supposed (or even morally obliged) to disclose vulnerabilities in a medical device (maybe after having tried to get in contact with the vendor several times and on several channels, without luck)? This might put patients in danger (and the devices possibly can’t be patched anyway, for regulatory reasons). On the other hand: whom does it help if we just sit on the information? Should we try to go through other channels? If so, which ones? etc.
As I stated in the introduction, we don’t have an easy answer for this type of situation with conflicting objectives.
Right now we handle those on a case-by-case basis, like the “AVM FRITZ!Box” story or our research results with regard to certain alarm systems (if you’re waiting for the sequel to this post I have to disappoint you: we have decided to follow a different path here and hence it won’t be published anytime soon, for reasons that become clear from the above discussion). We even established an ethics committee at ERNW about two years ago, not least in order to resolve this type of dilemma. It can be consulted by every member, and it is entitled to provide a recommendation that is considered binding for everybody, including management.
Still, we keep thinking there might be better/more suitable ways of vulnerability handling (and there are probably several other researchers facing the same type of questions).
What could alternatives look like? These include:
- don’t do anything with vulnerabilities we discover and “just sit on them”, maybe for a certain period of time imposed by some governing rules we have to come up with, maybe “indefinitely”.
- go full disclosure.
- go through a broker (which saves energy & time, too. Furthermore, this could bring in money to be used for additional Troopers student invitations, the Troopers charity fund or just some more nice equipment for the lab. I’m sure the guys would come up with plenty of ideas…).
- only report to the vendor once there’s a bug bounty program (alternatively “drop 0day” as our old buddy Michael Ossmann suggested here).
- perform full disclosure and combine it with going through media/the press (again this could save energy & time, and it might even increase the reach, hence subsequently contribute to the objective of “public education”).
- hand over everything to something like a “national clearing house”.
- something else…
For the moment we don’t find any of those particularly consistent with the overall objectives. Still we sense we have to develop an adapted approach to vulnerability disclosure, for the reasons outlined above. It’s just: what could that new approach look like?
We’re happy to receive any type of feedback. If nothing else, we hope to contribute to the ongoing (and, from some perspective, overdue) debate on vulnerability disclosure and the ethics of our field.
thanks for reading so far 😉 & everybody have a great day
Enno
I will join the “Workshop on Ethics in Networked Systems Research” held in London in August, and I hope both to contribute to the discussions there and to come back with a refined understanding of these issues.