Whether you are a security researcher or an amateur hacker, if you find a security vulnerability in an application or website, you are often faced with the question: What do I do now?
When we conduct a penetration test, the answer is usually relatively simple: we document the vulnerability and send the report to our client after the project is finished. But what do we do when we find a security vulnerability outside of a client project?
When dealing with vulnerabilities, there are several interests that should be taken into account:
- The vendor might suffer reputational damage as a result of the vulnerability.
- The developers need time to reproduce and fix the vulnerability.
- Users of the software may need time to apply updates.
- People affected by the vulnerability have an interest in the vulnerability being fixed and being informed about the risk they have been exposed to.
And of course the finder of the vulnerability has interests of their own. You know your personal motivations best, but security researchers are often driven by money, recognition, or the feeling that their work has made the internet a little safer.
Some interests may weigh more heavily than others, depending on the situation. The Responsible Disclosure (or Coordinated Vulnerability Disclosure) approach has proven to be a good way to balance the interests.
It is completely understandable that you are curious when you discover a vulnerability and want to explore the implications. But be careful! Exploiting a vulnerability can quickly lead to legal and moral problems. It is therefore worthwhile to pause for a moment after you first suspect a vulnerability and consider how to responsibly assess its impact.
Is there a bug bounty programme for the affected application? The terms and conditions usually state what is considered acceptable behaviour and what is not. If the vulnerability has been discovered in a piece of software, it is advisable to set up a test installation in which you can safely explore the vulnerability. This is usually more difficult with web applications, but even then there are ways to minimise negative impacts. Avoid any actions that you expect to cause damage. Accessing other users’ data is off-limits in any case! Even if it can be tempting to have a working proof of concept or even an exploit, when in doubt, a reasonable suspicion is often enough to report a vulnerability.
It is important that you document what you did. This not only helps to fix the vulnerability, but also serves as proof that you did not act in bad faith. It also allows your contacts to understand your steps.
Reporting a vulnerability
After you have documented the vulnerability, the question is how to report it. For this, you need to do a little research. Some companies and organisations have central points of contact to which you can send reports about their products; Microsoft and Apache are examples. Other ways to find the appropriate contacts are a SECURITY.md file in the repository of an open source project, or a security.txt file for websites and web services.
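For illustration, a security.txt file (standardised in RFC 9116 and served under /.well-known/security.txt) might look like the following; the domain and addresses are placeholders:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en, de
Canonical: https://example.com/.well-known/security.txt
```

The Contact field tells you where to send your report; if an Encryption key is listed, use it so that details of the vulnerability stay confidential in transit.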
Important note: If the affected software or website is covered by a bug bounty programme and the vulnerability is reported through it, the terms of the bug bounty programme apply. In most cases, this means Private Disclosure, i.e. the disclosure of information about the vulnerability is entirely up to the software vendor.
If none of this helps, there are other options: for example, the contact channels in the website’s imprint or a ticket in the project’s bug tracker. Note, however, that such channels are usually not confidential. A report via one of these channels should therefore not contain details about the discovered vulnerability; a simple note that you have found a vulnerability and would like to get in touch with the responsible persons is sufficient.
At this point at the latest, it is also the right time to think about whether you want to report the vulnerability under your real name or under a pseudonym. Usually, the contact persons are grateful for your report, but in some cases, especially in the case of an escalation, a vendor may try to shoot the messenger (figuratively speaking). If you initially appear under your real name, you cannot change your mind later.
After you have reported a vulnerability, the first thing to do is to wait. If there is no response, you can ask again a few days later. Maybe you got the wrong contact or they are on holiday. In this case, you can try again through another contact person.
Hopefully someone will get back to you eventually. There may be follow-up questions about the reported vulnerability. Otherwise, you should receive feedback that the vulnerability has been fixed, or an explanation of why this was not possible. Once contact has been made, it is also the right moment to talk about the disclosure of the vulnerability and when it should take place.
But what can you do if no one answers? First of all, you should make sure that you have found the right contact person. If that doesn’t help, you can set a deadline after which you unilaterally disclose the vulnerability to increase the pressure.
The disclosure should not be about humiliating the provider. Although the publication should of course build up a certain pressure to act, it primarily enables the users of the affected application to make their own risk assessment.
It is difficult to establish a general deadline for the disclosure of a vulnerability. A good reference value is, for example, 90 days. Depending on how much time has passed since the first report, this can be shortened. In any case, however, the provider should have sufficient time to react to the vulnerability. In some cases, for example, if a vulnerability is already being actively exploited, a significantly shorter period may also be appropriate.
A few important points to keep in mind:
- Unfortunately, it happens from time to time that instead of fixing the vulnerability, a vendor threatens to or even takes legal action. This is not a desired outcome, but it does happen. Again, before you escalate, make sure you have documented your actions.
- Do not publish any information about the vulnerability before the set deadline has passed, not even teasers. There is a risk that the vulnerability will be blown out of proportion.
- Give the vendor enough time to respond to your message.
- Avoid unnecessarily inflating the vulnerability. From the outside, it is often difficult to give a realistic assessment of the vulnerability, and it makes the collaboration more difficult if the vendor or developers feel unjustly attacked.
At some point, the vulnerability will hopefully be fixed and you will want to publish the information about it.
As the name Coordinated Vulnerability Disclosure implies, disclosure should ideally be coordinated with the vendor of the software. Adhere to the agreements you have made, but this does not mean that you have to accept arbitrary terms.
If you have set the vendor a deadline, which has now expired, you must decide on an appropriate approach for full disclosure. Keep in mind that your credibility will suffer if you are driven by frustration or resentment because no one has responded to your messages. Try to be as professional as possible in your disclosure. And next time, devote your research to products from more cooperative vendors.
How you publish the vulnerability is up to you. You can publish it as a blog post, bug ticket or on a mailing list. If you have taken care to remain anonymous so far, you should also take care to do so when publishing. If you do not want to publish the vulnerability yourself, you can also use an intermediary. Possible intermediaries are journalists or other IT security experts, such as us. In that case, however, you should involve the intermediary at an early stage.
The information you publish about the vulnerability should be detailed enough so that a reader can assess the credibility of the report and possible implications. If there is no patch yet, you should be careful with working or almost working exploits. The goal of publishing should always be to protect the users.
If the vulnerability occurs in a piece of software, it can be assigned a CVE ID (Common Vulnerabilities and Exposures ID) to make it easier to talk about. Some vendors, such as Apache, will request the CVE ID if they accept your report. If not, you can request a CVE ID yourself.