How to report a vulnerability in software securely
Software vendors should provide channels that encourage vulnerabilities to be reported and mitigated before they are exploited.
Hundreds of vulnerabilities affecting software, devices, operating systems or networks are publicly disclosed every day. In 2024 the record for disclosed vulnerabilities was broken again: more than 40,000, compared with around 29,000 in 2023, according to data collected by CVEdetails.
What does the vulnerability discovery process look like? It can be internal or external. In the first case, the developer of the software or device detects a vulnerability thanks to cybersecurity services such as security audits, penetration testing or Red Team exercises. In the second case, security analysts from outside the organization report a vulnerability they have identified, either directly or through reward programs, which are becoming increasingly common across companies of all kinds.
Below, we will explain how to report a vulnerability in software safely and effectively to help mitigate it before it is exploited.
1. To whom should a vulnerability be reported?
Once an analyst detects a weakness in software, they should report the vulnerability to:
- The vendor of the software in which the vulnerability was identified, so that it can take the necessary measures to fix it, for example by designing and releasing a security patch. Many companies now run Bug Bounty programs that encourage cybersecurity analysts to use their talents to detect and report vulnerabilities securely, rewarding the work and helping improve security. If an organization does not have a bounty program, it should publish the contact information needed for users or independent researchers to report a vulnerability properly.
- If the vendor cannot be contacted, or it drags its feet when it comes to mitigating the vulnerability, public bodies such as CERTs can be approached. For example, in Spain INCIBE-CERT operates a coordinated vulnerability disclosure policy, commonly known as CVD.
To whom should a detected vulnerability not be reported? To any third party unrelated to the vendor or to a trusted coordinating body, since they could use the information to exploit the flaw before it has been remediated.
Likewise, cybersecurity experts strongly advise against making a vulnerability in software public without first informing the vendor or a public body. Once a vulnerability is public, malicious actors can access all of its details, develop proofs of concept and exploit it before a patch is available.
Encrypted communication channels, such as PGP-encrypted email or reporting through a secure web form, are essential to prevent this critical information from falling into the wrong hands.
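Where a vendor publishes a PGP public key for its security contact, the report can be encrypted before it is sent. Below is a minimal sketch, assuming a local GnuPG installation and the third-party python-gnupg package; the key file name and report text are purely illustrative.

```python
import gnupg  # third-party package: python-gnupg (wraps a local GnuPG install)

# Assumption: the vendor's public key has been downloaded from its security
# page, e.g. as vendor-security.asc (illustrative file name).
gpg = gnupg.GPG()
with open("vendor-security.asc") as key_file:
    import_result = gpg.import_keys(key_file.read())

report = """Product: ExampleApp 2.3.1 (illustrative)
Summary: SQL injection in the login form
Steps to reproduce, impact and proof of concept follow...
"""

# Encrypt the report so only the holder of the vendor's private key can read it.
encrypted = gpg.encrypt(report, import_result.fingerprints, always_trust=True)
if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into an email
else:
    print("Encryption failed:", encrypted.status)
```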
2. What are the different ways to report a vulnerability?
Vulnerability reporting can be done confidentially by informing the affected software vendor or a public body. Broadly, security analysts who wish to report a vulnerability have three main ways of doing so:
- Private reporting. Researchers report to software vendors through private channels and leave it up to them to publish the details of the vulnerability. Bug Bounty programs require security analysts to follow this model in order to receive rewards.
- Full and public disclosure of the vulnerability. This is an extreme way to report a vulnerability because, as we pointed out in the previous section, it opens the door to attacks before the weakness is mitigated. When does a researcher choose this option? When it is evident that the software vendor is ignoring the information that has been reported and is not taking measures to solve the problem. Publishing all the information about a vulnerability then aims to force the vendor to develop a patch or, failing that, to give the community the information it needs to protect itself.
- Responsible and coordinated disclosure. Public organizations such as INCIBE in Spain or the Cybersecurity & Infrastructure Security Agency (CISA) in the United States have CVD policies to facilitate coordinated vulnerability reporting that does not jeopardize the security of the software or of the people and companies that use it. Under this model, the details of a vulnerability remain private until a security patch has been developed and released to fix the problem; only then is the information made public. Some companies also have their own CVD policies to encourage security analysts to report vulnerabilities and give them room to mitigate them before they are made public. This model is essential when a vulnerability cuts across different products or manufacturers and the release of security patches must be coordinated.
3. What information should be included when a vulnerability is reported?
The report that a security analyst sends to the vendor of the affected software or to a public body must contain the data needed to identify, understand and mitigate the vulnerability. A vulnerability report may therefore include information such as:
- Evidence proving the existence of the vulnerability, such as screenshots, traffic captures or code snippets.
- The timeline of the vulnerability’s discovery and of how it was brought to the vendor’s attention.
- Details of the vendor, the software and the version affected by the vulnerability.
- As complete a description as possible of the vulnerability and its impact.
- The severity of the vulnerability. If possible, a score calculated with the CVSS rating system can be included (see the sketch after this list).
- The requirements that need to be met to exploit the vulnerability.
- If one exists, proof-of-concept code that allows the exploitation of the vulnerability to be reproduced.
- Suggestions on how to fix the problem, if the researcher has in-depth knowledge of the software.
- If desired, a time frame within which the vulnerability should be mitigated before the researcher decides to make it public.
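As an illustration of the CVSS item above, the sketch below computes a CVSS v3.1 base score for a sample vector string. The metric weights and formula come from the public CVSS v3.1 specification; the function only covers the unchanged-scope case and is a simplified illustration, not a full implementation.

```python
import math

# CVSS v3.1 base-metric weights (values for unchanged scope, per the specification).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def base_score(vector: str) -> float:
    """CVSS v3.1 base score; handles scope-unchanged (S:U) vectors only."""
    metrics = dict(part.split(":") for part in vector.split("/")[1:])  # skip "CVSS:3.1"
    if metrics["S"] != "U":
        raise ValueError("This sketch only handles unchanged scope (S:U)")
    w = {k: WEIGHTS[k][v] for k, v in metrics.items() if k in WEIGHTS}
    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]
    if impact <= 0:
        return 0.0
    # Round up to one decimal place, as the specification requires.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Example: an unauthenticated, network-reachable flaw with full impact on
# confidentiality, integrity and availability.
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```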
Sometimes the initial report is complete enough for the vendor to develop a security patch on its own. More often, however, there is an ongoing exchange between the researcher and the company, from the initial report until the software update incorporating the patch is released.
4. The importance of the Red Team in anticipating external vulnerability reports
A software development company’s vulnerability detection strategy can be complemented by implementing a Bug Bounty program.
Although this measure can be of great help, it is essential to combine it with other proactive approaches, such as Red Team exercises, which anticipate attacks by:
- Testing a company’s technologies, processes and personnel.
- Simulating realistic and wide-ranging attacks to detect any vulnerabilities that malicious actors could exploit.
- Proposing recommendations to address the weaknesses identified and mitigate vulnerabilities in software or any other technological asset.
It is also critical that companies have the cybersecurity capabilities or services needed to manage vulnerabilities and address new ones as soon as they are identified.
In short, vulnerability identification is a critical activity for staying ahead of malicious actors. Software vendors therefore need secure channels through which security analysts can report vulnerabilities, as well as cybersecurity services that allow them to identify weaknesses before cybercriminals do.