Coordinated Vulnerability Disclosure: A Balanced Approach

The world of vulnerability disclosure encompasses, and affects, many different parties – security researchers, vendors, customers, consumers, and even random bystanders who may be caught in the blast radius of a given issue.  The security professionals who manage disclosures must weigh many factors when considering when and what to disclose. 

There are risks to disclosing an issue when there is no fix yet available, possibly making more malicious actors aware of the issue when those affected have limited options.  Conversely, there are also risks to not disclosing an issue for an extended period when malicious actors may already know of it, yet those affected remain blissfully unaware of their risk.  This is but one factor to be considered.

Researchers and Vendors

The relationship between security researchers and product vendors is sometimes perceived as contentious.  I’d argue that’s largely due to the exceptions that make headlines – because they’re exceptions.  When some vendor tries to silence a researcher through legal action, blocking a talk at a conference, stopping a disclosure, etc., those moves make for sensational stories simply because they are unusual and extreme.  And those vendors are clearly not familiar with the Streisand Effect.

The reality is that security researchers and vendors work together every day, with mutual respect and professionalism.  We’re all part of the security ecosystem, and, in the end, we all have the same goal – to make our digital world a safer, more secure place for everyone.  As a security engineer working for a vendor, you never want to have someone point out a flaw in your product, but you’d much rather be approached by a researcher and have the opportunity to fix the vulnerability before it is exploited than to become aware of it because it was exploited.

Sure, this is where someone will say that vendors should be catching the issues before the product ships, etc.  In a perfect world that would be the case, but we don’t live in a perfect world.  In the real world, resources are finite.  Every complex product will have flaws because humans are involved, especially products that have changed and evolved over time.  No matter how much testing you do, for any product of sufficient complexity, you can never be certain that every possibility has been covered.

Furthermore, many products developed 10 or 20 years ago are now being used in scenarios that could not be conceived of at the time of their design.  For example, the disintegration of the corporate perimeter and the explosion of remote work has exposed security shortcomings in a wide range of enterprise technologies.

As they say, hindsight is 20/20.  Defects often appear obvious after they’ve been discovered but may have slipped by any number of tests and reviews previously.  That is, until a security researcher brings a new way of thinking to the task and uncovers the issue.  For any vendor who takes security seriously, that’s still a good thing in the end.  It helps improve the product, protects customers, and improves the overall security of the Internet.

Non sequitur. Your facts are uncoordinated.                                     

When researchers discover a new vulnerability, they are faced with a choice of what to do with that discovery.  One option is to act unilaterally, disclosing the vulnerability directly.

From a purely mercenary point of view, they might make the highest return by taking the discovery to the dark web and selling it to anyone willing to pay, with no regard for the buyer’s intentions.  Of course, this option brings with it both moral and legal complications.  It arguably does more to harm the security of our digital world overall than any other option, and there is no telling when, or indeed if, the vendor will become aware of the issue so it can be fixed.

Another drastic, if less mercenary, option is Full Disclosure – aka the ‘Zero-Day’ or ‘0-day’ approach.  Dumping the details of the vulnerability on a public forum makes them freely available to all, both defenders and attackers, but leaves no time for advance preparation of a fix, or even mitigation.  This creates a race between attackers and defenders which, more often than not, is won by the attackers.  It is nearly always easier, and faster, to create an exploit for a vulnerability and begin distributing it than it is to analyze a vulnerability, develop and test a fix, distribute it, and then patch devices in the field.

Both approaches may, in the long term, improve Internet security as the vulnerabilities are eventually fixed.  But in the short and medium term they can do a great deal of harm to many environments and individual users, as attackers have the advantage and defenders are racing to catch up.  These disclosure methods tend to be driven primarily by monetary reward, in the first case, or by some personal or political agenda, in the second case, such as dropping a 0-day to embarrass a vendor, government, etc.

Now, Full Disclosure does have an important role to play, which we’ll get to shortly.

Mutual Benefit

As an alternative to unilateral action, there is Coordinated Disclosure: working with the affected vendor(s) to coordinate the disclosure, including providing time to develop and distribute fixes, etc.  Coordinated Disclosure can take a few different forms, but before I get into that, a slight detour.

Coordinated Disclosure is the current term of art for what was once called ‘Responsible Disclosure’, a term which has generally fallen out of favor.  The word ‘responsible’ is, by its nature, judgmental.  Who decides what is responsible?  For whom?  To whom?  The reality is it was often a way to shame researchers – anyone who didn’t work with vendors in a specified way was ‘irresponsible’.  There were many arguments in the security community over what it meant to be ‘responsible’, for both researchers and vendors, and in time the industry moved to the more neutrally descriptive term of ‘Coordinated Disclosure’.

Coordinated Disclosure, in its simplest form, means working with the vendor to agree upon a disclosure timeline and to, well, coordinate the process of disclosure.  The industry standard is for researchers to give vendors a 90-day period in which to prepare and release a fix before the disclosure is made, though this varies between programs – it may be as short as 60 days or as long as 120 days – and often includes modifiers for conditions such as active exploitation, Critical severity (CVSS) issues, etc.
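
To make that timeline logic concrete, here is a minimal sketch (in Python) of how a disclosure deadline might be computed.  The window lengths, modifier rules, and function name are illustrative assumptions, not any particular program’s policy.

```python
from datetime import date, timedelta

# Hypothetical policy knobs -- real programs define their own windows
# and modifiers; these values are illustrative only.
BASE_WINDOW_DAYS = 90          # common industry default
ACTIVE_EXPLOITATION_DAYS = 7   # some programs shorten the window drastically
GOOD_FAITH_EXTENSION_DAYS = 30 # grace period for a vendor making real progress

def disclosure_deadline(reported: date,
                        actively_exploited: bool = False,
                        vendor_needs_extension: bool = False) -> date:
    """Compute a tentative public-disclosure date for a reported vulnerability."""
    window = BASE_WINDOW_DAYS
    if actively_exploited:
        # Attackers already know; waiting mostly helps them.
        window = min(window, ACTIVE_EXPLOITATION_DAYS)
    elif vendor_needs_extension:
        # Reward good-faith effort on a complex, multi-product fix.
        window += GOOD_FAITH_EXTENSION_DAYS
    return reported + timedelta(days=window)

print(disclosure_deadline(date(2024, 5, 7)))                           # 2024-08-05
print(disclosure_deadline(date(2024, 5, 7), actively_exploited=True))  # 2024-05-14
```

Real deadlines are, of course, negotiated by humans; the point is simply that the base window is a default which specific conditions can shrink or extend.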

There is also the option of private disclosure, wherein the vendor directly notifies only its customers.  This may happen as a prelude to Coordinated Disclosure.  There are tradeoffs to this approach – on the one hand it gives end users time to update their systems before the issues become public knowledge, but on the other hand it can be hard to notify all users simultaneously without missing anyone, which would put those left unaware at increased risk.  The more people who know about an issue, the greater the risk of the information finding its way to the wrong people, or of premature disclosure.

Private disclosure without subsequent Coordinated Disclosure has several downsides.  As already stated, there is a risk that not all affected users will receive the notification.  Future customers will have a harder time learning of the issues, and scanners and other security tools will often fail to detect them, as they’re not in the public record.  The lack of CVE IDs also means there is no universal way to identify the issues.  There’s also a misguided belief that private disclosure will keep the knowledge out of the wrong hands; this is just ‘security by obscurity’, and rarely effective.  It’s more likely to instill a false sense of security, which is counter-productive.
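
As a rough illustration of why a CVE ID matters: most security tooling correlates findings on the standard CVE-YYYY-NNNN identifier format, and an issue disclosed only privately has no such handle.  The advisory records in this Python sketch are invented for the example.

```python
import re

# CVE IDs follow a well-known pattern: CVE-<year>-<4 or more digits>.
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

# Invented records: a public advisory carries a CVE ID that scanners,
# SIEMs, and ticketing systems can all correlate on; a privately
# disclosed issue has only a vendor-internal name.
advisories = [
    {"id": "CVE-2024-0001",      "product": "ExampleGateway", "fixed_in": "2.1.4"},
    {"id": "VENDOR-INTERNAL-42", "product": "ExampleGateway", "fixed_in": "2.1.5"},
]

for adv in advisories:
    if CVE_PATTERN.match(adv["id"]):
        print(f"{adv['id']}: trackable across tools, fixed in {adv['fixed_in']}")
    else:
        print(f"{adv['id']}: no CVE ID, invisible to most automated tooling")
```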

Some vendors may have bug bounty programs which include detailed reporting procedures, disclosure guidelines, etc.  Researchers who choose to work within the bug bounty program are bound by those rules, at least if they wish to receive the bounty payout from the program.  Other vendors may not have a bug bounty program but still have ways for researchers to officially report vulnerabilities.  If you can’t find a way to contact a given vendor, or aren’t comfortable doing so for any reason, there are also third-party reporting programs such as the Vulnerability Information and Coordination Environment (VINCE), or you can report directly to the Cybersecurity & Infrastructure Security Agency (CISA).  I won’t go into detail on these programs here, as that could be an article of its own – perhaps I will tackle that in the future.

As an aside, at the time of writing, F5 does not have a bug bounty program, but the F5 SIRT does regularly work with researchers for coordinated disclosure of vulnerabilities.  Guidelines for reporting vulnerabilities to F5 are detailed in K4602: Overview of the F5 security vulnerability response policy.  We do provide an acknowledgement for researchers in any resulting Security Advisory.

Carrot and Stick

Coordinated disclosure is not all about the researcher; the vendor has responsibilities as well.  The vendor is being given an opportunity to address the issue before it is disclosed.  They should not see this as a burden or an imposition; the researcher is under no obligation to give them this opportunity.  This is the ‘carrot’ being offered by the researcher.

The vendor needs to act with some urgency, addressing the issue in a timely fashion and delivering a fix to their customers before disclosure.  The researcher is not to blame if the vendor is given a reasonable time to prepare a fix and fails to do so.

The ‘90-day’ guideline should be considered just that, a guideline.  The intention is to ensure that vendors take vulnerability reports seriously and make a real effort to address them.  Researchers should use their judgment, and if they feel that the vendor is making a good faith effort to address the issue but needs more time to do so, especially for a complex issue or one that requires fixing multiple products, etc., it is not unreasonable to extend the disclosure deadline.

If the end goal is truly improving security and protecting users, and all parties involved are making a good faith effort, reasonable people can agree to adjust deadlines on a case-by-case basis.  But there should still be some reasonable deadline; remember that it is an undisclosed vulnerability which could be independently discovered and exploited at any time – if not already – so a little firmness is justified.  Even good intentions can use a little encouragement.

That said, the researcher also has a stick for the vendors who won’t take the carrot – Full Disclosure.  For vendors who are unresponsive to vulnerability reports, who respond poorly to them (with legal threats, etc.), or who do not make a good faith effort to fix issues in a timely manner, this is the alternative of last resort.  If the researcher has made a good faith effort at Coordinated Disclosure but has been thwarted by the vendor, then the best way to get the word out about the issue is Full Disclosure.

You can’t coordinate unless both parties are willing to do so in good faith.  Vendors who don’t understand that it is in their best interest to work with researchers may eventually learn that lesson after dealing with Full Disclosure a few times.  Full Disclosure is rarely, if ever, a good first option, but if Coordinated Disclosure fails, and the choice becomes No Disclosure vs. Full Disclosure, then Full Disclosure is the best remaining option.

In All Things, Balance

Coordinated disclosure seeks to balance the needs of the parties mentioned at the start of this article – security researchers, vendors, customers, consumers, and even random bystanders.  Customers cannot make informed decisions about their networks unless vendors inform them, and that’s why we need vulnerability disclosures.  You can’t mitigate what you don’t know about.  And the reality is no one has the resources to keep all their equipment running the latest software release all the time, so updates get prioritized based on need.
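
To show that prioritization in miniature, here is a toy Python heuristic that ranks pending updates by CVSS score weighted by exposure.  The systems, scores, and weighting are invented assumptions; real prioritization weighs many more factors, such as exploit availability, asset criticality, and compensating controls.

```python
# Invented inventory of systems awaiting a patch.
pending_updates = [
    {"system": "edge-proxy",   "cvss": 9.8, "internet_facing": True},
    {"system": "build-server", "cvss": 7.5, "internet_facing": False},
    {"system": "lab-switch",   "cvss": 4.3, "internet_facing": False},
]

def urgency(update: dict) -> float:
    # Weight Internet-facing systems more heavily: they are reachable
    # by attackers the moment an exploit starts circulating.
    return update["cvss"] * (2.0 if update["internet_facing"] else 1.0)

# Patch the most urgent systems first.
for update in sorted(pending_updates, key=urgency, reverse=True):
    print(f"{update['system']}: urgency {urgency(update):.1f}")
```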

Coordinated disclosure gives the vendor time to develop a fix, or at least a mitigation, and make it available to customers before the disclosure.  This allows customers to respond rapidly to the disclosure and patch their networks before exploits are widely developed and deployed, keeping more users safe.

The coordination is about more than just the timing: vendors and researchers will work together on the messaging of the disclosure, often withholding details in the initial publication to provide time for patching before disclosing information which makes exploitation easier.  Crafting a disclosure is always a balancing act between disclosing enough information for customers to understand the scope and severity of the issue and not disclosing information which is more useful to attackers than to defenders.

The Needs of the Many

Coordinated disclosure gets researchers the credit for their work, allows vendors time to develop fixes and/or mitigations, gives customers those resources to apply when the issue is disclosed to them, protects customers by enabling faster patching than other disclosure methods, and ultimately results in a safer, more secure Internet for all.  In the end, that’s what we’re all working for, isn’t it?

I encourage vendors and researchers alike to view each other as allies and not adversaries.  And to give each other the benefit of the doubt, rather than presume some nefarious intent.  Most vendors and researchers are working toward the same goals of improved security.  We’re all in this together.

If you’re looking for more information on handling coordinated disclosure, you might check out The CERT Guide to Coordinated Vulnerability Disclosure.

 

