We’re getting more vulnerabilities these days, but are we disclosing them responsibly? And what does that mean, anyway?
The National Vulnerability Database – a NIST site that documents security flaws as they emerge – added 7,903 software security flaws last year, compared to 5,186 in 2013. That’s a 52% increase. And Secunia, which publishes a report highlighting software vulnerabilities gathered from computers around the world, saw an 18% rise in discrete vulnerabilities across different applications in 2014.
Significantly, 83% of the vulnerabilities in the Secunia survey had a patch available on the day they were disclosed. That’s an improvement over prior years, but it still leaves almost one in five vulnerabilities with no patch. Perhaps more concerning, Secunia’s data suggests that if something isn’t patched on day one, it often doesn’t become a priority for the vendor further down the line.
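For what it’s worth, the headline figure checks out. Here’s a minimal sketch of the arithmetic, assuming “last year” refers to the 2014 count (consistent with the Secunia figure in the same paragraph); the variable names are mine:

```python
# Quick check of the year-on-year increase in NVD entries cited above.
nvd_2013 = 5186
nvd_2014 = 7903  # assumption: "last year" = 2014

increase = (nvd_2014 - nvd_2013) / nvd_2013
print(f"Increase: {increase:.1%}")  # roughly 52.4%, i.e. the ~52% quoted
```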
All of which raises an important question: what is the best way to disclose a vulnerability? Researchers and vendors typically sit along a spectrum of transparency.
Google’s Project Zero, which roots out zero-day bugs across the software industry, recently drew flak for the way it disclosed vulnerability information. Microsoft was upset that the team stuck to its 90-day disclosure policy and published details of a bug, even though Microsoft had asked to coordinate a slightly later release while it worked on a fix.
Others are more aggressive still. CERT allows 45 days, for example, while the Organization for Internet Safety sets out a 30-day policy. One of the seminal documents on responsible disclosure is Rain Forest Puppy’s RFPolicy, which gives a vendor five days to respond before the security researcher goes public with the information.
Some companies eschew the idea of deadlines altogether, suggesting a more collaborative approach to disclosure in which the security researcher and the vendor work together to release information in a way that doesn’t leave users exposed.
Microsoft’s Coordinated Vulnerability Disclosure approach doesn’t advance a particular disclosure deadline, for example, but instead simply asks that the researcher gives the vendor time to address the problem. If the vulnerability is already being exploited in the wild, then it suggests that the researcher and the vendor work together to release “early public vulnerability” information to customers.
So, we have a mixture of approaches. You can break them down by the length of the disclosure deadline, although the specific numbers can seem arbitrary. Another way to classify them is by flexibility: some deadlines are non-negotiable, while others can be extended if the vendor shows it is working on a fix. And some policies set no deadline at all, with vendors and researchers treating everything on a case-by-case basis.
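To make that spectrum concrete, here is a rough sketch of how the approaches could be modelled. It is illustrative only: the `DisclosurePolicy` structure, field names and the “extendable” flags are my own assumptions, and the day counts are simply the examples mentioned above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DisclosurePolicy:
    name: str
    deadline_days: Optional[int]  # None means no fixed deadline
    extendable: bool              # can the deadline move if a fix is in progress?


# Illustrative examples only, loosely based on the policies discussed above.
policies = [
    DisclosurePolicy("Project Zero (original)", deadline_days=90, extendable=False),
    DisclosurePolicy("CERT", deadline_days=45, extendable=True),
    DisclosurePolicy("Coordinated (e.g. Microsoft CVD)", deadline_days=None, extendable=True),
    DisclosurePolicy("Full disclosure", deadline_days=0, extendable=False),
]

for p in policies:
    deadline = "no fixed deadline" if p.deadline_days is None else f"{p.deadline_days} days"
    print(f"{p.name}: {deadline}, extendable={p.extendable}")
```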
And then, of course, there’s full disclosure, in which the researcher makes everything public immediately, giving the vendor no time to fix the bug at all. The argument there is that revealing everything at once levels the playing field by ensuring all actors have the information, rather than just the bad ones.
Derek Manky, global security strategist at Fortinet, designed the responsible disclosure policy for his PSIRT (product security incident response team) in 2006-07. He built multiple action paths into the plan, based on different variables, but he errs on the side of longer disclosure times.
“Five days is way too short to allow response from a vendor, especially when the issue can be complicated and needs further investigation from multiple product teams,” he says. “Software is very complex nowadays and incorporates many libraries that are vulnerable, so the investigation time can take a bit longer.”
In rare cases when vendors don’t respond at all, his policy suggests releasing minimal details about the issue with no proof of concept.
“The more likely example is when vendors do respond back but it may still take 12-24 months until the issue is fixed,” he says.
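As a rough illustration of what “multiple action paths based on different variables” might look like in practice, here is a hedged sketch. The function name, branches and thresholds are hypothetical, chosen to loosely mirror the scenarios Manky describes; this is not Fortinet’s actual policy.

```python
def next_disclosure_step(vendor_responded: bool,
                         days_since_report: int,
                         exploited_in_wild: bool) -> str:
    """Hypothetical decision logic for a researcher's disclosure policy.

    The branches loosely mirror the scenarios described above; the specific
    thresholds are illustrative, not any vendor's real policy.
    """
    if exploited_in_wild:
        # Active exploitation shifts the balance towards warning users early.
        return "coordinate early public advisory with the vendor"
    if not vendor_responded and days_since_report > 90:
        # No response at all: release minimal details, with no proof of concept.
        return "publish minimal details without a proof of concept"
    if vendor_responded and days_since_report > 365:
        # Vendor is engaged, but the fix is taking 12-24 months.
        return "agree an extended timeline and track progress"
    return "continue private coordination with the vendor"


print(next_disclosure_step(vendor_responded=False,
                           days_since_report=120,
                           exploited_in_wild=False))
```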
On the other hand, Google made an important point when it decided to relax its strict 90-day policy, adding a grace period that gives vendors that have committed to fixing a bug extra time to complete the work. The Project Zero team pointed out that the offensive community may spend far more resources on finding flaws than the defensive community does, meaning that by the time a defensive researcher discloses a flaw, attackers may already be exploiting it. And it isn’t always easy to detect exploits in the wild.
Manky warns that things are changing: tomorrow’s devices and use cases will be different from yesterday’s. “Many new products, including embedded devices that are becoming products (like medical devices), simply come shipped with many vulnerabilities that are low hanging fruit (due to a weak SDL cycle),” he says. “Worse, they do not have an update mechanism (meaning the patching process can take quite a bit longer) nor a PSIRT team in place to fix the issue in the first place.”
There’s no set standard for this stuff, which is why spats occasionally erupt online. So, should there be one? And if so, which organisation should define it?