The US government wants to improve the quality of open source software by making it easier to find vulnerabilities in it. The Department of Homeland Security hopes that better code reviews and bug bounties will help to reduce the security flaws in these projects, which are used extensively on the Internet by companies large and small.
It’s a worthy endeavour. Open source software (OSS) has suffered from security vulnerabilities for a long time. Heartbleed and Shellshock were the most notable recent ones, but a less publicized flaw in Apache Struts enabled attackers to remotely install back doors on web servers.
The DHS wants to fund projects to make static code analysis tools better. These scanners often aren’t adequate, warned Mike Pittenger, vice president of security strategy at Black Duck Software, whose tools scan code bases for open source components.
“Heartbleed was a buffer overflow, so in theory you could find this with either static or dynamic analysis, but the vulnerability was complex enough and so far down in the code,” he said, adding that there have been over 60 new vulnerabilities disclosed in the affected OpenSSL software since Heartbleed surfaced.
“How many times do you think that OpenSSL was tested through any of the automated tools?” he asked. “Thousands of times, but it still took a security researcher to uncover it.”
Can’t we just rely on these researchers to find the bugs, then? Conventional wisdom says so, but experts aren’t so sure. Linus’ Law, formulated by Eric Raymond in his seminal book on open source, The Cathedral and the Bazaar, holds that “given enough eyeballs, all bugs are shallow”: because open source software is transparent, someone will eventually spot a bug and squash it. Tod Beardsley, senior security research manager at Rapid7, disagrees.
“Unfortunately, this implies that bug fixing happens practically by accident, which is clearly not the case,” he said. “OSS can and does ship with bugs, and some of those bugs lead to security vulnerabilities.”
Quantifying open source bugs
Some people track those vulnerabilities with scanners, and one of the best snapshots of open source vulnerabilities is the Coverity Scan Report. This has been running since 2009, and the most recent one was the 2014 report, published in 2015. It analyzed 8,776 commercial software projects and 2,650 open source ones, finding a marked disparity between them: open source software had a defect density of 0.61 defects per 1,000 lines of code, compared to commercial software’s 0.76.
On the face of it, this makes OSS the more secure option, but those numbers are averages. In practice, the security landscape in open source is a little bumpier.
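To put those densities in concrete terms (Coverity measures defect density as defects per 1,000 lines of code), here is a rough back-of-the-envelope comparison; the million-line project size is a made-up example, not a figure from the report:

```python
# Defect density, as Coverity measures it: defects per 1,000 lines of code.
# The 0.61 and 0.76 figures are from the 2014 Coverity Scan Report; the
# project size below is purely illustrative.

def expected_defects(density_per_kloc: float, lines_of_code: int) -> float:
    """Estimate total defects for a code base of a given size."""
    return density_per_kloc * lines_of_code / 1000

PROJECT_SIZE = 1_000_000  # a hypothetical million-line code base

print(f"Open source: ~{expected_defects(0.61, PROJECT_SIZE):.0f} defects")  # ~610
print(f"Commercial:  ~{expected_defects(0.76, PROJECT_SIZE):.0f} defects")  # ~760
```

A gap of 150 defects per million lines is real but modest, which is why the averages alone don’t settle the security question.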
“If you look at the defect density in terms of the OWASP Top 10, a list of the most critical web application security flaws, commercial software is actually in better shape,” said Andreas Kuehlmann, senior vice president and general manager of the Synopsys Software Integrity Group. This group was formed after Synopsys acquired Coverity and some other security firms.
“This tells us that developers of commercial web applications are doing a better job eliminating the most critical security flaws,” he pointed out.
The other problem with Linus’ Law is that not all of the code gets the same level of scrutiny. “It certainly isn’t true across the board, as the ‘many eyes’ are not evenly distributed across all open source projects,” said Jacquelyn Rees Ulmer, professor of information systems at Iowa State University, who has contributed to open source vulnerability research.
The same is true of proprietary code, of course, and its opacity puts it at a further disadvantage. When only a closed team has access to the source code, the existence and status of bugs can remain hidden, and outsiders can’t fork the code to fix buggy versions themselves.
Dealing with dependencies
However, the transparency of many open source projects can also create its own problems, because it allows everyone to draw on and reuse everyone else’s code. A good example of the trouble that can cause came in March.
A disgruntled developer broke thousands of JavaScript projects by unpublishing 273 of his packages from the NPM registry. One of them, called left-pad, was used in thousands of online sites and applications, and its disappearance caused countless headaches before NPM Inc, the company that manages the registry, could republish the package and solve the problem.
Some of the projects broken by the left-pad debacle didn’t use the package directly, but it was a necessary part of some other software components that they used. This is typical of OSS, which frequently uses packages and libraries from other open source projects, creating a web of dependencies.
Similarly, security flaws may not show up in one open source package, but they may be part of a component that it uses, or part of an open source software project used in turn by that component. It can be difficult keeping track of all that.
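The web of dependencies described above can be sketched as a graph walk. The package names and graph below are hypothetical, but real resolvers such as npm and pip traverse dependency information recorded in each package’s manifest in essentially this way:

```python
# A toy sketch of transitive dependency resolution. The package names and
# graph are hypothetical; the point is that packages an application never
# declared directly still end up in its dependency tree.

from collections import deque

# Direct dependencies only -- the "web" emerges transitively.
DEPENDENCIES = {
    "my-web-app": ["ui-toolkit", "http-client"],
    "ui-toolkit": ["string-utils"],
    "string-utils": ["left-pad"],  # the app never asked for left-pad...
    "http-client": [],
    "left-pad": [],
}

def transitive_deps(package: str) -> set:
    """Breadth-first walk: every package the given one ultimately pulls in."""
    seen, queue = set(), deque(DEPENDENCIES.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDENCIES.get(dep, []))
    return seen

print(transitive_deps("my-web-app"))
# left-pad appears even though my-web-app never declared it, so a flaw
# (or a sudden unpublishing) there still breaks the build.
```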
Incidentally, a few days after the NPM debacle, a Google researcher found a vulnerability in NPM scripts that could enable malicious packages to create a self-replicating worm that infected other packages. NPM Inc downplayed the risks but admitted that it couldn’t guarantee the safety of packages available in its open source registry.
Companies that do use open source projects should therefore make every effort to understand their exposure. This means knowing where the OSS is in your own code base, because the chances are that some of your developers have filched a library from somewhere to meet a deadline.
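A low-tech first pass at that inventory is simply to walk your source tree looking for package-manager manifest files. The file names below are common conventions rather than an exhaustive list, and commercial tools such as Black Duck’s go much further by fingerprinting the code itself; this is only a starting-point sketch:

```python
# A minimal first-pass OSS inventory: walk a source tree for the manifest
# files that common package managers leave behind. The file names are
# well-known conventions, not an exhaustive list, and this finds declared
# dependencies only -- not libraries pasted directly into the tree.

import os

MANIFESTS = {
    "package.json",      # npm / Node.js
    "requirements.txt",  # pip / Python
    "Gemfile",           # Bundler / Ruby
    "pom.xml",           # Maven / Java
}

def find_manifests(root: str) -> list:
    """Return the path of every dependency manifest found under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name in MANIFESTS:
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Each manifest found is a thread into the dependency web, and a place to start checking which versions you actually ship.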
Another way to keep up is to become more actively engaged in these projects, rather than simply taking from them.
“While it certainly isn’t required, at least here in the US, it is a good idea if your business is heavily invested in the use of OSS software, if only out of self-interest,” said Rees Ulmer.
If your own staff can contribute to the health of a project by helping to ferret out security flaws in the code, then that will benefit everyone involved. Wasn’t that one of the fundamental points of open source, after all?