Code of Ethics for Cybersecurity

The battle between black hats and white hats will never end, but do we need some kind of Geneva Convention for how it’s waged? Security pros must often engage with black hats, either directly or indirectly, and the rules of engagement aren’t always clear. Sometimes those engagements also raise ethical issues.

Take Facebook’s approach to password protection, for example. We’ve already talked about how inadequate passwords are. We know that companies use various techniques to move beyond password security – or at least to bolster it – but Facebook’s solution is perhaps among the most controversial.

At Web Summit in Lisbon, the social network’s chief security officer Alex Stamos said that the company buys lists of passwords on the black market. It then checks them against the hashed passwords that it stores on behalf of its own users. When it finds a match, it contacts the affected users to tell them that their passwords need changing.
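To make the mechanics concrete, here’s a minimal sketch of how a leaked plaintext password can be checked against a salted hash without the service ever handling stored plaintext. The hashing scheme (PBKDF2-HMAC-SHA256), the sample records and the password_is_leaked helper are illustrative assumptions; Facebook hasn’t published the details of its own implementation.

```python
import hashlib
import hmac

# Hypothetical stored credential records: username -> (salt, password hash).
# The scheme (PBKDF2-HMAC-SHA256, 200,000 iterations) and the sample data are
# assumptions for illustration, not Facebook's actual setup.
ITERATIONS = 200_000

stored = {
    "alice": (
        b"per-user-salt-for-alice",
        hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                            b"per-user-salt-for-alice", ITERATIONS),
    ),
}

# Plaintext credentials as they might appear in a purchased dump.
leaked_dump = [
    ("alice", "correct horse battery staple"),
    ("bob", "hunter2"),
]

def password_is_leaked(username: str, leaked_password: str) -> bool:
    """Hash the leaked plaintext with the user's salt and compare to the stored hash."""
    record = stored.get(username)
    if record is None:
        return False
    salt, stored_hash = record
    candidate = hashlib.pbkdf2_hmac("sha256", leaked_password.encode("utf-8"),
                                    salt, ITERATIONS)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, stored_hash)

for user, password in leaked_dump:
    if password_is_leaked(user, password):
        print(f"{user}: leaked password matches the stored hash, prompt a reset")
```

Because each user has their own salt, every candidate password has to be hashed once per account it’s tested against, which is what makes bulk checks like this expensive for outsiders but feasible for the service that holds the salts.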

Is it ethical to give money to cybercriminals online, even if your intentions are honourable? The practice is sometimes looked down upon by security pros, says Gizmodo, and CSO Online has interviewed some of those pros, who argue that it reinforces the criminal business model.

Perhaps they have a point. The money that the criminals take acts as a reward for their efforts, however small, and funds their lifestyle. It also motivates them to continue with their activities.

Brendan O’Connor, a technology lawyer who spoke at SecTor 2016 last month, disagrees.

“I can’t imagine that a single marginal purchase ‘fund[s] their lifestyle’,” he argues. “And this purchase devalues what other people then buy, and devalues the entire password-selling industry, when Facebook uses the password dump to protect users.”

This isn’t the only scenario in which people have seen fit to pay black hats. Crypto-ransomware that locks up files on victims’ machines has prompted several individuals and companies to pay criminals for the recovery of their files.

This, too, has sparked ethics debates. Paying ransomware criminals not only encourages their business model, but helps pay for it. While desperate circumstances call for desperate measures, experts have reported cases of sysadmins paying for the return of their files even though recovery from backups would have been possible, because it’s the “cheaper option”.

Surveys seem to bear this out. ThreatTrack asked 250 security pros and found that almost a third would negotiate with ransomware blackmailers.

Dancing with the devil

Just how close should we get to black hats when trying to secure our own systems? In 2015, AlienVault surveyed 1107 people at the RSA conference to get a sense of what they were prepared to do. Around half of the respondents said that they associate with black hats or visit hacker forums to get the information they need.

The statistical validity of such surveys is limited because they’re rarely conducted with proper probability sampling, but interpreted broadly, the results suggest that security pros will often deal with online criminals when it’s to their advantage.

In the past, cybersecurity organizations have ventured even further into murky territory. WabiSabiLabi springs to mind. In the 2000s, this marketplace began allowing cybersecurity pros to sell zero-day flaws to others in a kind of online cybersecurity bazaar.

Vendors often pay for security bugs – Facebook, Google and others run bug bounty programs, and TippingPoint’s Zero Day Initiative pays for flaws covering a variety of products. These initiatives use the research to plug gaps in online services or to build protection against the exploits into their own security products.

The difference with WabiSabiLabi was that as a marketplace it allowed the flaws to be sold to the highest bidder. The company said that it would vet the buyers, but as people pointed out at the time, it would be easy to buy the exploits via a front man.

Still further along the continuum are those cybersecurity researchers who begin using black hat techniques themselves. Again, even though their actions may be well-intentioned, they fall into ethically grey areas.

We’ve seen researchers creating supposedly benevolent worms that try to ‘fix’ your systems without your permission. We’ve had researchers swarming botnets so heavily with ‘attacks’ that they end up running into each other and each other’s work [PDF]. One ‘guerrilla researcher’ created his own botnet to measure just how big the internet was. We’ve also seen academics allegedly attacking the Tor network.

At some point, security researchers will cross a legal line, and legal lines are at least more clearly defined. Ethics guidelines in cybersecurity research, though, need a lot of work. There doesn’t seem to be a common code of conduct; O’Connor points to the R00tz Asylum rules as the only thing that comes close.

The ethical line in 2016 seems to be whatever a researcher or their employer feels happy with. As the issues that we’re facing in cybersecurity continue to evolve and become more complex, these blurred boundaries may become increasingly problematic.