The Flaw In Encryption Back Doors

Picture this: You’re a law enforcement agent, and you’ve collared a terrorist who says there will be an attack on a major city within the next 24 hours. Thousands might die. He doesn’t know the details, because his cell was compartmentalized. All he did was put the bomb together.

You have a phone recovered from a now-dead operative in another cell, and you’re sure it holds the information you need. That information is encrypted. Every minute that elapses brings you closer to disaster. Wouldn’t you want to get at those details?

These are the kinds of scenarios that make opponents of strong cryptography nod. According to FBI Director Christopher Wray, the FBI failed to get at content on 7,775 devices in the fiscal year to September 30th, 2017, even though it had proper warrants. That’s because those devices were encrypted, and the agency couldn’t break the encryption.

Those devices could hold information about murderers, terrorists and child predators planning to cause harm, or causing it right now. Wray calls the use of strong encryption a public safety issue.

Wray – and many other government representatives, including those in the UK – want encryption technologies that are friendlier to law enforcement and intelligence agencies. They want to be able to get at that information when they need it. In short, whatever euphemism they may use, they want a cryptographic back door.

No support for a crypto back door

Almost universally, though, cryptologists, cybersecurity experts and privacy advocates say this is a bad idea. While some of the SecTor speakers that we interviewed understood the need for government to get at data quickly in some cases, none of them supported a back door. Watch our video to find out why:

Their concerns? First, that a back door causes broad harm to the cybersecurity ecosystem. Crypto may be used by bad actors, but many people use it for legitimate reasons: to protect information that isn’t illegal, just sensitive.

If the feds can use crypto back doors to gain access to our information, then so could other state actors or cybercriminals, the argument goes.

Nevertheless, some governments have already legislated in this area. The UK government’s Investigatory Powers Act, passed in 2016, outlines the use of technical capability notices, which are government requests for user data.

In a confidential guidance document obtained by UK digital privacy advocate the Open Rights Group, the government requires communications providers to “remove electronic protection applied by or on behalf of the telecommunications operator to the communications or data”.

So while the government may not be explicitly mandating a back door, the implication is that communications providers must be able to get at the information somehow. The definition of “telecommunications operator” is broad, and seems to cover any company providing access to any kind of communication system (see section 261(10) of the Act). It would scoop up messaging software providers, for example.

That’s technically doable if a communications firm holds its customers’ encryption keys. But what about services using end-to-end encryption, where the end users hold their own keys? Those providers would have to build an explicit back door into their crypto implementation, or deliberately ship weak crypto with known flaws, which amounts to the same thing.
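
To see why, here’s a minimal sketch of the end-to-end idea in Python, using the cryptography package’s Fernet recipe with a single pre-shared key standing in for a real messenger’s key exchange (real protocols such as Signal’s use authenticated key agreement and ratcheting, so treat this as an illustration, not a protocol). The only point it makes is that the provider relaying the message never holds anything it could decrypt, warrant or no warrant.

```python
# Minimal sketch: in end-to-end encryption the key lives only on the two
# endpoints, so the provider in the middle only ever sees ciphertext.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

shared_key = Fernet.generate_key()          # known only to Alice and Bob

# Alice's device encrypts before anything leaves it.
ciphertext = Fernet(shared_key).encrypt(b"meet at 6pm")

# The provider stores and relays the ciphertext but holds no key.
# Handed a warrant, all it can produce is this opaque blob:
print(ciphertext[:40], b"...")

# Without the right key, decryption fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("provider (or anyone else without the key) cannot decrypt")

# Bob's device, holding the shared key, decrypts fine.
print(Fernet(shared_key).decrypt(ciphertext))
```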

The UK government already asked WhatsApp for a back door. The company said no. The FBI has repeatedly tussled with Apple, which refuses to provide a back door and is making its devices more difficult to crack, prompting one FBI official to call its employees “jerks”.

Even if governments could get companies to play ball, bad actors would simply switch to other publicly available tools, decoupled from any specific app. GPG is a free, open-source encryption suite implementing the OpenPGP standard, which grew out of Phil Zimmermann’s PGP – the program that the US government tried to stop him exporting. Tor is an anonymity network.

Vulnerabilities have been demonstrated in both of these privacy tools in the past, though. That raises another point: hacking. If governments can’t get companies to put back doors into crypto, they can still crack targets’ systems themselves. The UK is banking on this, and already does it, under a process it calls ‘equipment interference’. That’s hacking, to you and me, and there’s a whole section devoted to it in the Investigatory Powers Act.

Upstream attacks

There’s another worry facing privacy advocates: that governments may work back doors in further upstream. What happens if government agents gain influence over a standards body developing a crypto standard and introduce a mathematical back door that others don’t spot? That’s reportedly what the NSA did with Dual_EC_DRBG, a random number generator standard whose default constants allegedly gave the agency a way to predict its output. The generator later made its way into RSA’s BSAFE library as a default, which would have left its customers vulnerable.
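
To illustrate the shape of that alleged trapdoor, here’s a toy analogue in Python. It works in a multiplicative group modulo a prime rather than on an elliptic curve, and it skips the output truncation the real standard uses, so it’s a sketch of the idea rather than the actual construction: whoever knows the secret relationship between the two public constants can turn one output into the generator’s next state, and from there predict everything that follows.

```python
# Toy analogue of the alleged Dual_EC_DRBG trapdoor. Uses a multiplicative
# group mod a prime instead of an elliptic curve, with no output
# truncation - an illustration of the idea only.
p = 2**127 - 1          # a Mersenne prime; fine for a toy demo
Q = 3                   # first public constant baked into the "standard"
d = 0xDEADBEEF          # the designer's secret trapdoor
P = pow(Q, d, p)        # second public constant; secretly P = Q^d mod p

def drbg_step(state):
    """One step of the toy generator: emit an output, derive the next state."""
    output = pow(Q, state, p)
    next_state = pow(P, state, p)
    return output, next_state

def attacker_next_state(output):
    """Anyone who knows d can turn a single output into the next state,
    because output^d = Q^(state*d) = (Q^d)^state = P^state = next_state."""
    return pow(output, d, p)

state = 123456789
out1, state2 = drbg_step(state)
out2, _ = drbg_step(state2)

predicted_state = attacker_next_state(out1)    # uses only out1 and the trapdoor d
print(predicted_state == state2)               # True
print(drbg_step(predicted_state)[0] == out2)   # True: the next output is predicted
```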

It is still possible to inject flaws into standards and have them pass government tests. Researchers recently demonstrated BEA-1, a block cipher deliberately designed with a hidden back door, yet built to pass the standard NIST statistical tests. On the surface, it looks impregnable. Underneath, it’s intentionally broken.
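
The reason such test suites don’t catch this kind of thing is that they measure whether output looks random, not whether the design hides a secret. As a hedged illustration (this is not BEA-1, just a stand-in keystream built from SHA-256 in counter mode under a key the designer already knows), the NIST SP 800-22 frequency test happily passes output that the designer can reproduce at will:

```python
import hashlib
import math

# A deliberately "backdoored" keystream: SHA-256 in counter mode under a
# key the designer already knows. Hypothetical stand-in, not BEA-1 itself.
DESIGNER_KEY = b"known-to-the-designer"

def keystream(n_bytes, key=DESIGNER_KEY):
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

def monobit_p_value(data):
    """NIST SP 800-22 frequency (monobit) test; p >= 0.01 counts as a pass."""
    bits = "".join(f"{byte:08b}" for byte in data)
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

stream = keystream(4096)
print("monobit p-value:", monobit_p_value(stream))  # typically well above 0.01
# The stream looks statistically random, yet anyone holding DESIGNER_KEY
# can regenerate every byte of it - statistical tests say nothing about that.
```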

All of which raises a scarier proposition. Someone may eventually come along and prove that some of the encryption standards we’re all using today are mathematically flawed and open to attack. Then again, they may not. But even if no compromise is ever publicly unveiled, proving that those standards are 100% clean is far harder.
