What exposed docker registries tell us about cloud deployments

If you’ve delved into cloud technologies at all, you’ve probably run across the concept of containers. They promise to streamline your software development, but they come with their own dark lining. This week, Palo Alto Networks’ Unit 42 research team published some shocking security research about container registries that should give us pause about how we develop for the cloud.

Containers offer all the virtualization capabilities of virtual machines, but in a more nimble package. Whereas a VM recasts an entire computer in software, enabling you to install a complete virtualized operating system from the ground up, a container uses the host operating system’s kernel for its core services, sharing it with other containers. The container packages only the bare minimum it needs to run a software service: the application code and any dependencies, such as software libraries and environment variables.
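You can see that kernel-sharing arrangement for yourself. The sketch below uses the Docker SDK for Python (the `docker` package) to compare the host’s kernel release with what a throwaway Alpine container reports; on a Linux host running Docker natively, the two match, because the container never boots a kernel of its own. The image choice and the SDK usage here are illustrative, not anything drawn from the Unit 42 research.

```python
# A minimal sketch assuming a local Docker daemon on a Linux host and
# the Docker SDK for Python (pip install docker). The alpine image is
# just a convenient, tiny example.
import platform

import docker

client = docker.from_env()

# The kernel release as seen from the host.
host_kernel = platform.release()

# Run `uname -r` in a throwaway container. Containers share the host
# kernel rather than booting their own, so the two values agree.
container_kernel = client.containers.run(
    "alpine:3", "uname -r", remove=True
).decode().strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
```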

This bare-bones approach to software packaging makes for a nimble virtual application that is fast to spin up and shut down. It also makes it easy to create container images that you can distribute and reuse. The cool thing about container images is that you can use them to build other, more complicated images. For example, a simple Python container image could form the foundation for several other images containing more specialized Python projects.
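To make the layering concrete, here’s a hedged sketch, again using the Docker SDK for Python, that builds a more specialized image on top of the public python base image. The package choice and the tag name are purely illustrative.

```python
# A sketch of image layering: each Dockerfile instruction adds a layer
# on top of the python:3.11-slim base, so several specialized images
# can share the same foundation. Tag and package are hypothetical.
import io

import docker

client = docker.from_env()

dockerfile = io.BytesIO(b"""
FROM python:3.11-slim
RUN pip install --no-cache-dir pandas
CMD ["python"]
""")

# build() returns the built image plus an iterator over the build logs.
image, build_logs = client.images.build(
    fileobj=dockerfile, tag="example/python-pandas:latest", rm=True
)
print(image.tags)  # ['example/python-pandas:latest']
```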

Docker is by far the most popular container system on the market today. Head over to the company’s public registry, Docker Hub, and you’ll find thousands of images offering everything from basic data science tools through to server-based RSS readers. Docker Hub isn’t the only registry that companies can use to pull Docker images, though. They can also set up their own Docker registries containing internally developed images for their own developers to exchange and reuse.
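Standing up a private registry is almost trivially easy, which is part of the story here. The sketch below runs the official registry:2 image locally and pushes a retagged public image into it; the names and port are illustrative. Note that out of the box this registry requires no authentication at all, which is exactly the kind of default that turns dangerous when the port ends up facing the internet.

```python
# A sketch of a private registry: the official registry:2 image, run
# locally with no authentication configured (its default behavior).
# Names, tags, and the port mapping are illustrative.
import docker

client = docker.from_env()

# Start the registry, listening on localhost:5000.
client.containers.run(
    "registry:2", detach=True, ports={"5000/tcp": 5000}, name="local-registry"
)

# Retag a public image into the private registry's namespace and push it.
image = client.images.pull("alpine:3")
image.tag("localhost:5000/internal/alpine", tag="3")
client.images.push("localhost:5000/internal/alpine", tag="3")
```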

The problem lies in how companies set up the infrastructure to support their container development and deployment environments. According to Palo Alto, organizations are misconfiguring their registries, mishandling network access controls and exposing them to the public internet. Its research found 15,887 unique versions of 2,956 applications exposed online. The companies exposing them ranged from research institutes through to retailers, news media organizations, and technology companies.

The researchers found the vulnerable registries by using the online search engines Shodan and Censys to look for a response header that all Docker registries return to a standard API query. The team then tried four things with each registry it found: check the version, pull an image, push an image, and delete an image. The researchers were able to craft these requests without actually following through on the operations, so they didn’t tamper with the online registries they found (an important ethical consideration there).
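Unit 42 hasn’t published its exact queries, but the public registry HTTP API v2 makes it easy to see how such a probe would work. Every v2 registry answers `GET /v2/` with a `Docker-Distribution-Api-Version` header, which is the kind of fingerprint search engines like Shodan and Censys can index. The sketch below performs only the harmless read-only checks, against a placeholder host.

```python
# A hedged sketch of a registry probe using the documented registry
# HTTP API v2. The target host is a placeholder; this performs only
# read-only checks, not the push or delete tests.
import requests

TARGET = "http://registry.example.com:5000"  # hypothetical exposed host

# The version check: a 200 response means no authentication is needed,
# and the header below fingerprints the service as a Docker registry.
resp = requests.get(f"{TARGET}/v2/", timeout=5)
print(resp.status_code)
print(resp.headers.get("Docker-Distribution-Api-Version"))  # "registry/2.0"

# If the registry is open, enumerate its repositories and their tags.
if resp.status_code == 200:
    catalog = requests.get(f"{TARGET}/v2/_catalog", timeout=5).json()
    for repo in catalog.get("repositories", []):
        tags = requests.get(f"{TARGET}/v2/{repo}/tags/list", timeout=5).json()
        print(repo, tags.get("tags"))
```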

The project found 941 Docker registries exposed to the internet, 117 of them accessible without authentication. Of those accessible registries, 80 allowed pull operations, 92 allowed push operations, and seven allowed deletion.

That’s worrying for several reasons. Exposing your container images online is just as bad as exposing your internal source code, if not worse. According to the researchers, images can also contain sensitive business data, making even a simple image pull damaging enough. Push access could enable an attacker to tamper with an image’s code and upload the compromised version back to the registry, potentially compromising a company’s internal applications. And the harm of a successful deletion is obvious.

This isn’t the first project to hunt down exposed Docker registries. Another security researcher conducted a similar project in May 2019, discovering 94 open registries online and even earning himself a bounty or two.

The infrastructure isn’t the only part of a Docker stack that can expose an organization to attack. The other part is the content of the images themselves. Research conducted in February 2019 by the vulnerability management company Snyk found that the ten most popular Docker images each contained at least 30 vulnerabilities. Many of those vulnerabilities stem from indirect dependencies, a common problem in open source development: libraries pulled in by the libraries you actually chose, which are hard to track down and get rid of.
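One way to appreciate how much of an image you never wrote yourself is simply to list what it ships. This hedged sketch dumps the OS and Python packages baked into a stock base image; real scanners such as Snyk’s go far deeper, but the raw package count alone makes the indirect-dependency problem tangible.

```python
# A sketch that lists what a stock base image ships, assuming a local
# Docker daemon. The image is illustrative; every package listed below
# arrives via the base image, not via your own code.
import docker

client = docker.from_env()

# OS packages pulled in by the Debian-based base image.
os_packages = client.containers.run(
    "python:3.11-slim",
    ["dpkg-query", "-W", "-f", "${Package} ${Version}\\n"],
    remove=True,
).decode()

# Python libraries preinstalled in the image.
py_packages = client.containers.run(
    "python:3.11-slim", ["pip", "freeze"], remove=True
).decode()

print(os_packages)
print(py_packages)
```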

Is this Docker’s fault? Hardly. The company even developed the Docker Trusted Registry, which helped manage image workflows and provided an optional vulnerability scanning service, before Mirantis acquired its enterprise business in November 2019. However, that requires organizations to invest actual money in the Docker Enterprise product that contained it, and to invest actual time in learning how to use it.

There’s the rub with containers, and indeed with any form of cloud-spawned easy development on-ramp. These technologies make it easy for developers and admins alike to spin up complex and highly functional application deployments (you can often do it in a single command). But they also make it easier to make mistakes, often because a developer will have no idea what they’re actually doing.

Abstraction has become a poisoned chalice for organizations that want fast results. It lowers the barriers to entry, but it also makes it less likely that developers will understand what’s happening under the hood. By all means drive DevOps disciplines into your organization to benefit from the streamlined agility of the cloud, but don’t do it at the expense of diligence and a deep understanding of the underlying technology.