Popular Docker Images under Security Scrutiny

Juan Elosua Tomé · Borja Pintos Castro · 16 June, 2020

Docker is a widely used technology for quickly deploying self-contained applications, independent of the underlying operating system. It is very useful for both developers and administrators.

For developers, it provides agility in creating and testing complex technological architectures and their integration. Another crucial aspect of Docker’s success among developers is the certainty that their code will work on any other machine running Docker, thus eliminating the classic deployment issues on the target machine caused by differing environment configurations, dependencies, base software versions, etc.

For administrators, it simplifies the maintenance of virtual machines and the allocation of resources, because Docker containers are much lighter. A single image is enough to deploy as many containers as needed. But how secure are these images?

From the TEGRA cybersecurity centre in Galicia, we have carried out a study of the most popular Docker images. To do this, we used the DockerHub platform, the official image repository managed by Docker, from which any user can download an image instead of building one from scratch. For example, this would be an image of the MySQL database.
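As a minimal illustrative sketch (the tag and the password value are placeholders, not recommendations), a Dockerfile can simply inherit from the official mysql image on DockerHub instead of building a database image from scratch:

```dockerfile
# Illustrative only: inherit from the official mysql image on DockerHub
# instead of building a database image from scratch.
FROM mysql:8.0

# MYSQL_ROOT_PASSWORD is the documented variable of the official image;
# "example" is a placeholder value for this sketch.
ENV MYSQL_ROOT_PASSWORD=example
```

Note that pinning a tag such as `8.0` anchors the image to a version line, which is exactly the behaviour discussed below: convenient for reproducibility, but a source of ageing vulnerabilities.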

Popular Images with More Vulnerabilities

First, we obtained the list of the 100 most downloaded images from DockerHub as of 8 August 2019.
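The ranking itself is a simple sort by pull count. The sketch below uses made-up pull counts purely for illustration; the study used the live DockerHub figures at the time:

```python
# Minimal sketch of producing a "most downloaded" ranking. The image names
# are real DockerHub repositories, but the pull counts are made up here;
# the study used the live figures as of 8 August 2019.
sample = [
    {"name": "mysql", "pull_count": 9_000_000},
    {"name": "nginx", "pull_count": 12_000_000},
    {"name": "ubuntu", "pull_count": 11_000_000},
]

def top_images(repos, n=100):
    """Return repository names ordered by pull count, most popular first."""
    ranked = sorted(repos, key=lambda r: r["pull_count"], reverse=True)
    return [r["name"] for r in ranked[:n]]

print(top_images(sample, n=3))  # most downloaded first
```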

Afterwards, we analysed each image using Dagda, a tool that its creator, Elías Grande Rubio, defines as “a Docker security suite that allows both the static analysis of vulnerabilities of software components in Docker images and the analysis of dependencies of different application frameworks”.

Below are the 10 Docker images (of the 100 analysed) where Dagda found the greatest number of vulnerabilities:

Docker Image | No. of Downloads | Vulnerabilities

How is it possible that there are so many vulnerabilities in the most popular Docker images?

How Docker Works

If we think about how Docker works, we see that images are built from static layers.

As a result, the vulnerabilities of the lower layers are inherited by every image built on top of them.

We can assume that, on the day they created an image, the developers updated it as much as possible. However, images remain anchored to the moment they were built, and as time passes new bugs, vulnerabilities and exploits are discovered.
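The inheritance described above can be modelled as a union of sets, one per layer. This is a toy model with hypothetical CVE identifiers, not how Dagda actually represents its results:

```python
# Toy model of vulnerability inheritance across image layers (hypothetical
# CVE identifiers, not real scan results). Each layer contributes its own
# vulnerabilities, and an image built on top inherits all of them.
base_os = {"CVE-0001", "CVE-0002"}   # e.g. the CentOS base layer
runtime = {"CVE-0003"}               # e.g. a language runtime layer
app     = {"CVE-0004"}               # the application layer itself

def image_vulns(*layers):
    """An image's vulnerabilities are the union of all its layers'."""
    result = set()
    for layer in layers:
        result |= layer
    return result

# Every base-layer CVE is still present in the final image.
print(sorted(image_vulns(base_os, runtime, app)))
```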

Details of the Vulnerabilities Found

Now that we know Docker images work by layers and inheritance (vulnerabilities included), let us dissect the vulnerabilities found in depth. To do this, we obtained the Dockerfiles used to build the images and examined how the analysed images are composed. The following figure shows the inheritance scheme of the images with the most detected vulnerabilities:

It should be pointed out that, of the 10 most vulnerable popular images we analysed, most (6) inherit from CentOS 7. In the following sections we will analyse this in detail.

Detailed Analysis of CentOS-Based Images

Let us discuss the source of vulnerabilities in CentOS-based images. For each image, we subtract the vulnerabilities inherited from the CentOS base, resulting in the following table:

Docker Image | Vulnerabilities
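The subtraction behind the table above is a set difference between an image’s scan results and the base image’s scan results. The CVE identifiers below are hypothetical placeholders:

```python
# Hypothetical CVE sets illustrating the subtraction done for the table.
centos_base = {"CVE-1111", "CVE-2222", "CVE-3333"}
image_scan  = {"CVE-1111", "CVE-2222", "CVE-3333", "CVE-4444"}

specific  = image_scan - centos_base  # vulnerabilities introduced by the image
inherited = image_scan & centos_base  # vulnerabilities coming from the base OS

print(sorted(specific))   # the image's own vulnerabilities
print(len(inherited))     # how many were inherited from CentOS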

Now the origin of the vulnerabilities is more evident: which ones are specific to each image and which ones are inherited from the base operating system.


If we use Docker in our technology stack, it is important to have tools that help us assess the security of the images we use or build, whether free solutions such as Dagda, Anchore, Clair or Dockscan, or commercial solutions such as Docker Trusted Registry or Twistlock.

One functionality to consider in these tools is real-time container monitoring. This dynamic monitoring scans all events occurring in the running container and triggers an alert whenever there is suspicious activity.

Bear in mind that Docker images usually serve a very specific purpose for which they were built. Therefore, if an administrator installed new software inside a running container, that would be anomalous behaviour. For example, in a container running WordPress, it would be very strange for an administrator to install new packages.

To show how this works, we enabled real-time monitoring for a base image of ubuntu:18.04 and installed the git package. In the figure we can see how dynamic monitoring detects this behaviour and triggers the corresponding warnings.
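A rule-based check like the one triggered above can be sketched as follows. The event format is hypothetical, simplified for illustration; real monitoring tools consume events from the kernel or the Docker daemon:

```python
# Minimal sketch of rule-based runtime monitoring (hypothetical event
# format). Flags process executions that look like package installations,
# which are anomalous inside a single-purpose container.
SUSPICIOUS_BINARIES = {"apt", "apt-get", "yum", "dnf", "pip"}

def alerts(events):
    """Return every exec event that invokes a known package manager."""
    return [e for e in events
            if e["type"] == "exec" and e["binary"] in SUSPICIOUS_BINARIES]

events = [
    {"type": "exec", "binary": "mysqld"},    # expected workload, no alert
    {"type": "exec", "binary": "apt-get"},   # installing software -> alert
]
for e in alerts(events):
    print(f"ALERT: unexpected package manager run: {e['binary']}")
```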

In short, if we work with Docker, container analysis tools can help us embed security in our development lifecycle. These tools will reveal existing vulnerabilities so that we can analyse more thoroughly whether an image can really be compromised, both statically and through dynamic monitoring.

In any case, we now understand that the inherent nature of Docker images makes them likely to be anchored in time, so we must assess the impact of such vulnerabilities. The existence of a vulnerability is one thing; a vulnerability that can actually be exploited by an attacker is another; and something even more complicated (we hope) is that an attacker can exploit it remotely.

However, in the Docker world there are examples of vulnerabilities being exploited in the wild, such as the 2014 attack on dockerised ElasticSearch exploiting CVE-2014-3120, one of the first publicly recognised attacks on Docker images. Other examples are the well-known Heartbleed vulnerability (CVE-2014-0160) in the OpenSSL library, and Shellshock (CVE-2014-6271) in GNU Bash. These libraries used to be installed in many base images; in such cases, even if the deployed application itself were secure, the image would carry a remotely exploitable vulnerability.

Should These Images Be Used? Is the Risk Greater or Lesser?

Like all tools and software, these images should be used with caution. It is possible that, in development, in continuous integration or while testing an application against a database, vulnerabilities may not matter, as long as we only want to test functionality and those containers will be destroyed afterwards. Even so, it is necessary to monitor the environment and practice defence in depth. Our recommendations are:

  • For production use, verify that the application does not make use of the vulnerable libraries and that exploiting the vulnerability does not affect the nature of the application itself; also ensure that future updates to the application do not expose us.
  • The same recommendation applies to Docker as to any third-party software: we should only use containers from reliable sources. As an example of the risks of not following this recommendation, we can see this article describing the use of DockerHub with malicious images used for cryptomining.
  • Within a security-conscious development cycle, managing vulnerabilities and versions of all components of the product or software is a key task.
  • Minimise the exposure surface. An advantage of using Docker is that you can build images containing only the libraries needed for the application to work. For example, you could remove the shell so that no attacker could run commands, something that would be very complicated on a real server. These images are called distroless: they contain no shell, package manager or other programs expected in a standard distribution, resulting in smaller, less vulnerable images.
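A distroless image is typically produced with a multi-stage build: one stage compiles the application, and the final stage copies only the binary onto a distroless base. This is an illustrative sketch (the paths and build commands are hypothetical; `gcr.io/distroless/base` is one of Google’s distroless base images):

```dockerfile
# Stage 1: build the application in a full-featured image.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN go build -o /app

# Stage 2: copy only the binary onto a distroless base, which contains
# no shell, package manager or other standard-distribution programs.
FROM gcr.io/distroless/base
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```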


As we have seen, with the emergence of technologies such as Docker, aimed at facilitating deployments by packaging an application’s complete dependencies, the boundary between the responsibilities of developers and those of system administrators within a company becomes blurred. Its “dangers” can be summarised as:

  • Docker images are built on static layers and, by their nature, these are anchored to the moment they were built, so images are prone to becoming outdated (especially those with a significant number of layers).
  • Docker images are usually created by developers and other profiles who, not being used to system administration tasks, may not take into account the security measures required for proper updating, configuration and maintenance.

In summary, we need joint processes, tools and methodologies between both profiles, so that the productivity gained with Docker does not, in turn, generate a security problem or a loss of control over the risks to which our systems are exposed.

TEGRA cybersecurity centre is part of the mixed unit in cybersecurity research known as IRMAS (Information Rights Management Advanced Systems), which is co-financed by the European Union, within the framework of the 2014-2020 Galicia FEDER Operational Programme to promote technological development, innovation and high-quality research.

