
Essential Docker Security Practices To Consider

Learn the best strategies to protect your containers and data.

Joey Miller • Posted September 23, 2023

Docker is not secure out of the box, largely because in the default daemon-container architecture the Docker daemon (and therefore the containers it manages) runs as root.

There are, however, many ways to significantly limit the consequences should an attacker compromise your system via one of your services or containers.

This post served as my guide when getting started with self-hosting. The list is in no way comprehensive, so let me know in the comments if you have any suggestions to add.

Use rootless Docker

Running Docker without root (administrative) privileges on the host system is one of the first things you should do to enhance security. With a root-owned daemon, an attacker who gains access to an externally-facing container has a greater chance of escalating to significant control over the host system. Running Docker rootless reduces this attack surface.

Although this can bring some difficulties (it goes against how Docker was initially designed), it is possible to run the Docker daemon as a non-root user. See my guide about dealing with the caveats of rootless Docker.
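As a rough sketch (assuming Docker's rootless-extras package and uidmap are installed on the host), the setup looks something like this:

# Install and start a per-user rootless daemon (provided by docker-ce-rootless-extras)
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker

# Point the Docker CLI at the rootless daemon's socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info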

Alternatively, consider replacing Docker with Podman. Podman is interoperable (OCI-compatible) but has a different architecture - there is no central root daemon, and containers are spawned as child processes of the user who starts them.
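Podman's CLI is largely a drop-in replacement for Docker's, so (as a hedged example) an unprivileged user can run a container directly:

# Runs without a daemon and without root; Podman expects fully-qualified image names
podman run -d --name web -p 8080:80 docker.io/library/nginx

# Many people simply alias the CLI so existing habits and scripts keep working
alias docker=podman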

Use a reverse proxy

A reverse proxy is a service that acts as an intermediary between the client (requester) and the web services. A reverse proxy will be responsible for forwarding the client requests to the backend services. It offers several security and performance benefits, which include:

  • Handling SSL/TLS termination. This allows for centralized certificate management, making it easier to keep certificates up to date.
  • Obfuscation of the backend web servers. Clients only see the IP address of the reverse proxy, which can help protect the backend services from direct attacks.
  • Enforcing security policies. Reverse proxies can prevent attacks by inspecting and filtering incoming traffic for malicious content or patterns.
  • Content caching. By caching content that gets served to the client, the reverse proxy can significantly reduce the load on the backend services for static files.
  • Better logging and monitoring. Since all requests go through the reverse proxy, analyzing the logs proves to be an effective way of monitoring for security threats/attacks. This enables the use of tools such as Crowdsec or fail2ban (described in more detail below).

Nginx and Traefik are commonly used as reverse proxies. See my guide about setting up Nginx with Keycloak for an example of getting started with reverse proxies.
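As a minimal sketch (the image names, config file, and certificate paths below are placeholders), only the proxy publishes ports on the host, while the backend is reachable solely over a shared Docker network:

docker network create web

# Backend service: no published ports, only reachable via the "web" network
docker run -d --name backend --network web my-backend-image

# Nginx reverse proxy: the only container exposed to the outside world
docker run -d --name proxy --network web \
  -p 80:80 -p 443:443 \
  -v "$(pwd)/nginx.conf":/etc/nginx/conf.d/default.conf:ro \
  -v "$(pwd)/certs":/etc/nginx/certs:ro \
  nginx:stable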

Use a monitoring/mitigation tool

As mentioned above, tools such as Crowdsec or fail2ban are designed to protect systems from unauthorized access. They do this by monitoring log files for suspicious or threatening behaviour, and can mitigate attacks by taking actions such as using the host firewall to temporarily block the offending IP address.

If you don't need the mitigation functionality, a tool like Grafana lets you monitor various network-related metrics and data, giving you more visibility into the threats you are facing.

All of these tools are available as Docker containers, making them a fairly easy way to manage external threats.
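For example, the Crowdsec agent can run as a container and watch the reverse proxy's logs. This is only a hedged sketch: the log path is a placeholder, the COLLECTIONS variable and the config/data paths should be checked against the image's documentation, and you also need to tell Crowdsec which log files to read via its acquisition config.

docker run -d --name crowdsec \
  -v /path/to/nginx/logs:/var/log/nginx:ro \
  -v crowdsec_config:/etc/crowdsec \
  -v crowdsec_data:/var/lib/crowdsec/data \
  -e COLLECTIONS="crowdsecurity/nginx" \
  crowdsecurity/crowdsec

Detection alone does not block anything; pairing the agent with a bouncer (such as the firewall bouncer on the host) is what enforces the bans.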

Keep Docker images up to date

Outdated Docker images may contain bugs or vulnerabilities that have already been fixed in newer releases, so keeping images up to date closes known holes.

There are a couple of ways you can stay up to date:

  • Allow a tool such as Watchtower to automatically update your images. Watchtower is very configurable and can run as a Docker container alongside your others (see the sketch after this list).
  • Use a tool to automatically update your docker-compose.yml files. If your infrastructure is version-controlled (i.e. in git), consider a tool like Renovate. This will enable you to get automated pull requests to update any outdated images.
  • Manually perform the updates routinely.
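A hedged Watchtower sketch (the interval and flags are just an example; note that Watchtower needs access to the Docker socket to manage other containers, which is itself a trade-off - see the note in the read-only section below):

# Check for new images once a day and remove superseded ones
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400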

Isolate container networking

Isolated container networking helps prevent malicious or compromised containers from directly communicating with other containers (or the host system).

There are many ways to go about this, some of which include:

  • Manually configuring your containers to run on separate networks (see the sketch after this list). This may not be realistic when running large numbers of containers.
  • Avoid using host networking. Docker host networking removes any isolation between the container and the host.
  • Disable networking for containers that do not need any outside communication.
  • Set up a firewall between containers. This can be done manually with iptables to only allow traffic to/from some containers. Otherwise, some tools (such as trafficjam) exist that can simplify this.
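A hedged sketch of the first and third points above (the network and image names are placeholders):

# One user-defined network per application stack
docker network create app1_net
docker network create app2_net

docker run -d --name app1 --network app1_net image-one
docker run -d --name app2 --network app2_net image-two

# A job that needs no outside communication at all
docker run --rm --network none image-batch-job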

Consider isolating containers from the host/internet

Additionally, consider limiting internet access for containers that do not require it. This reduces the number of potential entry points for attackers.

This can be accomplished with a user-defined bridge network created with the --internal flag, which prevents containers attached to it from reaching external networks (including the internet):

docker network create --driver bridge --internal isolated_network
docker run --network isolated_network --name my_container imagename

Take regular backups

Make sure you take regular backups of your Docker data. Any persistent data should be stored in a named volume or a bind (host) mount. By regularly backing up this data you ensure that critical data is protected, and you can rapidly restore your services in the event of an issue, corruption, or security incident.
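As a simple hedged sketch (the volume and path names are placeholders), a named volume can be archived to the host with a throwaway container:

# Archive the contents of the "my_app_data" volume into ./backups on the host
docker run --rm \
  -v my_app_data:/volume:ro \
  -v "$(pwd)/backups":/backup \
  alpine tar czf /backup/my_app_data-$(date +%F).tar.gz -C /volume .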

Duplicati can be run as a Docker container, making it simple to back up to a large number of targets (including OneDrive, Google Drive, Mega, Dropbox, S3, etc).

Read-only volumes

Following the principle of least privilege, make sure that only the volumes/mounts that actually need write access are mounted writable; everything else should be mounted read-only.

Note: Where possible, you should also avoid exposing /var/run/docker.sock to containers.
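A hedged example of this principle (the image name, paths, and volume names are placeholders): configuration is mounted read-only, the container's root filesystem is made read-only with --read-only, and a tmpfs is provided for scratch files:

docker run -d --name my_app \
  --read-only \
  --tmpfs /tmp \
  -v "$(pwd)/config":/etc/myapp:ro \
  -v my_app_data:/var/lib/myapp \
  my-app-image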

Configure resource quotas

To ensure a running container does not consume too much of the host machine's resources, Docker allows you to set limits. This provides some mitigation against DoS attacks and allows for fairer resource allocation across containers.

Docker allows for resource constraints such as:

  • Memory usage.
  • CPU usage.
  • Number of container restarts.
  • Number of file descriptors.
  • Number of processes.

See the Docker documentation for more information.
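A hedged sketch combining the constraints above (the values and image name are arbitrary examples):

docker run -d --name my_app \
  --memory=512m --memory-swap=512m \
  --cpus=1.5 \
  --restart=on-failure:5 \
  --ulimit nofile=1024:2048 \
  --pids-limit=200 \
  my-app-image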

The rabbit hole goes deeper

For more information and other techniques to secure your Docker architecture, have a read of the Docker OWASP cheat sheet.

