
How to set up SSL in an Apache2 Docker container running as a non-root user


You can't do it directly in standard Docker, unless you run the container as root or weaken the file ownership permissions, neither of which you should do. That's why you're having so much trouble finding a working example with a non-root user (even the official docs example doesn't set a user, which means the container is running as root).


SSL certificates are owned by root. For security reasons, Docker containers should be run as a non-root (unprivileged) user. This means that your Apache container cannot access the SSL certificates, unless you apply the aforementioned Bad Ideas.

The solution

Install NGINX as a reverse proxy on the host machine, and terminate your SSL connections there. So NGINX sits in front of your containers, handling the SSL connections to clients and forwarding requests to your Apache container(s) on the back end.

I know, I know, you didn't want more complexity. But setting up NGINX to reverse proxy SSL is relatively simple, and using this approach has the following advantages:

  • SSL certificate ownership is safely retained by root.
  • Docker containers can be safely run as non-root users.
  • The NGINX reverse proxy also adds a lot of flexibility to your setup, since it allows you to pass requests back to anything, so you can also run containerised Golang apps or anything else alongside your webserver, without resorting to specialised hosting services such as Heroku.
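To illustrate, a minimal NGINX server block for this setup might look like the following. This is a sketch only: the domain name, certificate paths and backend port are placeholders, and it assumes your Apache container publishes plain HTTP on localhost port 8080.

```nginx
# Terminate SSL on the host; proxy to the Apache container on localhost:8080.
# example.com, the port and the certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name example.com;

    # The certificates stay owned by root on the host. NGINX starts as
    # root, reads them, then drops privileges to its worker user.
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

# Optional: redirect plain HTTP to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```

The `X-Forwarded-Proto` header matters for the Tuskfish changes discussed further down: the backend only ever sees plain HTTP, so it needs some other way to know the client connection is secure.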

Another probable solution

Another method that should work (but which I have not tested) is to step outside basic Docker and engage Swarm mode, which gives you access to (i) services, (ii) secrets and (iii) configs:

  • Declare Apache as a service with a single replica (unless you actually need more).
  • Declare the private key file as a docker secret, accessible by the Apache container / user (ownership and file permissions must be set).
  • Declare the certificate to be a config accessible by the Apache container / user.
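As a sketch only (untested, per the above), a stack file along these lines would cover the three steps. The image, file paths, target names and the `uid` of 33 (www-data on Debian-based images) are all assumptions you would need to check against your own setup.

```yaml
# Hypothetical Swarm stack file: one Apache replica, the private key as
# a secret readable only by www-data, the certificate as a config.
# Deploy with: docker stack deploy -c docker-compose.yml mysite
version: "3.8"

services:
  apache:
    image: httpd:2.4
    user: www-data
    deploy:
      replicas: 1
    secrets:
      - source: site_key
        target: privkey.pem    # mounted at /run/secrets/privkey.pem
        uid: "33"              # www-data on Debian-based images
        mode: 0400
    configs:
      - source: site_cert
        target: /usr/local/apache2/conf/fullchain.pem

secrets:
  site_key:
    file: ./certs/privkey.pem

configs:
  site_cert:
    file: ./certs/fullchain.pem
```

Note the `uid` and `mode` lines: this is exactly the permission-weakening question raised in the caveats below, just expressed through the secrets mechanism instead of the filesystem.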

There are a couple of caveats to the Swarm idea:

  • Docker secrets (and configs) are immutable, and cannot be removed while in use by a service. So automated renewal of your certificates won't work without additional measures to rotate them, such as a bash script to manage the rotation. If you are using Certbot to generate your certificates, you should be able to execute such a script using Certbot's pre/post validation hooks, or you can probably just list the commands in the hooks directly.
  • In principle, giving the Apache container / user (eg. www-data) access to the private key file as a Docker secret is similar (identical?) to weakening the file ownership permissions to give the webserver access directly, since you need to set permissions for the secret in the same manner. Recall that the private key is normally owned by root, so it is not clear that using secrets is any better.
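For the record, the rotation script mentioned in the first caveat might look something like this when run from a Certbot renewal hook. This is an untested sketch: the service name (`mysite_apache`), secret naming scheme and the assumption that the service uses a single secret are all invented for illustration.

```shell
#!/bin/sh
# Sketch of a Certbot renewal hook that rotates the private key secret.
# Secrets are immutable, so we create a new timestamped secret, swap it
# into the service, then remove the old one. Placeholder names throughout;
# assumes the service carries exactly one secret.
set -eu

# Certbot sets RENEWED_LINEAGE to the directory of the renewed certificate.
NEW_SECRET="site_key_$(date +%Y%m%d%H%M%S)"
OLD_SECRET=$(docker service inspect mysite_apache \
    --format '{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{.SecretName}}{{end}}')

docker secret create "$NEW_SECRET" "$RENEWED_LINEAGE/privkey.pem"

docker service update \
    --secret-rm "$OLD_SECRET" \
    --secret-add source="$NEW_SECRET",target=privkey.pem \
    mysite_apache

docker secret rm "$OLD_SECRET"
```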

So, I suggest just sticking to the reverse proxy method, which works, is not controversial, and is faster and easier to set up. Although I will note that apparently everyone else on the internet runs their Apache containers as root and doesn't seem to care that this conflicts with standard Docker security practice.

If you are setting up a Tuskfish 2 site, a couple of configuration changes are required to use a reverse proxy

If you are using NGINX as a reverse proxy to terminate SSL in front of Tuskfish, there are a couple of code changes to make:

1. Lock the protocol to https: in index.php (otherwise the routing won't work):

Uncomment this line:

$url = "https://" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'];

Comment out the next two lines:

//$url = (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] === 'on' ? "https" : "http")
//    . "://" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'];

2. Lock the secure cookie flag to true in: trust_path/libraries/tuskfish/class/Tfish/Session.php

Comment out this line:

// $secure = isset($_SERVER['HTTPS']);

Uncomment the next line:

$secure = true;

Copyright, all rights reserved.