Deploying Secure Web Applications With Docker
Over the past few years we have seen more and more use of containers, and I think this will continue in the future. And from a developer's point of view, rightly so. I still remember using cumbersome and slow virtual machines that would take ages to provision and start up. Then along came containers and Docker. Yes, there are some drawbacks to consider before you start using it, but the bottom line is: I can quickly fire up a container or multiple containers, do my work on the project, tear everything down at the end and switch to a different project, where I can do more of the same.
Aside from helping out in the local environment, there is one other advantage that Docker and containers bring to the table: deployments, and the ease of them. It is fairly simple to ship your application with Docker, and there are tools that allow you to set everything up in a matter of minutes. That is what this article will walk you through. At the end you will have shipped your application to the web, and better yet, you will have it deployed behind an HTTPS-secured server.
The basics
For this you will need a server with Docker installed. As long as you have a server running Linux you can easily install Docker on it. If you do not have a server, you can get one fairly cheaply nowadays from various providers. Be sure to install docker-compose as well, as it will simplify a lot of the tasks before us.
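If you are starting from a bare machine, a minimal installation sketch might look like the following; it assumes a Debian or Ubuntu server with root access, and the docker-compose version number is only an example, so check the releases page for the current one:
# install Docker via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# install docker-compose as a standalone binary (version is an example)
curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose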
In this walkthrough we will build and deploy a very simple Express.js application, but you can deploy basically anything you want. To achieve our goal we will use Traefik as our ingress router, which will route the incoming public traffic to our Express.js app.
Preparing the server
Now that you have docker and docker-compose installed, we are going to bring the ingress router up and apply a small configuration to it. To do this, create a docker-compose.yml file somewhere on your system, e.g. in /opt/env/. Create the directory and file, and paste in the following contents:
version: "3"
services:
traefik:
image: traefik:2.2
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik.toml:/etc/traefik/traefik.toml:ro
container_name: traefik
networks:
- public
restart: always
networks:
public:
Here we define a pretty straightforward Docker network and a service connected to that network. Note that the traefik image version, 2.2, might be outdated when you are reading this article, but at the time of writing it is the newest version. Setting the container_name is optional, but it is highly recommended to set the restart policy for traefik, as you want it up all the time. Of course you will also have to bind the service to port 80. As for the volume mounts, you need to bind the Docker socket into the service, since traefik connects to it and listens for Docker container creations, as well as the traefik.toml file:
[entryPoints]
[entryPoints.web]
address = ":80"
[providers]
[providers.docker]
endpoint = "unix:///var/run/docker.sock"
[log]
level = "DEBUG"
[accessLog]
In this simple configuration file we enable access logging, turn the logging level up to debug (which you can lower to info later), set the Docker socket as the provider endpoint, and define an entrypoint. That is it; you can start traefik now by simply executing docker-compose up -d.
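If you want to make sure the router actually came up, a couple of optional checks (run from /opt/env/) might look like this:
# confirm the container is running and tail its logs
docker ps --filter name=traefik
docker-compose logs -f traefik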
Create the sample application
You can skip this step if you wish to build your own application. And even though we are using a JavaScript application here, you can use whatever you want.
Now, let's create the sample Express.js application on your local machine:
mkdir docker-deploy
cd docker-deploy
npm init
# simply use all default values
npm install express --save
Now, in that same directory, create a new index.js file with the following contents:
const express = require('express')
const app = express()
const port = 3000
app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening at http://localhost:${port}`))
Quickly make sure it’s working:
node index.js
And open http://localhost:3000/ in your browser.
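If you are working in a terminal only, a quick curl works just as well; the expected output is shown as a comment:
curl http://localhost:3000/
# Hello World!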
Deploy the application
Create the image
To deploy your application we must first create the Docker image which will hold your application, and from which a container will be created on the server later. First we create a .dockerignore file, telling Docker which files to ignore when creating the image:
node_modules
npm-debug.log
We ignore the debug log because we don't want it in the container, and we ignore node_modules because our local modules usually hold development dependencies, while we want the container to be production ready; we will therefore install production modules at image creation time. Now that we have this in place, let's create the Dockerfile:
FROM node:13.12-alpine3.10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
ENTRYPOINT ["node","index.js"]
This is a sort of blueprint for creating our own Docker image, where we tell Docker to create our image FROM the node base image, more specifically the 13.12-alpine3.10 version of the node Docker image. We must use the node image in order to have tools like npm and node available. Of course, if your application uses other technology, you will want to use a different base image. However, it is recommended to use the alpine variant of the image if one is available. Alpine Linux is a small Linux distribution that is perfect for Docker, since it is very basic and reduces the size of your Docker images.
Next we set the WORKDIR to /usr/src/app, COPY any package*.json files to that directory, and clean-install the node modules. After that we copy the rest of our application into the image, which in our case will only copy the index.js file, since the rest is ignored anyway.
Finally we EXPOSE port 3000, since our application will be listening on port 3000 inside the container, and we set the ENTRYPOINT, which is basically the command that will be executed when a container is created from this image.
Before we go on to create the image, head on over to Docker Hub and register, since we will be pushing the newly created image there. While you're at it, go ahead and create a new repository as well.
Now, let’s create the image:
docker build -t your_dockerhub_name/express-sample .
After a short while, a new image should be created. Now let's see if everything is OK by creating a new container from it:
docker run --rm -d -p 3000:3000 your_dockerhub_name/express-sample
Again open http://localhost:3000/ in your browser. If you see the same result, this means we have successfully created a Docker image and a Docker container from it.
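Since the test container was started in detached mode, you may want to stop it again once you have verified the output; thanks to the --rm flag it will be removed automatically on stop. A quick sketch, where the container id is whatever docker ps reports:
# look up the container id, then stop it
docker ps
docker stop <container_id>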
Now we must push that image to Docker Hub, before proceeding to deployment.
# if you have not yet logged in to Docker Hub from your shell do so now
docker login
docker tag your_dockerhub_name/express-sample:latest your_dockerhub_name/express-sample:1.0.0
docker push your_dockerhub_name/express-sample:1.0.0
Before pushing the image, we also tagged it with a new version, because our build command above created the image with the latest tag by default. We could have specified the version there as well. This is an optional but recommended step: if you are going to deploy images, you want to tag them with versions to avoid overwriting them and breaking something on the server.
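If you want to double-check which tags exist locally before pushing, listing the repository's images is a simple way to do it:
docker images your_dockerhub_name/express-sample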
Deploying the image
Now we have finally come to the deployment part. To execute the deployment, we will once again use the power of docker-compose and create yet another docker-compose.yml file, not on the server, but locally, right next to your code:
version: "3"
services:
express:
image: your_dockerhub_name/express-sample:1.0.0
labels:
- "traefik.http.routers.express-sample.rule=Host(`express-sample.yourdomain.com`)"
- "traefik.http.services.express-sample.loadbalancer.server.port=3000"
container_name: express-sample
networks:
- env_public
restart: unless-stopped
networks:
env_public:
external: true
You may also want to add this file to .dockerignore, since it's of no use inside the container. To quickly walk through the file: we again define a network, but this time the name is env_public. The public part comes from the network defined by the traefik container, but we must prepend it with env_, since docker-compose by default prefixes created networks with the name of the directory in which its docker-compose.yml is located.
We assign this network to the service, set a restart policy, and an optional container name. Ports do not need to be bound, since traefik will handle the traffic.
What we absolutely do need to define are the labels; these tell traefik which host will be routed to this service, and which port is in use. There are more rules that can be used for routing, but they are beyond the scope of this article, and I invite you to check the wonderful traefik docs.
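Just to give a flavour of what such rules can look like, here is a hedged example (the path prefix is made up for illustration) that would route only requests matching both the host and a path prefix to the service:
labels:
  # route requests for this host AND a given path prefix to the service
  - "traefik.http.routers.express-sample.rule=Host(`express-sample.yourdomain.com`) && PathPrefix(`/api`)"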
All that is left now is to fire up the docker-compose command:
DOCKER_HOST="ssh://root@your-server-ip" docker-compose up -d
By setting the DOCKER_HOST variable we instruct docker-compose to execute all commands over the SSH connection it establishes. If all has gone well, you should be able to view your deployed application by visiting http://express-sample.yourdomain.com.
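Since DOCKER_HOST works for any Docker CLI invocation, you can use the same trick to inspect the remote host, for example:
# list the containers running on the server and tail the app's logs
DOCKER_HOST="ssh://root@your-server-ip" docker ps
DOCKER_HOST="ssh://root@your-server-ip" docker logs express-sample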
Securing the application
Since all of this is currently served over an insecure HTTP connection, it is time for us to secure it. To do this we must reconfigure traefik and instruct it to obtain a valid SSL certificate from Let's Encrypt for every newly registered container.
First open up the docker-compose.yml file on the server and edit the traefik service:
# ...
  traefik:
    image: traefik:2.2
    labels:
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      - "traefik.http.routers.global-redirect.rule=HostRegexp(`{host:.+}`)"
      - "traefik.http.routers.global-redirect.entrypoints=web"
      - "traefik.http.routers.global-redirect.middlewares=redirect-to-https"
    ports:
      - "80:80"
      - "443:443"
# ...
Here we added a couple of labels instructing traefik to redirect all HTTP traffic straight to HTTPS, and we bound the server's port 443 to the service's port 443 for HTTPS traffic. Now, let's tell it to use Let's Encrypt by editing the traefik.toml file:
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web-secure]
address = ":443"
[certificatesResolvers.myresolver.acme]
email = "your@email.com"
storage = "acme.json"
[certificatesResolvers.myresolver.acme.httpChallenge]
# used during the challenge
entryPoint = "web"
# ...
As you can see, we added a new web-secure entrypoint and an acme certificate resolver that uses HTTP challenges, which will be handled by the insecure web entrypoint. Now we need to recreate the traefik service, which can be done by simply running docker-compose up -d again.
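One detail worth considering, as a sketch rather than part of the setup above: the acme.json storage file lives inside the traefik container, so the obtained certificates are lost whenever the container is recreated. Mounting the file from the host avoids re-issuing certificates on every recreate; the paths below are assumptions:
# on the server, next to the docker-compose.yml
# touch acme.json && chmod 600 acme.json   (traefik requires 600 permissions)
# then add a volume to the traefik service and set storage = "/acme.json" in traefik.toml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/etc/traefik/traefik.toml:ro
      - ./acme.json:/acme.json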
After the server reloads, add the following two labels to your application's docker-compose.yml file:
# ...
    labels:
      - "traefik.http.routers.express-sample.tls=true"
      - "traefik.http.routers.express-sample.tls.certresolver=myresolver"
# ...
Now we have instructed the service to use the myresolver certificate resolver, defined in the server's traefik.toml configuration file, to handle SSL certificates for it. To apply this, simply run the following command again:
DOCKER_HOST="ssh://root@your-server-ip" docker-compose up -d
Conclusion
Docker can make your life easier, be it locally for your development needs or for deployments, but it is not a silver bullet. There are some considerations to make before you start using it, and you should also take care with your containers, since there is a possibility of handing your whole server to an attacker on a silver platter. But that is beyond the scope of this article. I do invite you to educate yourself further about Docker and containers in general, as they already play a big part in software development and I sense that they will not go away any time soon.
I hope you were able to get your application deployed, and that you found this article helpful and informative. If you have run into issues, please leave me a comment, and I will be more than happy to help!