How to build a staging server with Docker, Jenkins and Traefik

~ Chapter 1 ~

Rationale behind the chosen stack

This guide is largely outdated at this point. You should instead consider installing Jenkins on Kubernetes with Helm.

This first page of the guide is dedicated to explaining the rationale behind this setup. You may skip ahead to the actual implementation if you don’t need this information.


Over the past months, I have been promoting DevOps culture at Mutation. For some time now, our projects have been both built and distributed through Deploybot's platform. While it worked well enough to get us going at the beginning, we have outgrown Deploybot as our projects gained in complexity.

One of the possibilities I felt we were particularly missing out on was fully leveraging Docker during both the build and the deployment process. Our projects usually use either a .NET or a PHP stack, and it is not practical to maintain separate staging environments for each technology.

Additionally, staging environments had to be configured manually, and we kept stepping on each other's toes whenever multiple developers worked on the same project but tried to preview different feature branches alongside the master branch simultaneously.

We decided to rework our development and staging pipelines to address some of these pain points. However, we are not yet ready internally to also distribute Docker images in production, and this guide will stop short of doing so.

Requirements of the desired stack

I have compiled a list of requirements this reworked stack should meet for it to be worth the trouble:

  • Projects must be able to configure how they are to be built, tested and distributed from a file in version control.
  • Projects must handle their code and server dependencies from a file in version control.
  • The stack needs to be OS and language agnostic.
  • Our Docker registry must be private.
  • Each branch must be previewable in its own staging environment. This must be automated.
  • We must enforce secure https URLs.
  • The environments must whitelist IPs and deny others.
  • The whole solution must remain economical.

Proposed solution

After looking into various possibilities, I settled on a combination of Docker, Jenkins, Portainer, and Traefik working hand in hand to answer these requirements. These services will all run inside containers on a Linux host.


It feels elegant to only have Docker installed on the host server. We can kickstart the other tools as services within Docker containers instead of installing them on the host directly.

I will be creating a Docker Swarm even though we do not intend to use load balancing. Stacks will be deployed through a docker-compose file tailored for the environment.
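As a rough sketch, initializing the single-node swarm and deploying a stack could look like the following; the stack name and the compose file name are placeholders, not part of this guide's exact setup:

```shell
# Initialize a single-node swarm on the host; no worker nodes are needed
# since we are not after load balancing.
docker swarm init

# Deploy a stack from an environment-specific compose file.
# "staging" is a hypothetical stack name.
docker stack deploy --compose-file docker-compose.staging.yml staging

# List the services the stack created.
docker stack services staging
```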

While you could use a free public registry for storing custom Docker images, I would rather keep our build snapshots in a custom, private Docker Registry. I will use the official registry container to launch an instance of it.
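A hypothetical way to launch such a registry as a swarm service; the service name, port, and volume name are illustrative assumptions:

```shell
# Run the official registry image as a swarm service on port 5000.
# The named volume keeps pushed images across container restarts.
docker service create --name registry \
  --publish 5000:5000 \
  --mount type=volume,source=registry-data,target=/var/lib/registry \
  registry:2

# Tag a locally built image and push it into the private registry.
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
```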


Jenkins, an open source automation server, will be used to run the different tasks required by the builds. Though I could have used a cloud-based solution, hosting our own continuous integration server on premises made more sense for our general use case.

For the time being, Jenkins is not very pretty, but the advent of Blue Ocean has improved the user experience and the way build pipelines are configured.
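To illustrate the idea of build configuration living in version control, here is a minimal, hypothetical declarative Jenkinsfile. The stage contents and image name are placeholders, not this guide's exact pipeline:

```groovy
// Minimal declarative pipeline sketch. BRANCH_NAME is provided by
// Jenkins multibranch pipelines; the registry address is assumed.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t localhost:5000/myapp:${BRANCH_NAME} .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm localhost:5000/myapp:${BRANCH_NAME} ./run-tests.sh'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push localhost:5000/myapp:${BRANCH_NAME}'
            }
        }
    }
}
```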


We don't really want to open more ports than we have to on the host server. We only want – and a web server should generally only need – ports 22, 80 and 443 open.
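As a sketch, locking the host down to those three ports with ufw might look like this, assuming an Ubuntu host; adjust to your firewall of choice:

```shell
# Deny everything inbound by default, then open only what we need.
sudo ufw default deny incoming
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw enable
```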

Our Docker containers will potentially fight for the same ports as they are brought up: your website containers will probably all want port 80 or 443. We need a reverse proxy in front of the containers to route traffic to the correct destination based on DNS information. The proxy will map ports 80/443 to whatever dynamic IP the target container ends up with.

Traefik is a fast reverse proxy written in Go that is able to listen for Docker container events. It will know when containers are started or stopped, and will automatically add or remove their routing rules accordingly. It reads custom Docker labels as key/value pairs, which are then used to customize the desired configuration for each container.
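As a sketch of how that might look, a hypothetical service in the stack's compose file could declare its routing through labels. The syntax below is Traefik v1's; the hostname, network name, and IP range are placeholders. Note that in swarm mode the labels must sit under the deploy key, and the whiteList label shows one way to satisfy the IP whitelisting requirement:

```yaml
version: "3.3"
services:
  web:
    image: localhost:5000/myapp:latest
    networks:
      - traefik-net
    deploy:
      labels:
        # Expose this service through Traefik.
        - "traefik.enable=true"
        # Route requests for this hostname to the container.
        - "traefik.frontend.rule=Host:myapp.staging.example.com"
        # Port the container listens on internally.
        - "traefik.port=80"
        # Only allow requests from this placeholder office range.
        - "traefik.frontend.whiteList.sourceRange=203.0.113.0/24"

networks:
  traefik-net:
    external: true
```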

Additionally, the proxy supports automated handling of Let’s Encrypt certificates out of the box. This will allow our environments to run under https very easily. There are rate limits on certificate generation, but you should not hit them for a long time.
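For illustration, the ACME section of a Traefik v1 traefik.toml might look like the following; the email address is a placeholder:

```toml
# Redirect all HTTP traffic to HTTPS and obtain certificates
# automatically through Let's Encrypt's HTTP challenge.
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "admin@example.com"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"
```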


Developers will need to quickly find details about the running containers. I think it is impractical to force them to ssh into the server and run docker inspect against containers just to extract a database IP, for instance.

Portainer offers a simple but feature-rich UI that allows management of your Docker containers and is exactly what I need to get people excited about containerization.
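A minimal sketch of a compose service for Portainer; the port is Portainer's default, and binding the Docker socket is what gives the UI access to the local engine, so access to it should be restricted:

```yaml
version: "3"
services:
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      # Grants Portainer control over the local Docker engine.
      - /var/run/docker.sock:/var/run/docker.sock
      # Persists users, settings and endpoints.
      - portainer-data:/data

volumes:
  portainer-data:
```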

Really meant for a staging environment

The proposed stack is meant for an internal use inside a company. It should be protected from the Internet by firewalls and other appropriate measures.

You should not use the information in this guide to build a production stack. The steps illustrated here do not necessarily follow best security practices, and assumptions are made in the way things are configured that would not hold in a production environment.

Now that you know why I am building this, it is time to get our hands dirty and begin preparing the server.
