How to build a staging server with Docker, Jenkins and Traefik (Chapter 2)
Preparing the server
Let’s pre-install requirements and configure the server so it can successfully build the staging stack. Across these examples, you are expected to replace placeholder values, such as your_domain.com and your_user, with your own.
You must also be able to connect to your server over SSH with an account that has elevated permissions, so you can run the commands listed in this article.
I am using an entry-level CentOS box on Google Compute Engine with the sole purpose of hosting the staging stack, but any Linux machine should work fine.
For this setup to work, you need three different DNS entries plus a wildcard subdomain pointing to the IP of the server on which everything will be installed. I have chosen the following for my proposed solution:
- jenkins.your_domain.com : Will point to Jenkins’ UI
- docker-registry.your_domain.com : Will point to the private Docker Registry
- swarm.your_domain.com : Will point to Portainer’s UI
- *.swarm.your_domain.com : Will point to dynamically staged containers
There are multiple ways to install Docker CE, as explained in their documentation, but I chose to run their convenience script, which indeed happens to be very convenient:
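Per the script's published instructions, the install boils down to the following (it is worth inspecting the downloaded script before running it):

```shell
# Download and run Docker's convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```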
Adding yourself to the docker group with the usermod command will allow you to run Docker commands without having to be root. You will have to log out and log back in so that the group changes to your user are applied.
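The group change itself is a single command, here assuming your login user is the one that will run builds:

```shell
# Add the current user to the docker group; log out and back in afterwards
sudo usermod -aG docker "$USER"
```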
You can prove that this step has worked by running the Hello World container.
If Docker has correctly been configured, you should see a message like the following:
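The check and the start of the expected greeting look roughly like this (output abbreviated):

```shell
docker run hello-world
# Hello from Docker!
# This message shows that your installation appears to be working correctly.
```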
Installing Docker Compose
Docker Compose also needs to be installed on the host. Again, following the official documentation, run a set of commands similar to the following:
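At the time of writing, those commands looked like the following; substitute the latest release number from the official docs:

```shell
# Download the docker-compose binary matching this platform, then make it executable
sudo curl -L "https://github.com/docker/compose/releases/download/1.19.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```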
(1.19.0 is likely outdated by the time you read this, which is why I suggest you double-check with the official commands instead of pasting my values.)
Creating mounted directories
Containers will mount volumes that will allow them to load configuration files and maintain their own reusable file caches. We need to create a list of paths on the server for that purpose.
Feel free to change the paths to something in your setup, but be aware that these paths will be reused later on within our stack’s docker-compose.yml configuration file and you will need to update them there as well.
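As an example, the following creates one persistent data directory per service; the exact paths are my assumption and should mirror whatever you later put in docker-compose.yml:

```shell
# One persistent data directory per service (adjust to your setup)
sudo mkdir -p /var/lib/traefik /var/lib/jenkins /var/lib/registry /var/lib/portainer
```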
You will likely require elevated rights to run these commands. If that is the case, you may have to use su or prefix with sudo, depending on your OS.
Injecting a custom toolset
Additionally, it may be useful for you to mount a collection of commonly used helper scripts into your build containers. For example, the repetitive process of compressing a build result into a tarball can be abstracted into a shell script which in turn can be mounted inside the build containers where it can be invoked during builds.
The obvious advantage is that if you change something in that script, all the jobs will reflect that change instantly.
I have created a git repository that holds many such helper scripts, which I have cloned in my user’s home directory under /home/your_user/dev-ops. I feel you should do something similar as well and will therefore reference this path when configuring the stack.
Folders and files mounted in the Jenkins container must have permissions that can be understood by the build host. By convention, these can be expressed by chown 1000:1000. Your mounted volume caches require this set of permissions, as will the optional helper scripts directory.
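A sketch of that permission fix; /var/lib/jenkins is an assumed cache path, while /home/your_user/dev-ops is the scripts directory from earlier:

```shell
# Jenkins containers conventionally run as UID/GID 1000
sudo chown -R 1000:1000 /var/lib/jenkins /home/your_user/dev-ops
```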
The Traefik container requires a global default configuration file to be mounted when it is run. Create a file called /var/lib/traefik/traefik.toml containing the following. Be aware of the multiple values you will need to customize for your use case.
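As a rough sketch only, a Traefik 1.x configuration might look like this; every domain, email and IP range below is a placeholder you must replace, and the exact keys may differ depending on your Traefik version:

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
    [entryPoints.https.whiteList]
    # Your IP, the host server's IP and webhook source IPs
    sourceRange = ["203.0.113.10/32"]

[acme]
email = "you@your_domain.com"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
onHostRule = true

[docker]
domain = "your_domain.com"
watch = true
exposedByDefault = false
```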
Be wary of which IPs you are whitelisting in this file. You will want to block almost everything while granting access to yourself, the actual host server and the IPs of incoming webhooks. For example, I had to supply the list of BitBucket webhook IPs because that’s where our projects are hosted and BitBucket needs to notify Jenkins of incoming commits.
For the sake of keeping this in one place, we will declare as many services as we can in a docker-compose.yml file that will represent the orchestration of the server. In /home/your_user/dev-ops/server-setup/, create a file called docker-compose.yml containing the following. Remember to replace the domain names with something you actually desire and to make sure the mounted volume paths work out in your case.
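For orientation only, a skeleton of such a file might look like the following; the image tags, ports, volume paths and Traefik labels are assumptions to adapt, not the article's exact file:

```yaml
version: "3"

services:
  traefik:
    image: traefik:1.7
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/traefik/traefik.toml:/traefik.toml
    networks:
      - traefik

  jenkins:
    build: ./jenkins
    volumes:
      - /var/lib/jenkins:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/your_user/dev-ops:/var/dev-ops
    labels:
      - "traefik.frontend.rule=Host:jenkins.your_domain.com"
      - "traefik.port=8080"
    networks:
      - traefik

  registry:
    image: registry:2
    volumes:
      - /var/lib/registry:/var/lib/registry
    labels:
      - "traefik.frontend.rule=Host:docker-registry.your_domain.com"
      - "traefik.port=5000"
    networks:
      - traefik

  portainer:
    image: portainer/portainer
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/portainer:/data
    labels:
      - "traefik.frontend.rule=Host:swarm.your_domain.com"
      - "traefik.port=9000"
    networks:
      - traefik

networks:
  traefik:
```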
The only image we need to build is a custom Jenkins image, as you will need to add special configurations to how it is built. Next to docker-compose.yml, create a directory called jenkins that will contain the new Dockerfile for the Jenkins image.
Before we write the file, we must obtain the group id under which docker is running. The id will need to be added to the Dockerfile and will allow Jenkins’ containers to start sub-containers as builds.
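One way to read the id, assuming a standard /etc/group layout:

```shell
# Print the numeric id of the docker group (third field of the group entry)
getent group docker | cut -d: -f3
```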
Taking note of the value output by the last command, edit the Dockerfile so it contains build commands similar to these:
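As a sketch of the idea rather than the article's exact file, the Dockerfile could extend the official image and recreate the host's docker group, with 998 standing in for the id you noted:

```dockerfile
FROM jenkins/jenkins:lts

USER root

# Docker CLI so jobs can talk to the host daemon through the mounted socket
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io \
    && rm -rf /var/lib/apt/lists/*

# Recreate the host's docker group (998 is a placeholder for your value)
RUN groupadd -g 998 docker \
    && usermod -aG docker jenkins

USER jenkins
```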
The declaration of the services is rather self-explanatory when you know a little about Compose’s syntax. It will bring up all the services, mapping their ports and volumes (or file shares) to whatever the configuration is set to.
Most importantly to remember, as this will have to be reused by the resulting build images, it also declares a custom traefik network that will be used to route public traffic to the private containers. Containers that don’t have access to that network will not be accessible from the web through Traefik’s load balancer.
Once you are ready, bring up the stack using the up command and take note of the actual network name that has been created, which depends on the current directory name:
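For example, from the server-setup directory; the network name shown in the comment is my assumption, based on Compose prefixing networks with the project (directory) name:

```shell
docker-compose up -d
docker network ls
# Look for a network named like server-setup_traefik
```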
You may prove that you have all of these containers running successfully by visiting the different hosts you have entered: jenkins.your_domain.com, docker-registry.your_domain.com and swarm.your_domain.com.
This is the first real time you will be testing the whitelisted IPs you have configured in traefik.toml, which is something to keep in mind if you are getting authorization errors.
Configuring the services
You will want to hit all these services and configure them further to make them suit your needs. Jenkins is likely going to be the one that requires the most attention, as you need to install the Blue Ocean plugin, among various other important settings, like the driver that allows syncing with Jira.
I won’t go through any of the service setups in this article, but I am sure you will find these steps to be either self-explanatory or documented elsewhere, better than I could do it here.
With the stack ready to receive builds, let’s write reusable Jenkins jobs that will be shared by our builds.