How to build a staging server with Docker, Jenkins and Traefik ~ Chapter 3 ~
Writing invokable Jenkins build jobs
In this step we are going to configure your first pipeline and make Jenkins aware of a code repository’s activity.
Doing so will make Jenkins start watching the repository for code changes and allow builds to be run automatically, configured at build time through a required
Jenkinsfile located at the connected project’s root directory.
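As a minimal sketch of what Jenkins expects to find there (the stage names and commands are illustrative, not prescribed by this chapter):

```groovy
// Minimal declarative pipeline; Jenkins picks this file up from the
// root of the connected repository once the pipeline is configured.
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                // Replace with your project's actual build command
                sh 'echo "building..."'
            }
        }
        stage('Test') {
            steps {
                sh 'echo "testing..."'
            }
        }
    }
}
```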
Make building blocks
While one can write all of the code required by a job directly in a single
Jenkinsfile, it is more elegant to extract reusable processes into additional Jenkins jobs. This gives you the luxury of updating a job once instead of having to go through each project.
Docker in Docker
Apart from a clearer expression of intention, there is also the issue of running Docker in Docker if you don’t split the jobs. You cannot easily reference the host server’s Docker instance while you are inside a build container.
For example, you may be building an application in a Docker container based on php:7.2. After the build, you wouldn’t be able to save the build to the registry, because you would need to call Docker from within the PHP container, which is itself already running inside Docker.
Alternatively, you could be running a build that needs to do a quick
docker run of something as one of the steps. If that build is not running in the host container (
agent any) then it would not be able to invoke the Docker binary. Here is a visual example of one trying to
docker run -it compose:latest compose install from the build container:
[Figure: Jenkins does not like Docker in Docker]
If, on the other hand, you delegate the same Docker operation to another Jenkins job not running as a child container, that job will be allowed access to the Docker socket as it remains at the same level as the Jenkins container.
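Delegation is done with the built-in `build` step. A sketch of such a call, assuming the image-building job is named `build-docker-image` and accepts these parameters (both the job name and parameter names are hypothetical):

```groovy
// Inside an application's Jenkinsfile: instead of calling Docker from
// within the build container, hand the work to a sibling Jenkins job
// that runs directly against the host's Docker socket.
build job: 'build-docker-image', parameters: [
    string(name: 'IMAGE_NAME', value: 'my-app'),
    string(name: 'GIT_URL', value: 'https://example.com/my-app.git')
]
```

Because the invoked job runs as a sibling of the calling pipeline rather than as a child container, it sees the same Docker socket as the Jenkins container itself.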
Staying one level deep works well
A third reason for splitting your jobs is that images like the php:7.2 image used in the first example often ship with most GNU/Linux tools, such as
cat, stripped out.
These binaries will surely be required by deployments at one point or another. This is why we installed additional utilities in the custom Jenkins image we built during the previous chapter.
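For reference, extending the Jenkins image with extra tooling can be as simple as a few Dockerfile lines (the package list here is illustrative; the actual image was built in the previous chapter):

```dockerfile
FROM jenkins/jenkins:lts

# Switch to root to install the GNU/Linux utilities that slim
# application images often strip out
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl git rsync \
    && rm -rf /var/lib/apt/lists/*

# Drop back to the unprivileged Jenkins user
USER jenkins
```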
Create the job to create images
Create another new repository that will only handle the logic of how Docker images are built and saved. This repository will only contain a single
Jenkinsfile. Loading that project as a new Pipeline in Jenkins will allow you to invoke it later on from other Pipelines when you build your applications.
The job will have to be aware it can be invoked either as a commit on itself (from a trigger on the project repository) or as a sub-job in another Jenkins process (explicitly from another job’s Jenkinsfile).
A convention I want to enforce is to only allow Docker image configuration to come from either a
Dockerfile at the root of the project or through a script located under
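Whatever the convention, the core of such a job boils down to cloning the project, building the image, and pushing it. A sketch, assuming a parameterized job and a registry at registry.example.com (the parameter names and registry address are hypothetical):

```groovy
pipeline {
    agent any

    parameters {
        string(name: 'IMAGE_NAME', description: 'Name of the image to build')
        string(name: 'GIT_URL', description: 'Repository to clone and build')
    }

    stages {
        stage('Build image') {
            steps {
                git url: params.GIT_URL
                // Runs against the host's Docker socket, since this job
                // is not itself running inside a build container
                sh "docker build -t registry.example.com/${params.IMAGE_NAME}:latest ."
            }
        }
        stage('Push image') {
            steps {
                sh "docker push registry.example.com/${params.IMAGE_NAME}:latest"
            }
        }
    }
}
```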
Create the job to deploy images
Again, create a new repository that will only handle the deployment logic of Docker images.
Internal conventions also need to be defined at this point. I have settled on
docker-compose.staging.yml as the naming convention for how projects are expected to define their staging stacks.
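Deploying then amounts to checking out the project and bringing that stack up. A sketch of the deployment job under this convention, assuming plain docker-compose on the staging host (the parameter name and job shape are hypothetical):

```groovy
pipeline {
    agent any

    parameters {
        string(name: 'GIT_URL', description: 'Repository containing docker-compose.staging.yml')
    }

    stages {
        stage('Deploy') {
            steps {
                git url: params.GIT_URL
                // Pull fresh images, then recreate the staging stack
                sh 'docker-compose -f docker-compose.staging.yml pull'
                sh 'docker-compose -f docker-compose.staging.yml up -d'
            }
        }
    }
}
```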
Add the pipelines to Jenkins
Once you have these files committed in two different repositories, import them both as new Jenkins pipelines. Take note of the name of each repository, as it defines the job name that pipelines invoking this predefined deployment process must use.
Step completion checklist
- Created a new git project to hold the image creation Jenkinsfile
- Created a new git project to hold the deployment Jenkinsfile
- Imported both projects as new Jenkins Pipelines
- Received green builds for both
With reusable build and deploy jobs defined, you may now move on to your project’s deployment configuration.