A full CI environment with Docker 2/2

In part 1 of this blog post we already set up quite a development environment: a remote source code management server to work with, a repository manager to handle our dependencies, and a quality check on top. However, it would be nice to automate those steps. We basically want every single committed line of code to go through the same steps. We want a continuous integration pipeline!

[Figure: overview of the Docker-based CI toolchain]

In this post I will show you how to add an automation server to our toolchain, namely Jenkins. Instead of triggering every step manually from the host, we will hand all of them over to Jenkins. The main task is to provide Jenkins with the necessary privileges to communicate with the other tools.

Because each of the tools runs within its own Docker container, we need to think about how they communicate with each other. As the containers are connected to the same network specified within the docker-compose.yml, they can access each other by their service names. Keep in mind to use the correct hosts and ports! It’s a little tricky, but I also found it a good exercise for understanding how communication between Docker containers actually works.
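For illustration, here is a minimal sketch of such a setup; it is not the repo’s exact compose file, and the network key is an assumption: Compose prefixes network names with the project name, so a network key ‘nw’ in a project called ‘ci’ appears at runtime as ‘ci_nw’, the name the Jenkinsfile below passes to --net.

# docker-compose.yml (excerpt, sketch)
version: '3'
services:
  jenkins:
    build: ./jenkins        # custom image, see the Dockerfile below
    ports:
      - "8080:8080"
    volumes:
      # host Docker socket, so the Jenkins container can reach the host daemon
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - nw
  # gitlab, nexus, sonarqube and sonardb join the same network in the same way
networks:
  nw: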

Jenkins

Let’s get started, again. Make sure that your other containers are running (nexus, sonardb, sonarqube, gitlab). Then start the new Jenkins container (pull the updated repo). If you have a closer look, you can see that it differs from the other services: instead of just pulling a Docker image from Docker Hub, we make use of a Dockerfile. This is because we will later use Docker as a build agent in the Jenkins pipeline. I will not go into much detail here, but in short: on top of a base Jenkins image we install docker-ce (we basically only need the client) and configure it so that the local Docker socket can be used from within the Jenkins container. However, this is just one way to achieve the desired workflow. It would also be possible to use the Jenkins Docker plugin and configure the Jenkins project differently.

Make sure you have the Dockerfile and entrypoint.sh on the correct path.
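To give you an idea, here is a hedged sketch of such a Dockerfile. The actual file in the repo may differ in detail; in particular, it installs docker-ce from Docker’s apt repository, while the sketch uses the Debian docker.io package just to stay short.

# Dockerfile (sketch)
FROM jenkins/jenkins:lts
USER root
# install a Docker client; the daemon stays on the host and is reached
# through the mounted /var/run/docker.sock
RUN apt-get update \
 && apt-get install -y --no-install-recommends docker.io sudo \
 && rm -rf /var/lib/apt/lists/*
# let the jenkins user run docker via sudo (matches the `sudo docker ps`
# check further below); the repo’s entrypoint.sh presumably takes care of
# the socket permissions and is omitted here
RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER jenkins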

docker-compose up -d jenkins

Jenkins should be running now as well. Again, for the first login you need to get the initial secret.

docker exec -ti jenkins cat /var/jenkins_home/secrets/initialAdminPassword

Install the suggested plugins and configure the admin account (admin/admin).

Configure Jenkins

Log in as admin and install the following plugins (restart afterwards):

  • Docker Pipeline Plugin
  • Authorize Project
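As a side note, the plugins could also be preinstalled in the custom image instead of installing them through the UI. Recent official Jenkins images ship the jenkins-plugin-cli tool (older ones used install-plugins.sh); the plugin IDs below are my lookup for the two plugins above and should be double-checked:

# optional addition to the Dockerfile (sketch)
RUN jenkins-plugin-cli --plugins docker-workflow authorize-project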

Go to ‘Configure Global Security’ and select ‘Project-based Matrix Authorization Strategy’ as the ‘Authorization Strategy’. In the section ‘Access Control for Builds’ select ‘Run as SYSTEM’.

Connect Jenkins and GitLab

GitLab
To be able to access your GitLab repo from Jenkins, you need to add a user with the correct permissions. Log in as administrator and create a new user (Jenkins). Add it to your default-project as ‘Reporter’ to grant read-only access. As before with your host, you need to provide a public SSH key.
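For example, you can generate a dedicated key pair on the host (a sketch; key type and file name are my choice), paste the public half into the new GitLab user’s SSH keys, and keep the private half for the Jenkins credential created in the next step:

ssh-keygen -t ed25519 -f ~/.ssh/jenkins_gitlab -C "jenkins" -N ""
cat ~/.ssh/jenkins_gitlab.pub    # goes into GitLab: user settings -> SSH Keys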

Jenkins
Now we need to add the corresponding private key to the Jenkins global credentials. Select ‘SSH username with private key’ as the type and ‘gitlab’ as the ID. It should look something like this.

[Screenshot: the ‘gitlab’ SSH credential in Jenkins]

We can already add the credentials for Nexus and SonarQube, which we will use later. To access Nexus, we can do the same as before and use a settings.xml file. However, we have to configure the Maven settings in the context of the Jenkins container: we basically need to replace ‘localhost’ with the new host address.

<!--settings.xml--> 
<settings>
   <mirrors>
    <mirror>
     <id>nexus</id>
     <name>My repo</name>
     <url>http://nexus:8081/nexus/repository/maven-public/</url>
     <mirrorOf>*</mirrorOf>
    </mirror>
   </mirrors>
  
   <profiles>
    <profile>
     <id>downloadSources</id>
     <properties>
      <downloadSources>true</downloadSources>
      <altReleaseDeploymentRepository>
         dockerCI-repo::default::http://nexus:8081/nexus/repository/dockerCI-repo/
      </altReleaseDeploymentRepository>
     </properties>
    </profile>
   </profiles>
  
   <activeProfiles>
    <activeProfile>downloadSources</activeProfile>
   </activeProfiles>

   <servers>
    <server>
     <id>dockerCI-repo</id>
     <username>dockerCI-user</username>
     <password>user</password>
    </server>
   </servers>
</settings> 
 

In addition, I added an alternative release deployment repository property which overrides the URL from the project POM, so that we do not need to change our project pom.xml as well. In Jenkins, add credentials of type ‘secret file’ and choose the new settings.xml; I called it ‘demo-settings’.

To access SonarQube, choose ‘secret text’ and provide the token we used before; I called it ‘demo-sonar-token’. If you cannot get your hands on the old token anymore, you need to create a new one (more on that below). If it is still exported in your shell environment, you can print it with

echo $SONAR_TOKEN
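To create a new token, you can presumably use SonarQube’s web API instead of clicking through the GUI; in this sketch, the admin credentials and the token name are placeholders:

curl -u admin:admin -X POST "http://localhost:9000/sonarqube/api/user_tokens/generate?name=jenkins"

The response is a small JSON document containing the new token; store it as the ‘demo-sonar-token’ secret text.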

After adding these three credentials

[Screenshot: the three credentials in the Jenkins credentials overview]

you need to restart the whole Jenkins container.

docker-compose restart jenkins

To check that Jenkins also has access to the local docker daemon, run

docker-compose exec jenkins sudo docker ps

This should print the same output as running docker ps on your host. It only works out of the box because it is already pre-configured within the customized Docker image for Jenkins!

Create a Jenkins project

Now we are basically done with the configuration of Jenkins, and we need to create a Jenkins project to define what we want it to do. Create a ‘Multibranch Pipeline’ project and name it ‘dockerCI’. Go to the ‘Branch Sources’ section, add the following URL to reach your GitLab repository and choose the previously created ‘gitlab’ credential (SSH key).

ssh://git@gitlab/user/default-project.git

Jenkins pipeline

To tell Jenkins what it actually needs to do with the Git project, we have to add a Jenkinsfile to our project (don’t forget to also push it to GitLab!). It is automatically detected when Jenkins pulls the project and contains all the necessary instructions and credential references.

// Jenkinsfile
pipeline {
    agent {
        docker {
            image 'openjdk:11'
            args  '--net="ci_nw"'
        }
    }
    stages {
        stage('Build') {
            steps {
                withCredentials([file(credentialsId: 'demo-settings',
                                      variable: 'MVN_SETTINGS')]) {
                    // single quotes: the variable is expanded by the shell,
                    // not by Groovy, so the secret does not leak into the log
                    sh 'sh ./mvnw --settings $MVN_SETTINGS -X clean deploy'
                }
            }
        }
        stage('Sonar') {
            steps {
                withCredentials([string(credentialsId: 'demo-sonar-token',
                                        variable: 'SONAR_TOKEN')]) {
                    // note the backslashes: without them each line would be
                    // executed as a separate shell command
                    sh '''sh ./mvnw sonar:sonar \
                          -D sonar.host.url=http://sonarqube:9000/sonarqube/ \
                          -D sonar.login=$SONAR_TOKEN'''
                }
            }
        }
    }
}

In words: it starts a Docker container (only Java is needed) in which the project is built. The build uses the provided Maven wrapper and connects to the Nexus repos the same way as before locally. In an additional stage, the SonarQube analysis is run as well. The withCredentials blocks inject the secrets we created earlier.

Now we are basically done! You can try to run the job from your Jenkins GUI. It will build the project, run the SonarQube analysis and deploy the artifacts to the Nexus repo. However, as a last step, instead of starting the job manually we can add a webhook to GitLab. This way, the Jenkins job is triggered automatically each time GitLab receives code changes.

GitLab: Create a Webhook

To allow access to Jenkins from GitLab, we create a new user ‘gitlab’ in Jenkins. Provide it with the ‘Overall/Read’ permission in the global security settings. Log in with the new user and generate an API token. Then go to the project, enable project-based security, add the ‘gitlab’ user and give it the ‘Build’ permission in the ‘Job’ category.

Go to GitLab and create a webhook pointing to

http://jenkins:8080/jenkins/project/dockerCI/build

Add the secret token from before, select ‘Push events’ and uncheck ‘Enable SSL verification’. If you now test the webhook, it could fail due to denied permission. If GitLab tells you that requests to localhost are not allowed in general, you need to allow outbound requests to the local network in the GitLab admin settings (re-login as admin!).
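If it keeps failing, it may help to rule out basic networking problems first. You can check from inside the GitLab container whether Jenkins is reachable under its service name at all (a quick sanity check, assuming curl is available in the GitLab image; any HTTP status line, even a 403, proves reachability):

docker-compose exec gitlab curl -sI http://jenkins:8080/jenkins/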

To verify that your webhook is working properly, push something to your GitLab repo. I hope you can see something like this appearing in the Jenkins GUI.

[Screenshot: successful pipeline run in the Jenkins GUI]

Finally, we integrated all the tools into an automated workflow. With Docker we can start and stop the whole environment with a single command. After the initial configuration we don’t have to think about the individual steps each time we make some changes. Including all these tools in an automated pipeline helps us to consistently improve our code base and makes our deployments highly reproducible. The same can be achieved by working with remote services as well, but that will be covered in a future post.
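For reference, ‘a single command’ really is just:

docker-compose up -d      # start the complete toolchain in the background
docker-compose down       # stop and remove all containers again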

Thanks for reading! In case you get stuck somewhere, don’t hesitate to leave a comment!

P.S. I see the issue with switching between working from localhost and the Docker network. I have it on my list for a future post as well.
