
GitLab CI

Boman Romain requested to merge gitlab-ci into master

This Merge Request introduces a basic continuous-integration setup for the development of "waves", using GitLab CI (Issue #36).

How does it work?

For each git push, the file .gitlab-ci.yml is read and executed. This file describes a "pipeline": a series of tasks (called "jobs") that can be interconnected. The jobs are launched on networked machines (the "runners"); if the jobs are independent and enough runners are available, they run in parallel.

For example, there can be a "build" job (running cmake and make) followed by a "test" job (running ctest).
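
As an illustration, a minimal .gitlab-ci.yml implementing such a two-stage pipeline could look like the sketch below; the job names, build commands and artifact paths are placeholders, not the actual file of this Merge Request:

```yaml
# Illustrative sketch of a two-stage pipeline ("build" then "test");
# commands and paths are placeholders, not the actual waves configuration.
stages:
  - build
  - test

build:
  stage: build
  script:
    - mkdir build && cd build
    - cmake ..
    - make -j$(nproc)
  artifacts:
    paths:
      - build/          # keep the build tree for the "test" stage

test:
  stage: test
  script:
    - cd build
    - ctest
```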

The runners are remote machines (mainly Linux machines) that are able to run Docker containers. A Docker container can be seen as a lightweight virtual machine: a big file containing a file system, such as that of an Ubuntu or Debian distribution, with several development tools and libraries installed.

The pipelines, as well as each individual job, can be inspected visually on the GitLab website (see the CI/CD menu on the left).

In practice, the main problem was to create a Docker image providing an environment able to compile waves and its tests. The main difficulties I faced were the installation of MKL and Trilinos. Anyway, after many attempts, the pipeline now completes successfully.

The tasks

[screenshot: pipeline]

Today, I have defined two build jobs:

  • a build with Trilinos (with a lot of warnings)
  • a build without Trilinos (almost clean)

The result of the Trilinos build is then sent to a runner in charge of the ctest run.

A final job builds the Doxygen documentation (make dox).
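
A hedged sketch of how these jobs could be expressed in .gitlab-ci.yml; the job names and the USE_TRILINOS cmake switch are hypothetical, and the real file of this Merge Request may differ:

```yaml
# Sketch of the current pipeline; -DUSE_TRILINOS is a hypothetical
# cmake switch, not necessarily the real option name.
stages:
  - build
  - test

build-trilinos:
  stage: build
  script:
    - mkdir build && cd build
    - cmake -DUSE_TRILINOS=ON ..
    - make -j$(nproc)
  artifacts:
    paths:
      - build/                # handed over to the ctest job

build-no-trilinos:
  stage: build
  script:
    - mkdir build && cd build
    - cmake -DUSE_TRILINOS=OFF ..
    - make -j$(nproc)

ctest:
  stage: test
  dependencies:
    - build-trilinos          # fetch only the artifacts of the Trilinos build
  script:
    - cd build
    - ctest

dox:
  stage: build
  script:
    - mkdir build && cd build
    - cmake ..
    - make dox
```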

The runners

The SeGI provides us with 3 "free" shared runners. These runners look unused for the moment, but they are limited to 1-hour jobs (which is more than enough for waves). Since I would like to use GitLab CI for other projects that require more CPU time (e.g. PFEM), I have decided to add 4 more runners to the "Aerospace and Mechanical Engineering" group of this GitLab server: garfield (my own Ubuntu machine), gaston, spring and thorgal (the machine running the Metafor test suite). These runners allow us to start 1-day jobs. They are tagged with their own name as well as mn2l, so that we can select them for long jobs.
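
Selecting one of these runners then only requires tagging the job in .gitlab-ci.yml; a sketch (the job name and script are placeholders):

```yaml
# run this job on one of the department's runners rather than a
# shared SeGI runner; job name and script are placeholders
long-job:
  tags:
    - mn2l        # or a specific machine: garfield, gaston, spring, thorgal
  script:
    - ./run_long_tests.sh
```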

[screenshot: runner]

Docker image

The Docker image is based on ubuntu:18.04 (like my own machine). I have installed MKL 2019.2 and the Trilinos version that was built on gaston and fabulous.

The image is currently stored on my Docker Hub account. For now, it is rather big (more than 4 GB uncompressed). I will try to make it smaller later (size is not a real problem). Before optimising it, I would like to add the last dependencies of Metafor and build it. I think that having one single image able to build all our programs is a good idea (so that we will eventually be able to run Cupydo!).
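
Every job of the pipeline pulls this image when it is referenced at the top of .gitlab-ci.yml; for instance (the image path below is hypothetical, not the actual Docker Hub name):

```yaml
# use the same Docker Hub image for all jobs of the pipeline
# (hypothetical image path, for illustration only)
image: rboman/waves-ci:latest
```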

Concerning the port to Python 3, it will be easy to build a second image based on Python 3. In that case, having two separate images seems important.

The Dockerfile (the script that generates the image) is currently stored on my GitLab account. I will move it to the department group in the near future.

The future...

For the moment, the full pipeline is started each time a "push" is done. If you push many commits at once, only the last one is tested.

I would like to add more "jobs" to the pipeline, such as simple scripts running "clang-format" and "cppcheck". That is rather easy to do, but I would also like to add continuous integration to PFEM very soon.

Then, once we have played a little with this new tool, we can decide whether every test should run on each push. For example, it would be nice to build the documentation only for releases and merge requests, as sketched below. All of this can be configured through the .gitlab-ci.yml file. But for today, I think we should first see how this new tool behaves in practice before disabling it in some cases.
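
For instance, restricting the documentation job to merge requests and tagged releases would be a small change in .gitlab-ci.yml, assuming a GitLab version that supports pipelines for merge requests (a sketch):

```yaml
dox:
  stage: build
  script:
    - make dox
  only:
    - merge_requests    # run in merge-request pipelines...
    - tags              # ...and for tagged releases
```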

