Implementing a C++ CI/CD Pipeline


Although C++ is often incorrectly perceived as an old, outdated language, those who have spent time in the engineering realms where it is used know that it is better described as a uniquely capable tool for problems and constraints where few other languages can operate. C++ is still the reigning king of game engines because of its performance, it is the default for most serious robotics or embedded software projects, and it remains extremely prominent in banking, cloud infrastructure orchestration, and operating systems. However, as C++ has been around since we first saw McFly in a DeLorean, there are many legacy projects that tend to slow down the adoption of new practices in C++-reliant industries.

One of these new practices is, of course, the continuous integration and continuous delivery/deployment (CI/CD) pipeline, which has been widely adopted in many software engineering fields. Continuous integration is about automatically building and testing every change a developer makes to an application. Continuous delivery or deployment, meanwhile, is about getting that new, hopefully now stable, build to production with as little human interaction as possible. It is worth noting that the difference between continuous delivery and continuous deployment is usually that delivery still has at least one human making a go-or-no-go decision before the new build hits production. Continuous deployment is more for the daredevil devs who trust in their thousands of automated tests to save them from committing a bug that lands in production moments later.

Before we go head first into setting up this new system, we should ask the most important question when considering any new practice: why? Let’s split the system up to highlight the benefits. Continuous integration should let your developers spend more time developing quality software and less time testing it. Typically, the integration is set up so that after every commit the entire application is automatically built on multiple platforms and configurations, then faces a barrage of automated tests to make sure it is performing as expected. This usually happens in minutes or seconds, which is far more efficient and rigorous than a developer’s manual tests. Now, the developer might feel more moments of frustration when they see, for example, that the application now fails 32 tests on Android when using clang, and they have to go fix that instead of moving on to the next feature. But that is better, and more time-efficient for everyone, than having a few thousand mobile users experience a crash two weeks later. And since a single commit contains far fewer changes than two weeks of the whole team’s commits, the bug will likely be easier to track down and cause fewer headaches overall. The developer may end up writing more automated tests and fixing more bugs upfront during the development period, but the overall quality of the resulting software should be much better. Another benefit is that you should always have a stable build of the latest features ready to show your clients, marketing team, or investors.

That sounds great, but why do we want to deploy it to our users as fast and often as possible? Well, for many reasons rooted in agile methodologies, but more succinctly because we can get feedback from users faster, more frequently, and on smaller slices of new functionality. This is a critical business and product development strategy that can be the difference between sinking or swimming in your market. The pipeline also encourages the development team to release smaller minimum viable features to their customers, rather than more fleshed-out features every quarter that the customers didn’t really want.

Convinced? Great. So where can you get some CI/CD? Lots of places. AWS has their CodePipeline with CodeCommit, CodeBuild, and CodeDeploy. Coupling GitHub with Jenkins and its plugins is a popular strategy. TeamCity has been around the block and CircleCI is very hot right now. Not to mention Google, IBM, and Microsoft Azure all have their own variations. For this tutorial we are going to use GitLab as it is established, still cutting edge, easy to use, flexible, and provides us with the whole pipeline of tools.

Set Up a Simple C++ Project

We are going to create and use an extremely simple application for this guide on how to set up a CI/CD pipeline: a command-line program that takes in a rather limited set of numbers and outputs the factorial of the input.
The initial code is as follows:

main.cpp:

#include <iostream>
#include <cstdlib>
#include "Factorial.h"

int main(int argc, char* argv[])
{
    int n = atoi(argv[1]);
    auto factorial = Factorial::GetFactorial(n);
    std::cout << "The factorial of " << n << " = " << factorial << std::endl;
    return 0;
}

Factorial.h:

#pragma once

class Factorial {
public:
    static unsigned long long GetFactorial(int input);
};

Factorial.cpp:

#include "Factorial.h"

unsigned long long Factorial::GetFactorial(int input)
{
    unsigned long long factorial = 1;
    for (int i = 1; i <= input; ++i)
        factorial *= i;
    return factorial;
}
Not the most interesting application you have ever seen, but it will do for our purposes. Put those three files in a “src” folder, and then outside that folder create the following CMakeLists.txt file so we can build with CMake.

cmake_minimum_required(VERSION 3.10)
project(Factorial)

set(SOURCES src/main.cpp src/Factorial.cpp)
set(HEADERS src/Factorial.h)
add_executable(Factorial ${SOURCES} ${HEADERS})
set_property(TARGET Factorial PROPERTY CXX_STANDARD 17)

Also add a “.gitignore” file so you don’t commit your binaries:

build/

If you want to build the project locally, install CMake and run the following commands:

mkdir build && cd build
cmake ..
cmake --build .

If you want to run the executable, use the command below in the build directory (with the Visual Studio generator; on Linux the binary is just ./Factorial). Here we pass the argument 7 to compute its factorial.
Debug/Factorial.exe 7

Output should be:
The factorial of 7 = 5040

Set Up GitLab

GitLab can be self-hosted on your own servers or containers, but they also provide a hosted cloud solution, which we will be using today. Go to and create an account. After that, create a new project, name it something original, and follow the instructions to push our Factorial application to the git repo.
Although we will definitely want CI/CD configuration on our master branch, it is best to start our changes on a development branch. So click on the “+” next to the project path and add a new development branch.

Add Build Stage

In the new branch we are going to create a “.gitlab-ci.yml” file, which is the core instruction sheet that defines our CI/CD pipeline. Note that it is a YAML file, so be careful with your indentation and use spaces, not tabs. From the web GUI, add this to the file:

stages:
  - build

build-job:
  stage: build
  tags:
    - linux
  image: gcc
  before_script:
    - apt-get update --yes
    - apt-get install --yes cmake
  script:
    - mkdir build
    - cd build
    - cmake ..
    - cmake --build .
  artifacts:
    paths:
      - build

This is our first draft of a pipeline that simply builds our application. At the top you can see our stages, which are defined globally; we will add more, but for now we just have build. Next you can see the name of our first job, “build-job”, which belongs to the build stage. Jobs of the same stage are executed in parallel. The tags allow us to specify which machines, or runners as they are called in GitLab, take on and execute each job. The next line in this job specifies that we want to use Docker Hub’s official gcc image as the base for the environment we build our application in. We then run some Ubuntu commands before our actual build script to install CMake onto the container, and finally specify the commands that actually build our application. Lastly, we use the “artifacts” and “paths” parameters to tell GitLab which directories/files to keep after the runner is done. We will be able to view the resulting files in the web GUI, download them if desired, and use them in other stages.
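Since jobs of the same stage run in parallel, the build stage is also where you would fan the build out across toolchains, as mentioned earlier when we talked about building on multiple platforms and configurations. A hypothetical sibling job might look like the sketch below; the job name, the `silkeh/clang` community image, and the directory name are illustrative assumptions, not part of this tutorial’s pipeline:

```yaml
# Hypothetical second job in the same stage: builds the same code with
# clang, in parallel with build-job. Names here are illustrative.
build-job-clang:
  stage: build
  tags:
    - linux
  image: silkeh/clang
  before_script:
    - apt-get update --yes
    - apt-get install --yes cmake
  script:
    - mkdir build-clang
    - cd build-clang
    - cmake -DCMAKE_CXX_COMPILER=clang++ ..
    - cmake --build .
```

If either compiler breaks, the whole pipeline fails, which is exactly the early warning we want.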

Once you commit this yml file you may notice a gold pause sign or blue progress sign next to the commit hash as seen in the screenshot below.

If you click on it, you will be brought to the pipelines panel, where you can see all your jobs and their status. If you then click on the “build-job” you will see the commands from the YAML file executing in a terminal. “But wait,” you say, “I didn’t provision any build servers.” No, you didn’t: GitLab has a handy service where their “shared runners” pick up jobs and execute them. As of July 2020, a private project gets 2,000 free CI minutes on these shared runners, and there is no limit for public open-source projects.

If you go to Settings -> CI/CD -> and then “Expand” Runners you will see which shared runners are available for your jobs as well as how to setup your own private runners.

Switch Executors and Set Up a Dedicated Build Server

At the moment, the shared runner is using the “docker-machine” executor. GitLab has many executors that you can use to run your build in different scenarios, including VirtualBox, SSH, and Kubernetes. We are going to switch to the “shell” executor, as it is the simplest and will let us focus on the pipeline without getting caught up in extra steps for specific tech stacks. It is worth noting that a disadvantage of using shell rather than “docker-machine” is that we will not have a clean build environment for every build.

First, we should set up our own build server and then install gitlab-runner on it. I’m going to spin up an AWS EC2 Ubuntu 18.04 instance, but other providers will also work. Obviously, if you change the OS type you may have to tweak the script commands slightly.
After I spin up my EC2 instance and ssh in with my “name-chosen.pem” file, I run the following commands to set up the gitlab-runner:

$ sudo apt-get update
$ curl -L "" | sudo bash   # adds GitLab's package repository
$ sudo apt-get install gitlab-runner
$ sudo gitlab-runner register

Please enter the gitlab-ci coordinator URL (e.g. ):

Please enter the gitlab-ci token for this runner:
E1D4xTkQCg6SKb8dEquk <- your token will be in GitLab -> Project -> Settings -> CI/CD -> Runners -> Specific Runners
Please enter the gitlab-ci description for this runner:
Please enter the gitlab-ci tags for this runner (comma separated):
aws-runner, ubuntu
Registering runner… succeeded runner=E1D4xTkQ
Please enter the executor: docker, docker-ssh, shell, virtualbox, docker+machine, custom, parallels, ssh, docker-ssh+machine, kubernetes:
shell
Runner registered successfully. Feel free to start it, but if it’s running already the config should be automatically reloaded!

After that we want to install the dependencies used to build our project and make sure the cmake variables are defined:

$ sudo apt-get install --yes gcc
$ sudo apt-get install --yes g++
$ sudo apt-get install --yes cmake
$ export CC=/usr/bin/gcc
$ export CXX=/usr/bin/g++

Lastly we want to edit our “.gitlab-ci.yml” to use our new build server:

stages:
  - build

build-job:
  stage: build
  tags:
    - aws-runner
  script:
    - mkdir build
    - cd build
    - cmake ..
    - cmake --build .
  artifacts:
    paths:
      - build

Again, as soon as you commit that change, the build pipeline will kick off, now using your new build server!

Add Test Code and Test Stage in CI/CD

Thus far, we have set up a basic way to build our application every time we push a commit to our development branch. That’s nice and all, since we can see if the code compiles and links on a specific platform, but it’s not quite CI. What we need now are some automated tests to check that our application is behaving as expected.

Automated testing is a whole topic and science unto itself. In a real development environment we might be using TDD (Test Driven Development) or another strategy where the development team writes its own tests along with its features. You might also have dedicated test engineers creating automated tests for security, performance, or recurring issues. Large organizations can have tens of thousands of tests across build stages, but even starting with a small number of tests in a small organization helps. For this demo we are just going to add some basic tests using Google’s C++ testing framework, Google Test. Google Test is pretty prominent in the C++ testing community and is reasonably easy to set up, but there are many other C++ testing frameworks you could use in its place.

First, be sure to `git pull origin development` into your local branch if you have made changes to your .gitlab-ci.yml file on the website. Then create a “tst” folder alongside your current “src” folder and add these two lines to the existing CMakeLists.txt file in the root of our project.

enable_testing()
add_subdirectory(tst)

Next, in your “tst” folder, add the following two CMake files so that CMake can download googletest and build our test file.

tst/CMakeLists.txt:

cmake_minimum_required (VERSION 3.10)

# Setup GoogleTest: download and unpack it at configure time
configure_file(CMakeLists.txt.in googletest-download/CMakeLists.txt)
execute_process(COMMAND ${CMAKE_COMMAND} -G "${CMAKE_GENERATOR}" .
  RESULT_VARIABLE result
  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/googletest-download)
if(result)
  message(FATAL_ERROR "CMake step for googletest failed: ${result}")
endif()
execute_process(COMMAND ${CMAKE_COMMAND} --build .
  RESULT_VARIABLE result
  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/googletest-download)
if(result)
  message(FATAL_ERROR "Build step for googletest failed: ${result}")
endif()

# Prevent overriding the parent project's compiler/linker
# settings on Windows
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)

# Add googletest directly to our build. This defines
# the gtest and gtest_main targets.
add_subdirectory(${CMAKE_CURRENT_BINARY_DIR}/googletest-src
                 ${CMAKE_CURRENT_BINARY_DIR}/googletest-build
                 EXCLUDE_FROM_ALL)

# The gtest/gtest_main targets carry header search path
# dependencies automatically when using CMake 2.8.11 or
# later. Otherwise we have to add them here ourselves.
#  include_directories("${gtest_SOURCE_DIR}/include")

project (Factorial_test)
set(SOURCES test.cpp ../src/Factorial.cpp)
set(HEADERS ../src/Factorial.h)
add_executable(Factorial_test ${SOURCES} ${HEADERS})
set_property(TARGET Factorial_test PROPERTY CXX_STANDARD 17)
target_link_libraries(Factorial_test gtest_main)
add_test(NAME Factorial_test COMMAND Factorial_test)

tst/CMakeLists.txt.in:

cmake_minimum_required(VERSION 3.10)

project(googletest-download NONE)

include(ExternalProject)
ExternalProject_Add(googletest
  GIT_REPOSITORY
  GIT_TAG           master
  SOURCE_DIR        "${CMAKE_CURRENT_BINARY_DIR}/googletest-src"
  BINARY_DIR        "${CMAKE_CURRENT_BINARY_DIR}/googletest-build"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND     ""
  INSTALL_COMMAND   ""
  TEST_COMMAND      ""
)

Then we want to actually add our test file. Note that we are using some of googletest’s macros here, but it is essentially just checking that three different factorials equal what they should.

#include "gtest/gtest.h" 
#include "../src/Factorial.h" 
TEST(FactorialValueTest, FactorialOf3) { ASSERT_EQ(Factorial::GetFactorial(3), 6); } 
TEST(FactorialValueTest, FactorialOf0) { ASSERT_EQ(Factorial::GetFactorial(0), 1); } 
TEST(FactorialValueTest, FactorialOf14) { ASSERT_EQ(Factorial::GetFactorial(14), 87178291200); }

Now that we have our tests, you can go ahead and build everything locally using the same commands as before. Then, if you want to run the tests, use the command below in the build directory.

tst/Debug/Factorial_test.exe

Output should be:
Running main() from D:\path\to\project\build\tst\googletest-src\googletest\src\
[==========] Running 3 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 3 tests from FactorialValueTest
[ RUN      ] FactorialValueTest.FactorialOf3
[       OK ] FactorialValueTest.FactorialOf3 (0 ms)
[ RUN      ] FactorialValueTest.FactorialOf0
[       OK ] FactorialValueTest.FactorialOf0 (0 ms)
[ RUN      ] FactorialValueTest.FactorialOf14
[       OK ] FactorialValueTest.FactorialOf14 (0 ms)
[----------] 3 tests from FactorialValueTest (4 ms total)

[----------] Global test environment tear-down
[==========] 3 tests from 1 test suite ran. (8 ms total)
[  PASSED  ] 3 tests.

Lastly, we want to add the test stage to our pipeline.

test-job:
  stage: test
  tags:
    - aws-runner
  script:
    - cd build
    - tst/Factorial_test
  dependencies:
    - build-job

The added `dependencies` line here specifies that the new test environment needs to download the artifacts (in our case, the `build` folder) from the earlier `build-job`. Don’t forget to add the `- test` stage to your stages at the top of the file.

When those changes are pushed up to your repo, another pipeline build will be set off, and you can go to the `test-job` to see how it progresses. Once it has finished executing, you should see in the logs that all tests have passed.

That’s a bit boring though, so try messing something up in the `GetFactorial()` function, like adding a `factorial++` in the for loop. Then push the change and watch the `build-job` succeed and the `test-job` fail. If you check its logs you will see exactly which tests fail, what values they were expecting, and what values they got. Congratulations, you now have a working CI pipeline!

You should probably change `GetFactorial()` back so you have a working project again, too.

Set Up the Production Environment and Deploy Stage

So now that we have continuous integration set up, let’s sort out deployment. Deployment environments can be extremely wide-ranging, especially with C++ applications, which could be running on servers, PCs, edge devices, satellites, etc. For this guide we are just going to spin up a second AWS EC2 Ubuntu 18.04 instance and call it our production environment.
In GitLab you can specify a production environment by going to Operations -> Environments -> New environment, then adding `production` as the “Name” and leaving “External URL” blank.

Once your production environment is up, we want to take the AWS server’s public IP (from the AWS instances dashboard) and add it as a variable in GitLab. To do this go to Project -> Settings -> Variables -> Add Variable. Use the name `DEPLOY_SERVER`, specify the “Environment scope” as `production`, and be sure to mask the variable.

Next we want to make sure we can securely transfer the production-ready files from our build server to the production server. So go back to your AWS build server, run the command `sudo ssh-keygen`, and hit enter several times to use the default ssh settings. Now run the command `cat ~/.ssh/id_rsa` to get the private ssh key and copy it. Then create another variable on GitLab, call it `SSH_PROD_P_KEY`, paste the key, and mask it as well. On the build server, now run `cat ~/.ssh/id_rsa.pub` to get the public ssh key, then on your production server edit the file `~/.ssh/authorized_keys` with vim or nano and add the public key. Our build server should now be able to ssh and scp to the production server. You can run `ssh ubuntu@<production server IP>` on the build server to check that you can reach the production server.

Now we can finally add the deployment section to our .gitlab-ci.yml file:

deploy-job:
  stage: deploy
  tags:
    - aws-runner
  only:
    - master
  script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PROD_P_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan -t rsa $DEPLOY_SERVER >> ~/.ssh/known_hosts
    - scp build/Factorial ubuntu@$DEPLOY_SERVER:/home/ubuntu
  environment:
    name: production
  dependencies:
    - build-job

You can see that we have specified this `deploy-job` to only execute on the master branch. Our script here writes our private ssh key variable to the appropriate file with the needed permissions and adds the deployment server to our known_hosts file. We then take the Factorial executable from the build folder that the `build-job` dependency has given us and scp it to the `/home/ubuntu` directory. Make sure to add the `- deploy` stage to your stages at the top of the file.

However, you will probably notice that when you commit this, the deployment stage doesn’t run. Why? Because we are on the “development” branch, and we specified this job should only run on the master branch. So in GitLab, navigate to the master branch, create a merge request, and, if you are comfortable with the changes, approve the merge. As soon as you do, a new pipeline will be fired off with all three stages. After they all complete successfully, you can ssh into your production server to see the resulting Factorial binary ready to use. Huzzah! We have a complete continuous integration / continuous deployment pipeline!

Continuous Delivery Check

So some of you might be sweating bullets right now. One merge and my code is automatically in production? Yikes! Well, in the merge request you should see the relevant testing stages that the current state of the project on that branch has passed. However, if you are not as confident in your tests, or management wants an extra layer of checks, you can add a so-called staging environment and deployment section so a human can have one last check before production. This can be set up in almost exactly the same manner as our `deploy-job`; just change the key and public IP to the new staging server. We can also add `when: manual` to the production deploy job so that a human always needs to go into the pipeline and press the play button on the production deploy job before the build is deployed to production. As mentioned before, this is the difference between fully autonomous continuous deployment and continuous delivery, which usually requires at least one last manual check.
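As a sketch, the manual gate is a one-line addition to the deploy job, reusing the job, tag, and variable names from earlier in this guide:

```yaml
deploy-job:
  stage: deploy
  tags:
    - aws-runner
  only:
    - master
  when: manual   # a human must press "play" in the pipeline view
  script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PROD_P_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan -t rsa $DEPLOY_SERVER >> ~/.ssh/known_hosts
    - scp build/Factorial ubuntu@$DEPLOY_SERVER:/home/ubuntu
  environment:
    name: production
  dependencies:
    - build-job
```

With this in place, the pipeline still builds and tests every merge to master automatically, but the final hop to production waits for a person.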


In conclusion, a C++ CI/CD pipeline is a powerful tool for improving the quality and reliability of your C++ applications. By automating building, testing, and deployment, it saves developers time and effort and results in more dependable software.
