Understand To Be Continuous
This page introduces general notions & philosophy about to be continuous.
A state-of-the-art pipeline?!
Generally speaking, a CI/CD pipeline should be composed of one or several of the following stages:
- compile the code and package it into an executable or intermediate format
- perform all required tests and code analysis:
    - unit testing
    - code quality audits
    - Static Application Security Testing (SAST)
    - dependency check
    - license check
- package the compiled code into an executable format (ex: a Docker image)
- create the hosting infrastructure
- deploy the code into a hosting environment
- perform all required acceptance tests on the deployed application:
    - functional testing (using an automated browser or an API testing tool)
    - performance testing
    - Dynamic Application Security Testing (DAST)
- publish the validated code and/or package somewhere
- and lastly deploy to production
to be continuous provides predefined, configurable and extensible templates, each covering one or several of the above stages.
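As an illustration, using such a template boils down to a standard GitLab CI `include` in your `.gitlab-ci.yml` (the project path, version and file below are illustrative placeholders — pick the actual template you need):

```yaml
include:
  # pull a to-be-continuous template by project path, version tag and file
  # (names below are examples, not a recommendation)
  - project: "to-be-continuous/maven"
    ref: "3.5.0"
    file: "templates/gitlab-ci-maven.yml"
```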
Several kinds of template
to be continuous provides 6 kinds of template, each one related to a specific part of your pipeline.
Build & Test
Build & Test templates depend on the language/build system and are in charge of:
- building and unit testing the code,
- providing all language specific code analysis tools (linters, SAST, dependency check, ...),
- publishing the built artifacts on a package repository.
Code Analysis
Code Analysis templates provide code analysis tools (SAST, dependency check, ...) not dependent on any specific language or build tool (ex: SonarQube, Checkmarx, Coverity).
Packaging
Packaging templates provide tools to package the code into a specific executable/distributable format (ex: Docker, YUM, DEB, ...). They also provide security tools related to the packaging technology (linters, dependency checks, ...).
Infrastructure(-as-code)
Infrastructure(-as-code) templates are in charge of managing and provisioning your infrastructure resources (network, compute, storage, ...).
Deploy & Run
Deploy & Run templates depend on the hosting (cloud) environment and are in charge of deploying the code to the hosting environment.
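As a sketch of what such a deployment job can look like in plain GitLab CI (the job name, script and URL are hypothetical, not part of any template):

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "$CI_ENVIRONMENT_URL"   # hypothetical deployment script
  environment:
    name: review/$CI_COMMIT_REF_SLUG      # one environment per branch
    url: https://$CI_COMMIT_REF_SLUG.example.com
```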
Acceptance
Acceptance templates provide acceptance test tools (functional testing, performance testing, DAST).
Generic pipeline stages
Our GitLab templates use a coherent set of generic GitLab CI stages, mapped onto the generic pipeline depicted in the previous chapter:
| Stage | Template kind | Description |
| --- | --- | --- |
| `build` | Build & Test | Build (when applicable), unit test (with code coverage), and package the code |
| `test` | Build & Test / Code Analysis | Perform code analysis jobs (code quality, Static Application Security Testing, dependency check, license check, ...) |
| `package-build` | Packaging | Build the deployable package |
| `package-test` | Packaging | Perform all tests on the package |
| `infra` | Infrastructure | Instantiate/update the (non-production) infrastructure |
| `deploy` | Deploy & Run | Deploy the application to a (non-production) environment |
| `acceptance` | Acceptance | Perform acceptance tests on the upstream environment |
| `publish` | Build & Test | Publish the packaged code to an artifact repository |
| `infra-prod` | Infrastructure | Instantiate/update the production infrastructure |
| `production` | Deploy & Run | Deploy the application to the production environment (CD pipeline only) |
Note that the `build` stage is not only about building the code, but also about running the unit tests (with code coverage), and the `test` stage is not about unit testing, but rather about code analysis. We chose to keep those names anyway to stay compatible with GitLab Auto DevOps, which follows the same philosophy.
Your `.gitlab-ci.yml` file will have to declare all stages required by the included templates, in the right order. Instead of keeping track of required stages, simply add them all:

```yaml
stages:
  - build
  - test
  - package-build
  - package-test
  - infra
  - deploy
  - acceptance
  - publish
  - infra-prod
  - production
```
You may think the complete list is too large for your case, but don't worry: each stage only appears if at least one active job is mapped to it. For example, if you're not using any packaging template, the `package-xxx` stages will never show up in your pipelines.
So far, we've presented a quite static vision of what a CI/CD pipeline should be, but the reality differs somewhat depending on whether the pipeline is triggered on a development branch (that's CI) or on the production branch (that's CD).
The guiding principles
- continuous integration (CI) has to be fast (and, to some extent, energy efficient)
- continuous deployment/delivery (CD) has to secure the deployment/delivery to production
Continuous integration is what a developer keeps doing 90% of the day: change a bit of code, commit, push and wait for the code to be up and running somewhere. Such a cycle may occur up to 30 times per day, sometimes more. That's why it has to be as fast as possible: every saved minute is a huge productivity gain for the developer at the end of the day. That's also why it may not be wise to start a complete automated non-regression test campaign every time the developer changes a comma in their code.
On the contrary, continuous deployment/delivery occurs far less often, but it marks the intent to roll out a new version of your code. That's why it has to do everything possible to maximize confidence in what you are doing.
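In plain GitLab CI, this dual requirement is typically expressed with `rules`: a heavy job runs automatically on the production branch, and only on demand elsewhere (the job name and script below are hypothetical):

```yaml
acceptance-test:
  stage: acceptance
  script:
    - ./run-acceptance-tests.sh   # hypothetical test launcher
  rules:
    # CD: always run the full campaign before a production rollout
    - if: '$CI_COMMIT_BRANCH == "master"'
    # CI: keep development pipelines fast; run only on demand
    - when: manual
```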
How is it mapped onto Git workflows?
Using Git, there are many possible workflows.
You are free to use whichever you want, but our templates make strong hypotheses you should be aware of:
- the `master` branch is production (triggers the CD pipeline),
- the `develop` branch is integration (triggers a hybrid pipeline),
- any other branch is development (triggers the CI pipeline).
You're not compelled to use an integration branch. If you don't use the `develop` branch, then you'll probably be using a basic Feature Branch workflow, which is more than enough in most cases.
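A minimal sketch of how a template can restrict the CD part of the pipeline to the production branch, using standard GitLab CI `rules` (the job name and script are hypothetical):

```yaml
production:
  stage: production
  script:
    - ./deploy.sh production   # hypothetical deployment script
  rules:
    # only the production branch triggers the CD pipeline
    - if: '$CI_COMMIT_BRANCH == "master"'
```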
Now, here is a table that summarizes the generic templates' behavior for each branch:

| Branch | Build & test | Code analysis | Deploy |
| --- | --- | --- | --- |
| production (`master`) | compile & unit test | deep analysis / automatic trigger | create/deploy the production environment |
| integration (`develop`) | compile & unit test | deep analysis / automatic trigger | create/deploy the integration environment |
| development (all other branches) | compile & unit test | quick analysis (when possible), or manual trigger | create/deploy a review environment |