
Understand To Be Continuous

This page introduces the general concepts and philosophy behind to be continuous.

A state-of-the-art pipeline?!

Generally speaking, a CI/CD pipeline should be composed of one or several of the following stages:

  1. compile the code and package it into an executable or intermediate format
  2. perform all required tests and code analysis:
    • unit testing
    • code quality audits
    • Static Application Security Testing (SAST)
    • dependencies check
    • licenses check
    • ...
  3. package the compiled code into an executable format (e.g. a Docker image)
  4. create the hosting infrastructure
  5. deploy the code into a hosting environment
  6. perform all required acceptance tests on the deployed application
    • functional testing (using an automated browser or a tool to test the APIs)
    • performance testing
    • Dynamic Application Security Testing (DAST)
    • ...
  7. publish the validated code and/or package somewhere
  8. and lastly deploy to production


to be continuous provides predefined, configurable and extensible templates covering one or several of the above stages.
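In practice, a project assembles its pipeline by including one template per stage it needs in its .gitlab-ci.yml. A minimal sketch, using GitLab's standard `include:project` syntax (the project paths, refs and file names below are illustrative — check each template's documentation for the exact include path):

```yaml
# .gitlab-ci.yml — sketch of composing a pipeline from templates
# (project paths, refs and file names are illustrative)
include:
  # a Build & Test template for the project's language
  - project: 'to-be-continuous/python'
    ref: '7.0.0'
    file: '/templates/gitlab-ci-python.yml'
  # a Packaging template
  - project: 'to-be-continuous/docker'
    ref: '5.0.0'
    file: '/templates/gitlab-ci-docker.yml'
```

Each included template contributes preconfigured jobs to the relevant stages; your own file mostly declares the stages and overrides configuration variables.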

Several kinds of template

to be continuous provides 6 kinds of template, each one related to a specific part of your pipeline.

Build & Test

Build & Test templates depend on the language/build system and are in charge of:

  • building and unit testing the code,
  • providing all language-specific code analysis tools (linters, SAST, dependency check, ...),
  • publishing the built artifacts on a package repository.

Code Analysis

Code Analysis templates provide code analysis tools (SAST, dependency check, ...) not tied to any specific language or build tool (e.g. SonarQube, Checkmarx, Coverity).

Packaging

Packaging templates provide tools to package the code into a specific executable/distributable format (e.g. Docker, YUM, DEB, ...). They also provide security tools related to the packaging technology (linters, dependency checks, ...).

Infrastructure

Infrastructure(-as-code) templates are in charge of managing and provisioning your infrastructure resources (network, compute, storage, ...).

Deploy & Run

Deploy & Run templates depend on the hosting (cloud) environment and are in charge of deploying the code to the hosting environment.

Acceptance

Acceptance templates provide acceptance test tools (functional testing, performance testing, DAST).

Deployment environments

All our Deploy & Run templates support 4 kinds of environments (each being optional):

| Environment Type | Description | Associated branch(es) |
|---|---|---|
| Review | Dynamic and ephemeral environments to deploy your ongoing developments. A strict equivalent of GitLab's Review Apps feature. | All development branches (non-integration, non-production) |
| Integration | A single environment to continuously deploy your integration branch. | The integration branch (develop by default) |
| Staging | A single environment to continuously deploy your production branch. It is an iso-prod environment, meant for running the automated acceptance tests prior to deploying to the production environment. | The production branch (main or master by default) |
| Production | Well... the prod! | The production branch (main or master by default) |

A few remarks:

  • All our Acceptance templates support those environments and cooperate gracefully with whichever deployment template you're using: they automatically test the right server depending on the branch the pipeline runs on.
  • Transition from Staging to Production can be either automatic (if you feel confident enough with your automated acceptance tests) or one-click (this is the default). This is configurable.
  • If you're working in an organization where development and deployment are managed by separate teams, you may choose not to declare any Production environment in the development project, and instead trigger a pipeline in the project owned by the deployment team.
  • More info about deployment environments on Wikipedia.
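Under the hood, those environment types map onto GitLab's standard `environment:` keyword. A hand-written equivalent of what a Deploy & Run template does for review environments might look like this (job name, deploy script and branch names are illustrative):

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_COMMIT_REF_SLUG"  # illustrative deploy script
  environment:
    # one dynamic environment per development branch
    name: review/$CI_COMMIT_REF_SLUG
    # references a teardown job (not shown) run when the branch is done
    on_stop: stop-review
  rules:
    # development branches only (non-integration, non-production)
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "develop" && $CI_COMMIT_BRANCH != "main"'
```

The templates generate equivalent configuration for you, so review environments appear in GitLab's Environments view and are cleaned up automatically.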

Generic pipeline stages

Our GitLab templates use a coherent set of generic GitLab CI stages, mapped onto the generic pipeline described in the previous chapter:

| Stage | Template type | Description |
|---|---|---|
| build | build & test | Build (when applicable), unit test (with code coverage), and package the code |
| test | build & test / code analysis | Perform code analysis jobs (code quality, Static Application Security Testing, dependency check, license check, ...) |
| package-build | packaging | Build the deployable package |
| package-test | packaging | Perform all tests on the package |
| infra | infrastructure | Instantiate/update the (non-production) infrastructure |
| deploy | deploy & run | Deploy the application to a (non-production) environment |
| acceptance | acceptance | Perform acceptance tests on the upstream environment |
| publish | build & test | Publish the packaged code to an artifact repository |
| infra-prod | infrastructure | Instantiate/update the production infrastructure |
| production | deploy & run | Deploy the application to the production environment (CD pipeline only) |

Ambiguous naming?

  • the build stage is not only about building the code, but also about running unit tests (with code coverage)
  • the test stage is not about unit testing, but rather about code analysis.

We chose to keep those names anyway to stay compatible with GitLab Auto DevOps, which follows the same philosophy.

Your .gitlab-ci.yml file will have to declare all stages required by included templates, in the right order.


Instead of keeping track of required stages, simply declare them all in your .gitlab-ci.yml:

```yaml
stages:
  - build
  - test
  - package-build
  - package-test
  - infra
  - deploy
  - acceptance
  - publish
  - infra-prod
  - production
```

💡 You may think the complete list is too large for your case, but don't worry: each stage only appears if at least one active job is mapped to it. For example, if you're not using any packaging template, the package-xxx stages will never show up in your pipelines.

Git workflows

So far, we've presented a quite static vision of what a CI/CD pipeline should be, but in reality the pipeline behaves differently depending on whether it's triggered on a development branch (that's CI) or on the production branch (that's CD).

The guiding principles

  • continuous integration (CI) has to be fast (and, to some extent, energy efficient)
  • continuous deployment/delivery (CD) has to secure the deployment/delivery to production

Continuous integration is what a developer keeps doing 90% of the day: change a bit of code, commit, push and wait for the code to be up and running somewhere. Such a cycle may occur up to 30 times per day, sometimes more. That's why it has to be as fast as possible: every saved minute is a huge gain for the developer's productivity at the end of the day. That's also why it may not be wise to start a complete automated non-regression test campaign every time the developer changes a comma in their code.

On the contrary, continuous deployment/delivery occurs much less often, but denotes the intent to roll out a new version of your code. That's why it has to do everything possible to maximize confidence in what you are doing.

How is it mapped on Git workflows?

Using Git, there are many possible workflows.

You are free to use whichever you want, but our templates make strong assumptions you should be aware of:

  • the master (or main) branch is production (triggers the CD pipeline),
  • the develop branch is integration (triggers a hybrid pipeline),
  • any other branch is development (triggers the CI pipeline).


You're not compelled to use an integration branch. If you don't use the develop branch, then you'll probably be using a basic Feature Branch workflow, which is more than enough in most cases.

Now, here is a table that summarizes the generic templates behavior for each branch:

| Branch | build | test | infra/deploy | acceptance | infra-prod/production |
|---|---|---|---|---|---|
| development (any branch other than develop, main or master) | compile & unit test | quick analysis (when possible), or manual trigger | create/deploy a review environment | manual trigger | N/A |
| integration (develop branch) | compile & unit test | deep analysis (automatic trigger) | create/deploy the integration environment | automatic trigger | N/A |
| production (main or master branch) | compile & unit test | deep analysis (automatic trigger) | create/deploy the staging environment | automatic trigger | create/deploy the production environment |
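This per-branch behavior is typically implemented with GitLab's `rules:` keyword. A simplified sketch for an acceptance job (the job name and test script are illustrative):

```yaml
acceptance:
  stage: acceptance
  script:
    - ./run-acceptance-tests.sh  # illustrative test script
  rules:
    # automatic trigger on the integration and production branches
    - if: '$CI_COMMIT_BRANCH == "develop" || $CI_COMMIT_BRANCH == "main"'
    # manual trigger on any other (development) branch,
    # without blocking the rest of the pipeline
    - if: '$CI_COMMIT_BRANCH'
      when: manual
      allow_failure: true
```

The templates ship equivalent rules out of the box, so the same job definition behaves differently on development, integration and production branches.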

Docker Images Versions

to be continuous templates run the required tools as Docker images whenever possible, and by default use the latest available image version.

In some cases, using the latest version is a good thing; in others, it is not.

  • latest is good for:
    • DevSecOps tools (Code Quality, Security Analysis, Dependency Check, Linters, ...), as using the latest version of the tool is the best way to detect vulnerabilities as soon as possible (well, as soon as new vulnerabilities are known and covered by the tools).
    • Public cloud CLI clients, as there is only one version of the public cloud, and the official Docker image is likely to evolve in step with the APIs.
  • latest is not good for:
    • Build tools, as your project is developed using one specific version of the language/build tool, and you want to control when you change versions.
    • Infrastructure-as-Code tools, for the same reason as above.
    • Acceptance test tools, for the same reason as build tools.
    • Private cloud CLI clients, as you may not be running the latest version of - say - OpenShift or Kubernetes, and you'll need a client CLI version that matches your server version.

To summarize

  1. Make sure you explicitly override the Docker image versions of your build, Infrastructure-as-Code, private cloud CLI client and acceptance test tools to match your project requirements.
  2. Be aware that your pipeline may sometimes fail (without any change on your side) due to a new version of a DevSecOps tool that either detects a new vulnerability (🎉) or introduces a bug or breaking change (💩 happens).
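In practice, overriding an image version is usually a matter of setting a template variable in your .gitlab-ci.yml. A sketch (the variable names below are hypothetical — each template documents its own image variables):

```yaml
variables:
  # hypothetical variable names — check each template's documentation
  PYTHON_IMAGE: "registry.hub.docker.com/library/python:3.12"  # pin the build tool
  TF_IMAGE: "hashicorp/terraform:1.7"                          # pin the IaC tool
  # DevSecOps tool images are deliberately left unpinned (latest)
```

Pinning these versions makes your builds reproducible, while leaving DevSecOps tools on latest keeps vulnerability detection up to date.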