Understand To Be Continuous¶
This page introduces the general notions and philosophy behind to be continuous.
A state-of-the-art pipeline?!¶
Generally speaking, a CI/CD pipeline should be composed of one or several of the following stages:

- compile the code and package it into an executable or intermediate format
- perform all required tests and code analysis:
    - unit testing
    - code quality audits
    - Static Application Security Testing (SAST)
    - dependency check
    - license check
    - ...
- package the compiled code into an executable format (ex: a Docker image)
- create the hosting infrastructure
- deploy the code into a hosting environment
- perform all required acceptance tests on the deployed application:
    - functional testing (using an automated browser or a tool to test the APIs)
    - performance testing
    - Dynamic Application Security Testing (DAST)
    - ...
- publish the validated code and/or package somewhere
- and lastly deploy to production
Info
to be continuous provides predefined, configurable and extensible templates covering one or several of the above stages.
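Using a template typically boils down to including it in your `.gitlab-ci.yml` and overriding a few variables. Below is a minimal sketch assuming the Python template; the exact project path, ref, file name and variable names are assumptions to verify against the template's own documentation:

```yaml
include:
  # include a 'to be continuous' template
  # (project path, ref and file name are assumptions to verify)
  - project: "to-be-continuous/python"
    ref: "5.0.0"
    file: "templates/gitlab-ci-python.yml"

# most templates are configured by overriding variables (names vary per template)
variables:
  PYTHON_IMAGE: "python:3.12"

# declare the generic stages required by the included template(s)
# (the exact list depends on the templates you include - see below)
stages:
  - build
  - test
  - publish
```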
Several kinds of template¶
to be continuous provides 6 kinds of template, each one related to a specific part of your pipeline.
Build & Test¶
Build & Test templates depend on the language/build system and are in charge of:
- building and unit testing the code,
- providing all language-specific code analysis tools (linters, SAST, dependency check, ...),
- publishing the built artifacts on a package repository.
Code Analysis¶
Code Analysis templates provide code analysis tools (SAST, dependency check, ...) not dependent on any specific language or build tool (ex: SonarQube, Checkmarx, Coverity).
Packaging¶
Packaging templates provide tools for packaging the code into a specific executable/distributable format (ex: Docker, YUM, DEB, ...). They also provide security tools related to the packaging technology (linters, dependency checks, ...).
Infrastructure¶
Infrastructure(-as-code) templates are in charge of managing and provisioning your infrastructure resources (network, compute, storage, ...).
Deploy & Run¶
Deploy & Run templates depend on the hosting (cloud) environment and are in charge of deploying the code to the hosting environment.
Acceptance¶
Acceptance templates provide acceptance test tools (functional testing, performance testing, DAST).
Generic pipeline stages¶
Our GitLab templates use a coherent set of generic GitLab CI stages, mapped onto the generic pipeline described in the previous chapter:
| Stage | Template type | Description |
|---|---|---|
| `build` | build & test | Build (when applicable), unit test (with code coverage), and package the code |
| `test` | build & test / code analysis | Perform code analysis jobs (code quality, Static Application Security Testing, dependency check, license check, ...) |
| `package-build` | packaging | Build the deployable package |
| `package-test` | packaging | Perform all tests on the package |
| `infra` | infrastructure | Instantiate/update the (non-production) infrastructure |
| `deploy` | deploy & run | Deploy the application to a (non-production) environment |
| `acceptance` | acceptance | Perform acceptance tests on the upstream environment |
| `publish` | build & test | Publish the packaged code to an artifact repository |
| `infra-prod` | infrastructure | Instantiate/update the production infrastructure |
| `production` | deploy & run | Deploy the application to the production environment (CD pipeline only) |
Ambiguous naming?

- the `build` stage is not only about building the code, but also about running the unit tests (with code coverage),
- the `test` stage is not about unit testing, but rather about code analysis.

We chose to keep those names anyway to stay compatible with GitLab Auto DevOps, which follows the same philosophy.
Your `.gitlab-ci.yml` file will have to declare all stages required by the included templates, in the right order.

Tip

Instead of keeping track of required stages, simply add them all in your `.gitlab-ci.yml`:
```yaml
stages:
  - build
  - test
  - package-build
  - package-test
  - infra
  - deploy
  - acceptance
  - publish
  - infra-prod
  - production
```
You may think the complete list is too large for your case, but don't worry: each stage only appears if at least one active job is mapped to it. For example, if you're not using any packaging template, the `package-xxx` stages will never show up in your pipelines.
Publish & Release¶
Many templates offer the possibility to package the code and publish it to an appropriate registry (ex: PyPI for Python, Maven repository for Java, npm for Node.js, Container registry for container images...).
In addition, to be continuous also supports triggering a release. A release is the action - performed from the main branch - of freezing a stable version of the code: determine the next version, then publish the versioned code packages (possibly with additional release-specific artifacts - ex: a changelog).
As stated above, a release should trigger one or several publish actions, but a publish is not necessarily related to a release. Depending on the technology, you may also want to publish unstable package versions (ex: snapshots in Maven terminology).
Release¶
Functionally, a release involves the following steps:

- [mandatory] determine the next release version (either manually or automatically),
- [optional] bump the version:
    - update files with the new version (ex: `pom.xml`, `setup.py`, `.bumpversion.cfg`...),
    - update other files related to the release (ex: `README.md`, `CHANGELOG.md`...),
    - commit the changes,
- [mandatory] create a Git tag named after the version,
- [optional] create a GitLab release,
- [mandatory] package the code (language-dependent format) & publish the versioned package(s) to an appropriate repository.
Some to be continuous templates provide their own release job when supported by the build tool (ex: Maven with the Maven Release Plugin, Python with Bumpversion or with the Poetry Version command), but the release action can also be implemented by a separate tool/template such as semantic-release, which gracefully automates the release process (automatically determines the next version number based on Git commit messages, enforces semantic versioning, generates the release notes, creates the tag and the GitLab release).
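As a sketch, delegating the release to the semantic-release template alongside a build template could look as follows; the project paths, refs and file names are assumptions to verify against each template's documentation:

```yaml
include:
  # build & test template (assumed path/ref/file)
  - project: "to-be-continuous/maven"
    ref: "3.0.0"
    file: "templates/gitlab-ci-maven.yml"
  # semantic-release template: computes the next version from commit messages,
  # creates the Git tag and the GitLab release (assumed path/ref/file)
  - project: "to-be-continuous/semantic-release"
    ref: "3.0.0"
    file: "templates/gitlab-ci-semrel.yml"
```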
Publish¶
As explained before, publishing a code package is not necessarily related to a release process (some technologies - such as Maven - allow publishing unstable package versions), but a release should always end with publishing the versioned packages.
As a result, all publish-capable templates are fully compatible with `semantic-release` or any alternative implementing the release process.
That means that - for instance - you may perfectly well choose to use `semantic-release` to perform the release of a multi-module project containing Maven and Docker code:
the publication of the released versions will be handled by each template in the pipeline triggered by the Git tag created during the release process.
Deployment environments¶
All our Deploy & Run templates support 4 kinds of environments (each being optional):
| Environment Type | Description | Associated branch(es) |
|---|---|---|
| Review | Those are dynamic and ephemeral environments to deploy your ongoing developments. It is a strict equivalent of GitLab's Review Apps feature. | All development branches (non-integration, non-production) |
| Integration | A single environment to continuously deploy your integration branch. | The integration branch (`develop` by default) |
| Staging | A single environment to continuously deploy your production branch. It is an iso-prod environment, meant for running the automated acceptance tests prior to deploying to the production env. | The production branch (`main` or `master` by default) |
| Production | Well... the prod! | The production branch (`main` or `master` by default) |
A few remarks:
- All our Acceptance templates support those environments and cooperate gracefully with whichever deployment technology you're using, testing the right server depending on the branch the pipeline is running on.
- The transition from Staging to Production can be either automatic (if you feel confident enough in your automated acceptance tests) or one-click (this is the default). This is configurable.
- If you're working in an organization where development and deployment are managed by separate teams, you may simply not declare any Production environment in the development project, and instead trigger a pipeline in the project owned by the deployment team.
- More info about deployment environments on Wikipedia.
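To make the branch-to-environment mapping more concrete, here is a simplified, purely illustrative sketch of how such deploy jobs could be expressed in plain GitLab CI. This is not the templates' actual implementation (the Deploy & Run templates handle this logic for you, with configurable branch names); the job names and the `deploy.sh` script are hypothetical:

```yaml
# Illustrative sketch only - the Deploy & Run templates implement this kind of logic.
deploy-review:
  stage: deploy
  script: ./deploy.sh review          # hypothetical deployment script
  environment:
    name: review/$CI_COMMIT_REF_SLUG  # one dynamic environment per development branch
  rules:
    # any branch that is neither integration nor production
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "develop" && $CI_COMMIT_BRANCH != "main"'

deploy-integration:
  stage: deploy
  script: ./deploy.sh integration
  environment:
    name: integration
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'

deploy-staging:
  stage: deploy
  script: ./deploy.sh staging
  environment:
    name: staging
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

deploy-production:
  stage: production
  script: ./deploy.sh production
  environment:
    name: production
  rules:
    # "one-click" transition from staging to production (the default): manual job
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
```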
Development workflow¶
So far, we've presented a quite static vision of what a CI/CD pipeline should be, but there will be differences depending on where you are in the development workflow:
| Stage | Description |
|---|---|
| dev/early | Working in a feature branch with no associated Merge Request |
| dev/work-in-progress | Working in a feature branch with an associated Draft Merge Request |
| dev/review | Working in a feature branch with an associated Ready Merge Request |
| integration | A change is being pushed to the integration branch (if any) |
| deployment | A change is being pushed to the production branch |
Adaptive pipeline¶
Ask a developer and a security expert what the top requirement is when building a CI/CD pipeline.
The developer would probably answer:
Speed
Divide the pipeline execution time by 2, you’ll double my productivity.
While the security expert would argue:
Tests & Security checks
Software must be thoroughly tested and analyzed at every stage to avoid bugs, poor quality or - even worse - security vulnerabilities.
Those 2 goals look contradictory, as adding more tests and analysis tools to the pipeline will obviously increase the overall execution time.
Still, you can reconcile developer productivity with software quality and security if you implement an adaptive pipeline that depends on the stage in the development workflow: prioritize speed in the early development stages, and gradually introduce DevSecOps checks as you get closer to production.
to be continuous implements the above idea through the adaptive pipeline strategy:
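As a rough illustration of the idea (not the templates' actual implementation), a heavyweight job can be restricted with plain GitLab CI `rules`. The sketch below assumes `main` is the production branch and `develop` the integration branch; the job name and script are hypothetical:

```yaml
# Illustration only: skip an expensive analysis job in early development,
# run it on integration/production branches and once the Merge Request is ready.
deep-security-scan:                   # hypothetical job name
  stage: test
  script:
    - echo "run the expensive security analysis here"
  rules:
    # always run on the production and integration branches
    - if: '$CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"'
    # run in merge request pipelines once the MR is marked as ready (not a draft)
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TITLE !~ /^Draft:/'
    # otherwise (early development), don't run, to keep the pipeline fast
    - when: never
```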
Supported Git workflows¶
Using Git, there are many possible workflows.
You are free to use whichever you want, but our templates make strong hypotheses you should be aware of:

- the `main` (or `master`) branch is production (it triggers the CD pipeline),
- the `develop` branch is integration (it triggers a hybrid pipeline),

    the use of an integration branch is optional, and even discouraged in the general case

- any other branch is development (it triggers the CI pipeline).
The following schemas summarize the 2 main branching strategies (depending on whether or not you use an integration branch) with their associated deployment strategies:
If you're using a complete Gitflow workflow, release branches are managed the same way as regular development branches, possibly deploying to dedicated review environments.
When to use an integration branch?¶
Let's state it clearly: the most efficient Git workflow is the simplest one that fits your needs. Consequently:
- If you do not have good reasons to use an integration branch, just don't.
- If you do not know which Git workflow to use, start as simple as possible.
- Feature-Branch should be enough in most situations.
What is the harm with an integration branch?
Using an integration branch has several drawbacks that you should be aware of before making your choice.
The root cause is that it introduces a de-facto delay between the end of a development (the feature branch being merged into `develop`) and its deployment to the production environment (`develop` being merged into `main`, thus flushing accumulated changes all at once).
This "two-stages" deployment raises issues:
- Who is responsible for flushing `develop` into `main`? When?
? When? - When things go wrong during a deployment to production, it might be complex to identify which change caused the issue (there might even be cases where the problem is actually due to the interaction between 2 separate changes).
- Depending on the time elapsed since the end of development, it may be difficult for the author of the failing code to analyze the cause of the problem, especially if the development dates back a few weeks or months and the developer has moved on to other tasks.
Using an integration branch is discouraged in the general case, but there are some acceptable reasons to adopt one. The following chapters present the 3 main ones.
Can't afford Review environments¶
Instantiating a dedicated hosting environment for each development branch in progress might indeed be a cost that is difficult to afford.
Tip
Keep in mind there might be tricks to mitigate this cost:
- shut down all review environments every evening,
- use a degraded infrastructure (ex: use an in-memory database instead of a real one, get rid of all redundancy, …)
Even with smart tricks, review environments may simply not be affordable. In that case, your developers will need a single, shared environment to integrate all their work. That's exactly the purpose of an integration branch.
Not mature enough for continuous deployment¶
Lack of automated testing, the need for manual acceptance test campaigns on a dedicated environment, poor software quality… There are many reasons why you may not feel ready for continuous deployment.
In that case, an integration branch with its associated integration environment might be the appropriate solution.
Release-oriented deployment¶
In release-oriented projects, several features get bundled into a release and then deployed all at once. The release often comes with a versioning strategy, release notes and a roadmap - something very common in the software vendor business.
In that case, an integration branch would be the most appropriate way of addressing this requirement.
Developers will develop changes and continuously integrate them into the integration branch; once all expected features have been developed, you'll be able to proceed with the release and flush them all by merging `develop` into `main`.
Modularity & Composability¶
to be continuous templates are built to be:
- Modular: each template complies with the Single-responsibility principle while observing common architectural principles (standard behaviour per template type, generic pipeline stages, common Git workflow principles, ...)
- Composable: each template cooperates gracefully with others to minimize the amount of integration work.
Let's illustrate this with an example. Here is a pipeline of a project using multiple templates:
- Maven to build and test the Java code,
- SonarQube to analyse the code,
- Docker to containerize the application,
- Kubernetes to deploy and run the containers,
- Cypress and Postman for automated acceptance tests.
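A hedged sketch of what such a project's `.gitlab-ci.yml` includes could look like; the project paths, refs and file names below are assumptions, to be checked against each template's documentation:

```yaml
include:
  # build & test: Java/Maven
  - project: "to-be-continuous/maven"
    ref: "3.0.0"
    file: "templates/gitlab-ci-maven.yml"
  # code analysis: SonarQube
  - project: "to-be-continuous/sonar"
    ref: "3.0.0"
    file: "templates/gitlab-ci-sonar.yml"
  # packaging: Docker image
  - project: "to-be-continuous/docker"
    ref: "5.0.0"
    file: "templates/gitlab-ci-docker.yml"
  # deploy & run: Kubernetes
  - project: "to-be-continuous/kubernetes"
    ref: "5.0.0"
    file: "templates/gitlab-ci-k8s.yml"
  # acceptance: Cypress & Postman
  - project: "to-be-continuous/cypress"
    ref: "3.0.0"
    file: "templates/gitlab-ci-cypress.yml"
  - project: "to-be-continuous/postman"
    ref: "3.0.0"
    file: "templates/gitlab-ci-postman.yml"
```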
Composability highlights:
- the Docker build uses the artifacts produced by Maven to build the container,
- the SonarQube analysis reuses the unit tests and coverage report from Maven,
- Kubernetes deploys the Docker container built in the upstream jobs,
- Cypress and Postman automatically test the application deployed by Kubernetes in the upstream pipeline.
Other examples of composability in to be continuous:
- Most templates implementing a publish/release feature also cooperate with the Semantic Release template (when used) to retrieve the next version number determined by Semantic Release.
- the Terraform template supports techniques to propagate generated inventory information (easy to reuse with the Ansible or AWS templates, for instance).