
self-managed to be continuous (advanced)

Using the official to be continuous templates, you may have specific needs within your company:

  • simply expose the configuration of some of your shared tools in Kicker (e.g. a shared SonarQube server, an Artifactory instance, a private Kubernetes cluster),
  • develop and share template adjustments that address your specific technical context (e.g. no default untagged runner, internet access only through a proxy, ...),
  • or even develop your own internal templates.

Installing tbc in a custom group

By default and preferably, to be continuous shall be installed:

  • in the to-be-continuous root group on your GitLab server,
  • with public visibility.

If one or both of these requirements can't be met (because you're not allowed to create a root group in your organization and/or not allowed to create projects with public visibility), then there are a couple of extra things to do to get to be continuous working on your self-managed server:

  1. Use the right GitLab Synchronization option(s) when running the GitLab Copy CLI for the first time:
    • --dest-sync-path to override the GitLab destination root group path,
    • --max-visibility to override the maximum visibility of projects in the destination group.
    For more info about the GitLab Copy CLI options, please refer to the doc.
  2. Set the right variable(s) in your local copy of the tools/gitlab-sync project when configuring the TBC synchronization for the first time:
    • $DEST_SYNC_PATH to override the GitLab destination root group path,
    • $MAX_VISIBILITY to override the maximum visibility of projects in the destination group.
  3. TBC configuration shall be overridden accordingly in the KICKER_RESOURCE_GROUPS variable in your local copy of the doc project (see the [Have your own doc + kicker](#have-your-own-doc-kicker) chapter).
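For instance, if to be continuous were installed under a custom group acme/to-be-continuous with internal visibility (both values are hypothetical, to be adapted to your setup), the KICKER_RESOURCE_GROUPS override would look like:

```json
[
  {
    "path": "acme/to-be-continuous",
    "visibility": "internal"
  }
]
```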

Variable presets

Variable presets are groups of to be continuous variable values that can be used within your company.

You can simply define variable presets:

  1. Create a GitLab project (if possible with public or at least internal visibility),
  2. Create a kicker-extras.json file declaring your presets (complying with the kicker-extras JSON schema),
  3. Make a first release by creating a Git tag.

Example kicker-extras.json:

    {
      "presets": [
        {
          "name": "Shared SonarQube",
          "description": "Our internal shared SonarQube server",
          "values": {
            "SONAR_HOST_URL": ""
          }
        },
        {
          "name": "Shared OpenShift",
          "description": "Our internal shared OpenShift clusters (prod & no-prod)",
          "values": {
            "OS_URL": "",
            "OS_ENVIRONMENT_URL": "https://%{environment_name}",
            "OS_PROD_URL": "",
            "OS_PROD_ENVIRONMENT_URL": "https://%{environment_name}",
            "K8S_URL": "",
            "K8S_ENVIRONMENT_URL": "https://%{environment_name}",
            "K8S_PROD_URL": "",
            "K8S_PROD_ENVIRONMENT_URL": "https://%{environment_name}"
          }
        },
        {
          "name": "Artifactory Mirrors",
          "description": "Our internal Artifactory mirrors",
          "values": {
            "NPM_CONFIG_REGISTRY": "",
            "GOPROXY": "",
            "PIP_INDEX_URL": ""
          }
        }
      ]
    }
With this, Kicker will prompt about each applicable preset directly from the online form.
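As an illustration, here is roughly what Kicker could generate in a project's .gitlab-ci.yml when a user selects the "Shared SonarQube" preset (the template ref and the SonarQube URL below are hypothetical):

```yaml
include:
  - project: "to-be-continuous/sonar"
    ref: "4.0.0"
    file: "templates/gitlab-ci-sonar.yml"

variables:
  # value injected by the "Shared SonarQube" preset
  SONAR_HOST_URL: "https://sonarqube.acme.example"
```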

Template variants

Another essential extra resource is template variants. Roughly speaking, this is a template override to address a specific technical issue.

You can simply define a template variant:

  1. Create a GitLab project (if possible with public or at least internal visibility),
  2. Create a kicker-extras.json file declaring your variant (complying with the kicker-extras JSON schema),
  3. Make a first release by creating a Git tag.

Example: let's imagine that in your company - in addition to the default untagged shared runners - you would also like to let users use non-default shared runners to deploy to your private Kubernetes cluster. Let's also suppose those runners don't have free access to the internet, but need to go through an HTTP proxy.

That would involve:

  1. develop the following variant for the Kubernetes template. For instance in file templates/acme-k8s-variant.yml:
    # ==========================================
    # === ACME variant to use Kubernetes runners
    # ==========================================
    # override the Kubernetes template base job
    .k8s-base:
      tags:
        # Kubernetes Runners tags
        - k8s
        - shared
      variables:
        # Kubernetes Runners proxy configuration
        http_proxy: ""
        https_proxy: ""
        no_proxy: "localhost,,"
        HTTP_PROXY: "${http_proxy}"
        HTTPS_PROXY: "${https_proxy}"
        NO_PROXY: "${no_proxy}"
    (developing this requires advanced to be continuous knowledge)
  2. declare it in a kicker-extras.json file:
      {
        "variants": [
          {
            "id": "acme-k8s-runners",
            "name": "ACME Kubernetes Runners",
            "description": "Use the ACME Kubernetes shared Runners",
            "template_path": "templates/acme-k8s-variant.yml",
            "target_project": "to-be-continuous/kubernetes"
          }
        ]
      }
    (the target_project field declares the original template the variant applies to)

This way, your variant will show up as a simple actionable component in the Kubernetes template form in Kicker.
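In a project's .gitlab-ci.yml, selecting the variant in Kicker simply adds a second include on top of the original template (the refs and the project path hosting the variant below are hypothetical):

```yaml
include:
  # original Kubernetes template
  - project: "to-be-continuous/kubernetes"
    ref: "5.0.0"
    file: "templates/gitlab-ci-k8s.yml"
  # ACME variant overriding the base job
  - project: "acme/cicd/kicker-extras"
    ref: "1.0.0"
    file: "templates/acme-k8s-variant.yml"
```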

Develop your own templates

You may also have to develop tooling very specific to your company.

In that case, you just have to:

  1. Create a GitLab project (if possible with public or at least internal visibility),
  2. Develop your template following the guidelines,
  3. Declare the template with a Kicker descriptor,
  4. Make a first release by creating a Git tag.

Your template can then be used like any other to be continuous one.
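As an illustration only, here is a deliberately minimal template sketch (all names are hypothetical; a real template should follow the full to be continuous guidelines, with a Kicker descriptor, configurable variables and documentation):

```yaml
# templates/gitlab-ci-acme-scan.yml - hypothetical minimal internal template
variables:
  # overridable tool image (fully qualified, see the Docker registry chapter)
  ACME_SCAN_IMAGE: "docker.io/library/alpine:latest"

acme-scan:
  image: $ACME_SCAN_IMAGE
  stage: test
  script:
    - echo "run your company-specific scan here"
```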

Have your own doc + kicker

If you developed any of the above (Kicker extras and/or internal templates), you'll want all developers in your company to have easy access to a reference documentation and a Kicker including your additional material.

In your local copy of the doc project:

  1. Declare the CI/CD project variable GITLAB_TOKEN: a group access token with scopes api,read_registry,write_registry,read_repository,write_repository and with Owner role.
  2. Declare the CI/CD project variable KICKER_RESOURCE_GROUPS: JSON configuration of GitLab groups to crawl.
  3. Create a scheduled pipeline (for instance every day at 3:00 am).

Here is an example of KICKER_RESOURCE_GROUPS content:

    [
      {
        "path": "acme/cicd/all",
        "visibility": "public"
      },
      {
        "path": "acme/cicd/ai-ml",
        "visibility": "internal",
        "exclude": ["project-2", "project-13"],
        "extension": {
          "id": "ai-ml",
          "name": "AI/ML",
          "description": "ACME templates for AI/ML projects"
        }
      },
      {
        "path": "to-be-continuous",
        "visibility": "public"
      }
    ]

Some explanations:

  • path is the path of a GitLab group containing projects with Kicker resources.
  • visibility is the group/projects visibility to crawl.
  • exclude (optional) allows you to exclude some project(s) from processing.
  • extension (optional) allows you to associate the Kicker resources with a separate extension (actionable within Kicker).

By default, KICKER_RESOURCE_GROUPS is configured to crawl the to-be-continuous group only.
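Expressed as JSON, that default value is equivalent to:

```json
[
  {
    "path": "to-be-continuous",
    "visibility": "public"
  }
]
```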

Setup tracking

Another optional thing you might want to set up is tracking (to collect statistics about template job executions).

to be continuous already provides an unconfigured project to perform this: tools/tracking.

Here is what you'll have to do to set it up:

  1. Install an Elasticsearch server.
  2. Create a dedicated user with appropriate authorization to push data to some indices.
  3. In your local copy of the tools/tracking project, define the TRACKING_CONFIGURATION CI/CD project variable as follows:
      {
        "clients": [
          {
            "authentication": {
              ...
            }
          }
        ]
      }
  4. Manually start a pipeline on the main (or master) branch: this will (re)generate a new Docker image with your configuration that will now be used by every template job.
  5. Set the following as an instance-level CI/CD variable:
    • value: $CI_REGISTRY/to-be-continuous/tools/tracking:master (adapt the path if you've installed TBC in a custom root group)
      This will override the default tracking image used by all TBC templates.

Use custom service images

Apart from the tracking image (see previous chapter), TBC also uses a couple of extra service images. By default, the latest versions of the TBC templates pull those images from the public registry, but you may override this behavior and use your own built/hosted images.

Proceed the same way as explained in the previous chapter:

  1. (re)build the image(s) locally in your GitLab server,
  2. override the default image by setting the right instance-level CI/CD variable (see variable names below).

Here are the service images used by TBC templates:

| Project | Description | Image variable |
| --- | --- | --- |
| tools/tracking | This image can be used to collect statistics about template jobs execution. Used by all TBC templates. | |
| tools/vault-secrets-provider | This image can be used to retrieve secrets from a Vault server. Used by TBC Vault variants. | |
| tools/aws-auth-provider | This image can be used to retrieve an authorization token for AWS. Used by TBC AWS variants. | |
| tools/gcp-auth-provider | This image can be used to retrieve an access token for GCP. Used by TBC GCP variants. | |

Use Docker registry mirrors

to be continuous uses explicit Docker registries

By default, Docker image names that do not specify a registry (e.g. alpine:latest) are fetched from the Docker Hub.

Since Docker Hub has some quotas, some companies use Docker registry mirrors.
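For the Docker daemon, such a mirror is typically declared in /etc/docker/daemon.json (the mirror host below is hypothetical):

```json
{
  "registry-mirrors": ["https://registry-mirror.acme.example"]
}
```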

Some Docker registry mirrors can mirror multiple registries (e.g. Artifactory or Nexus Repository when coupling Docker Proxy and Docker Group).

In that case, when pulling an image without specifying the original registry, the mirror looks for an image with the same name in each of its upstream registries.

It returns the first matching image, which is not necessarily from the registry you expected.


  • a developer builds the superapp/backend:1.0.0 image and pushes it to both Docker Hub and a second registry,
  • the developer also tags this image with tag latest and pushes the latest tag to both registries,
  • the developer then builds image superapp/backend:1.1.0 and pushes it only to Docker Hub (e.g. because of a failure in the build pipeline), without noticing that the image has not been pushed to the second registry,
  • if a user pulls superapp/backend:latest, they would expect to get the superapp/backend:1.1.0 image from Docker Hub,
  • but if a mirror has been set up to proxy both Docker Hub and the second registry, with priority given to the second registry, the returned image would be superapp/backend:1.0.0, pulled from the second registry.

This behavior can be exploited in supply chain attacks: attackers can push a malicious image to lots of Docker registries with the same name as a trustworthy image that is only published on Docker Hub. A mirror could return the malicious image just because it found an image with the correct name on another registry before reaching Docker Hub.

In order to protect against this kind of attack, to-be-continuous always uses fully qualified image names (i.e. including the registry).


For example, to refer to aquasec/trivy:latest, to be continuous templates will always specify docker.io/aquasec/trivy:latest.


When using containerd as the container runtime, this should have no impact: containerd will still try to use the configured Docker registry mirrors, if any.

On the other hand, when using Docker as a container runtime, specifying the registry name when pulling a Docker image prevents Docker from using a Docker registry mirror. Instead, the Docker daemon will directly pull the image from the specified registry. As a consequence, Docker Hub quotas may be reached sooner.


You can simply override the image names by specifying your own Docker registry mirror.


If you have a Docker registry mirror for the Docker Hub, you can point the overridden image names at it.
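As a sketch, assuming the template exposes its tool image through a configuration variable (the variable name TRIVY_IMAGE and the mirror host below are assumptions - check the template's Kicker documentation for the exact variable name):

```yaml
# .gitlab-ci.yml - override a template's tool image to pull through your mirror
variables:
  TRIVY_IMAGE: "registry-mirror.acme.example/aquasec/trivy:latest"
```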


In this case, both containerd and the Docker daemon will try to pull the aquasec/trivy:latest image through your Docker registry mirror.