Self-managed to be continuous (advanced)

While using the official to be continuous templates, you may have specific needs within your company:

  • simply expose the configuration of your shared tools in Kicker (e.g. a shared SonarQube server, an Artifactory instance, a private Kubernetes cluster),
  • or develop and share template adjustments that address your specific technical context (e.g. no default untagged runner, internet access only through a proxy, ...),
  • or even develop your own internal templates.

Variable presets

Variable presets are groups of to be continuous variable values that can be used within your company.

Defining variable presets is simple:

  1. Create a GitLab project (if possible with public or at least internal visibility),
  2. Create a kicker-extras.json declaring your presets,
    JSON schema: https://gitlab.com/to-be-continuous/kicker/raw/master/kicker-extras-schema-1.json
  3. Make a first release by creating a Git tag.

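For the release step, any Git tag will do; semantic version tags are the usual convention, for instance:

git tag 1.0.0
git push origin 1.0.0
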
Example kicker-extras.json:

{
  "presets": [
    {
      "name": "Shared SonarQube",
      "description": "Our internal shared SonarQube server",
      "values": {
        "SONAR_URL": "https://sonarqube.acme.host"
      }
    },
    {
      "name": "Shared OpenShift",
      "description": "Our internal shared OpenShift clusters (prod & no-prod)",
      "values": {
        "OS_URL": "https://api.openshift-noprod.acme.host",
        "OS_REVIEW_ENVIRONMENT_DOMAIN": "apps-noprod.acme.host",
        "OS_INTEG_ENVIRONMENT_URL": "https://NAME-OF-YOUR-APP-integration.apps-noprod.acme.host",
        "OS_STAGING_ENVIRONMENT_URL": "https://NAME-OF-YOUR-APP-staging.apps-noprod.acme.host",
        "OS_PROD_URL": "https://api.openshift-prod.acme.host",
        "OS_PROD_ENVIRONMENT_URL": "https://NAME-OF-YOUR-APP.apps.acme.host",
        "K8S_URL": "https://api.openshift-noprod.acme.host",
        "K8S_REVIEW_ENVIRONMENT_DOMAIN": "apps-noprod.acme.host",
        "K8S_INTEG_ENVIRONMENT_URL": "https://NAME-OF-YOUR-APP-integration.apps-noprod.acme.host",
        "K8S_STAGING_ENVIRONMENT_URL": "https://NAME-OF-YOUR-APP-staging.apps-noprod.acme.host",
        "K8S_PROD_URL": "https://api.openshift-prod.acme.host",
        "K8S_PROD_ENVIRONMENT_URL": "https://NAME-OF-YOUR-APP.apps.acme.host"
      }
    },
    {
      "name": "Artifactory Mirrors",
      "description": "Our internal Artifactory mirrors",
      "values": {
        "NPM_CONFIG_REGISTRY": "https://artifactory.acme.host/api/npm/npm-mirror",
        "DOCKER_REGISTRY_MIRROR": "https://dockerproxy.acme.host",
        "GOPROXY": "https://artifactory.acme.host/api/go/go-mirror",
        "PIP_INDEX_URL": "https://artifactory.acme.host/api/pypi/pythonproxy/simple",
        "TWINE_REPOSITORY_URL": "https://artifactory.acme.host/api/pypi/python-mirror"
      }
    }
  ]
}

With this in place, Kicker will propose each applicable preset directly in the online form.
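
As an illustration, picking the "Shared SonarQube" preset above would typically end up as plain variable assignments in the .gitlab-ci.yml generated by Kicker, roughly like this (a sketch, not the exact generated output):

variables:
  SONAR_URL: "https://sonarqube.acme.host"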

Template variants

Another essential extra resource is the template variant. Roughly speaking, a variant is a template override that addresses a specific technical concern.

Defining a template variant is simple:

  1. Create a GitLab project (if possible with public or at least internal visibility),
  2. Create a kicker-extras.json declaring your variant,
    JSON schema: https://gitlab.com/to-be-continuous/kicker/raw/master/kicker-extras-schema-1.json
  3. Make a first release by creating a Git tag.

Example: imagine that your company - in addition to the default untagged shared runners - also provides non-default shared runners to deploy to your private Kubernetes cluster. Suppose as well that those runners have no direct internet access and must go through an HTTP proxy.

That would involve:

  1. Develop the following variant for the Kubernetes template, for instance in the file templates/acme-k8s-variant.yml:
    # ==========================================
    # === ACME variant to use Kubernetes runners
    # ==========================================
    # override kubernetes base template job
    .k8s-base:
      # Kubernetes Runners tags
      tags:
        - k8s
        - shared
      # Kubernetes Runners proxy configuration
      variables:
        http_proxy: "http://proxy.acme.host:8080"
        https_proxy: "http://proxy.acme.host:8080"
        no_proxy: "localhost,127.0.0.1,.acme.host"
        HTTP_PROXY: "${http_proxy}"
        HTTPS_PROXY: "${https_proxy}"
        NO_PROXY: "${no_proxy}"
    
    (developing this requires advanced to be continuous knowledge)
  2. Declare it in a kicker-extras.json file:
    {
      "variants": [
        {
          "id": "acme-k8s-runners",
          "name": "ACME Kubernetes Runners",
          "description": "Use the ACME Kubernetes shared Runners",
          "template_path": "templates/acme-k8s-variant.yml",
          "target_project": "to-be-continuous/kubernetes"
        }
      ]
    }
    
    (the target_project field declares the original template the variant applies to)

This way, your variant will show up as a simple option that can be enabled in the Kubernetes template form in Kicker.
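
When a project enables such a variant, the generated .gitlab-ci.yml simply includes the variant file on top of the original template. A rough sketch (the project path of your extras project, the refs and the template file names are illustrative):

include:
  # original to be continuous Kubernetes template
  - project: "to-be-continuous/kubernetes"
    ref: "1.0.0"
    file: "templates/gitlab-ci-k8s.yml"
  # ACME variant overriding the .k8s-base job
  - project: "acme/cicd/kicker-extras"
    ref: "1.0.0"
    file: "templates/acme-k8s-variant.yml"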

Develop your own templates

You may also need to develop templates for tooling that is very specific to your company.

In that case, you just have to:

  1. Create a GitLab project (if possible with public or at least internal visibility),
  2. Develop your template following the guidelines,
  3. Declare the template with a Kicker descriptor,
  4. Make a first release by creating a Git tag.

Your template can then be used like any other to be continuous one.
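
As an illustration, the Kicker descriptor mentioned in step 3 is a kicker.json file at the root of your template project, describing the template and its configuration variables. A minimal sketch (the tool, names and values below are made up, and the field list is indicative only; the Kicker JSON schema is the authoritative reference):

{
  "name": "ACME Scanner",
  "description": "Scans the code with the (hypothetical) ACME scanner tool",
  "template_path": "templates/gitlab-ci-acme-scanner.yml",
  "variables": [
    {
      "name": "ACME_SCANNER_URL",
      "description": "URL of the ACME scanner server",
      "default": "https://scanner.acme.host"
    }
  ]
}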

Have your own doc + kicker

If you developed any of the above (Kicker extras and/or internal templates), you'll want every developer in your company to have easy access to a reference documentation and a Kicker instance that include your additional material.

In your local copy of the doc project:

  1. Declare the CI/CD project variable GITLAB_TOKEN: a personal access token with the api and read_repository scopes and at least the Developer role on all groups & projects to crawl.
  2. Declare the CI/CD project variable KICKER_RESOURCE_GROUPS: the JSON configuration of the GitLab groups to crawl.
  3. Create a scheduled pipeline (for instance every day at 3:00 am).

Here is an example of KICKER_RESOURCE_GROUPS content:

[
  {
    "path": "acme/cicd/all", 
    "visibility": "public"
  },
  {
    "path": "acme/cicd/ai-ml",
    "visibility": "internal",
    "exclude": ["project-2", "project-13"],
    "extension": {
      "id": "ai-ml",
      "name": "AI/ML",
      "description": "ACME templates for AI/ML projects"
    }
  },
  {
    "path": "to-be-continuous", 
    "visibility": "public"
  }
]

Some explanations:

  • path is the path of a GitLab group whose projects contain Kicker resources.
  • visibility is the visibility level of the group/projects to crawl.
  • exclude (optional) excludes some project(s) from processing.
  • extension (optional) associates the Kicker resources with a separate extension (that can be enabled within Kicker).

By default, KICKER_RESOURCE_GROUPS is configured to crawl the to-be-continuous group only.
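
That default is equivalent to a configuration like:

[
  {
    "path": "to-be-continuous",
    "visibility": "public"
  }
]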

Set up tracking

Another optional feature you might want to set up is tracking (to collect statistics about template job executions).

to be continuous already provides an unconfigured project to perform this: tools/tracking.

Here is what you'll have to do to set it up:

  1. Install an Elasticsearch server.
  2. Create a dedicated user with the appropriate authorization to push data to the required indices.
  3. In your local copy of the tools/tracking project, define the TRACKING_CONFIGURATION CI/CD project variable as follows:
    {
      "clients": [
        {
          "url": "https://elasticsearch-host",
          "authentication": {
            "username": "tbc-tracking",
            "password": "mYp@55w0rd"
          },
          "timeout": 5,
          "indexPrefix": "tbc-",
          "esMajorVersion": 7,
          "skipSslVerification": true
        }
      ]
    }
    
  4. Manually start a pipeline on the master branch: this (re)generates a Docker image with your configuration, which will from then on be used by every template job.