GitLab CI template for OpenShift

This project implements a GitLab CI/CD template to deploy your application to an OpenShift platform.

Usage

This template can be used either as a CI/CD component or via the legacy include:project syntax.

Use as a CI/CD component

Add the following to your .gitlab-ci.yml file:

include:
  # 1: include the component
  - component: gitlab.com/to-be-continuous/openshift/gitlab-ci-openshift@5.2.3
    # 2: set/override component inputs
    inputs:
      # ⚠ this is only an example
      base-app-name: wonderapp
      review-project: "wonder-noprod" # enable review env
      staging-project: "wonder-noprod" # enable staging env
      prod-project: "wonder-prod" # enable production env

Use as a CI/CD template (legacy)

Add the following to your .gitlab-ci.yml file:

include:
  # 1: include the template
  - project: 'to-be-continuous/openshift'
    ref: '5.2.3'
    file: '/templates/gitlab-ci-openshift.yml'

variables:
  # 2: set/override template variables
  # ⚠ this is only an example
  OS_BASE_APP_NAME: wonderapp
  OS_REVIEW_PROJECT: "wonder-noprod" # enable review env
  OS_STAGING_PROJECT: "wonder-noprod" # enable staging env
  OS_PROD_PROJECT: "wonder-prod" # enable production env

Understand

This chapter introduces the key notions and principles needed to understand how this template works.

Managed deployment environments

This template implements continuous delivery/continuous deployment for projects hosted on OpenShift platforms.

Review environments

The template supports review environments: those are dynamic and ephemeral environments to deploy your ongoing developments (a.k.a. feature or topic branches).

When enabled, it deploys the result from upstream build stages to a dedicated and temporary environment. It is only active for non-production, non-integration branches.

It is a strict equivalent of GitLab's Review Apps feature.

It also comes with a cleanup job (accessible either from the environments page, or from the pipeline view).

Integration environment

If you're using a Git Workflow with an integration branch (such as Gitflow), the template supports an integration environment.

When enabled, it deploys the result from upstream build stages to a dedicated environment. It is only active for your integration branch (develop by default).

Production environments

Lastly, the template supports two environments associated with your production branch (main or master by default):

  • a staging environment (an iso-prod environment meant for testing and validation purpose),
  • the production environment.

You're free to enable either or both, and you can also choose your deployment-to-production policy:

  • continuous deployment: automatic deployment to production (when the upstream pipeline is successful),
  • continuous delivery: deployment to production can be triggered manually (when the upstream pipeline is successful).

Supported authentication methods

This template supports token authentication only. Tokens associated with OpenShift user accounts are only valid for 24 hours, so to generate a token that never expires you need to create a service account and use its token.

Follow these steps:

# create a service account
oc create serviceaccount cicd -n <your_project_name>
# ⚠ don't forget to add required role(s) (ex: basic-user & edit)
oc adm policy add-role-to-user <role_name> system:serviceaccount:<your_project_name>:cicd -n <your_project_name>
# retrieve service account's token name(s)
oc describe serviceaccount cicd -n <your_project_name>
# get service account token from the secret
oc describe secret <token_name> -n <your_project_name>
# test the token
oc get all --token=<token>
# this token can be used to authenticate ;)

⚠ don't forget to replace <your_project_name> with your OpenShift project name and <role_name> with the appropriate role (ask your OpenShift support). See default cluster roles.

Deployment context variables

In order to manage the various deployment environments, this template provides a couple of dynamic variables that you might use in your hook scripts, deployment manifests and other deployment resources:

  • ${environment_type}: the current deployment environment type (review, integration, staging or production)
  • ${environment_name}: a generated application name to use for the current deployment environment (ex: myproject-review-fix-bug-12 or myproject-staging) - details below
  • ${environment_name_ssc}: the above application name in SCREAMING_SNAKE_CASE format (ex: MYPROJECT_REVIEW_FIX_BUG_12 or MYPROJECT_STAGING)
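As an illustration, the SCREAMING_SNAKE_CASE variant is simply the generated name uppercased with dashes turned into underscores; a minimal shell sketch (illustrative only, the template provides ${environment_name_ssc} itself):

```shell
# derive the SCREAMING_SNAKE_CASE variant of a generated environment name
environment_name="myproject-review-fix-bug-12"
environment_name_ssc=$(printf '%s' "$environment_name" | tr 'a-z-' 'A-Z_')
echo "$environment_name_ssc"   # MYPROJECT_REVIEW_FIX_BUG_12
```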

Generated environment name

The ${environment_name} variable is generated to designate each deployment environment with a unique and meaningful application name. By construction, it is suitable for inclusion in DNS, URLs, Kubernetes labels... It is built from:

  • the application base name (defaults to $CI_PROJECT_NAME but can be overridden globally and/or per deployment environment - see configuration variables)
  • GitLab predefined $CI_ENVIRONMENT_SLUG variable (sluggified name, truncated to 24 characters)

The ${environment_name} variable is then evaluated as:

  • <app base name> for the production environment
  • <app base name>-$CI_ENVIRONMENT_SLUG for all other deployment environments
  • 💡 ${environment_name} can also be overridden per environment with the appropriate configuration variable

Examples (with an application's base name myapp):

| $environment_type | Branch      | $CI_ENVIRONMENT_SLUG   | $environment_name            |
|-------------------|-------------|------------------------|------------------------------|
| review            | feat/blabla | review-feat-bla-xmuzs6 | myapp-review-feat-bla-xmuzs6 |
| integration       | develop     | integration            | myapp-integration            |
| staging           | main        | staging                | myapp-staging                |
| production        | main        | production             | myapp                        |
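The naming rules above can be sketched in shell (illustrative only; the template computes ${environment_name} itself):

```shell
# production gets the bare application base name, every other env gets a slug suffix
base_app_name="myapp"
environment_type="review"
CI_ENVIRONMENT_SLUG="review-feat-bla-xmuzs6"
if [ "$environment_type" = "production" ]; then
  environment_name="$base_app_name"
else
  environment_name="$base_app_name-$CI_ENVIRONMENT_SLUG"
fi
echo "$environment_name"   # myapp-review-feat-bla-xmuzs6
```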

Supported deployment methods

The OpenShift template supports two ways of deploying your code:

  1. script-based deployment,
  2. template-based deployment.

1: script-based deployment

In this mode, you only have to provide a shell script that fully implements the deployment using the oc CLI.

The deployment script is searched as follows:

  1. look for a specific os-deploy-$environment_type.sh in the $OS_SCRIPTS_DIR directory in your project (e.g. os-deploy-staging.sh for staging environment),
  2. if not found: look for a default os-deploy.sh in the $OS_SCRIPTS_DIR directory in your project,
  3. if not found: the GitLab CI template assumes you're using the template-based deployment policy.
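The lookup order can be sketched as follows (illustrative, not the template's actual code; the scratch directory only contains a default os-deploy.sh, so the fallback applies):

```shell
# simulate the script lookup: specific script first, then the default one
OS_SCRIPTS_DIR=$(mktemp -d)
touch "$OS_SCRIPTS_DIR/os-deploy.sh"   # only the default script exists here
environment_type="staging"
deploy_script=""
for candidate in "$OS_SCRIPTS_DIR/os-deploy-$environment_type.sh" "$OS_SCRIPTS_DIR/os-deploy.sh"; do
  if [ -f "$candidate" ]; then
    deploy_script="$candidate"
    break
  fi
done
echo "$deploy_script"   # <scratch dir>/os-deploy.sh
```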

2: template-based deployment

In this mode, you have to provide an OpenShift template in your project structure, and let the GitLab CI template oc apply it.

The template processes the following steps:

  1. optionally executes the os-pre-apply.sh script in your project to perform specific environment pre-initialization (e.g. create required services),
  2. looks for your OpenShift template file, performs variable substitution and oc applies it:
    1. looks for a specific openshift-$environment_type.yml in your project (e.g. openshift-staging.yml for the staging environment),
    2. falls back to the default openshift.yml,
  3. optionally executes the os-post-apply.sh script in your project to perform specific environment post-initialization,
  4. optionally executes the os-readiness-check.sh script to wait & check for the application to be ready (if not found, the template assumes the application started successfully).

Deployment jobs process the selected template with the following labels:

  • app: the application target name to use in this environment (i.e. $environment_name)
    Can be overridden with $OS_APP_LABEL.
  • env: the environment type (i.e. $environment_type)
    Can be overridden with $OS_ENV_LABEL.

Cleanup jobs

The GitLab CI template for OpenShift supports two policies for destroying an environment (in practice, only review environments):

  1. script-based cleanup
  2. template-based cleanup

1: script-based cleanup

In this mode, you only have to provide a shell script that fully implements the environment cleanup using the oc CLI.

The cleanup script is searched as follows:

  1. look for a specific os-cleanup-$environment_type.sh in the $OS_SCRIPTS_DIR directory in your project (e.g. os-cleanup-staging.sh for staging environment),
  2. if not found: look for a default os-cleanup.sh in the $OS_SCRIPTS_DIR directory in your project,
  3. if not found: the GitLab CI template assumes you're using the template-based cleanup policy.

TIP: a nice way to implement environment cleanup is to declare the label app=${environment_name} on every OpenShift object associated with your environment. Environment cleanup can then be implemented very easily with the command oc delete all,pvc,is,secret -l "app=${environment_name}"

2: template-based cleanup

In this mode, you mostly let the template delete all OpenShift objects created from your OpenShift deployment file.

The template processes the following steps:

  1. optionally executes the os-pre-cleanup.sh script in your project to perform specific environment pre-cleanup tasks,
  2. deletes all objects with label app=${environment_name}
    (this works well with template-based deployment, as this label is forced during oc apply),
  3. optionally executes the os-post-cleanup.sh script in your project to perform specific environment post-cleanup (e.g. delete bound services).

Cleanup job limitations

When using this template, you have to be aware of one limitation (bug) with the cleanup job.

By default, the cleanup job triggered automatically on branch deletion will fail because it cannot fetch the Git branch prior to executing the job (which sounds obvious, as the branch was just deleted). This is pretty annoying but, as you may see above, deleting an environment may require scripts from the project...

So, what can be done about that?

  1. if your project doesn't require any cleanup script (in other words, deleting all objects with label app=${environment_name} is enough to clean up everything): you can simply override the cleanup job's Git strategy to prevent it from fetching the branch code:
    os-cleanup-review:
      variables:
        GIT_STRATEGY: none
    
  2. in any other case, we're sorry about this bug, but there is not much we can do:
    • remember to delete your review env manually before deleting the branch,
    • otherwise you'll have to do it afterwards from your computer (using the oc CLI) or from the OpenShift console.

Using variables

Be aware that your deployment (and cleanup) scripts have to cope with various environments, each with different application names, exposed routes, settings... Part of this complexity can be handled by the lookup policies described above (ex: one script/manifest per env) and by using the available environment variables:

  1. deployment context variables provided by the template:
    • ${environment_type}: the current environment type (review, integration, staging or production)
    • ${environment_name}: the application name to use for the current environment (ex: myproject-review-fix-bug-12 or myproject-staging)
    • ${environment_name_ssc}: the application name in SCREAMING_SNAKE_CASE format (ex: MYPROJECT_REVIEW_FIX_BUG_12 or MYPROJECT_STAGING)
    • ${hostname}: the environment hostname, extracted from the current environment url (after late variable expansion - see below)
  2. any GitLab CI variable
  3. any custom variable (ex: ${SECRET_TOKEN} that you have set in your project CI/CD variables)
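For instance, ${hostname} can be thought of as the host part of the environment url; a sketch of the extraction (illustrative only, the template computes it for you):

```shell
# extract the host part from an environment url
environment_url="https://myproj-staging.apps-noprod.acme.host/some/path"
hostname=$(printf '%s' "$environment_url" | sed -E 's#^[a-z]+://([^/]+).*#\1#')
echo "$hostname"   # myproj-staging.apps-noprod.acme.host
```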

While your scripts may simply use any of those variables, your OpenShift templates shall be parameterized using template parameters.

Parameters are evaluated in the following order:

  1. from an (optional) specific openshift-$environment_type.env file found in the $OS_SCRIPTS_DIR directory of your project,
  2. from the (optional) default openshift.env file found in the $OS_SCRIPTS_DIR directory of your project,
  3. from the environment (either predefined GitLab CI, custom or dynamic variables).

For example, with the following parameters in your template:

parameters:
  - name: environment_name
    description: "the application target name to use in this environment (provided by GitLab CI template)"
    required: true
  - name: hostname
    description: "the environment hostname (provided by GitLab CI template)"
    required: true
  - name: MEMORY
    description: "Pod memory (depends on the environment)"
    required: true
  - name: INSTANCES
    description: "Number of pods (depends on the environment)"
    required: true
  - name: SECRET_TOKEN
    description: "A secret that should not be managed in Git !"
    required: true

With a default openshift.env file:

INSTANCES=1
MEMORY=2Gi

And a specific openshift-production.env file:

INSTANCES=3

And finally SECRET_TOKEN variable defined in your project CI/CD variables.

Then, when deploying to production, the parameters will be evaluated as follows:

| Parameter        | Evaluated from                                                                   |
|------------------|----------------------------------------------------------------------------------|
| environment_name | dynamic variable set by the deployment script                                    |
| hostname         | dynamic variable set by the deployment script                                    |
| MEMORY           | default openshift.env file (undefined in the specific openshift-production.env file) |
| INSTANCES        | specific openshift-production.env file                                           |
| SECRET_TOKEN     | project CI/CD variables                                                          |
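The precedence can be reproduced in shell by loading the default dotenv file first and the specific one on top of it (illustrative sketch, not the template's actual code):

```shell
# reproduce the example above in a scratch directory
workdir=$(mktemp -d)
printf 'INSTANCES=1\nMEMORY=2Gi\n' > "$workdir/openshift.env"
printf 'INSTANCES=3\n' > "$workdir/openshift-production.env"
environment_type="production"
set -a
. "$workdir/openshift.env"                     # defaults
. "$workdir/openshift-$environment_type.env"   # specific values win
set +a
echo "INSTANCES=$INSTANCES MEMORY=$MEMORY"   # INSTANCES=3 MEMORY=2Gi
```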

About multi-line parameters

The template manages multiline parameters passed through the environment (ex: a TLS certificate in PEM format).

Unfortunately it doesn't support multiline parameters from dotenv files (an OpenShift limitation), but you may use the following technique.

Example in your default openshift.env file:

# define the TLS_CERT template param using a GitLab CI secret variable
TLS_CERT=${DEV_TLS_CERT}

And the openshift-production.env file could look like:

# define the TLS_CERT template param using a GitLab CI secret variable
TLS_CERT=${PROD_TLS_CERT}

The template takes care of expanding variables contained in your dotenv files (this requires that DEV_TLS_CERT and PROD_TLS_CERT be defined in your environment).

Environments URL management

The OpenShift template supports two ways of providing your environments' urls:

  • a static way: when the environments url can be determined in advance, probably because you're exposing your routes through a DNS you manage,
  • a dynamic way: when the url cannot be known before the deployment job is executed.

The static way can be implemented simply by setting the appropriate configuration variable(s) depending on the environment (see environments configuration chapters):

  • $OS_ENVIRONMENT_URL to define a default url pattern for all your envs,
  • $OS_REVIEW_ENVIRONMENT_URL, $OS_INTEG_ENVIRONMENT_URL, $OS_STAGING_ENVIRONMENT_URL and $OS_PROD_ENVIRONMENT_URL to override the default.

ℹ Each of those variables supports a late variable expansion mechanism with the %{somevar} syntax, allowing you to use any dynamically evaluated variable such as ${environment_name}.

Example:

variables:
  OS_BASE_APP_NAME: "wonderapp"
  # global url for all environments
  OS_ENVIRONMENT_URL: "https://%{environment_name}.nonprod.acme.domain"
  # override for prod (late expansion of $OS_BASE_APP_NAME not needed here)
  OS_PROD_ENVIRONMENT_URL: "https://$OS_BASE_APP_NAME.acme.domain"
  # override for review (using separate resource paths)
  OS_REVIEW_ENVIRONMENT_URL: "https://wonderapp-review.nonprod.acme.domain/%{environment_name}"
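The %{somevar} late expansion can be pictured as a simple substitution performed at deployment time (illustrative sketch, not the template's implementation):

```shell
# substitute %{environment_name} once its value is known
OS_ENVIRONMENT_URL='https://%{environment_name}.nonprod.acme.domain'
environment_name="wonderapp-review-fix-bug-12"
environment_url=$(printf '%s' "$OS_ENVIRONMENT_URL" | sed "s/%{environment_name}/$environment_name/")
echo "$environment_url"   # https://wonderapp-review-fix-bug-12.nonprod.acme.domain
```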

To implement the dynamic way, your deployment script shall simply generate an environment_url.txt file in the working directory, containing only the dynamically generated url. When detected, the template will use it as the newly deployed environment's url.
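A hypothetical tail of such a deployment script (the route host would normally be read with the oc CLI; it is hard-coded here for illustration):

```shell
cd "$(mktemp -d)"   # stands in for the job's working directory
# hypothetical value; a real script would query the created route with oc
route_host="myapp-review-fix-bug-12.apps.acme.example"
echo "https://$route_host" > environment_url.txt
cat environment_url.txt   # https://myapp-review-fix-bug-12.apps.acme.example
```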

Deployment output variables

Each deployment job produces output variables that are propagated to downstream jobs (using dotenv artifacts):

  • $environment_type: set to the type of environment (review, integration, staging or production),
  • $environment_name: the application name (see below),
  • $environment_url: set to the environment URL (whether determined statically or dynamically).

Those variables may be freely used in downstream jobs (for instance to run acceptance tests against the latest deployed environment).

You may also add and propagate your own custom variables, by pushing them to the openshift.out.env file in your deployment script or hook.
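For instance, a hook script might append a (hypothetical) APP_VERSION variable for downstream jobs:

```shell
cd "$(mktemp -d)"   # stands in for the job's working directory
# APP_VERSION is a made-up example variable, not one defined by the template
echo "APP_VERSION=1.2.3" >> openshift.out.env
cat openshift.out.env   # APP_VERSION=1.2.3
```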

Configuration reference

Secrets management

Here is some advice about your secrets (variables marked with a 🔒):

  1. Manage them as project or group CI/CD variables:
    • masked to prevent them from being inadvertently displayed in your job logs,
    • protected if you want to secure some secrets you don't want everyone in the project to have access to (for instance production secrets).
  2. In case a secret contains characters that prevent it from being masked, simply define its value as the Base64 encoded value prefixed with @b64@: it will then be possible to mask it and the template will automatically decode it prior to using it.
  3. Don't forget to escape special characters (ex: $ -> $$).
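For example, a secret containing characters that prevent masking can be prepared like this (sketch; per the rule above, the template decodes @b64@-prefixed values automatically):

```shell
# encode the raw secret and prefix it with @b64@ so it can be declared as a masked variable
secret='p@ss $ecret with spaces'
encoded="@b64@$(printf '%s' "$secret" | base64 | tr -d '\n')"
echo "$encoded"
# what the template does before use: strip the prefix and decode
decoded=$(printf '%s' "${encoded#@b64@}" | base64 -d)
```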

Global configuration

The OpenShift template uses some global configuration throughout all jobs.

| Input / Variable | Description | Default value |
|---|---|---|
| cli-image / OS_CLI_IMAGE | the Docker image used to run OpenShift Client (oc) CLI commands ⚠ set the version required by your OpenShift server | quay.io/openshift/origin-cli:latest |
| url / OS_URL | Default OpenShift API url | has to be defined |
| 🔒 OS_TOKEN | Default OpenShift API token | has to be defined |
| base-app-name / OS_BASE_APP_NAME | Base application name | $CI_PROJECT_NAME (see GitLab doc) |
| environment-url / OS_ENVIRONMENT_URL | Default environments url (only define for static environment URLs declaration); supports late variable expansion (ex: https://%{environment_name}.openshift.acme.com) | none |
| scripts-dir / OS_SCRIPTS_DIR | directory where OpenShift scripts (templates, hook scripts) are located | . (root project dir) |
| base-template-name / OS_BASE_TEMPLATE_NAME | Base OpenShift template name | openshift |
| app-label / OS_APP_LABEL | The OpenShift label set with the $environment_name dynamic variable value (advanced usage) | app |
| env-label / OS_ENV_LABEL | The OpenShift label set with the $environment_type dynamic variable value (review, integration, staging or production) (advanced usage) | env |

Review environments configuration

Review environments are dynamic and ephemeral environments to deploy your ongoing developments (a.k.a. feature or topic branches).

They are disabled by default and can be enabled by setting the OS_REVIEW_PROJECT variable (see below).

Here are variables supported to configure review environments:

| Input / Variable | Description | Default value |
|---|---|---|
| review-project / OS_REVIEW_PROJECT | OpenShift project for review env | none (disabled) |
| review-url / OS_REVIEW_URL | OpenShift API url for review env (only define to override default) | $OS_URL |
| 🔒 OS_REVIEW_TOKEN | OpenShift API token for review env (only define to override default) | $OS_TOKEN |
| review-app-name / OS_REVIEW_APP_NAME | Application name for review env | "${OS_BASE_APP_NAME}-${CI_ENVIRONMENT_SLUG}" (ex: myproject-review-fix-bug-12) |
| review-environment-url / OS_REVIEW_ENVIRONMENT_URL | The review environments url (only define for static environment URLs declaration and if different from default) | $OS_ENVIRONMENT_URL |
| review-autostop-duration / OS_REVIEW_AUTOSTOP_DURATION | The amount of time before GitLab will automatically stop review environments | 4 hours |

Integration environment configuration

The integration environment is the environment associated to your integration branch (develop by default).

It is disabled by default and can be enabled by setting the OS_INTEG_PROJECT variable (see below).

Here are variables supported to configure the integration environment:

| Input / Variable | Description | Default value |
|---|---|---|
| integ-project / OS_INTEG_PROJECT | OpenShift project for integration env | none (disabled) |
| integ-url / OS_INTEG_URL | OpenShift API url for integration env (only define to override default) | $OS_URL |
| 🔒 OS_INTEG_TOKEN | OpenShift API token for integration env (only define to override default) | $OS_TOKEN |
| integ-app-name / OS_INTEG_APP_NAME | Application name for integration env | ${OS_BASE_APP_NAME}-integration |
| integ-environment-url / OS_INTEG_ENVIRONMENT_URL | The integration environment url (only define for static environment URLs declaration and if different from default) | $OS_ENVIRONMENT_URL |

Staging environment configuration

The staging environment is an iso-prod environment meant for testing and validation purpose associated to your production branch (main or master by default).

It is disabled by default and can be enabled by setting the OS_STAGING_PROJECT variable (see below).

Here are variables supported to configure the staging environment:

| Input / Variable | Description | Default value |
|---|---|---|
| staging-project / OS_STAGING_PROJECT | OpenShift project for staging env | none (disabled) |
| staging-url / OS_STAGING_URL | OpenShift API url for staging env (only define to override default) | $OS_URL |
| 🔒 OS_STAGING_TOKEN | OpenShift API token for staging env (only define to override default) | $OS_TOKEN |
| staging-app-name / OS_STAGING_APP_NAME | Application name for staging env | ${OS_BASE_APP_NAME}-staging |
| staging-environment-url / OS_STAGING_ENVIRONMENT_URL | The staging environment url (only define for static environment URLs declaration and if different from default) | $OS_ENVIRONMENT_URL |

Production environment configuration

The production environment is the final deployment environment associated with your production branch (main or master by default).

It is disabled by default and can be enabled by setting the OS_PROD_PROJECT variable (see below).

Here are variables supported to configure the production environment:

| Input / Variable | Description | Default value |
|---|---|---|
| prod-project / OS_PROD_PROJECT | OpenShift project for production env | none (disabled) |
| prod-url / OS_PROD_URL | OpenShift API url for production env (only define to override default) | $OS_URL |
| 🔒 OS_PROD_TOKEN | OpenShift API token for production env (only define to override default) | $OS_TOKEN |
| prod-app-name / OS_PROD_APP_NAME | Application name for production env | $OS_BASE_APP_NAME |
| prod-environment-url / OS_PROD_ENVIRONMENT_URL | The production environment url (only define for static environment URLs declaration and if different from default) | $OS_ENVIRONMENT_URL |
| prod-deploy-strategy / OS_PROD_DEPLOY_STRATEGY | Defines the deployment-to-production strategy. One of manual (i.e. one-click) or auto. | manual |

os-cleanup-all-review job

This job allows destroying all review environments at once (in order to save cloud resources).

It is disabled by default and can be controlled using the $CLEANUP_ALL_REVIEW variable:

  1. automatically executed if $CLEANUP_ALL_REVIEW is set to force,
  2. manual job enabled from any master branch pipeline if $CLEANUP_ALL_REVIEW is set to true (or any other value).

The first value, force, can be used in conjunction with a scheduled pipeline to clean up cloud resources, for instance every day at 6 pm or on Friday evening.

The second simply enables the (manual) cleanup job on master branch pipelines.

In any case, destroyed review environments will be automatically re-created the next time a developer pushes a new commit to a feature branch.

⚠ if you schedule the cleanup, you'll probably have to create an almost empty branch without any other template (no need to build/test/analyse your code if your only goal is to clean up environments).

Extra functions

The template provides extra scripts that can be called in your .gitlab-ci.yml or hook scripts for extra treatments.

| Function signature | Description |
|---|---|
| force_rollout <deploymentConfig_name> | Forces a new rollout of the specified deploymentConfig. This can be useful when your deployment references a stable or latest image stream tag that is updated by the GitLab pipeline: once your template is applied, if you only changed application code and pushed a new version of the image but did not change anything in your template, no rollout will be triggered. Call this function to force a new rollout. |
| poll_last_rollout <deploymentConfig_name>, [timeout: 2 minutes] | Waits for the last rollout to end. This function fails if the rollout fails or did not end within the specified amount of time (two minutes by default). |
| purge_old_image_tags <image_name>, <number_to_keep> | For the given image stream, crawls all the tags and keeps only the N youngest ones. This can be useful when you create a new image tag for each pipeline (example of tag: $CI_COMMIT_SHORT_SHA or $CI_COMMIT_SHA). |

Variants

Vault variant

This variant allows delegating your secrets management to a Vault server.

Configuration

In order to communicate with the Vault server, the variant requires the following additional configuration parameters:

| Input / Variable | Description | Default value |
|---|---|---|
| TBC_VAULT_IMAGE | The Vault Secrets Provider image to use (can be overridden) | registry.gitlab.com/to-be-continuous/tools/vault-secrets-provider:latest |
| vault-base-url / VAULT_BASE_URL | The Vault server base API url | none |
| vault-oidc-aud / VAULT_OIDC_AUD | The aud claim for the JWT | $CI_SERVER_URL |
| 🔒 VAULT_ROLE_ID | The AppRole RoleID | must be defined |
| 🔒 VAULT_SECRET_ID | The AppRole SecretID | must be defined |

Usage

Then you may retrieve any of your secrets from Vault using the following syntax:

@url@http://vault-secrets-provider/api/secrets/{secret_path}?field={field}

With:

| Parameter | Description |
|---|---|
| secret_path | (path parameter) your secret's location in the Vault server |
| field | (query parameter) parameter to access a single basic field from the secret JSON payload |

Example

include:
  # main template
  - component: gitlab.com/to-be-continuous/openshift/gitlab-ci-openshift@5.2.3
  # Vault variant
  - component: gitlab.com/to-be-continuous/openshift/gitlab-ci-openshift-vault@5.2.3
    inputs:
      # audience claim for JWT
      vault-oidc-aud: "https://vault.acme.host"
      vault-base-url: "https://vault.acme.host/v1"
      # $VAULT_ROLE_ID and $VAULT_SECRET_ID defined as a secret CI/CD variable

variables:
  # Secrets managed by Vault
  OS_TOKEN: "@url@http://vault-secrets-provider/api/secrets/b7ecb6ebabc231/my-app/openshift/noprod?field=token"
  OS_PROD_TOKEN: "@url@http://vault-secrets-provider/api/secrets/b7ecb6ebabc231/my-app/openshift/noprod?field=token"

Examples

Back-end application

Context

  • review & staging environments enabled on Kermit no prod,
  • production environment enabled on Kermit prod,
  • implements automated acceptance (functional) tests: manual on review env, auto on staging.

.gitlab-ci.yml

include:
  - component: gitlab.com/to-be-continuous/openshift/gitlab-ci-openshift@5.2.3
    inputs:
      url: "https://openshift-noprod.acme.host" # noprod cluster is default (review & staging)
      prod-url: "https://openshift-prod.acme.host/" # prod cluster for prod env only
      # OS_TOKEN and OS_PROD_TOKEN are defined as a protected project variable
      review-project: "myproj-noprod" # activates 'review' env in CI pipeline
      staging-project: "myproj-noprod" # activates 'staging' env in CD pipeline
      prod-project: "myproj"
      review-environment-domain: "apps-noprod.acme.host" # intranet route
      staging-environment-url: "https://myproj-staging.apps-noprod.acme.host" # internet route
      prod-environment-url: "https://myproj.apps.acme.com" # internet route

OpenShift template

# This generic template instantiates all required OpenShift objects.
# It uses the following parameters that will be dynamically replaced by the deployment script:
# - ${environment_name}
# - ${environment_name_ssc}
# - ${hostname}
# - ${docker_image}
apiVersion: v1
kind: Template
metadata:
  name: my-application-template
  description: an OpenShift template for my application
# template parameters
parameters:
  - name: environment_name
    description: "the application target name to use in this environment (provided by GitLab CI template)"
    required: true
  - name: environment_name_ssc
    description: "the application target name in SCREAMING_SNAKE_CASE format (provided by GitLab CI template)"
    required: true
  - name: hostname
    description: "the environment hostname (provided by GitLab CI template)"
    required: true
  - name: docker_image
    description: "the Docker image built in upstream stages (provided by the Docker template)"
    required: true
objects:
# === Service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes and load balances the application pods.
    labels:
      app: ${environment_name}
    name: ${environment_name}
  spec:
    ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      app: ${environment_name}
# === DeploymentConfig
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: The deployment configuration of application.
    labels:
      app: ${environment_name}
    name: ${environment_name}
  spec:
    replicas: 1
    revisionHistoryLimit: 2
    selector:
      app: ${environment_name}
    strategy:
      type: Rolling
      rollingParams:
        timeoutSeconds: 3600
    template:
      metadata:
        labels:
          app: ${environment_name}
      spec:
        containers:
        - image: ${docker_image}
          imagePullPolicy: Always
          name: spring-boot
          ports:
          - containerPort: 8080
            name: http
            protocol: TCP
          securityContext:
            privileged: false
    triggers:
    - type: ConfigChange
# === Route
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    annotations:
      description: The route exposes the service at a hostname.
    labels:
      app: ${environment_name}
    name: ${environment_name}
  spec:
    host: ${hostname}
    port:
      targetPort: 8080
    to:
      kind: Service
      name: ${environment_name}

hook scripts

os-post-apply.sh

This script - when found by the template - is executed after running oc apply, to perform specific environment post-initialization (e.g. start a build).

#!/bin/bash

set -e

# create a source-to-image binary build if does not exist
oc get buildconfig "$environment_name" 2> /dev/null || oc new-build openshift/redhat-openjdk18-openshift:1.4 --binary="true" --name="$environment_name" --labels="app=$environment_name"

# prepare build resources
mkdir -p target/openshift/deployments && cp target/my-application-1.0.0-SNAPSHOT.jar target/openshift/deployments/

# trigger build: this will trigger a deployment
oc start-build "$environment_name" --from-dir=target/openshift --wait --follow

# example of using the force_rollout extra function
force_rollout "$environment_name"

os-readiness-check.sh

This script - when found by the template - is used to wait & check for the application to be ready.

It uses the template variable $environment_url to build absolute urls to the application.

It is supposed to exit with status 0 on success (the template will go on with the deployment), or any non-zero value in case of error (the template will stop and, as much as possible, revert the ongoing deployment).

#!/bin/bash
for attempt in {1..20}
do
    echo "Testing application readiness ($attempt/20)..."
    if wget --no-check-certificate -T 2 --tries 1 "$environment_url/healthcheck"
    then
        echo "[INFO] healthcheck response: OK"
        exit 0
    fi
    sleep 5
done

echo "[ERROR] max attempts reached: failed"
exit 1