Running a Cluster using Helm

Overview

The Helm chart lets you easily create a cluster of several instances of the Curity Identity Server. It provides features such as simple scaling of the number of nodes up and down, automatic creation of required keys and configuration items, and annotations compatible with the Prometheus Helm chart for smooth integration.

This article goes through some specifics of the Helm chart for the Curity Identity Server. If you are new to Helm, start with the tutorial Install the Curity Identity Server with Helm.

Backup

When you set up a cluster with the Helm chart, any configuration changes that you commit on the admin node are not persisted outside of that node. As soon as the admin node changes, for example as part of a Helm upgrade, a new container is created and any changes in the old container are lost. This is by design. Read up on Docker containers and layers if you want to get into the details.

You have to back up your configuration if you want to keep changes beyond the lifetime of the container. The Helm chart can automatically create a backup every time there is a new commit to the configuration. Enable the backup with the following parameter setting: curity.config.backup=true
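As a sketch, the same setting could be supplied in a values file instead of with --set; the nesting below is assumed to mirror the parameter path, and the password value is a placeholder:

```yaml
# Hypothetical values.yaml fragment; curity.config.backup is the
# parameter named above, the password is a placeholder value.
curity:
  config:
    password: secr3t
    backup: true
```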

Every time a change is committed, the admin node connects to the Kubernetes API and asks it to update a Secret with the current configuration. The Secret is called <fullname>-config-backup and contains an entry for each backed-up configuration. Each entry can be identified by the date and the transaction-id of the commit.

The backup Secret may look like the following for a release called myrelease. In this example there is only one backup in the Secret and it is saved under the name 2020-02-24-62E-829B3-45BC1.xml.

$ kubectl describe Secret myrelease-idsvr-config-backup
Name:         myrelease-idsvr-config-backup
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
2020-02-24-62E-829B3-45BC1.xml:  27252 bytes
placeholder:                     6 bytes
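
To inspect a backed-up configuration locally, one option is to decode the Secret entry with kubectl and base64. This is a sketch: the Secret and entry names are taken from the example above, and the dots inside the entry name must be escaped in the jsonpath expression.

```shell
# Sketch: dump one backup entry from the Secret to a local file.
# Dots inside the data key are escaped with a backslash in jsonpath.
kubectl get secret myrelease-idsvr-config-backup \
  -o jsonpath='{.data.2020-02-24-62E-829B3-45BC1\.xml}' | base64 -d > backup.xml
```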

Use curity.config.configurationSecret and curity.config.configurationSecretItemName to restore the backup. For a step-by-step guide on this topic, follow the tutorial Clustering using the Helm Chart.
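The restore parameters could be supplied in a values file along these lines (a sketch; the Secret and item names are taken from the backup example above, and the nesting is assumed to mirror the parameter paths):

```yaml
# Hypothetical values.yaml fragment for restoring a backed-up configuration.
curity:
  config:
    configurationSecret: myrelease-idsvr-config-backup
    configurationSecretItemName: 2020-02-24-62E-829B3-45BC1.xml
```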

Logging

To view any log file of the Curity Identity Server, you have to enter the container and open the files. To make viewing logs more convenient, the Helm chart provides some logging parameters. When curity.runtime.logging.stdout is enabled, Helm creates and mounts a volume for the log files in the nodes. In addition, for every entry in curity.runtime.logging.logs an additional lightweight container is created that forwards the specified log file to stdout. The same applies to the configuration parameters curity.admin.logging.stdout and curity.admin.logging.logs. You can then use the command line to easily view a specific log from an instance.

When assigning an array value to a parameter with --set as part of the helm install command, use {...}. You have to escape the curly brackets in the command, as in the following example:

$ helm install myrelease curity/idsvr \
    --set curity.config.password=secr3t \
    --set curity.runtime.logging.stdout=true \
    --set curity.runtime.logging.logs=\{audit\}

The release will contain a pod for the runtime node. That pod contains two containers: one for the Curity Identity Server and one for the audit logs. To access the audit logs of the Curity Identity Server running in one of the containers, execute kubectl logs:

$ kubectl logs myrelease-idsvr-runtime-5b65fff85-bvxsh audit

In this way you may access the following log files:

  • audit
  • request
  • cluster
  • confsvc
  • confsvc-internal
  • post-commit-scripts

This list is not exhaustive; you can refer to any log file name. You may adapt the logging behavior of the Curity Identity Server to your needs by providing a configuration file compliant with Log4j 2. Refer to the product documentation for more information.
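As a sketch, the equivalent logging settings can be collected in a values file, covering both the admin and runtime nodes. The log names are taken from the list above, and the nesting is assumed to mirror the parameter paths:

```yaml
# Hypothetical values.yaml fragment; one lightweight forwarding
# container is created per listed log file.
curity:
  admin:
    logging:
      stdout: true
      logs:
        - confsvc
  runtime:
    logging:
      stdout: true
      logs:
        - audit
        - request
```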

Persistent Logs

Logs stored in containers suffer from the same problem as described under Backup: if not persisted outside the container, any logs are lost together with the node. It is therefore highly recommended to collect the logs outside the container and use some log management software. Check out the product documentation on that topic.
