
Configure Deployments using Helm

The Install using Helm tutorial explains the most basic way to run the Curity product in a Kubernetes cluster. The next step is usually to deploy test systems and plan deployment pipelines. To run the Curity Identity Server or the Curity Token Handler in Kubernetes, deploy the admin workload, the runtime workload and the configuration. The Configure Deployed Environments tutorial explains the approach using Docker Compose on a local computer. This tutorial explains Kubernetes-specific behaviors.

Set Deployment Parameters

The Helm chart creates Kubernetes YAML resources to express a desired state. Supply parameters to control the final YAML and customize the deployment. The following resources provide details on all of the parameters and their meanings:

Although it is possible to pass parameters to the Helm chart from the command line, it is usually more maintainable to use a values file that overrides a subset of the Helm chart's default values.

yaml
replicaCount: 2
image:
  repository: curity.azurecr.io/curity/idsvr
  tag: latest
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true

When getting started, consider the following settings for your Kubernetes deployments:

  • Use curity.config.uiEnabled to enable the Admin UI.
  • Use curity.adminUiHttp to enable access to the Admin UI via HTTP.
  • Use curity.admin.serviceAccount and curity.runtime.serviceAccount to assign distinct service accounts to the admin and runtime workloads.

Save the parameters to a file called values.yaml and create the service accounts if they do not exist. Consider placing all resources related to the Curity Identity Server in a namespace to isolate them from other resources within the cluster. The following commands create a namespace called curity and create service accounts for the admin and runtime workloads:

bash
kubectl create namespace curity
kubectl -n curity create serviceaccount curity-idsvr-admin
kubectl -n curity create serviceaccount curity-idsvr-runtime

The Curity Identity Server configuration contains sensitive values, so create an encryption key to protect them. Run the following commands to create a key and store it in an environment variable:

bash
openssl rand 32 | xxd -p -c 64 > config_encryption.key
export CONFIG_ENCRYPTION_KEY="$(cat config_encryption.key)"
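
If xxd is not available, openssl can emit the hex encoding directly. The following sketch is an equivalent alternative (the variable name matches the tutorial); the format check is only a suggested sanity test before deploying:

```shell
# Alternative key generation that avoids the xxd dependency:
# "openssl rand -hex 32" prints 32 random bytes as 64 hex characters.
CONFIG_ENCRYPTION_KEY="$(openssl rand -hex 32)"

# Sanity-check the format before deploying (64 lowercase hex characters).
echo "$CONFIG_ENCRYPTION_KEY" | grep -Eq '^[0-9a-f]{64}$' && echo "key format OK"
```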

Run the First Deployment

Next, run a Helm deployment that uses the values file and passes sensitive parameters on the command line. To set an initial password for the admin user of the Admin UI, set the curity.config.password parameter on the first deployment. Also supply the curity.config.encryptionKey value:

bash
helm repo add curity https://curityio.github.io/idsvr-helm/
helm repo update
helm upgrade --install curity curity/idsvr --values=values.yaml --namespace curity \
  --set curity.config.password=Password1 \
  --set curity.config.encryptionKey="$CONFIG_ENCRYPTION_KEY"

The Helm chart creates Kubernetes YAML resources, including Services and Deployments for the runtime and admin workloads, and applies them to the cluster. To view the details, run the following command and inspect the resulting resources.yaml file.

bash
helm template curity curity/idsvr --values=values.yaml --namespace curity \
  --set curity.config.password=Password1 \
  --set curity.config.encryptionKey="$CONFIG_ENCRYPTION_KEY" > resources.yaml

View Deployed Workloads

Wait for the system to come up and then run the following command to view the services and workloads within the curity namespace:

bash
kubectl -n curity get all

There are admin and runtime Kubernetes services of type ClusterIP. In this example deployment, the admin workload consists of a single pod and the runtime workload consists of two pods. Each pod runs the Docker container for the Curity product. The Kubernetes runtime service load balances requests across the runtime pods, which provide OAuth endpoints for applications.

text
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
service/curity-idsvr-admin-svc     ClusterIP   10.110.145.195   <none>        6789/TCP,6790/TCP,4465/TCP,4466/TCP,6749/TCP   3m
service/curity-idsvr-runtime-svc   ClusterIP   10.96.32.142     <none>        8443/TCP,4465/TCP,4466/TCP                     3m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/curity-idsvr-admin     1/1     1            1           3m
deployment.apps/curity-idsvr-runtime   2/2     2            2           3m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/curity-idsvr-admin-84849c8f5c     1         1         1       2m59s
replicaset.apps/curity-idsvr-runtime-75fb7576cb   2         2         2       2m59s

Use the top-level fullnameOverride Helm chart setting to change the name of the Kubernetes services and related resources. For example, to deploy only the Curity Token Handler you might set fullnameOverride=tokenhandler so that the Kubernetes services are named tokenhandler-admin-svc and tokenhandler-runtime-svc.
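
A values file entry such as the following hypothetical fragment applies the override alongside the other settings:

```yaml
# Hypothetical values.yaml fragment: fullnameOverride is a top-level setting,
# so generated resource names change from curity-idsvr-* to tokenhandler-*.
fullnameOverride: tokenhandler
replicaCount: 2
```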

Run the First Configuration

The first deployment generates an initial configuration. To administer the configuration, expose the Admin UI from the cluster:

bash
ADMIN_POD="$(kubectl -n curity get pod -l 'role=curity-idsvr-admin' -o jsonpath='{.items[0].metadata.name}')"
kubectl -n curity port-forward "$ADMIN_POD" 6749

Next, open a browser at http://localhost:6749 and sign in with the admin user and password to configure the system and enable OAuth endpoints. Then perform the initial configuration depending on the system type:

Run the First Configuration to make OAuth endpoints available to applications.

Call OAuth Endpoints

Both external clients and internal workloads can connect to endpoints once the initial setup is complete. Run the following commands to provide initial connectivity to OAuth endpoints:

bash
RUNTIME_POD="$(kubectl -n curity get pod -l 'role=curity-idsvr-runtime' -o jsonpath='{.items[0].metadata.name}')"
kubectl -n curity port-forward "$RUNTIME_POD" 8443

The following example commands show how to call OAuth endpoints:

Run the following command to download OpenID Connect metadata. Inspect the response to view the locations of other OAuth endpoints.

bash
curl http://localhost:8443/oauth/v2/oauth-anonymous/.well-known/openid-configuration
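
The metadata response is JSON, so a tool such as jq (an assumption: jq is not required by the tutorial) can extract individual endpoint locations. The document below is a minimal illustrative sample, not a real server response:

```shell
# Illustrative only: a minimal discovery document, not a real server response.
METADATA='{"issuer":"http://localhost:8443/oauth/v2/oauth-anonymous","jwks_uri":"http://localhost:8443/oauth/v2/oauth-anonymous/jwks"}'

# Extract the JWKS URI field with jq.
echo "$METADATA" | jq -r '.jwks_uri'
```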

Application workloads that run in the cluster can use internal Kubernetes URLs to call OAuth endpoints. For example, APIs call the internal JWKS URI endpoint to download token signing public keys so that they can validate JWT access tokens. Run the following commands to deploy a pod that acts as an API workload:

bash
kubectl create namespace api
kubectl -n api apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: curlimages/curl:latest
    command: ["sleep"]
    args: ["infinity"]
EOF

Get a remote shell to the deployed pod:

bash
kubectl -n api exec -it curl -- sh

Then call the JWKS URI with the following internal URL to reference the Kubernetes runtime service within the curity namespace:

bash
curl http://curity-idsvr-runtime-svc.curity.svc:8443/oauth/v2/oauth-anonymous/jwks

Run the First Upgrade

Use the Admin UI to update configuration settings, for example to register an OAuth client. Then follow the instructions from the Import and Export Configurations tutorial: use the Changes → Download option of the Admin UI to download the configuration, and save it to a file called curity-config.xml. To upgrade the system correctly, you must make the latest configuration data and the same configuration encryption key available to new pods.

Use a ConfigMap for Configuration Data

The following example supplies the configuration data in a Kubernetes ConfigMap:

bash
kubectl -n curity create configmap idsvr-config \
  --from-file='idsvr-config=curity-config.xml'

The Helm chart values.yaml file can then reference the ConfigMap:

yaml
replicaCount: 2
image:
  repository: curity.azurecr.io/curity/idsvr
  tag: latest
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
    configuration:
    - configMapRef:
        name: idsvr-config
        items:
        - key: idsvr-config
          path: curity-config.xml

Use a PersistentVolume for Configuration Data

When getting started, an alternative option is to use a PersistentVolume for configuration data. On the first deployment, express a PersistentVolumeClaim in the Helm chart. Configuration data storage is then external to pods, so new pods automatically receive the latest configuration, without the need to download configuration and resupply it as a ConfigMap.

yaml
replicaCount: 2
image:
  repository: curity.azurecr.io/curity/idsvr
  tag: latest
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
    persistentConfigVolume:
      enabled: true
      storageClass: standard
      accessModes: ReadWriteOnce
      size: 1Gi

Use a Post Commit Script to Save Configuration Data

Another option is to use a post commit script to persist all configuration changes to items within a Kubernetes secret. You can read more about this feature in the Configuration Backups and Logging using Helm tutorial.

Ensure Zero Downtime

Use helm upgrade --install on all subsequent redeployments, so that the Kubernetes platform adds new pods, waits for them to reach a ready state and then terminates old pods. This rolling upgrade ensures zero downtime for OAuth endpoints.

bash
helm upgrade --install curity curity/idsvr --values=values.yaml --namespace curity \
  --set curity.config.encryptionKey="$CONFIG_ENCRYPTION_KEY"

Finalize Configuration

Storing configuration external to pods can be a convenient option for early stages of the deployment pipeline, like development systems. You can then use a Configuration as Code approach to minimize duplication for other stages of the deployment pipeline and to improve the reliability of production upgrades:

  • Split the backed up configuration into multiple files.
  • Parameterize the configuration files to minimize duplication.
  • Protect sensitive parameters during deployment.
  • Copy configuration files and other shared resources into a Docker image.
  • Update Helm to use the custom Docker image.
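
The image-building step above might look like the following sketch. The base image tag and the target directory are assumptions; check the Curity documentation for the directory from which your product version loads XML configuration files:

```dockerfile
# Hypothetical Dockerfile sketch: bake parameterized configuration files
# into a custom image. The tag and target path are assumptions, not verified values.
FROM curity.azurecr.io/curity/idsvr:latest
COPY config/*.xml /opt/idsvr/etc/init/
```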

The use of a custom Docker image also reduces the need to use Kubernetes techniques to deploy files, which simplifies the Helm values.yaml file:

yaml
replicaCount: 2
image:
  repository: custom_idsvr
  tag: 1.0.0
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
    environmentVariableConfigMaps:
    - idsvr-parameters
    environmentVariableSecrets:
    - idsvr-protected-parameters

Deployment Example

The GitHub link at the top of this page provides some Kubernetes deployment examples that you can run on a local computer. Once the configuration techniques work locally, they also work in any deployed Kubernetes environment.

  • The Basic example shows how to run an initial deployment and upgrade for either the Curity Identity Server or the Curity Token Handler.
  • The Curity Identity Server and Curity Token Handler examples show a more complete deployment that uses configuration as code.

Conclusion

This tutorial provides an initial Kubernetes deployment, though the OAuth endpoints are not yet available to remote clients. The Expose OAuth Endpoints from Kubernetes tutorial explains how to use a Kubernetes API gateway to provide external OAuth URLs.
