The Install using Helm tutorial explains the most basic way to run the Curity product in a Kubernetes cluster. The next step is usually to deploy test systems and plan deployment pipelines. To run the Curity Identity Server or the Curity Token Handler in Kubernetes, deploy the admin workload, the runtime workload and the configuration. The Configure Deployed Environments tutorial explains the approach using Docker Compose on a local computer. This tutorial explains Kubernetes-specific behaviors.
Set Deployment Parameters
The Helm chart creates Kubernetes YAML resources to express a desired state. Supply parameters to control the final YAML and customize the deployment. The following resources provide details on all of the parameters and their meanings:
Although it is possible to pass parameters to the Helm chart from the command line, it is usually more maintainable to use a values file that overrides a subset of the Helm chart's default values.
```yaml
replicaCount: 2
image:
  repository: curity.azurecr.io/curity/idsvr
  tag: latest
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
```
When getting started, consider the following settings for your Kubernetes deployments:
- Use `curity.config.uiEnabled` to enable the Admin UI.
- Use `curity.adminUiHttp` to enable access to the Admin UI via HTTP.
- Use `curity.admin.serviceAccount` and `curity.runtime.serviceAccount` to assign service accounts to the admin and runtime workloads.
Save the parameters to a file called `values.yaml` and create the service accounts if they do not exist. Consider placing all resources related to the Curity Identity Server in a namespace to isolate them from other resources within the cluster. The following commands create a namespace called `curity` and create service accounts for the admin and runtime workloads:
```shell
kubectl create namespace curity
kubectl -n curity create serviceaccount curity-idsvr-admin
kubectl -n curity create serviceaccount curity-idsvr-runtime
```
The Curity Identity Server configuration contains sensitive values, so create an encryption key to protect them. Run the following commands to create a key and store it in an environment variable:
```shell
openssl rand 32 | xxd -p -c 64 > config_encryption.key
export CONFIG_ENCRYPTION_KEY="$(cat config_encryption.key)"
```
Run the First Deployment
Next, run a Helm deployment that uses the values file and pass sensitive parameters on the command line. To set an initial password for the `admin` user of the Admin UI, set the `curity.config.password` parameter on the first deployment. Also supply the `curity.config.encryptionKey` value:
```shell
helm repo add curity https://curityio.github.io/idsvr-helm/
helm repo update
helm upgrade --install curity curity/idsvr --values=values.yaml --namespace curity \
  --set curity.config.password=Password1 \
  --set curity.config.encryptionKey="$CONFIG_ENCRYPTION_KEY"
```
The Helm chart creates Kubernetes resources as YAML, including Services and Deployments for the runtime and admin workloads, and applies them to the cluster. To view the details, run the following command and inspect the resulting text file.
```shell
helm template curity curity/idsvr --values=values.yaml --namespace curity \
  --set curity.config.password=Password1 \
  --set curity.config.encryptionKey="$CONFIG_ENCRYPTION_KEY" > resources.yaml
```
View Deployed Workloads
Wait for the system to come up and then run the following command to view the services and workloads within the `curity` namespace:
```shell
kubectl -n curity get all
```
There are admin and runtime Kubernetes services of type ClusterIP. In this example deployment, the admin workload consists of a single pod and the runtime workload consists of 2 pods. Each pod runs the Docker container for the Curity product. The Kubernetes runtime service load balances requests to the runtime pods that provide OAuth endpoints for applications.
```
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
service/curity-idsvr-admin-svc     ClusterIP   10.110.145.195   <none>        6789/TCP,6790/TCP,4465/TCP,4466/TCP,6749/TCP   3m
service/curity-idsvr-runtime-svc   ClusterIP   10.96.32.142     <none>        8443/TCP,4465/TCP,4466/TCP                     3m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/curity-idsvr-admin     1/1     1            1           3m
deployment.apps/curity-idsvr-runtime   2/2     2            2           3m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/curity-idsvr-admin-84849c8f5c     1         1         1       2m59s
replicaset.apps/curity-idsvr-runtime-75fb7576cb   2         2         2       2m59s
```
Use the `fullnameOverride` top-level Helm chart setting to change the name of the Kubernetes services and related resources. For example, to deploy only the Curity Token Handler, you might set `fullnameOverride=tokenhandler` so that the names of the Kubernetes services are `tokenhandler-admin-svc` and `tokenhandler-runtime-svc`.
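Since the override is a top-level chart setting, it sits alongside `replicaCount` and `image` in the values file. The following fragment is a hypothetical sketch for a Token Handler deployment:

```yaml
# Hypothetical values file fragment: rename all generated resources
fullnameOverride: tokenhandler
```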
Run the First Configuration
The first deployment generates an initial configuration. To administer the configuration, expose the Admin UI from the cluster:
```shell
ADMIN_POD="$(kubectl -n curity get pod -l 'role=curity-idsvr-admin' -o jsonpath='{.items[0].metadata.name}')"
kubectl -n curity port-forward "$ADMIN_POD" 6749
```
Next, open a browser at `http://localhost:6749` and sign in with the `admin` user and password to configure the system and enable OAuth endpoints. Then perform the initial configuration depending on the system type:
Run the First Configuration to make OAuth endpoints available to applications.
Call OAuth Endpoints
Both external clients and internal workloads can connect to endpoints once the initial setup is complete. Run the following command to provide initial connectivity to OAuth endpoints:
```shell
RUNTIME_POD="$(kubectl -n curity get pod -l 'role=curity-idsvr-runtime' -o jsonpath='{.items[0].metadata.name}')"
kubectl -n curity port-forward "$RUNTIME_POD" 8443
```
The following example commands show how to call OAuth endpoints:
Run the following command to download OpenID Connect metadata. Inspect the response to view the locations of other OAuth endpoints.
```shell
curl http://localhost:8443/oauth/v2/oauth-anonymous/.well-known/openid-configuration
```
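The discovery document is JSON, so a tool like jq can extract individual endpoint locations from it. The following sketch parses a trimmed sample document rather than a live response, so it is self-contained; against a running system you would pipe the output of the curl command above into the same jq expression:

```shell
# A trimmed sample of the discovery document, standing in for the curl response
METADATA='{"issuer":"http://localhost:8443/oauth/v2/oauth-anonymous","jwks_uri":"http://localhost:8443/oauth/v2/oauth-anonymous/jwks"}'

# Extract the JWKS URI used later for JWT validation
JWKS_URI="$(echo "$METADATA" | jq -r '.jwks_uri')"
echo "$JWKS_URI"
```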
Application workloads that run in the cluster can use internal Kubernetes URLs to call OAuth endpoints. For example, APIs call the internal JWKS URI endpoint to download token signing public keys so that they can validate JWT access tokens. Run the following commands to deploy a pod that acts as an API workload:
```shell
kubectl create namespace api
kubectl -n api apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: curlimages/curl:latest
    command: ["sleep"]
    args: ["infinity"]
EOF
```
Get a remote shell to the deployed pod:
```shell
kubectl -n api exec -it curl -- sh
```
Then call the JWKS URI with the following internal URL to reference the Kubernetes runtime service within the `curity` namespace:

```shell
curl http://curity-idsvr-runtime-svc.curity.svc:8443/oauth/v2/oauth-anonymous/jwks
```
Run the First Upgrade
Use the Admin UI to update configuration settings, for example to register an OAuth client. Then follow the instructions from the Import and Export Configurations tutorial: use the Changes → Download option of the Admin UI to download the configuration and save it to a file called `curity-config.xml`. To upgrade the system correctly, you must make the latest configuration data and the same configuration encryption key available to new pods.
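For example, before an upgrade you can reload the key and sanity-check its shape. The first line of this sketch only recreates a key file so that the example is self-contained (`openssl rand -hex 32` is equivalent to the `xxd` pipeline used earlier); in practice you would reuse your original `config_encryption.key`:

```shell
# Recreate a key file so the example is self-contained; normally reuse the original
openssl rand -hex 32 > config_encryption.key

# Reload the key into the environment variable that the Helm deployment reads
export CONFIG_ENCRYPTION_KEY="$(cat config_encryption.key)"

# A 32-byte key renders as 64 hexadecimal characters
echo "${#CONFIG_ENCRYPTION_KEY}"
```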
Use a ConfigMap for Configuration Data
The following example supplies the configuration data in a Kubernetes ConfigMap:
```shell
kubectl -n curity create configmap idsvr-config \
  --from-file='idsvr-config=curity-config.xml'
```
The Helm chart `values.yaml` file can then reference the ConfigMap:
```yaml
replicaCount: 2
image:
  repository: curity.azurecr.io/curity/idsvr
  tag: latest
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
    configuration:
    - configMapRef:
        name: idsvr-config
        items:
        - key: idsvr-config
          path: curity-config.xml
```
Use a PersistentVolume for Configuration Data
When getting started, an alternative option is to use a PersistentVolume for configuration data. On the first deployment, express a PersistentVolumeClaim in the Helm chart. Configuration data storage is then external to pods, so new pods automatically receive the latest configuration, without the need to download configuration and resupply it as a ConfigMap.
```yaml
replicaCount: 2
image:
  repository: curity.azurecr.io/curity/idsvr
  tag: latest
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
    persistentConfigVolume:
      enabled: true
      storageClass: standard
      accessModes: ReadWriteOnce
      size: 1Gi
```
Use a Post Commit Script to Save Configuration Data
Another option is to use a post commit script to persist all configuration changes to items within a Kubernetes secret. You can read more about this feature in the Configuration Backups and Logging using Helm tutorial.
Ensure Zero Downtime
Use `helm upgrade --install` on all subsequent redeployments, so that the Kubernetes platform adds new pods, waits for them to reach a ready state, and then terminates old pods. This rolling behavior ensures zero downtime for OAuth endpoints.
```shell
helm upgrade --install curity curity/idsvr --values=values.yaml --namespace curity \
  --set curity.config.encryptionKey="$CONFIG_ENCRYPTION_KEY"
```
Finalize Configuration
Storing configuration external to pods can be a convenient option for early stages of the deployment pipeline, like development systems. You can then use a Configuration as Code approach to minimize duplication for other stages of the deployment pipeline and to improve the reliability of production upgrades:
- Split the backed up configuration into multiple files.
- Parameterize the configuration files to minimize duplication.
- Protect sensitive parameters during deployment.
- Copy configuration files and other shared resources into a Docker image.
- Update Helm to use the custom Docker image.
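As a sketch of the Docker image step, a minimal Dockerfile might copy the parameterized configuration files into the directory from which the Curity Identity Server loads XML configuration at startup. The `config/` directory and file layout are assumptions for illustration:

```dockerfile
# Hypothetical custom image: extend the product image and bake in configuration
FROM curity.azurecr.io/curity/idsvr:latest

# Parameterized configuration files produced by the split step (assumed layout)
COPY config/*.xml /opt/idsvr/etc/init/
```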
The use of a custom Docker image also reduces the need to use Kubernetes techniques to deploy files, which simplifies the Helm `values.yaml` file:
```yaml
replicaCount: 2
image:
  repository: custom_idsvr
  tag: 1.0.0
curity:
  adminUiHttp: true
  admin:
    serviceAccount:
      name: curity-idsvr-admin
    logging:
      level: INFO
  runtime:
    serviceAccount:
      name: curity-idsvr-runtime
    logging:
      level: INFO
  config:
    uiEnabled: true
    environmentVariableConfigMaps:
    - idsvr-parameters
    environmentVariableSecrets:
    - idsvr-protected-parameters
```
Deployment Example
The GitHub link at the top of this page provides some Kubernetes deployment examples that you can run on a local computer. Once the configuration techniques work locally, they will also work in any deployed Kubernetes environment.
- The Basic example shows how to run an initial deployment and upgrade for either the Curity Identity Server or the Curity Token Handler.
- The Curity Identity Server and Curity Token Handler examples show a more complete deployment that uses configuration as code.
Conclusion
This tutorial provides an initial Kubernetes deployment, though the OAuth endpoints are not yet available to remote clients. The Expose OAuth Endpoints from Kubernetes tutorial explains how to use a Kubernetes API gateway to provide external OAuth URLs.