Kubernetes Demo Installation

This tutorial enables any developer or architect to quickly deploy a demo system using the Free Community Edition of the Curity Identity Server, by running four simple bash scripts on a macOS or Windows computer:

  • ./create-cluster.sh
  • ./create-certs.sh
  • ./deploy-postgres.sh
  • ./deploy-idsvr.sh

No prior Kubernetes experience is needed, and the result will be a system with real-world URLs and a working sample application. This type of initial end-to-end setup can be useful when designing deployment solutions:

Working Sample

Design the Deployment

Software companies often start their OAuth infrastructure by designing base URLs that reflect their company and brand names. In this tutorial we will use a subdomain-based approach and the following URLs:

Base URL | Description
https://login.curity.local | The URL for the Identity Server, which users see when login screens are presented
https://admin.curity.local | An internal URL for the Curity Identity Server Admin UI

The Curity Identity Server supports some advanced deployment scenarios but we will use a fairly standard setup consisting of the following components:

Server Role | Number of Containers
Identity Server Admin Node | 1
Identity Server Runtime Node | 2
Database Server | 1

Get a License File

For the deployed system to work you will need a license file. Any developer new to Curity can quickly sign in to the Developer Portal with their GitHub account to get one:

Developer Portal

Create the Kubernetes Cluster

Install Prerequisites

The system can be deployed on a macOS or Windows workstation via bash scripts, which require a few command line tools to be installed first.
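
A minimal sketch of a prerequisite check is shown below; the exact tool set is an assumption inferred from the commands used later in this article rather than an official list:

#!/bin/bash
# Verify that the command line tools used by the helper scripts are available
for tool in minikube kubectl helm docker openssl jq; do
  if ! command -v "$tool" &> /dev/null; then
    echo "Please install $tool before running the deployment scripts"
    exit 1
  fi
done
echo 'All prerequisite tools are installed'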

Get the Kubernetes Helper Scripts

This article is accompanied by some bash scripts and other resources that can be downloaded from GitHub:

Deployment Scripts

Run the Cluster Install Script

View the create-cluster.sh script to understand what it does, and then run it as follows. Once it completes, the cluster can be administered using standard kubectl commands.

Create Cluster
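
As a quick check that the cluster is ready, standard kubectl commands can be run against the new profile:

# Verify that the minikube node is up and that system pods are running
kubectl get nodes
kubectl get pods --all-namespaces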

Minikube Profiles

The Kubernetes cluster runs on minikube, a standard tool for local Kubernetes development. We also use minikube profiles, which are very useful for switching between multiple 'deployed systems' on a development computer:

  • minikube start --cpus=2 --memory=8192 --disk-size=50g --profile curity
  • minikube stop --profile curity
  • minikube delete --profile curity

Create SSL Certificates

Next run the create-certs.sh script, which will use OpenSSL to create some test certificates for the external URLs of the demo system:

Create Certs
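
The exact commands are in the downloadable script; a minimal sketch of this type of OpenSSL setup, assuming a self-signed root CA and a wildcard certificate for the *.curity.local domains, looks like this:

# Create a root CA key and a self-signed root certificate (assumed file names)
mkdir -p certs
openssl genrsa -out certs/curity.local.ca.key 2048
openssl req -x509 -new -key certs/curity.local.ca.key -days 365 \
  -subj '/CN=curity.local root CA' -out certs/curity.local.ca.pem

# Create a key for the external URLs and issue a wildcard certificate from the root CA
openssl genrsa -out certs/curity.local.ssl.key 2048
openssl req -new -key certs/curity.local.ssl.key -subj '/CN=*.curity.local' \
  -out certs/curity.local.ssl.csr
openssl x509 -req -in certs/curity.local.ssl.csr -CA certs/curity.local.ca.pem \
  -CAkey certs/curity.local.ca.key -CAcreateserial -days 365 \
  -extfile <(echo 'subjectAltName=DNS:login.curity.local,DNS:admin.curity.local') \
  -out certs/curity.local.ssl.pem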

Deploy an Identity Server Database

Curity Schema

The Curity Identity Server can save its data to a number of database systems, but we will use PostgreSQL since it is a popular choice. You can download and unzip the latest Curity release from the developer portal and get database creation scripts from the idsvr/etc folder:

Database Scripts

Run the Deployment Script

We will keep database deployment simple by copying in a backed-up PostgreSQL script. This will create the schema and add some initial data, including a test user account that can be used to sign in to the code example.

We can then deploy the database by running the deploy-postgres.sh script, which will download and deploy the PostgreSQL Docker image:

Postgres Deployment
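
The details are in the downloadable script; as a rough sketch, this style of deployment loads the backup SQL into a config map so that the PostgreSQL container runs it at startup, then applies a deployment manifest. The manifest file name and pod label below are assumptions:

# Make the backup script available to the PostgreSQL container as an init script
kubectl create configmap postgres-init-script --from-file=./postgres/idsvr-data-backup.sql

# Deploy PostgreSQL and wait for the pod to become ready
kubectl apply -f ./postgres/postgres.yaml
kubectl wait --for=condition=ready pod -l app=postgres --timeout=120s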

Later in this article we will run some SQL queries against this database and look at some OAuth data related to user accounts, tokens and audit data.

Deploy the Curity Identity Server

Prepare Identity Server Configuration

The helper scripts include a configuration backup file called idsvr-config-backup.xml, which uses the URLs we designed at the start of this article. Before deploying you also need to copy your license.json file into the idsvr folder:

Copy License Key
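
Assuming the license file was saved to the default downloads folder, the copy looks like this:

# Copy the license file from the Developer Portal into the idsvr folder (assumed download location)
cp ~/Downloads/license.json idsvr/license.json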

Custom Docker Image

The deployment will create a custom Docker image and copy in some resources, including the license file, some custom logging and auditing configuration, and the Hypermedia API Code Example:

FROM curity.azurecr.io/curity/idsvr:7.1.0
COPY idsvr/license.json /opt/idsvr/etc/init/license/
COPY haapi-web/*        /opt/idsvr/usr/share/webroot/
COPY idsvr/log4j2.xml   /opt/idsvr/etc/
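
The deploy-idsvr.sh script handles the image build; if building manually, the image must be created against the minikube profile's Docker daemon so that the cluster can pull it. The image tag below is an assumption:

# Point the local Docker CLI at the Docker daemon inside the minikube profile
eval $(minikube docker-env --profile curity)

# Build the custom image from the Dockerfile above
docker build -t custom_idsvr:7.1.0 .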

Run the Deployment Script

Next run the deploy-idsvr.sh script, which deploys a working configuration that we will explain next:

Deploy Identity Server

Kubernetes Configuration

A deployment of the Curity Identity Server consists of a number of Kubernetes components, and reliable deployment is simplified via our Helm chart. The top level settings are specified by a values file, and a preconfigured helm-values.yaml file is included:

Helm Values File

The values file shows which settings are configurable, including the Docker image to use and the number of runtime nodes. A couple of areas are of particular interest when you are new to the Curity Identity Server:

Configuration | Backed up configuration is supplied via a Kubernetes config map, which is deployed to new containers at /opt/idsvr/etc/init/configmap-config.xml
External URLs | The example deployment uses a Kubernetes ingress to expose the admin node and runtime nodes on SSL URLs, using the certificates created earlier

Understand Kubernetes YAML

By default the Helm Chart hides most of the Kubernetes details. To gain a closer understanding it can be useful to run helm template followed by kubectl apply, which is the approach taken by the deploy-idsvr.sh script. This provides visibility of the final Kubernetes YAML, in the generated idsvr-helm.yaml file:

# Source: idsvr/templates/service-runtime.yaml
apiVersion: v1
kind: Service
metadata:
  name: curity-idsvr-runtime-svc
  labels:
    app.kubernetes.io/name: idsvr
    helm.sh/chart: idsvr-0.10.2
    app.kubernetes.io/instance: curity
    app.kubernetes.io/managed-by: Helm
    role: curity-idsvr-runtime
spec:
  type: ClusterIP
  ports:
    - port: 8443
      targetPort: http-port
      protocol: TCP
      name: http-port
    - port: 4465
      targetPort: health-check
      protocol: TCP
      name: health-check
    - port: 4466
      targetPort: metrics
      protocol: TCP
      name: metrics
  selector:
    app.kubernetes.io/name: idsvr
    app.kubernetes.io/instance: curity
    role: curity-idsvr-runtime
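
The commands behind this approach might look as follows; the Helm repository URL and release name are assumptions, since deploy-idsvr.sh encapsulates the exact details:

# Render the chart to plain Kubernetes YAML, then apply it to the cluster
helm repo add curity https://curityio.github.io/idsvr-helm
helm repo update
helm template curity curity/idsvr --values idsvr/helm-values.yaml > idsvr-helm.yaml
kubectl apply -f idsvr-helm.yaml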

Configure Local Domain Names

At this point the demo system's external URLs have been configured in Kubernetes but will not yet work from the host machine. To resolve this we first need to configure DNS for the host names, and this requires the IP address of the minikube virtual machine:

minikube ip --profile curity

The resulting IP address should be specified against both domain names in the system hosts file, which will exist at one of the below locations:

Operating System | Hosts File Location
macOS | /etc/hosts
Windows | c:\windows\system32\drivers\etc\hosts

In both cases the entry consists of the minikube IP address followed by the two host names: login.curity.local and admin.curity.local.
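
On macOS, for example, the entry can be appended with the following command, run with administrator rights; on Windows, edit the hosts file from an elevated editor instead:

# Map both demo host names to the minikube virtual machine's IP address
echo "$(minikube ip --profile curity)  login.curity.local  admin.curity.local" | sudo tee -a /etc/hosts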

Also trust the self-signed root authority we are using for our demo system's SSL certificates, by adding the file certs/curity.local.ca.pem to one of these locations:

Operating System | Location
macOS | Keychain Access / System / Certificates
Windows | Microsoft Management Console / Certificates / Local Computer / Trusted Root Certification Authorities
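
Alternatively the root certificate can be trusted from the command line, using the standard operating system tools with administrator rights:

# macOS: add the root CA to the System keychain as a trusted root
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain certs/curity.local.ca.pem

# Windows (from an elevated command prompt): add the root CA to the Trusted Root Certification Authorities store
certutil -addstore -f Root certs\curity.local.ca.pem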

Use the Curity Identity Server

OpenID Connect Discovery Endpoint

The discovery endpoint is often the first endpoint that applications connect to, and for our sample deployment this is at the following URL:

Discovery Endpoint
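
For example, the metadata can be fetched with curl; the path below assumes the default oauth-anonymous token service profile, which matches the JWKS URI used later in this article:

# Download and pretty print the OpenID Connect discovery document
curl -s 'https://login.curity.local/oauth/v2/oauth-anonymous/.well-known/openid-configuration' | jq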

Admin UI

The Identity Server's Admin UI is then available, which provides rich options for managing OAuth applications, authentication and token issuing behavior:

Admin UI URL | https://admin.curity.local/admin
User Name | admin

Admin UI

Run the Hypermedia Web Example

Next we can run the sample application shown at the beginning of this article, which has been deployed alongside the Curity Identity Server. The SQL database contains this user credential:

Sample URL | https://login.curity.local/demo-client.html
User Name | john.doe

Runtime URLs

The following base URLs are used by the deployed system:

Base URL | Type | Description
https://login.curity.local | External | The URL used by internet clients such as browsers and mobile apps for user logins and token operations
https://admin.curity.local | External | This URL should not be exposed to the internet, but we are doing so for development purposes
https://curity-idsvr-runtime-svc:8443 | Internal | This URL would be called by APIs and web back ends running inside the cluster, during OAuth operations
https://curity-idsvr-admin-svc:6749 | Internal | This URL can be called inside the cluster to test connections to the RESTCONF API

For development purposes we have deployed the curl tool to our containers, which enables us to test internal connections:

# Get a shell to the admin node, then call the runtime service's JWKS endpoint from inside the cluster
ADMIN_NODE=$(kubectl get pods -o name | grep curity-idsvr-admin)
kubectl exec -it $ADMIN_NODE -- bash
curl -k 'https://curity-idsvr-runtime-svc:8443/oauth/v2/oauth-anonymous/jwks'

# Get a shell to a runtime node, then call the admin node's RESTCONF API from inside the cluster
RUNTIME_NODE=$(kubectl get pods -o name | grep -m1 curity-idsvr-runtime)
kubectl exec -it $RUNTIME_NODE -- bash
curl -k -u 'admin:Password1' 'https://curity-idsvr-admin-svc:6749/admin/api/restconf/data?depth=unbounded&content=config'

Manage Identity Server Data

Database Connections

From the Admin UI you can browse to Facilities / Default Data Source to view the details for the PostgreSQL connection:

Database Connection

For convenience, the deployment script has also exposed the database to the host computer, to enable development queries via the psql tool:

Inside the Cluster | export PGPASSWORD=Password1 && psql -p 5432 -d idsvr -U postgres
Outside the Cluster | export PGPASSWORD=Password1 && psql -h $(minikube ip --profile curity) -p 30432 -d idsvr -U postgres

Identity Server Data

The database schema files we looked at earlier support three main types of data. These can be stored in different locations if required, though our sample deployment uses a single database:

Data Type | Description
User Accounts | Users, hashed credentials and Personally Identifiable Information (PII)
Security State | Token hashes and other data used by client applications and APIs
Audit Information | Information about authentication events, issued tokens, and the areas of data that were accessed

You can then run queries against the deployed end-to-end system to understand the data that is written as a result of logins to the example application:

Audit Data
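
For example, the following queries can be run from the host computer; the table names are assumptions based on the default Curity schema scripts:

# Query user accounts and audit records written during logins to the example application
export PGPASSWORD=Password1
psql -h $(minikube ip --profile curity) -p 30432 -d idsvr -U postgres -c 'select * from accounts;'
psql -h $(minikube ip --profile curity) -p 30432 -d idsvr -U postgres -c 'select * from audit;'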

User Management via SCIM

The deployed system has an activated SCIM 2.0 endpoint, which can be called via the following script. It first gets an access token using the client credentials grant, then sends a GET request that returns the list of users stored in our SQL database, using the jq tool to render the response:

# Get an access token using the client credentials grant
ACCESS_TOKEN=$(curl -s -X POST https://login.curity.local/oauth/v2/oauth-token \
-H 'content-type: application/x-www-form-urlencoded' \
-d 'grant_type=client_credentials' \
-d 'client_id=scim-client' \
-d 'client_secret=Password1' \
-d 'scope=read' \
| jq -r '.access_token')

# Call the SCIM endpoint to list users and render the JSON response with jq
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://login.curity.local/user-management/admin/Users | jq


Log Configuration

We deployed a custom log4j2.xml file in our Docker image to help with troubleshooting. For development purposes the root logger is set to level INFO, and this can be raised to DEBUG when investigating problems:

<AsyncRoot level="INFO">
    <AppenderRef ref="rewritten-stdout"/>
    <AppenderRef ref="request-log"/>
    <AppenderRef ref="metrics"/>
</AsyncRoot>

We can use standard Kubernetes commands to tail logs when troubleshooting, and issues are most commonly caused by configuration problems:

  • RUNTIME_NODE=$(kubectl get pods -o name | grep -m1 curity-idsvr-runtime)
  • kubectl logs -f $RUNTIME_NODE

Tailed Logs

Backup of Configuration and Data

Finally, the initial system could be backed up in a basic way by periodically updating the highlighted files below:

Backup Files

After changes are made in the Admin UI the configuration can be backed up using the Download option, or alternatively via the REST API:

Config Backup
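
A sketch of the REST API approach, reusing the RESTCONF call and admin credentials shown earlier, might look like this:

# Forward the admin service port to the host, then download the current configuration
kubectl port-forward svc/curity-idsvr-admin-svc 6749 &
sleep 2
curl -s -k -u 'admin:Password1' \
  -H 'Accept: application/yang-data+xml' \
  'https://localhost:6749/admin/api/restconf/data?depth=unbounded&content=config' \
  > idsvr/idsvr-config-backup.xml
kill $!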

Similarly the Postgres data can be backed up from the host computer via this command:

POSTGRES_POD=$(kubectl get pods -o=name | grep postgres)
kubectl exec -it $POSTGRES_POD -- bash -c "export PGPASSWORD=Password1 && pg_dump -U postgres -d idsvr" > ./postgres/idsvr-data-backup.sql


Conclusion

We ran four simple scripts to demonstrate how to quickly deploy a working load balanced setup for the Curity Identity Server:

  • ./create-cluster.sh
  • ./create-certs.sh
  • ./deploy-postgres.sh
  • ./deploy-idsvr.sh

Our example deployment focuses only on giving developers a productive start when they are new to the Curity Identity Server. Real world solutions also need to deploy images down a pipeline, and to deal better with data management and security hardening.

Once the moving parts are understood, companies can customize the deployment in many ways and deploy anywhere using the Kubernetes platform. Many built-in cloud native patterns can then be used to manage, scale and upgrade the system reliably.