Overview
You can add extended infrastructure behavior to a Kubernetes cluster by incorporating a service mesh. For some use cases, this helps to further separate infrastructure concerns from application code. In this architecture, multi-container pods are used: each pod contains both the application container and a sidecar, and the application routes all requests to other components via the sidecar. Most real-world service meshes contain a mix of components, some that use sidecars and others that do not.
Sidecars can handle aspects such as retrying requests after temporary network failures. They can also implement TLS or Mutual TLS on behalf of the application, though this does not usually amount to a complete security solution on its own. The service mesh provides additional building blocks that you can use, and there may be both pros and cons to integrating one.
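As an illustration of the retry behavior described above, Istio exposes a retries policy on its VirtualService resource. The following sketch is not part of the tutorial resources, and the service name example-api-svc is a hypothetical placeholder:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-api-virtual-service
spec:
  hosts:
  - example-api-svc
  http:
  - route:
    - destination:
        host: example-api-svc
    # Sidecars retry failed requests transparently, without application code changes
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
```

With a policy like this applied, the calling application simply sends a request to the service, and the Envoy sidecar handles transient failures on its behalf.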
This tutorial demonstrates how to deploy the Curity Identity Server to an Istio service mesh, so that applications that use sidecars can call OAuth endpoints in the standard way. The Curity Identity Server itself is a specialist security component that manages its own security, however, so it does not support hosting sidecars in its own pods.
Tutorial Resources
Start by cloning the GitHub repository, which contains a number of resources, including helper scripts that quickly spin up a deployment. The demo system's behavior is almost identical to that of the Kubernetes Local Installation, so this tutorial only describes the differences when running in a service mesh.
Prerequisites
To deploy the system, first ensure that the following prerequisites are installed:
Also, copy a license file for the Curity Identity Server into the idsvr folder of the tutorial resources.
Deploy the System
The deployed system will use the following base URLs:
| Base URL | Description |
| --- | --- |
| https://login.curity.local | The URL for the Curity Identity Server, which users see when login screens are presented |
| https://admin.curity.local | An internal URL for the Curity Identity Server Admin UI |
Run the following scripts in sequence to create the cluster and then deploy the Curity Identity Server's components, including ingress resources and certificates for the external HTTPS URLs. You may then need to wait a couple of minutes until the system is ready.
```shell
./create-cluster.sh
./create-certs.sh
./deploy-postgres.sh
./deploy-idsvr.sh
```
Next, add the host names for the Curity Identity Server to the local hosts file:
```
127.0.0.1 login.curity.local admin.curity.local
```
Also, trust the self-signed root authority that is created for the demo system's SSL certificates, by adding the file certs/curity.local.ca.pem to the host operating system's trust store.
Later, once you are finished with the demo installation, you can free all resources by running the following script:
```shell
./delete-cluster.sh
```
Deployment Details
KIND (Kubernetes in Docker) is a development-focused Kubernetes distribution with convenient features for running multi-node clusters locally. This section highlights some technical details specific to KIND and Istio, including the use of Istio-specific custom resource definitions (CRDs).
Nodes and Containers
First, view the multiple nodes that have been created. In KIND, each node runs as a Docker container and acts as a virtual machine that hosts pods:
```shell
kubectl get node
```
The demo installation uses a KIND cluster named curity, and creates two worker nodes to host application containers:
```
NAME                   STATUS   ROLES           AGE   VERSION
curity-control-plane   Ready    control-plane   27m   v1.24.0
curity-worker          Ready    <none>          27m   v1.24.0
curity-worker2         Ready    <none>          27m   v1.24.0
```
To view details of the deployed instances of the Curity Identity Server, run this command:
```shell
kubectl get pod -o wide
```
This shows how pods for the Curity Identity Server are distributed across the nodes:
```
NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE
curity-idsvr-admin-7b8596f4b6-jh6c7     1/1     Running   0          28m   10.244.2.5   curity-worker
curity-idsvr-runtime-7f85c6b8df-4652m   1/1     Running   0          28m   10.244.1.5   curity-worker2
curity-idsvr-runtime-7f85c6b8df-lh6qw   1/1     Running   0          28m   10.244.2.4   curity-worker
postgres-8cb58c56f-ftz9t                1/1     Running   0          29m   10.244.1.4   curity-worker2
```
Istio Resources
When the create-cluster.sh script runs, it also downloads an Istio installation script, which is then executed to install Istio components for a demo setup:
```shell
curl -L https://istio.io/downloadIstio | sh -
cd istio*
./bin/istioctl install --set profile=demo -y
```
To view the Istio-specific resources that have been deployed, run the following command:
```shell
kubectl get pod -n istio-system
```
Istio is deployed with its own ingress and egress gateways for inbound and outbound traffic to the cluster. Both the gateways and sidecars use the Envoy Proxy for traffic routing.
```
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-666cdd84d7-t2x5s    1/1     Running   0          34m
istio-ingressgateway-56f8485977-qmgnv   1/1     Running   0          33m
istiod-dfb7f5d4f-5zdxs                  1/1     Running   0          35m
```
Ingress
The Curity Identity Server endpoints are exposed at external HTTPS URLs via the Istio ingress gateway. This uses the Istio custom resource definitions Gateway, VirtualService and DestinationRule:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: curity-idsvr-admin-virtual-service
spec:
  hosts:
  - admin.curity.local
  gateways:
  - curity-idsvr-gateway
  http:
  - route:
    - destination:
        host: curity-idsvr-admin-svc
        port:
          number: 6749
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: curity-idsvr-runtime-virtual-service
spec:
  host: curity-idsvr-runtime-svc
  trafficPolicy:
    tls:
      mode: SIMPLE
```
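The Gateway resource referenced by the VirtualService is not shown above. A minimal sketch of what such a Gateway might look like follows; the credentialName value is an assumption for illustration, and the actual resource ships in the tutorial repository:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: curity-idsvr-gateway
spec:
  # Bind to the workloads of the default Istio ingress gateway
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      # Assumed name of a Kubernetes secret holding the external TLS certificate
      credentialName: curity-external-tls
    hosts:
    - login.curity.local
    - admin.curity.local
```

The gateway terminates external TLS, and the DestinationRule then instructs sidecars and gateways to use TLS when routing onward to the runtime service.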
The demo deployment uses some KIND-specific patches to expose the ingress to the development computer. This step is not required in a real Kubernetes cluster, such as one deployed to a cloud platform.
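For background, KIND clusters commonly expose an ingress by mapping host ports to node ports in the cluster configuration. The following is a hedged sketch only, not the tutorial's actual configuration, and the NodePort value 30443 is an assumption:

```yaml
# KIND cluster configuration that forwards the host's port 443
# to an assumed NodePort used by the Istio ingress gateway
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
```

In a cloud platform, a LoadBalancer service for the ingress gateway would make this mapping unnecessary.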
Use the System
Once the system is deployed, with DNS and certificate trust configured, you can access the Curity Identity Server via the following URLs, in the same way as the Kubernetes demo installation:
| Endpoint | URL |
| --- | --- |
| Admin UI | https://admin.curity.local/admin |
| OpenID Connect Metadata | https://login.curity.local/oauth/v2/oauth-anonymous/.well-known/openid-configuration |
| Hypermedia Web Code Example | https://login.curity.local/demo-client.html |
| SCIM Endpoint | https://login.curity.local/user-management/admin/Users |
Deploy an API with a Sidecar
You can then deploy applications that interact with the Curity Identity Server. The following commands deploy one of the Istio sample applications to a namespace for which sidecar injection is enabled:
```shell
kubectl create namespace applications
kubectl label namespace applications istio-injection=enabled
kubectl -n applications apply -f resources/istio*/samples/httpbin/httpbin-nodeport.yaml
```
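With sidecar injection enabled in the namespace, mutual TLS between sidecar-enabled workloads could also be enforced via an Istio PeerAuthentication policy. This is an optional hardening sketch rather than part of the demo scripts:

```yaml
# Require Mutual TLS for traffic between sidecar-enabled
# workloads in the applications namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: applications
spec:
  mtls:
    mode: STRICT
```

Note that STRICT mode would reject plaintext traffic from workloads without sidecars, such as the pods of the Curity Identity Server, so any such policy needs to account for the mixed deployment described earlier.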
Once the pod is up, you can describe it to see information about both the application container and its sidecar:
```shell
HTTPBIN_CONTAINER_ID="$(kubectl -n applications get pod -o name)"
kubectl -n applications describe $HTTPBIN_CONTAINER_ID
```
Next, get a shell to the application container:
```shell
kubectl -n applications exec -it $HTTPBIN_CONTAINER_ID -- bash
```
Then run the following commands to call from the application to the Curity Identity Server via the sidecar:
```shell
apt-get update
apt-get install curl -y
curl -k https://curity-idsvr-runtime-svc.curity.svc.cluster.local:8443/oauth/v2/oauth-anonymous/jwks
```
This returns the JSON Web Key Set (JWKS) containing the token signing public keys. In a real setup, APIs would use this response to validate JWT access tokens.
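As an illustration of how an API might consume such a response, the following sketch lists the key identifiers with jq. The JSON here is a hand-written sample with an assumed shape and values, not real output from the endpoint:

```shell
# Sample JWKS document with the general shape returned by a JWKS endpoint (assumed values)
JWKS='{"keys":[{"kty":"RSA","kid":"example-key-1","use":"sig"}]}'

# List the key identifiers that an API would match against the "kid" header of a JWT
echo "$JWKS" | jq -r '.keys[].kid'
```

An API typically selects the key whose kid matches the JWT header, then verifies the token's signature with that public key.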
Conclusion
This tutorial showed how the Curity Identity Server can be deployed to a service mesh. A basic development cluster was used, though the same principles would apply to a real-world deployment. Applications that use sidecars can then interact with the OAuth endpoints of the Curity Identity Server.