Installing the Curity Identity Server with Kong or NGINX Ingress Controller on GKE

Overview

This tutorial enables any developer or architect to quickly run the Curity Identity Server and the Phantom Token Pattern in Kubernetes, using either the Kong Ingress Controller or the NGINX Ingress Controller on the Google Cloud Platform.

This installation follows the security best practice of hosting the Identity Server and the APIs behind an Ingress controller that acts as a reverse proxy / API gateway. This ensures that opaque access tokens are issued to internet clients, while APIs receive JWT access tokens.

This tutorial can be completed using the Google Cloud Platform free tier, without incurring any cost.

Components and URLs

The following components are deployed in the Kubernetes cluster by running ./deploy-idsvr-gke.sh --install:

| Component | Base URL | Namespace | Description |
| --- | --- | --- | --- |
| Curity Admin | https://admin.example.gke | curity | The URL for the Identity Server admin console |
| Curity Runtime | https://login.example.gke | curity | The URL for the runtime nodes of the Identity Server |
| Example API | https://api.example.gke/echo | api | Upstream API proxy endpoint |
| Phantom Token Plugin | NA | NA | Plugin for transforming opaque access tokens into by-value JWT tokens |
| NGINX Ingress Controller | NA | ingress-nginx | Routes requests to services in the cluster; also acts as the gateway in front of the APIs and transforms opaque access tokens into JWTs |
| Kong Ingress Controller | NA | kong | Routes requests to services in the cluster; also acts as the gateway in front of the APIs and transforms opaque access tokens into JWTs |

The Curity Admin URL is typically kept internal and not exposed to the internet, but since this is a demo installation for evaluation and study purposes, the admin URL has been exposed.

URLs

This tutorial aliases the load balancer's public IP address to local development domain names, which provides an easy and free way for developers to use real-world URLs. In an enterprise setup, however, you would create globally resolvable custom internet domain names using a paid domain name service such as Google Cloud DNS.

Installation

Installation will create a new private GKE cluster as per the configuration options defined in cluster-config/gke-cluster-config.json.

The deployment process is automated via a simple bash script.

    ./deploy-idsvr-gke.sh --install
    ./deploy-idsvr-gke.sh --delete

Installation Prerequisites

The following prerequisites must be met before proceeding with the installation.

A few steps are needed to initialize a new Google Cloud project and link the gcloud CLI to it:

  • Sign in to the Google Cloud Console with your email address
  • An initial project, such as My First Project, is auto-created by the GCP platform
  • If required, sign up and add a credit card under the Billing section
  • Enable the Kubernetes Engine API (example gcloud commands are shown below)
  • Running the CLI will prompt you to sign in with the same email, so that the CLI is linked to the project in the console
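If you prefer the command line, the gcloud CLI can handle authentication, project selection, and API enablement. A minimal sketch (the project ID is a placeholder):

    gcloud auth login
    gcloud config set project <project-id>
    gcloud services enable container.googleapis.com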

Please also copy a license file to the idsvr-config/license.json location. If needed, you can also get a free community edition license from the Curity Developer Portal.

Deployment

First clone the installation repository to your local computer

    git clone https://github.com/curityio/curity-idsvr-gke-installation
    cd curity-idsvr-gke-installation

Deployment Scripts

Run the installation

    ./deploy-idsvr-gke.sh --install

The installation script prompts for input choices; one of the choices is which Ingress controller to deploy. Once selected, the chosen Ingress controller is deployed with a customized Docker image containing the required plugins.
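After the install step completes, you can verify the new cluster with standard gcloud and kubectl commands. This is just a sanity check; the cluster name and zone are placeholders, so use the values from cluster-config/gke-cluster-config.json.

    gcloud container clusters list
    gcloud container clusters get-credentials <cluster-name> --zone <zone>
    kubectl get nodes -o wide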

Add the following entry to the /etc/hosts file after the installation is completed to access the systems.

  < LoadBalancer-IP >  admin.example.gke login.example.gke api.example.gke
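
To avoid copying the IP address manually, you could look it up and append the entry in one step. This is a minimal sketch for the NGINX case; for Kong, query the kong-kong-proxy service instead (both lookup commands are listed further below).

    # Read the load balancer IP and add the host aliases
    LB_IP=$(kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    echo "$LB_IP admin.example.gke login.example.gke api.example.gke" | sudo tee -a /etc/hosts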

k8s services

Stop the environment

    ./deploy-idsvr-gke.sh --stop

Start the environment

    ./deploy-idsvr-gke.sh --start

View logs

     kubectl -n curity logs -f -l role=curity-idsvr-admin  
     kubectl -n curity logs -f -l role=curity-idsvr-runtime
     kubectl -n ingress-nginx logs -f -l app.kubernetes.io/component=controller
     kubectl -n kong logs -f -l app.kubernetes.io/component=controller
     kubectl -n api logs -f -l app=simple-echo-api 

Here are a few useful kubectl commands

     kubectl get namespaces  # Get all namespaces in the cluster
     kubectl get nodes -o wide # Get all worker nodes in the cluster
     kubectl get pods -n curity # Get all pods running in the curity namespace
     kubectl get pods -n kong # Get all pods running in the kong namespace
     kubectl get pods -n ingress-nginx # Get all pods running in the ingress-nginx namespace
     kubectl get pods -n api # Get all pods running in the api namespace
     kubectl get ingress -n curity # Get ingress rules defined in the curity namespace
     kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath="{.status.loadBalancer.ingress[0].ip}" # Get the public IP address of the load balancer
     kubectl -n kong get svc kong-kong-proxy -o jsonpath="{.status.loadBalancer.ingress[0].ip}" # Get the public IP address of the load balancer

Later, when you have finished with this tutorial, run the following command to free cloud resources

    ./deploy-idsvr-gke.sh --delete

Trust the Self-Signed Root CA Certificate

All of the URLs are accessible over HTTPS for secure communication. In this demo setup, self-signed certificates are used (refer to create-self-signed-certs.sh). Since self-signed certificates are not trusted by browsers by default, the root CA certificate has to be added to the operating system's trust store to prevent untrusted certificate warnings.
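For reference, a self-signed root CA can be produced with standard openssl commands along these lines. This is only an illustrative sketch; the exact steps in create-self-signed-certs.sh may differ.

    # Create a private key and a self-signed root CA certificate
    openssl genrsa -out example.gke.ca.key 2048
    openssl req -x509 -new -key example.gke.ca.key -sha256 -days 365 \
        -subj "/CN=example.gke root CA" -out example.gke.ca.pem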

root ca configuration

Add the self-signed root CA certificate certs/example.gke.ca.pem to the operating system trust store. The table below lists the relevant locations, and example commands follow.

| Operating System | Location |
| --- | --- |
| macOS | Keychain Access / System / Certificates |
| Windows | Microsoft Management Console / Certificates / Local Computer / Trusted Root Certification Authorities |
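
From the command line, the certificate can typically be imported as follows (both commands require administrator privileges):

    # macOS
    sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain certs/example.gke.ca.pem

    # Windows (elevated command prompt)
    certutil -addstore -f Root certs\example.gke.ca.pem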

Identity Server Configuration

The idsvr-config/helm-values.yaml.template file contains the Identity Server deployment configuration. You can add any additional configuration to the file if needed.

An exhaustive set of configuration options can be found in the GitHub repository.

Kong Ingress Controller Configuration

If you selected Kong as the Ingress controller, there are three important configuration files. Let's take a look at each of them:

  • kong-config/Dockerfile
  • kong-config/helm-values.yaml
  • kong-config/kong-phantom-token-plugin-crd.yaml.template

  FROM kong:2.8.1-alpine

  # Fetch from luarocks, and set git options if required
  USER root
  RUN git config --global url."https://".insteadOf git:// && \
      git config --global advice.detachedHead false && \
      luarocks install kong-phantom-token

  USER kong

kong-config/Dockerfile builds a custom Kong Ingress Controller image containing the phantom token plugin binaries.
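
The installation script takes care of this image as part of the deployment. If you wanted to build it manually, the command would look roughly like the following, with the image name and tag matching the Helm values shown next:

    # Build the custom Kong image from the kong-config directory
    docker build -t curity/kong-custom:2.8.1-alpine kong-config/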

  image:
    repository: curity/kong-custom
    tag: "2.8.1-alpine"

  proxy:
    enabled: true
    type: LoadBalancer

  ingressController:
    enabled: true
    installCRDs: false
    ingressClass: kong
    ingressClassAnnotations: {}
    rbac:
      create: true

  admin:
    tls:
      parameters: []

  env:
    database: "off"
    LOG_LEVEL: "error"
    plugins: 'bundled,phantom-token' 

The kong-config/helm-values.yaml file contains the Kong Ingress Controller configuration. You can add any additional configuration to the file if needed.

Please note the custom Docker image curity/kong-custom and the plugins: 'bundled,phantom-token' entry in the Helm values file.

  apiVersion: configuration.konghq.com/v1
  kind: KongPlugin
  metadata:
    name: phantom-token
  config:
    introspection_endpoint: http://curity-idsvr-runtime-svc.$idsvr_namespace.svc.cluster.local:8443/oauth/v2/oauth-introspect # k8s cluster internal URL
    client_id: api-gateway-client
    client_secret: Password123
    token_cache_seconds: 900
    scope: read
  plugin: phantom-token

The Kong Ingress Controller provides a Kubernetes-native way to deploy plugins in the cluster via the KongPlugin custom resource definition, as shown in the manifest above.

The introspection_endpoint looks different because it is the cluster-internal URL (of the form <service>.<namespace>.svc.cluster.local) of the Identity Server's token introspection endpoint.

After the plugin is deployed in the cluster, it can be activated on any Ingress resource simply by adding the konghq.com/plugins: phantom-token annotation to that resource, as sketched below.
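
For illustration, an Ingress for the example API could activate the plugin like this. The service name and port are assumptions for the sketch; the actual manifests generated by the installation may differ.

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: simple-echo-api
    namespace: api
    annotations:
      # Activate the phantom token plugin on this route
      konghq.com/plugins: phantom-token
  spec:
    ingressClassName: kong
    rules:
      - host: api.example.gke
        http:
          paths:
            - path: /echo
              pathType: Prefix
              backend:
                service:
                  name: simple-echo-api
                  port:
                    number: 3000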

NGINX Ingress Controller Configuration

If instead you are using NGINX as the Ingress controller, the configuration works a little differently. The Kubernetes NGINX Ingress Controller provides a set of annotations and ConfigMap keys to manage its configuration.

To load the phantom token module into the NGINX Ingress controller, the key main-snippet: load_module /usr/lib/nginx/modules/ngx_curity_http_phantom_token_module.so; should be added to the controller's ConfigMap.
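
A minimal sketch of such a ConfigMap is shown below. The name ingress-nginx-controller is the default used by the ingress-nginx Helm chart; adjust it to match your deployment.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ingress-nginx-controller
    namespace: ingress-nginx
  data:
    # Load the Curity phantom token module when NGINX starts
    main-snippet: "load_module /usr/lib/nginx/modules/ngx_curity_http_phantom_token_module.so;"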

To activate the plugin on a specific Ingress resource, the configuration needed is more verbose than Kong's, but still not too complex.

  nginx.ingress.kubernetes.io/configuration-snippet: |
    phantom_token on;
    phantom_token_client_credential api-gateway-client Password123;
    phantom_token_introspection_endpoint curity;
    phantom_token_scopes read;
  nginx.ingress.kubernetes.io/server-snippet: |
    location curity {
        proxy_pass http://curity-idsvr-runtime-svc.curity.svc.cluster.local:8443/oauth/v2/oauth-introspect;
        proxy_cache_methods POST;
        proxy_cache api_cache;
        proxy_cache_key $request_body;
        proxy_ignore_headers Set-Cookie;
    }
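
Note that the api_cache zone referenced by proxy_cache must exist. It would typically be declared via the http-snippet ConfigMap key, roughly as follows (the cache path and sizes are illustrative):

  http-snippet: |
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=api_cache:10m max_size=100m inactive=10m;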

Testing

Run the following steps to test the phantom token flow. These steps are the same irrespective of the type of Ingress controller deployed:

1. Obtain an opaque access token (a.k.a. a reference token) using the client credentials grant type

    curl --location --request POST 'https://login.example.gke/oauth/v2/oauth-token' \
      --header 'Content-Type: application/x-www-form-urlencoded' \
      --data-urlencode 'client_id=simple-echo-api' \
      --data-urlencode 'client_secret=Password123' \
      --data-urlencode 'scope=read' \
      --data-urlencode 'grant_type=client_credentials'

The response returned to the client includes an opaque access token.

  {"access_token":"_0XBPWQQ_453276d1-8c29-4913-be07-e1f16b0323e3","scope":"read","token_type":"bearer","expires_in":299}

2. Call the API proxy endpoint using the opaque access token

  curl https://api.example.gke/echo -H 'Authorization: Bearer _0XBPWQQ_453276d1-8c29-4913-be07-e1f16b0323e3' | jq .

3. Observe that the opaque access token was transformed into a by-value access token (a JWT) by the phantom token plugin before being passed to the upstream simple-echo-api. The API logs the JWT and also returns it in the response for easy verification.

 kubectl -n api logs -f -l app=simple-echo-api
 
 Simple Echo API listening on port : 3000 
 JWT token echoed back from the upstream API = eyJraWQiOiIxMjEyNzc5MTk1IiwieDV0IjoiN25LNEFDeDA3VHVWd0Q1d0pvejByYmR2YVhFIiwiYWxnIjoiUlMyNTYifQ.eyJqdGkiOiJjNGNjZTJkYy1hODdiLTQwMmEtOWY2Ny01ZTBmZDlhMmQ1ZjgiLCJkZWxlZ2F0aW9uSWQiOiI1OTYxYzI5ZS0zOWE1LTQ3NTItODVlNC1kODE1OGZlZTg2N2QiLCJleHAiOjE2NTIzMzQ4NDYsIm5iZiI6MTY1MjMzNDU0Niwic2NvcGUiOiJyZWFkIiwiaXNzIjoiaHR0cHM6Ly9sb2dpbi5leGFtcGxlLmdrZS9-Iiwic3ViIjoiNmEzNzVhMzAxYmJlNGQ4ZjliMjg5MmFiMjRkOWJkZjIzNTVmYjUyZTFjZWJiY2I0ODkwMTUyNWMwYWNkYjZiNyIsImF1ZCI6InNpbXBsZS1lY2hvLWFwaSIsImlhdCI6MTY1MjMzNDU0NiwicHVycG9zZSI6ImFjY2Vzc190b2tlbiJ9.wx2mumnlq_YVTfbxUdJXtwhwAANTkC7avBLhg5G-gi52Sc8veD8PMM3ZwszkE_3ejDAtXpizAI7mWnzMy45cHMTviJUbxjJf7-xsi3izKE8d-tmECfEJGRwCXmlG0kguwKwC1IStExU6-KBGQ1sfftkDBbp3mYsFDTGYxumtm0wInBf0_tuKP1m625h_Xs-S-4pBBRa7BvDGCq7bNzE8kbnRELQXXJxExEgMIeLtvaCg5nK5KYMfA20Ah-X65tkX4XbXZnrd8IkQK0nwsNMC0jzauw66PmsvHB2jEvR-QmQBx7D_Pgme62nqcvMDzPavzzsj5Pi4PGJ75XpFa9ptGw

Let's have a look at the decoded JWT. You can use OAuth Tools to decode JWTs and run various OAuth flows.

  eyJraWQiOiIxMjEyNzc5MTk1IiwieDV0IjoiN25LNEFDeDA3VHVWd0Q1d0pvejByYmR2YVhFIiwiYWxnIjoiUlMyNTYifQ.eyJqdGkiOiJjNGNjZTJkYy1hODdiLTQwMmEtOWY2Ny01ZTBmZDlhMmQ1ZjgiLCJkZWxlZ2F0aW9uSWQiOiI1OTYxYzI5ZS0zOWE1LTQ3NTItODVlNC1kODE1OGZlZTg2N2QiLCJleHAiOjE2NTIzMzQ4NDYsIm5iZiI6MTY1MjMzNDU0Niwic2NvcGUiOiJyZWFkIiwiaXNzIjoiaHR0cHM6Ly9sb2dpbi5leGFtcGxlLmdrZS9-Iiwic3ViIjoiNmEzNzVhMzAxYmJlNGQ4ZjliMjg5MmFiMjRkOWJkZjIzNTVmYjUyZTFjZWJiY2I0ODkwMTUyNWMwYWNkYjZiNyIsImF1ZCI6InNpbXBsZS1lY2hvLWFwaSIsImlhdCI6MTY1MjMzNDU0NiwicHVycG9zZSI6ImFjY2Vzc190b2tlbiJ9.wx2mumnlq_YVTfbxUdJXtwhwAANTkC7avBLhg5G-gi52Sc8veD8PMM3ZwszkE_3ejDAtXpizAI7mWnzMy45cHMTviJUbxjJf7-xsi3izKE8d-tmECfEJGRwCXmlG0kguwKwC1IStExU6-KBGQ1sfftkDBbp3mYsFDTGYxumtm0wInBf0_tuKP1m625h_Xs-S-4pBBRa7BvDGCq7bNzE8kbnRELQXXJxExEgMIeLtvaCg5nK5KYMfA20Ah-X65tkX4XbXZnrd8IkQK0nwsNMC0jzauw66PmsvHB2jEvR-QmQBx7D_Pgme62nqcvMDzPavzzsj5Pi4PGJ75XpFa9ptGw

Decoded JWT
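
Alternatively, the payload can be decoded locally. Here is a quick sketch using Python that skips signature verification, so use it only for inspection:

    python3 -c 'import base64, json, sys; p = sys.argv[1].split(".")[1]; print(json.dumps(json.loads(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4))), indent=2))' "<access-token>"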

Summary

You have learned how to deploy the Curity Identity Server and manage custom plugins with the Kong and NGINX Ingress Controllers in Google Kubernetes Engine, and tested the phantom token flow. For further information, please refer to Phantom Token Flows.