
Expose OAuth Endpoints from Kubernetes


The Configure Deployments using Helm tutorial shows how to run admin and runtime workloads in a Kubernetes cluster, deploy configuration and locate OAuth endpoints. For convenience, it uses HTTP endpoints on localhost. The Configure Deployed Environments tutorial explains the general approach to configuring HTTPS and external URLs. This tutorial explains Kubernetes-specific behaviors and summarizes how to use an API gateway in Kubernetes to provide external HTTPS URLs.

Ingress Overview

This tutorial introduces how to enable ingress traffic into a Kubernetes cluster. It addresses readers who are new to Kubernetes and want to expose endpoints of the Curity Identity Server or the Curity Token Handler from the cluster.

Expose Public Endpoints

To expose public endpoints, design domain-based URLs for the admin and runtime workloads.

  • Admin Base URL: https://admin.testcluster.example
  • OAuth Base URL: https://login.testcluster.example

Deploy an API Gateway

There are many cloud native API gateways that can run in Kubernetes. Usually, an API gateway uses a service type of LoadBalancer and the Curity workloads use a service type of ClusterIP. The API gateway deployment typically includes a Kubernetes controller that watches particular Kubernetes resources and updates the underlying gateway product when required. The following subsections provide example deployments using NGINX and Kong. Use an equivalent approach for other API gateways.

When choosing an API gateway, consider a future-proof implementation that supports the newer Kubernetes Gateway API. For NGINX you could use an NGINX Gateway Fabric deployment:

bash
helm install nginx oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric \
  --namespace apigateway \
  --create-namespace \
  --set nginxGateway.replicaCount=2

Alternatively, deploy the NGINX ingress controller with the following example commands or customize the installation using the NGINX Helm chart.

bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace apigateway \
  --create-namespace \
  --set controller.replicaCount=2

View the API Gateway Service

Run a command of the following form to view the main resources deployed to the API gateway's namespace:

bash
kubectl get all -n apigateway

A typical deployment consists of one or more API gateway pods and a service of type LoadBalancer. The service receives an external IP address, which may initially show as pending.

text
NAME                            READY   STATUS    RESTARTS        AGE
pod/kong-kong-5b874fd59-97dqk   2/2     Running   2               3m48s
pod/kong-kong-5b874fd59-c2dxh   2/2     Running   2 (3m46s ago)   3m48s

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/kong-kong-proxy   LoadBalancer   10.96.159.208   <pending>     443:31025/TCP   3m48s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-kong   2/2     2            2           3m48s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-kong-5b874fd59   2         2         2       3m48s

Enable Network Connectivity

Usually network connectivity starts with the creation of DNS domains and external IP addresses. OAuth clients can then use domain-based URLs to call OAuth endpoints inside the Kubernetes cluster.

Gateway Flow

A Kubernetes deployment requires additional components to route client requests to the workloads in the cluster.

  • A load balancer uses one or more external IP addresses to receive traffic for one or more host names.
  • The load balancer can route traffic to the API gateway that terminates the requests.
  • The API gateway can preprocess requests, then forward them to OAuth endpoints.

There are many possible ways to enable network connectivity to an API gateway. Usually, networking infrastructure spins up a load balancer when you deploy a service of type LoadBalancer to the cluster. Real-world deployments often use sophisticated load balancers that can route to multiple regions or availability zones.

Kubernetes clusters often run on cloud platforms that support provisioning of load balancers. Check out the following example deployments to run the Curity product on various cloud platforms and retrieve the external IP address of the load balancer.
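
For example, once the networking infrastructure assigns an external address, a command like the following reads it from the API gateway's service. The service name kong-kong-proxy matches the earlier example output, so substitute the name that your own gateway deployment creates:

bash
kubectl get service kong-kong-proxy -n apigateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'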

Cloud providers enable control over load balancing behavior. For example, you might apply load balancer service annotations to the API gateway's Helm chart to spin up the load balancer infrastructure:
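
The exact annotation keys depend on the cloud provider and on the gateway's Helm chart. As a sketch, the following values assume the NGINX ingress controller chart on AWS, where the annotations request an internet-facing Network Load Balancer; treat the annotation keys and the controller.service.annotations path as assumptions to verify against your own provider and chart:

yaml
controller:
  service:
    annotations:
      # AWS load balancer annotations (verify against your provider's documentation)
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing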

To integrate with cloud platforms, use techniques like IAM Roles for Service Accounts to grant the Kubernetes cluster permissions to create load balancer resources.
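
As an illustration, on AWS EKS you can bind an IAM role to a controller's service account with an annotation. The service account name, namespace and role ARN below are hypothetical placeholders:

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  # hypothetical name and namespace; match the service account your gateway or load balancer controller uses
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/load-balancer-controller-role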

Create Ingress Routes

Once network connectivity is in place, the ingress configuration is identical in any deployed environment, whether in the cloud, on premise or on your local machine. Instruct the API gateway to route to a Kubernetes Service with the following steps:

  • Create a Kubernetes HTTPRoute or Ingress resource in the namespace of the Kubernetes Service.
  • Use the external hostname to match incoming requests.
  • Use a reference to bind the HTTPRoute or Ingress to the API gateway.

Save YAML resources to a file and apply them with the kubectl tool to expose external endpoints from the cluster. Some gateways may require custom resource definitions to express ingress routes, but these follow the same concepts.
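
For example, assuming you saved the resources to a hypothetical file named ingress-routes.yaml:

bash
kubectl apply -f ingress-routes.yaml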

Enable Ingress using the Helm Chart

As a default option, enable ingress with the Helm chart, which currently supports only Kubernetes Ingress resources. Add settings like the following to a Helm values.yaml file to expose runtime and admin endpoints from the cluster.

yaml
ingress:
  ingressClassName:
  runtime:
    enabled: true
    host: login.testcluster.example
  admin:
    enabled: true
    host: admin.testcluster.example
networkpolicy:
  enabled: true
  apigatewayNamespace: 'apigateway'

Network Policy

The Helm chart applies a Kubernetes NetworkPolicy so that only runtime workloads can call admin service ports. To expose the Admin UI via an API gateway, make sure you grant the gateway's namespace access to the Admin UI.
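
If you manage the policy yourself rather than through the Helm chart, a minimal sketch of such a rule might look like the following. The pod label is a hypothetical placeholder, so use the labels that your admin pods actually carry:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apigateway-to-admin
  namespace: curity
spec:
  # hypothetical selector; match the labels on the admin workload's pods
  podSelector:
    matchLabels:
      role: curity-idsvr-admin
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: apigateway
    ports:
    - protocol: TCP
      port: 6749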

Create Ingress Routes Manually

If the Helm chart's ingress support does not meet your needs, create the API gateway resources manually, to expose OAuth and admin endpoints from the cluster.

The following example YAML resources expose the Admin UI, OAuth endpoints for the Curity Identity Server and OAuth Agent endpoints for the Curity Token Handler using an HTTPRoute resource from the Kubernetes Gateway API. The parentRefs value selects the API gateway:

yaml
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: idsvr-admin-route
  namespace: curity
spec:
  parentRefs:
  - name: kong-gateway
    namespace: apigateway
  hostnames:
  - admin.testcluster.example
  rules:
  - matches:
    - path:
        value: /
    backendRefs:
    - name: curity-idsvr-admin-svc
      kind: Service
      port: 6749
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: idsvr-runtime-route
  namespace: curity
spec:
  parentRefs:
  - name: kong-gateway
    namespace: apigateway
  hostnames:
  - login.testcluster.example
  rules:
  - matches:
    - path:
        value: /
    backendRefs:
    - name: curity-idsvr-runtime-svc
      kind: Service
      port: 8443
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: idsvr-oauthagent-route
  namespace: applications
spec:
  parentRefs:
  - name: kong-gateway
    namespace: apigateway
  hostnames:
  - api.demoapp.example
  rules:
  - matches:
    - path:
        value: /oauthagent/example
    backendRefs:
    - name: curity-idsvr-runtime-svc
      kind: Service
      port: 8443

Call OAuth Endpoints

Once you have ingress working, teams can administer the Curity configuration using the Admin UI and remote OAuth clients can call OAuth endpoints.

Use URLs similar to the following to reach the Admin UI and OAuth endpoints:

bash
curl -i http://admin.testcluster.example/admin
curl -i http://login.testcluster.example/oauth/v2/oauth-anonymous/.well-known/openid-configuration

Enable HTTPS URLs

Next, you typically need to upgrade external endpoints to use HTTPS, and there are multiple ways to do so. Cloud platforms may enable you to terminate TLS at the load balancer and apply service annotations to the API gateway's service to use managed certificates that the cloud platform issues. The following annotation format can be used with AWS load balancers:

yaml
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx

Cloud Native Certificate Automation

An alternative option that works in any deployed environment is to pass TCP level traffic through the load balancer and terminate most TLS traffic at the API gateway. When external TLS traffic terminates inside the cluster you must use a cloud native solution to provision external certificates.

The popular cert-manager tool can integrate with paid or free certificate providers using issuer resources. The following example uses a cert-manager ClusterIssuer to get certificates for AWS Route 53 domains from the free Let's Encrypt provider. Other options are possible, like a self-signed issuer during rehearsal of deployments on a local computer.

yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: api-gateway-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@testcluster.example
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - dns01:
        route53: {}

The cert-manager workloads integrate with the chosen certificate issuer and prove to it that you own the TLS hostnames. The issuer then returns a certificate to cert-manager, which saves the certificate and its private key to a Kubernetes secret. You can instruct cert-manager to get HTTPS certificates in various ways.

To take fine-grained control over certificate details, you can declare an explicit Certificate resource. The following example requests a single certificate with multiple subject alternative names, to match the domain names from the URLs of the Admin UI, OAuth endpoints and OAuth Agent endpoints.

yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-gateway-certificate
spec:
  secretName: testcluster-example-tls
  isCA: false
  duration: 2160h
  renewBefore: 1440h
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
  - admin.testcluster.example
  - login.testcluster.example
  - api.demoapp.example
  issuerRef:
    name: api-gateway-issuer
    kind: ClusterIssuer
    group: cert-manager.io

You can apply such a certificate to a Kubernetes Gateway resource to associate hostnames with the certificate's Kubernetes secret, so that HTTPRoute resources for those hostnames serve HTTPS URLs. The API gateway's Kubernetes controller can also watch the certificate's secret and reload certificates and keys automatically upon renewal.

yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong-gateway
spec:
  gatewayClassName: kong
  listeners:
  - name: testcluster.example
    port: 443
    protocol: HTTPS
    hostname: '*.testcluster.example'
    allowedRoutes:
      namespaces:
        from: 'All'
    tls:
      mode: Terminate
      certificateRefs:
      - name: testcluster-example-tls
  - name: demoapp.example
    port: 443
    protocol: HTTPS
    hostname: '*.demoapp.example'
    allowedRoutes:
      namespaces:
        from: 'All'
    tls:
      mode: Terminate
      certificateRefs:
      - name: testcluster-example-tls
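
To confirm that cert-manager issued the certificate and stored it in the secret that the listeners reference, you can inspect the resources with commands like the following. The namespace is an assumption based on this tutorial's examples, so use the namespace where you created the Certificate resource:

bash
kubectl get certificate api-gateway-certificate -n apigateway
kubectl get secret testcluster-example-tls -n apigateway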

Deployment Example

The GitHub link at the top of this page provides some example Kubernetes deployment resources for a local computer. The ingress tutorial shows how to expose HTTPS domain-based OAuth URLs for the admin and runtime workloads, for both the Curity Identity Server and the Curity Token Handler:

  • A local load balancer provider sets an external IP address for the API gateway.
  • The local computer acts as a DNS service to enable the use of domain-based URLs.
  • The OpenSSL tool creates a development root certificate authority.
  • The cert-manager tool uses the root certificate authority to issue API gateway certificates.

For other Kubernetes environments, use similar techniques with a real load balancer and a real certificate authority.

Conclusion

In Kubernetes, use an API gateway to expose both APIs and OAuth endpoints to clients. The API gateway uses a Kubernetes Service of type LoadBalancer and external networking infrastructure enables connectivity to the cluster from the outside world. Once clients have working OAuth endpoints, explore the following common next steps for your Curity product.

  • For the Curity Identity Server, take control over identity data for clustered deployments and integrate your preferred Kubernetes Data Storage.
  • For the Curity Token Handler, integrate Kubernetes API Gateway Plugins to finalize the endpoints for Single Page Applications.
