The Configure Deployments using Helm tutorial shows how to run admin and runtime workloads in a Kubernetes cluster, deploy configuration, and locate OAuth endpoints. For convenience, it uses HTTP endpoints on localhost. The Configure Deployed Environments tutorial explains the general approach to configuring HTTPS and external URLs. This tutorial explains Kubernetes-specific behaviors and summarizes how to use an API gateway in Kubernetes to provide external HTTPS URLs.
Ingress Overview
This tutorial provides an introduction on how to enable ingress traffic for a Kubernetes cluster. It addresses readers new to Kubernetes who want to expose endpoints of the Curity Identity Server or the Curity Token Handler from a Kubernetes cluster.
Expose Public Endpoints
To expose public endpoints, design domain-based URLs for the admin and runtime workloads.
- Admin Base URL: https://admin.testcluster.example
- OAuth Base URL: https://login.testcluster.example
Deploy an API Gateway
There are many cloud-native API gateways that can run in Kubernetes. Usually, an API gateway uses a service type of LoadBalancer and the Curity workloads use a service type of ClusterIP. The API gateway deployment typically includes a Kubernetes controller that watches particular Kubernetes resources and updates the underlying gateway product when required. The following subsections provide example deployments using NGINX and Kong. Use an equivalent approach for other API gateways.
When choosing an API gateway, consider a future-proof implementation that supports the newer Kubernetes Gateway API. For NGINX, you could use an NGINX Gateway Fabric deployment:
helm install nginx oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric \
  --namespace apigateway \
  --create-namespace \
  --set nginxGateway.replicaCount=2
Alternatively, deploy the NGINX ingress controller with the following example commands or customize the installation using the NGINX Helm chart.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace apigateway \
  --create-namespace \
  --set controller.replicaCount=2
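The later examples in this tutorial route traffic through a Kong deployment. A minimal sketch of installing Kong with its Helm chart follows, where the release name, namespace and replica count are illustrative assumptions rather than a recommended configuration:

helm repo add kong https://charts.konghq.com
helm repo update
# Install the Kong gateway into the apigateway namespace (illustrative settings)
helm install kong kong/kong \
  --namespace apigateway \
  --create-namespace \
  --set replicaCount=2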
View the API Gateway Service
Run a command of the following form to view the main resources deployed to the API gateway's namespace:
kubectl get all -n apigateway
A typical deployment consists of one or more API gateway pods and a service with type LoadBalancer. The service gets an external IP address that may initially remain in a Pending state.
NAME                            READY   STATUS    RESTARTS        AGE
pod/kong-kong-5b874fd59-97dqk   2/2     Running   2               3m48s
pod/kong-kong-5b874fd59-c2dxh   2/2     Running   2 (3m46s ago)   3m48s

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/kong-kong-proxy   LoadBalancer   10.96.159.208   <pending>     443:31025/TCP   3m48s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-kong   2/2     2            2           3m48s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-kong-5b874fd59   2         2         2       3m48s
Enable Network Connectivity
Usually network connectivity starts with the creation of DNS domains and external IP addresses. OAuth clients can then use domain-based URLs to call OAuth endpoints inside the Kubernetes cluster.
A Kubernetes deployment requires some additional components to route the client requests to the workloads in a cluster.
- A load balancer uses one or more external IP addresses to receive traffic for one or more host names.
- The load balancer can route traffic to the API gateway that terminates the requests.
- The API gateway can preprocess requests, then forward them to OAuth endpoints.
There are many possible ways to enable network connectivity to an API gateway. Usually, networking infrastructure spins up a load balancer when you deploy a service type of LoadBalancer to the cluster. Real-world deployments often use sophisticated load balancers that can route to multiple regions or availability zones.
Kubernetes clusters often run on cloud platforms that support provisioning of load balancers. Check out the following example deployments to run the Curity product on various cloud platforms and retrieve the external IP address of the load balancer.
Cloud providers enable control over load balancing behavior. For example, you might apply load balancer service annotations to the API gateway's Helm chart to spin up the load balancer infrastructure:
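As an illustration only, with the NGINX ingress controller on AWS you might set Helm values like the following, where the specific annotations assume the AWS Load Balancer Controller and a Network Load Balancer:

controller:
  service:
    annotations:
      # Assumes the AWS Load Balancer Controller provisions a Network Load Balancer
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip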
To integrate with cloud platforms, use techniques like IAM Roles for Service Accounts to grant the Kubernetes cluster permissions to create load balancer resources.
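For example, with IAM Roles for Service Accounts on AWS EKS, the controller that provisions load balancers runs under a Kubernetes service account annotated with an IAM role. The role ARN and account ID below are placeholders:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    # Placeholder IAM role that grants permission to create load balancer resources
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/aws-load-balancer-controller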
Create Ingress Routes
Once network connectivity is in place, the ingress configuration is identical in any deployed environment, whether in the cloud, on premises or on your local machine. Instruct the API gateway to route to a Kubernetes Service with the following steps:
- Create a Kubernetes HTTPRoute or Ingress resource in the namespace of the Kubernetes Service.
- Use the external hostname to match incoming requests.
- Use a reference to match the HTTPRoute or Ingress to the API gateway.
Save YAML resources to a file and apply them with the kubectl tool to expose external endpoints from the cluster. Some gateways may require you to use custom resource definitions to express ingress routes, but these follow the same concepts.
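For example, assuming you saved the route resources to a file named ingress-routes.yaml (an illustrative name), apply them as follows:

kubectl apply -f ingress-routes.yaml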
Enable Ingress using the Helm Chart
As a default option, enable ingress with the Helm chart, which currently only supports the use of Kubernetes Ingress resources. To enable ingress, add settings like the following to a Helm values.yaml file to expose runtime and admin endpoints from the cluster.
ingress:
  ingressClassName:
  runtime:
    enabled: true
    host: login.testcluster.example
  admin:
    enabled: true
    host: admin.testcluster.example
networkpolicy:
  enabled: true
  apigatewayNamespace: 'apigateway'
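To apply such settings, supply the values file when you install or upgrade the Helm release. The following is a minimal sketch that assumes the Curity Identity Server Helm chart from the curity Helm repository and a release named curity in the curity namespace:

helm repo add curity https://curityio.github.io/idsvr-helm
helm repo update
# Apply the values file to the release (names are illustrative)
helm upgrade --install curity curity/idsvr \
  --namespace curity \
  --create-namespace \
  --values values.yaml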
Network Policy
The Helm chart uses a Kubernetes NetworkPolicy where only runtime workloads can call admin service ports. To expose the Admin UI via an API gateway, make sure you grant the gateway's namespace access to the Admin UI.
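Conceptually, the extra access is an ingress rule that allows traffic from the gateway's namespace to the admin service port. The Helm chart's networkpolicy settings shown earlier generate the equivalent rule for you; the following sketch only illustrates the idea, and the pod selector labels are assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apigateway-to-admin
  namespace: curity
spec:
  # Assumed example labels; match the labels of your admin pods
  podSelector:
    matchLabels:
      role: curity-idsvr-admin
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: apigateway
    ports:
    - protocol: TCP
      port: 6749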
Create Ingress Routes Manually
If the Helm chart's ingress support does not meet your needs, create the API gateway resources manually, to expose OAuth and admin endpoints from the cluster.
The following example YAML resources expose the Admin UI, OAuth endpoints for the Curity Identity Server, and OAuth Agent endpoints for the Curity Token Handler, using HTTPRoute resources from the Kubernetes Gateway API. The parentRefs value selects the API gateway:
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: idsvr-admin-route
  namespace: curity
spec:
  parentRefs:
  - name: kong-gateway
    namespace: apigateway
  hostnames:
  - admin.testcluster.example
  rules:
  - matches:
    - path:
        value: /
    backendRefs:
    - name: curity-idsvr-admin-svc
      kind: Service
      port: 6749
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: idsvr-runtime-route
  namespace: curity
spec:
  parentRefs:
  - name: kong-gateway
    namespace: apigateway
  hostnames:
  - login.testcluster.example
  rules:
  - matches:
    - path:
        value: /
    backendRefs:
    - name: curity-idsvr-runtime-svc
      kind: Service
      port: 8443
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: idsvr-oauthagent-route
  namespace: applications
spec:
  parentRefs:
  - name: kong-gateway
    namespace: apigateway
  hostnames:
  - api.demoapp.example
  rules:
  - matches:
    - path:
        value: /oauthagent/example
    backendRefs:
    - name: curity-idsvr-runtime-svc
      kind: Service
      port: 8443
Call OAuth Endpoints
Once you have ingress working, teams can administer the Curity configuration using the Admin UI and remote OAuth clients can call OAuth endpoints.
Use URLs similar to the following to reach the Admin UI and OAuth endpoints:
curl -i http://admin.testcluster.example/admin
curl -i http://login.testcluster.example/oauth/v2/oauth-anonymous/.well-known/openid-configuration
Enable HTTPS URLs
Next, you typically need to upgrade external endpoints to use HTTPS, and there are multiple ways to enable this. Cloud platforms may enable you to terminate TLS at the load balancer and apply service annotations to the API gateway's service to use managed certificates that the cloud platform issues. The following annotation format can be used with AWS load balancers:
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
Cloud Native Certificate Automation
An alternative option that works in any deployed environment is to pass TCP-level traffic through the load balancer and terminate most TLS traffic at the API gateway. When external TLS traffic terminates inside the cluster, you must use a cloud-native solution to provision external certificates.
The popular cert-manager tool can integrate with paid or free certificate providers using issuer resources. The following example uses a cert-manager ClusterIssuer to get certificates for AWS Route 53 domains from the free Let's Encrypt provider. Other options are possible, like a self-signed issuer during rehearsal of deployments on a local computer.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: api-gateway-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@testcluster.example
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - dns01:
        route53: {}
The cert-manager workloads integrate with the chosen certificate issuer, which requires proof that you own the TLS hostnames. The issuer then returns a certificate to cert-manager, which saves the certificate and its private key to a Kubernetes secret. You can instruct cert-manager to get HTTPS certificates in various ways.
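For example, when you rely on Kubernetes Ingress resources rather than explicit Certificate resources, cert-manager's ingress-shim can request a certificate from an annotation. The sketch below assumes the ClusterIssuer defined above and the NGINX ingress class; the resource names are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: idsvr-runtime-ingress
  namespace: curity
  annotations:
    # Instructs cert-manager to issue a certificate into the secret named below
    cert-manager.io/cluster-issuer: api-gateway-issuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - login.testcluster.example
    secretName: testcluster-example-tls
  rules:
  - host: login.testcluster.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: curity-idsvr-runtime-svc
            port:
              number: 8443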
To take fine-grained control over certificate details, you can declare an explicit Certificate resource. The following example requests a single certificate with multiple subject alternative names, to match the domain names from the URLs of the Admin UI, OAuth endpoints and OAuth Agent endpoints.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-gateway-certificate
spec:
  secretName: testcluster-example-tls
  isCA: false
  duration: 2160h
  renewBefore: 1440h
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
  - admin.testcluster.example
  - login.testcluster.example
  - api.demoapp.example
  issuerRef:
    name: api-gateway-issuer
    kind: ClusterIssuer
    group: cert-manager.io
You can apply such a certificate to a Kubernetes Gateway resource, to associate hostnames with the certificate's Kubernetes secret, so that HTTPRoute resources that use those hostnames use HTTPS URLs. The API gateway's Kubernetes controller can also watch the certificate's secret and reload certificates and keys automatically upon renewal.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong-gateway
spec:
  gatewayClassName: kong
  listeners:
  - name: testcluster.example
    port: 443
    protocol: HTTPS
    hostname: '*.testcluster.example'
    allowedRoutes:
      namespaces:
        from: 'All'
    tls:
      mode: Terminate
      certificateRefs:
      - name: testcluster-example-tls
  - name: demoapp.example
    port: 443
    protocol: HTTPS
    hostname: '*.demoapp.example'
    allowedRoutes:
      namespaces:
        from: 'All'
    tls:
      mode: Terminate
      certificateRefs:
      - name: testcluster-example-tls
Deployment Example
The GitHub link at the top of this page provides some example Kubernetes deployment resources for a local computer. The ingress tutorial shows how to expose domain-based HTTPS OAuth URLs for the admin and runtime workloads, for both the Curity Identity Server and the Curity Token Handler:
- A local load balancer provider sets an external IP address for the API gateway.
- The local computer acts as a DNS service to enable the use of domain-based URLs.
- The OpenSSL tool creates a development root certificate authority.
- The cert-manager tool uses the root certificate authority to issue API gateway certificates.
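As a rough sketch of the last two steps, you might create a development root CA with OpenSSL, store it in a Kubernetes secret, and point a cert-manager CA issuer at that secret. The file names, subject and issuer name below are illustrative assumptions:

# Create a development root CA (illustrative parameters)
openssl ecparam -name prime256v1 -genkey -noout -out root-ca.key
openssl req -x509 -new -key root-ca.key -sha256 -days 365 \
  -subj '/CN=Development Root CA' -out root-ca.crt

# Store the root CA where cert-manager can read it (the cert-manager namespace for a ClusterIssuer)
kubectl create secret tls root-ca -n cert-manager \
  --cert=root-ca.crt --key=root-ca.key

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: api-gateway-issuer
spec:
  ca:
    secretName: root-ca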
For other Kubernetes environments, use similar techniques with a real load balancer and a real certificate authority.
Conclusion
In Kubernetes, use an API gateway to expose both APIs and OAuth endpoints to clients. The API gateway uses a Kubernetes Service of type LoadBalancer, and external networking infrastructure enables connectivity to the cluster from the outside world. Once clients have working OAuth endpoints, explore the following common next steps for your Curity product.
- For the Curity Identity Server, take control over identity data for clustered deployments and integrate your preferred Kubernetes Data Storage.
- For the Curity Token Handler, integrate Kubernetes API Gateway Plugins to finalize the endpoints for Single Page Applications.