[Diagram: integrating plugins in a Kubernetes Ingress (plugin-kubernetes.png)]

Integrating Plugins in a Kubernetes Ingress


Overview

Kubernetes has become the de facto platform for running containers in the cloud. With its steep rise in popularity, the ecosystem in and around Kubernetes has also evolved at a rapid pace.

In this article we discuss how to use a Kubernetes Ingress controller as an API gateway, and how to run custom plugins that change the logic applied to HTTP requests sent to APIs.

Use Cases

There are many potential use cases for API gateway plugins, as summarized in API Gateway Guides. Two important responsibilities are translating opaque access tokens to JWTs, and translating secure cookies to JWTs. Implementing these tasks in a gateway keeps the API code simple.

Ingress Controller Requirements

Traffic routing within a Kubernetes cluster network is handled internally by Kubernetes. By default, all pods running in a cluster are allowed to talk to each other over the cluster's internal network, unless custom network policies are deployed to restrict the traffic flow.
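
For illustration, the following is a minimal NetworkPolicy sketch that restricts inbound traffic to pods in an api namespace so that only pods in the Ingress controller's namespace can reach them. The namespace names and labels are assumptions made for the sake of the example.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
  namespace: api                            # assumed application namespace
spec:
  podSelector: {}                           # select all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kong # assumed Ingress controller namespace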

Ingress traffic requires more consideration, since it opens up applications for external access. The simplest option for exposing a Kubernetes service to the outside world is a NodePort service, but it has drawbacks: it exposes only one service per port and, by default, uses non-standard ports in the range 30000 to 32767.
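
As a simple illustration, the following is a minimal NodePort Service sketch, where the service name, selector and port numbers are hypothetical. The nodePort value must fall within the cluster's NodePort range, which defaults to 30000-32767.

yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-echo-api-nodeport   # hypothetical service name
  namespace: api
spec:
  type: NodePort
  selector:
    app: simple-echo-api           # hypothetical pod label
  ports:
  - port: 80                       # cluster-internal service port
    targetPort: 8080               # hypothetical container port
    nodePort: 30080                # must be within the default 30000-32767 range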

Some of the NodePort limitations can be reduced by placing a load balancer in front of the NodePort services, but this still does not offer the flexibility needed for enterprise use cases. Many companies prefer to handle cross-cutting concerns such as authentication, SSL termination, load balancing, proxying and response caching at the reverse proxy layer, and this is where the Ingress controller fits in.

An Ingress controller saves cost by keeping the number of load balancers to a minimum. It also improves security, by keeping Kubernetes services accessible only from within the cluster and acting as a single point of entry into the cluster.

Depending on the type of Ingress controller used, it can meet some or most of these cross-cutting concerns.

Choosing an Ingress Controller

When starting with Kubernetes, companies typically choose the default Ingress controller provided by their selected Kubernetes platform.

This can be an acceptable approach for getting started, but before moving to production, companies should consider at least the following questions:

  • Does the selected Ingress controller support their load balancing scenarios?
  • Does it provide adequate traffic routing and control possibilities?
  • Can it be deployed and extended or customized easily?
  • Does it work seamlessly with the selected cloud platform?

NGINX provides extensibility via plugins, though they may need to be coded in low-level languages such as C. Some NGINX spin-offs, such as Kong Open Source, support writing plugins in higher-level languages such as Lua.

Typically, all of the mainstream Ingress controllers provide Kubernetes-native ways to deploy them into the cluster, for example via Helm charts.

Both NGINX and Kong provide Helm charts for easy deployment & configuration.
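
As a rough sketch, a Helm-based installation of Kong could override a few chart values along the following lines. The keys shown are based on the kong/kong chart's documented options and may differ between chart versions, so treat this as an assumption rather than a definitive configuration.

yaml
# values.yaml overrides for the kong/kong Helm chart (keys may vary by chart version)
ingressController:
  enabled: true        # run the Kong Ingress Controller alongside the Kong proxy
proxy:
  type: LoadBalancer   # expose the Kong proxy through a cloud load balancer

An installation command would then look something like helm install kong kong/kong -n kong --create-namespace -f values.yaml.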

Extensibility

More and more companies are hosting API endpoints in the cloud, and as a hosting best practice they place a gateway in front of APIs, so that API servers are not directly exposed to the internet. An Ingress controller can act as an API gateway, or you can place a separate API gateway behind the Ingress controller and before APIs. Either way the gateway should be extensible, enabling you to perform tasks such as custom routing.

Different Ingress controllers provide varying levels of extensibility and customization. Plugins add the much-needed customization layer to Ingress controllers.

The Kong Ingress controller has a wide range of plugins to support different customization use cases. It provides an easy, Kubernetes-native way to add plugins that customize the Ingress controller's behavior.

For example, to add and configure Curity's popular phantom-token plugin in the Kong Ingress controller, a KongPlugin Kubernetes custom resource can be used as shown below.

yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: phantom-token
config:
  introspection_endpoint: http://curity-idsvr-runtime-svc.curity.svc.cluster.local:8443/oauth/v2/oauth-introspect # k8s cluster internal URL
  client_id: api-gateway-client
  client_secret: Password123
  token_cache_seconds: 900
  scope: read
plugin: phantom-token

Kong also provides an easy annotation, konghq.com/plugins, to enable the plugin on the desired Ingress resources. The following configuration enables the phantom-token plugin on all endpoints under api.example.gke:

yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      konghq.com/plugins: phantom-token
    name: echo-api-ingress
    namespace: api
  spec:
    ingressClassName: kong
    rules:
    - host: api.example.gke
      http:
        paths:
        - backend:
            service:
              name: simple-echo-api-service
              port:
                name: http-port
          path: /
          pathType: ImplementationSpecific
    tls:
    - hosts:
      - api.example.gke
      secretName: example-gke-tls
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The equivalent configuration for the NGINX Ingress controller injects the phantom token module's directives through snippet annotations:

yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/configuration-snippet: |
        phantom_token on;
        phantom_token_client_credential api-gateway-client Password123;
        phantom_token_introspection_endpoint curity;
        phantom_token_scopes read;
      nginx.ingress.kubernetes.io/server-snippet: |
        location curity {
          proxy_pass http://curity-idsvr-runtime-svc.curity.svc.cluster.local:8443/oauth/v2/oauth-introspect;
          proxy_cache_methods POST;
          proxy_cache api_cache;
          proxy_cache_key $request_body;
          proxy_ignore_headers Set-Cookie;
        }
    name: echo-api-ingress
    namespace: api
  spec:
    rules:
    - host: api.example.gke
      http:
        paths:
        - backend:
            service:
              name: simple-echo-api-service
              port:
                name: http-port
          path: /
          pathType: ImplementationSpecific
    tls:
    - hosts:
      - api.example.gke
      secretName: example-gke-tls
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

If you are interested in using plugins with the NGINX Ingress controller, you can configure a main-snippet in the NGINX ConfigMap to load the plugin/module into the controller, and then use the nginx.ingress.kubernetes.io/configuration-snippet and nginx.ingress.kubernetes.io/server-snippet annotations, as shown above, to inject the plugin configuration into the nginx.conf file.
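
For example, a ConfigMap entry along the following lines could load a dynamic module into the controller. The ConfigMap name and the module path are assumptions here; they depend on the installation and on how the module is packaged into the controller image.

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # the controller's ConfigMap name depends on the installation
  namespace: ingress-nginx
data:
  main-snippet: |
    # hypothetical module path; depends on how the module is built into the image
    load_module /usr/lib/nginx/modules/ngx_curity_http_phantom_token_module.so;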

Plugin Development

Plugins can be developed and tested locally. For examples of plugins that include integration tests, see the NGINX OAuth Proxy plugins written in C and Lua.

Plugin Deployment

Refer to the Curity Deployment in GKE article for an end-to-end implementation and deployment of the NGINX and Kong Ingress controllers, as shown in the diagram above.

Conclusion

Ingress controllers are an important part of the Kubernetes ecosystem, and with plugins they can be extended to handle most of the cross-cutting concerns at the Ingress layer.
