Token Handler Deployment Example

Overview

The SPA using Token Handler Pattern code example showed how to deploy an end-to-end solution on a development computer. This article dives deeper into how the Docker-based deployment works, which maps to the Cloud Native Use Case from Token Handler Deployment Patterns. By reading this article you will gain a better understanding of how to progress the setup to your deployed environments.

Deployment Overview

The example deployment runs a number of containers within a small Docker Compose network. This results in the following overall use of tokens, when triggered by the SPA:

SPA Flow

Deployment is triggered from the SPA Code Example repository, by running the following two scripts:

| Script | Responsibility |
| --- | --- |
| build.sh | Builds simple application code into Docker containers |
| deploy.sh | Triggers deployment of the end-to-end code example |

The detailed work related to token handler deployment is externalized to the child scripts in the spa-deployments Git repository, to avoid complicating the repository for application code:

| Script | Responsibility |
| --- | --- |
| build.sh | Builds token handler components into Docker containers |
| deploy.sh | Does the detailed deployment work for the end-to-end code example |

Run parent scripts at least once

The parent scripts must be run at least once, so that the application-level Docker containers are built. If you want to study deployment in more depth, or troubleshoot a failed deployment, you can then re-run only the child scripts, as in the sketch below.
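
For example, a re-run of only the child scripts might look as follows. This is a minimal sketch: the relative path and the arguments are illustrative, and assume the spa-deployments repository was cloned next to the SPA code example.

cd ../spa-deployments
./build.sh standard kong
./deploy.sh standard kong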

Running the Deployment

It is possible to deploy the example SPA with either a Standard OAuth Agent, which uses the most mainstream OpenID Connect messages, or with a Financial-grade OAuth Agent, which uses state-of-the-art security standards and https URLs for all components. Both the ./build.sh and ./deploy.sh scripts support two optional command line arguments:

| Argument | Allowed Values |
| --- | --- |
| oauth-agent | standard or financial |
| oauth-proxy | kong, nginx or openresty |

To use the default Node.js OAuth Agent, and the default Kong reverse proxy to host the OAuth proxy, run the following commands:

./build.sh
./deploy.sh

To use an alternative setup you can instead supply the desired options as follows, with the OAuth Agent value specified first and the OAuth Proxy value specified second:

./build.sh standard nginx
./deploy.sh standard nginx

Environment Variables

For finer control, a number of environment variables can be set in the SPA's deploy.sh script, to adapt the configuration so that it better represents your company or product. This can be useful when evaluating the token handler pattern and demonstrating it to colleagues.

| Environment Variable | Description |
| --- | --- |
| BASE_DOMAIN | The base domain defaults to example.com but can be changed to a value of your choice, e.g. to represent a company or product name. Ensure that you use a valid domain suffix so that browsers do not drop cookies unexpectedly. |
| WEB_SUBDOMAIN | This is www by default, so that the SPA is located at http://www.example.com, but can be changed to a different value if you prefer, or to blank in order to navigate to the SPA using the base domain, at http://example.com. |
| API_SUBDOMAIN | This defaults to api and results in a public API base URL of http://api.example.com, which is the URL of the reverse proxy rather than of the physical API. |
| IDSVR_SUBDOMAIN | The subdomain used by the Identity Server defaults to login and results in a public base URL of http://login.example.com:8443. |

The code example remains an educational resource to promote understanding of the token handler architecture and how to deploy it. It is therefore not fully configurable, so settings such as redirect paths, passwords and other values are intentionally not customizable.

Using Custom Domains

An example customized setup is shown below, where the code example's deploy.sh script has been edited to suit a company's preferences:

export BASE_DOMAIN='myapp.com'
export WEB_SUBDOMAIN=''
export API_SUBDOMAIN='api'
export IDSVR_SUBDOMAIN='idsvr'

For the end-to-end setup to continue to work, you must also ensure that these domain names are resolvable, which is most commonly done by editing your local computer's hosts file. For this example the following entries would be configured:

127.0.0.1 myapp.com api.myapp.com idsvr.myapp.com
::1 localhost
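
Before browsing to the app, you can optionally confirm that the names resolve. A minimal check on a Linux computer (on macOS, ping can be used on its own):

getent hosts myapp.com api.myapp.com idsvr.myapp.com
ping -c 1 api.myapp.com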

You can then browse to a working example SPA that uses your custom URLs:

SPA with Custom URL

The example setup only deploys components to domains within the base domain, in order to keep the deployment easy to reason about. In the financial-grade scenario this enables the browser to trust a single wildcard development certificate:

Wildcard Certificate

Pointing to a Deployed Identity Server

In order for the SPA to receive first-party cookies, the SPA's web origin and the domain used by token handler components must be hosted within the same site, which in the above example is myapp.com. The Identity Server can, however, be hosted in a completely different domain, e.g. that of a test environment. To do so, configure the following additional environment variable, which prevents the Docker deployment from spinning up its own instance of the Curity Identity Server:

export EXTERNAL_IDSVR_ISSUER_URI=http://idsvr.mycompany.com:8443/oauth/v2/oauth-anonymous
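
Before deploying, it can be worth confirming that the external issuer is reachable and publishes discovery metadata. A minimal check, assuming standard OpenID Connect discovery and that curl and jq are installed:

curl -s "$EXTERNAL_IDSVR_ISSUER_URI/.well-known/openid-configuration" | jq .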

If the Docker deployment is done on a shared server that exposes the SPA and API domains, this type of setup enables you to publish the demo app to your team or other stakeholders, who can then all sign into the app using your company's own user accounts, e.g. to review the login user experience.

Importing Clients

To ensure that the end-to-end solution still works when using an external identity server and the standard scenario, you will need to import OAuth client details using the following XML, updated with your choice of web origin. This contains the code flow client for the SPA, and the introspection client used by the Phantom Token Plugin, which runs within the reverse proxy:

<config xmlns="http://tail-f.com/ns/config/1.0">
    <profiles xmlns="https://curity.se/ns/conf/base">
    <profile>
    <id>token-service</id>
    <type xmlns:as="https://curity.se/ns/conf/profile/oauth">as:oauth-service</type>
        <settings>
        <authorization-server xmlns="https://curity.se/ns/conf/profile/oauth">
        <client-store>
        <config-backed>
            <client>
                <id>spa-client</id>
                <client-name>spa-client</client-name>
                <description>SPA with Standard OAuth Agent</description>
                <secret>Password1</secret> <!-- Don't forget to change this -->
                <redirect-uris>http://myapp.com/</redirect-uris>
                <scope>openid</scope>
                <scope>profile</scope>
                <user-authentication>
                <allowed-post-logout-redirect-uris>http://myapp.com/</allowed-post-logout-redirect-uris>
                </user-authentication>
                <capabilities>
                    <code>
                    </code>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>spa-client</sector-identifier>
                </use-pairwise-subject-identifiers>
                    <validate-port-on-loopback-interfaces>true</validate-port-on-loopback-interfaces>
            </client>
            <client>
                <id>api-gateway-client</id>
                <client-name>api-gateway-client</client-name>
                <secret>Password1</secret> <!-- Don't forget to change this -->
                <capabilities>
                    <introspection/>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>api-gateway-client</sector-identifier>
                </use-pairwise-subject-identifiers>
            </client>
        </config-backed>
        </client-store>
        </authorization-server>
        </settings>
    </profile>
    </profiles>
</config>

For the financial-grade scenario, you will first need to ensure that Mutual TLS is allowed on the token endpoint of the Curity Identity Server, then use the Facilities menu of the Admin UI to import the root CA at resources/certs/example.ca.pem as a Client Trust Store. The following client settings can then be imported:

<config xmlns="http://tail-f.com/ns/config/1.0">
    <profiles xmlns="https://curity.se/ns/conf/base">
    <profile>
    <id>token-service</id>
    <type xmlns:as="https://curity.se/ns/conf/profile/oauth">as:oauth-service</type>
        <settings>
        <authorization-server xmlns="https://curity.se/ns/conf/profile/oauth">
        <client-store>
        <config-backed>
            <client>
                <id>spa-client</id>
                <client-name>spa-client</client-name>
                <description>SPA with Financial-grade OAuth Agent</description>
                <mutual-tls>
                    <client-dn>CN=financial-grade-spa, OU=myapp.com, O=Curity AB, C=SE</client-dn>
                    <trusted-ca>financial_grade_client_ca</trusted-ca>
                </mutual-tls>
                <redirect-uris>https://myapp.com/</redirect-uris>
                <scope>openid</scope>
                <scope>profile</scope>
                <user-authentication>
                    <allowed-post-logout-redirect-uris>https://myapp.com/</allowed-post-logout-redirect-uris>
                </user-authentication>
                <capabilities>
                    <code>
                    </code>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>spa-client</sector-identifier>
                </use-pairwise-subject-identifiers>
                <validate-port-on-loopback-interfaces>true</validate-port-on-loopback-interfaces>
            </client>
            <client>
                <id>api-gateway-client</id>
                <client-name>api-gateway-client</client-name>
                <secret>Password1</secret> <!-- Don't forget to change this -->
                <capabilities>
                    <introspection/>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>api-gateway-client</sector-identifier>
                </use-pairwise-subject-identifiers>
            </client>
        </config-backed>
        </client-store>
        </authorization-server>
        </settings>
    </profile>
    </profiles>
</config>

Once the import is complete, you can then run the demo SPA while using your preconfigured Identity Server. This will enable logins with familiar user accounts and your preferred authentication options and login user experience.

External Identity Server

Deployment Internals

Since the code example deployment is based on the cloud native use case, back-end components make HTTP requests inside the cluster and use internal host names. This is more efficient than routing calls via external URLs, and also reduces the number of endpoints you need to expose to the internet.

Internal HTTP Calls

First, the OAuth Agent makes back channel token requests to the Identity Server, before returning tokens to the browser as encrypted HTTP-only cookies. When the SPA makes API requests, the Phantom Token Plugin calls the Identity Server's introspection endpoint to get, and cache, a JWT, which it forwards to the example API. Finally, the example API downloads, and caches, the JWKS token signing keys from the Identity Server, so that it can validate JWT access tokens.
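
If you want to verify one of these internal calls, you can run a quick check from inside the Docker Compose network. This is a hypothetical example: the container name is illustrative, and it assumes curl is available in the image:

docker compose exec example-api \
  curl -s 'http://login-internal.example.com:8443/oauth/v2/oauth-anonymous/jwks'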

OAuth Endpoints

In total, the following main endpoint URLs are used with the standard OAuth Agent. The first and last of these are invoked by the browser and must therefore use external URLs. In a real-world cluster environment, such as Kubernetes, these URLs will look considerably different, with unrelated domain names and different TLS certificates for internal and external URLs. If you have pointed the code example to your own Identity Server, this will also be the case.

| Endpoint Name | Code Example Default URL |
| --- | --- |
| Authorize | http://login.example.com:8443/oauth/v2/oauth-authorize |
| Token | http://login-internal.example.com:8443/oauth/v2/oauth-token |
| Introspect | http://login-internal.example.com:8443/oauth/v2/oauth-introspect |
| JWKS | http://login-internal.example.com:8443/oauth/v2/oauth-anonymous/jwks |
| User Info | http://login-internal.example.com:8443/oauth/v2/oauth-userinfo |
| End Session | http://login.example.com:8443/oauth/v2/oauth-session/logout |

GitHub Repository

The deployment logic is provided in a separate repository from the main SPA code example, so that deployment can be managed and extended separately from the application code. If you clone the repo via this page's download link, you can then study the files to understand the scripted logic:

Deployment Repo

In order to understand the deployment, these are the main files to browse:

| File | Usage |
| --- | --- |
| build.sh | Called from the build.sh script of the SPA code example, to do the main downloading and building of resources for token handler components |
| deploy.sh | Called from the deploy.sh script of the SPA code example, to manage configuration updates, then run docker compose |
| docker-compose.yml | Expresses each component to be deployed, with its configuration values, some of which are calculated at runtime |

The deploy.sh script deals with calculating the OAuth endpoints, which involves a metadata lookup when pointing to an external Identity Server; a sketch of this technique follows the variable list below. Some of the components use configuration files instead, in which case the envsubst command is used to produce the final file from a template file. Environment variables are then exported from deploy.sh to the Docker Compose file:

export SCHEME
export BASE_DOMAIN
export WEB_DOMAIN
export API_DOMAIN
export IDSVR_DOMAIN
export INTERNAL_DOMAIN
export IDSVR_BASE_URL
export IDSVR_INTERNAL_BASE_URL
export ISSUER_URI
export AUTHORIZE_ENDPOINT
export AUTHORIZE_INTERNAL_ENDPOINT
export TOKEN_ENDPOINT
export USERINFO_ENDPOINT
export INTROSPECTION_ENDPOINT
export JWKS_ENDPOINT
export LOGOUT_ENDPOINT
export ENCRYPTION_KEY
export SSL_CERT_FILE_PATH
export SSL_CERT_PASSWORD
export CORS_ENABLED
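
The following fragment illustrates the general technique rather than the repo's exact code. It assumes curl and jq are installed, and the template file name is illustrative:

# Look up endpoints from the OpenID Connect discovery metadata
METADATA=$(curl -s "$ISSUER_URI/.well-known/openid-configuration")
export AUTHORIZE_ENDPOINT=$(echo "$METADATA" | jq -r .authorization_endpoint)
export TOKEN_ENDPOINT=$(echo "$METADATA" | jq -r .token_endpoint)
export JWKS_ENDPOINT=$(echo "$METADATA" | jq -r .jwks_uri)

# Render a component's final configuration file from a template
envsubst < ./reverse-proxy/kong.template.yml > ./reverse-proxy/kong.yml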

Cross-Origin Resource Sharing (CORS)

The example deployment also configures the OAuth Agent and OAuth Proxy components with a CORS_ENABLED flag. This must be set to false in same-site setups, where the token handler components and the web static content are hosted behind the same reverse proxy, as in this example:

| Component | URL |
| --- | --- |
| Web Static Content | https://www.example.com |
| OAuth Agent | https://www.example.com/oauth-agent |
| API Routes | https://www.example.com/api |

In other deployments, the SPA will send HTTP OPTIONS pre-flight requests, and CORS_ENABLED must be set to true. The OAuth Agent and OAuth Proxy will then implement CORS request validation and write CORS response headers, which enables the SPA to successfully send requests to the API domain.
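
To observe this behavior, a pre-flight request can be simulated manually. A minimal sketch with curl, using this tutorial's example domains:

# Simulate the browser's pre-flight request to the API domain
curl -i -X OPTIONS 'http://api.example.com/api' \
  -H 'Origin: http://www.example.com' \
  -H 'Access-Control-Request-Method: GET'

# With CORS_ENABLED=true, the response should include headers such as
# Access-Control-Allow-Origin: http://www.example.com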

OAuth Agent Configuration

The Docker Compose file provides configuration values for the OAuth endpoints. The OAuth Agent uses the environment variables exported from the deploy.sh script:

oauth-agent:
  image: oauthagent-standard:1.0.0
  hostname: oauthagent-${INTERNAL_DOMAIN}
  environment:
    PORT: 3001
    TRUSTED_WEB_ORIGIN: 'http://${WEB_DOMAIN}'
    AUTHORIZE_ENDPOINT: '${AUTHORIZE_ENDPOINT}'
    TOKEN_ENDPOINT: '${TOKEN_ENDPOINT}'
    USERINFO_ENDPOINT: '${USERINFO_ENDPOINT}'
    LOGOUT_ENDPOINT: '${LOGOUT_ENDPOINT}'
    CLIENT_ID: 'spa-client'
    REDIRECT_URI: 'http://${WEB_DOMAIN}/'
    POST_LOGOUT_REDIRECT_URI: 'http://${WEB_DOMAIN}/'
    SCOPE: 'openid profile'
    COOKIE_DOMAIN: '${API_DOMAIN}'
    COOKIE_NAME_PREFIX: 'example'
    COOKIE_ENCRYPTION_KEY: '${ENCRYPTION_KEY}'
    CORS_ENABLED: '${CORS_ENABLED}'

Reverse Proxy Plugins

Each reverse proxy is deployed as a custom Docker image, which downloads plugins at build time. For Kong this requires the following commands, to download the Lua files and deploy them to standard locations. The Dockerfile commands for NGINX and OpenResty are similar.

FROM kong:3.0.0-alpine

USER root
RUN luarocks install kong-oauth-proxy   1.3.0 && \
    luarocks install kong-phantom-token 2.0.0

USER kong

The OAuth Proxy Plugin translates secure cookies from the SPA to opaque access tokens. It is typically combined with the Phantom Token Plugin, which translates opaque access tokens to JWT access tokens. The JWT is then forwarded to the target API. All of this keeps the security plumbing out of the example API's code.

The example deployment provides an SPA end-to-end solution with both plugins configured, and the GitHub files can be inspected to understand how this works. For further details on the individual plugins for Kong, NGINX and OpenResty, see the separate plugin tutorials.

Reverse Proxy

If Kong Open Source is used as the reverse proxy hosted in front of APIs, then the following Docker Compose deployment is used, including a Kong YAML configuration file containing API routes:

kong_reverse-proxy:
    image: custom_kong:3.0.0-alpine
    hostname: reverseproxy
    ports:
      - 80:3000
    volumes:
      - ./reverse-proxy/kong.yml:/usr/local/kong/declarative/kong.yml
    environment:
      KONG_DATABASE: 'off'
      KONG_DECLARATIVE_CONFIG: '/usr/local/kong/declarative/kong.yml'
      KONG_PROXY_LISTEN: '0.0.0.0:3000'
      KONG_LOG_LEVEL: 'info'
      KONG_PLUGINS: 'bundled,oauth-proxy,phantom-token'
      KONG_NGINX_HTTP_LUA_SHARED_DICT: 'phantom-token 10m'

If NGINX or OpenResty is used as the reverse proxy then different configuration resources are deployed with the custom Docker image:

nginx_reverse-proxy:
    image: custom_nginx:1.21.3-alpine
    hostname: reverseproxy
    ports:
      - 80:3000
    volumes:
      - ./components/reverse-proxy/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./components/reverse-proxy/nginx/default.conf:/etc/nginx/templates/default.conf.template

OAuth Agent Routes

The SPA will call the OAuth Agent via the reverse proxy and no plugins are used for these requests. For Kong this is expressed in the kong.yml file:

- name: oauth-agent
  url: $SCHEME://oauthagent-$INTERNAL_DOMAIN:3001/oauth-agent
  routes:
  - name: oauth-agent-api-route
    paths:
    - /oauth-agent

For NGINX or OpenResty the proxy_pass directive is instead used, within the default.conf file:

location /oauth-agent {
    proxy_pass $SCHEME://oauthagent-$INTERNAL_DOMAIN:3001/oauth-agent;
}

API Routes

For API routes the two plugins are configured, with the OAuth Proxy running first and the Phantom Token running next. The configuration looks like this for Kong:

- name: business-api
  url: http://api-$INTERNAL_DOMAIN:3002
  routes:
  - name: business-api-route
    paths:
    - /api

  plugins:
  - name: oauth-proxy
    config:
      cookie_name_prefix: example
      encryption_key: $ENCRYPTION_KEY
      trusted_web_origins:
      - $SCHEME://$WEB_DOMAIN
      cors_enabled: $CORS_ENABLED
  - name: phantom-token
    config:
      introspection_endpoint: $INTROSPECTION_ENDPOINT
      client_id: api-gateway-client
      client_secret: Password1
      token_cache_seconds: 900
      trusted_web_origins:
      - $SCHEME://$WEB_DOMAIN

For NGINX and OpenResty a cache must first be configured, and the syntax is different, but the same tasks are performed:

location /api/ {

    rewrite_by_lua_block {

        local oauthProxy = require 'resty.oauth-proxy'
        local oauthProxyConfig = {
            cookie_name_prefix = 'example',
            encryption_key = '$ENCRYPTION_KEY',
            trusted_web_origins = {
                '$SCHEME://$WEB_DOMAIN'
            },
            cors_enabled = true
        }
        oauthProxy.run(oauthProxyConfig)

        local phantomToken = require 'resty.phantom-token'
        local phantomTokenConfig = {
            introspection_endpoint = '$INTROSPECTION_ENDPOINT',
            client_id = 'api-gateway-client',
            client_secret = 'Password1',
            cache_name = 'phantom-token',
            time_to_live_seconds = 900
        }
        phantomToken.execute(phantomTokenConfig)
    }

    proxy_pass $SCHEME://api-$INTERNAL_DOMAIN:3002/;
}

The repo's deploy.sh script creates a new encryption key on every deployment, which is later used to protect cookies returned to the browser via AES256 encryption. The openssl tool creates a compliant 32-byte key, which is then configured as 64 hex characters. This hex string is deployed to both the OAuth Agent and the OAuth Proxy.

ENCRYPTION_KEY=$(openssl rand 32 | xxd -p -c 64)
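
As a quick sanity check, the generated value should be a single line of exactly 64 hexadecimal characters, since AES256 requires a 32-byte key:

echo -n "$ENCRYPTION_KEY" | wc -c    # expect 64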

If the Docker-based deployment is re-run while the browser contains a cookie from the previous deployment, the example SPA deals with this reliably, by handling 401 errors from the token handler API and prompting the user to re-authenticate. Even if you do not want to frequently renew the cookie encryption key, we recommend coding this logic in your SPAs.

Designing your Deployment Pipeline

In a real company setup you will follow continuous delivery best practices for your platform. This will always involve building code once, then deploying down a pipeline, e.g. from DEV to STAGING to PRODUCTION, with different configuration each time. To add token handler components to your process and support your SPAs, you need to take the following main steps, for which a minimal configuration sketch follows the list:

  • Understand the architecture both inside and outside the cluster
  • Identify each API and OAuth URL used in the end-to-end solution
  • Identify the settings for the OAuth Agent, the OAuth Proxy Plugin and the Phantom Token Plugin
  • Configure these values in template files or via environment variables
  • Use your deployment tools to push token handler components down the pipeline
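
As an illustration of the last two steps, per-stage values could be selected before running the deployment. The STAGE variable and domain values here are purely hypothetical:

# Select stage-specific configuration before deploying
case "$STAGE" in
  dev)     export BASE_DOMAIN='dev.myapp.com'     ;;
  staging) export BASE_DOMAIN='staging.myapp.com' ;;
  prod)    export BASE_DOMAIN='myapp.com'         ;;
esac
./deploy.sh standard kong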

Conclusion

The spa-deployments code example is a reference implementation to show how to deploy the moving parts of an SPA that uses the token handler pattern, and this tutorial has explained the key behavior. The deployment is only done to a demo level, using Docker Compose. In a real company setup, some areas, such as secret management, would need hardening in line with best practices for your platform.

Once you have made the investment to deploy token handler components, you will have a very clean separation of concerns. This will enable your apps to use an optimal SPA architecture, with best browser security, simple code, and an architecture that can scale to many SPAs and APIs.