Token Handler Deployment Example

Overview

The SPA using Token Handler Pattern code example showed how to deploy an end-to-end solution on a development computer. This article dives deeper into how the Docker based deployment works, which maps to the Cloud Native Use Case from Token Handler Deployment Patterns. By reading it you will gain a better understanding of how to progress the setup to your deployed environments.

Deployment Overview

The deployment is done via two scripts: the first downloads code for each component and builds Docker images, and the second applies configuration and runs containers within a small Docker Compose network. The end result is a number of components and the following overall use of tokens when triggered by the SPA:

SPA Flow

Note that Docker Compose is a development environment, with limitations on URLs and ports, whereas in a production cloud native setup, all of the URLs called from the browser would use port 443 and requests from the SPA to the Identity Server would also be routed via the reverse proxy.

Running the Deployment

It is possible to run the example SPA with either a Standard OAuth Agent, which uses the most mainstream OpenID Connect messages, or with a Financial-grade OAuth Agent, which uses state-of-the-art security standards and https URLs for all components.

The deployment is done by running the ./build.sh script followed by the ./deploy.sh script. The scenario to use is supplied as a command line argument and defaults to standard if omitted:

./build.sh standard
./deploy.sh standard

Environment Variables

For finer control, a number of environment variables can be set in the SPA's deploy.sh script, so that the configuration can be adapted to better represent your company or product. This can be useful when evaluating the token handler pattern and demonstrating it to colleagues.

Environment Variable | Description
BASE_DOMAIN | The base domain defaults to example.com but can be changed to a value of your choice, e.g., to represent a company or product name. Ensure that you use a valid domain suffix so that browsers do not drop cookies unexpectedly.
WEB_SUBDOMAIN | This is www by default, so that the SPA is located at http://www.example.com, but it can be changed to a different value if you prefer, or to blank in order to navigate to the SPA using the base domain, at http://example.com.
API_SUBDOMAIN | This defaults to api and results in a public API base URL of http://api.example.com:3000, which is the URL of the reverse proxy rather than of the physical API.
IDSVR_SUBDOMAIN | The subdomain used by the Identity Server defaults to login and results in a public base URL of http://login.example.com:8443.

The code example remains an educational resource to promote understanding of the token handler architecture and how to deploy it. It is therefore not fully configurable, and settings such as redirect paths, passwords and other values are intentionally fixed.

Using Custom Domains

An example customized setup is shown below, where the code example's deploy.sh script has been edited to suit a company's preferences:

export BASE_DOMAIN='myapp.com'
export WEB_SUBDOMAIN=''
export API_SUBDOMAIN='api'
export IDSVR_SUBDOMAIN='idsvr'

For the end-to-end setup to continue to work, you must also ensure that these domain names are resolvable. This is most commonly done by editing your local computer's hosts file, where the following entries would be configured for this example:

127.0.0.1 myapp.com api.myapp.com idsvr.myapp.com
::1 localhost

You can then browse to a working example SPA that uses your custom URLs:

SPA with Custom URL

The example setup only deploys components to domains within the base domain, in order to keep the deployment easy to reason about. In the financial-grade scenario this enables the browser to trust a single wildcard development certificate:

Wildcard Certificate
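
As an illustration only, and not the code example's own certificate tooling (the example ships its certificates under resources/financial/certs), a wildcard development certificate for a custom base domain could be created with openssl along these lines, where all file names are hypothetical:

# Hypothetical sketch: create a development root CA, then a wildcard certificate for *.myapp.com
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Example Development CA" \
  -keyout myapp.ca.key -out myapp.ca.pem

# Create a key pair and certificate signing request for the wildcard host name
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=*.myapp.com" \
  -keyout myapp.ssl.key -out myapp.ssl.csr

# Sign the certificate with the CA and include the subject alternative names browsers require
openssl x509 -req -in myapp.ssl.csr -CA myapp.ca.pem -CAkey myapp.ca.key -CAcreateserial \
  -days 365 -extfile <(printf "subjectAltName=DNS:*.myapp.com,DNS:myapp.com") \
  -out myapp.ssl.pem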

Pointing to a Deployed Identity Server

In order for the SPA to receive first-party cookies, the SPA's Web Origin and the domain used by token handler components must be hosted in the same site, which in the above example is myapp.com. The Identity Server, however, can be hosted in a completely different domain, e.g. that of a test environment. To do so, configure the following additional environment variable, which prevents the Docker deployment from spinning up its own instance of the Curity Identity Server:

export EXTERNAL_IDSVR_ISSUER_URI=http://idsvr.mycompany.com:8443/oauth/v2/oauth-anonymous

If the Docker deployment is done on a shared server that exposes the SPA and API domains, this type of setup enables you to publish the demo app to your team or other stakeholders, who can then all sign in to the app using your company's own user accounts, e.g., to review the login user experience.

Importing Clients

To ensure that the end-to-end solution still works when using an external identity server and the standard scenario, you will need to import OAuth client details using the following XML, updated with your choice of web origin. It contains the code flow client for the SPA, and the introspection client used by the phantom token plugin, which runs within the reverse proxy:

<config xmlns="http://tail-f.com/ns/config/1.0">
    <profiles xmlns="https://curity.se/ns/conf/base">
    <profile>
    <id>token-service</id>
    <type xmlns:as="https://curity.se/ns/conf/profile/oauth">as:oauth-service</type>
        <settings>
        <authorization-server xmlns="https://curity.se/ns/conf/profile/oauth">
        <client-store>
        <config-backed>
            <client>
                <id>spa-client</id>
                <client-name>spa-client</client-name>
                <description>SPA with Standard OAuth Agent</description>
                <secret>Password1</secret> <!-- Don't forget to change this -->
                <redirect-uris>http://myapp.com/</redirect-uris>
                <scope>openid</scope>
                <scope>profile</scope>
                <user-authentication>
                <allowed-post-logout-redirect-uris>http://myapp.com/</allowed-post-logout-redirect-uris>
                </user-authentication>
                <capabilities>
                    <code>
                    </code>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>spa-client</sector-identifier>
                </use-pairwise-subject-identifiers>
                <validate-port-on-loopback-interfaces>true</validate-port-on-loopback-interfaces>
            </client>
            <client>
                <id>api-gateway-client</id>
                <client-name>api-gateway-client</client-name>
                <secret>Password1</secret> <!-- Don't forget to change this -->
                <capabilities>
                    <introspection/>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>api-gateway-client</sector-identifier>
                </use-pairwise-subject-identifiers>
            </client>
        </config-backed>
        </client-store>
        </authorization-server>
        </settings>
    </profile>
    </profiles>
</config>

For the financial-grade scenario, you will first need to ensure that Mutual TLS is allowed on the token endpoint of the Curity Identity Server, then use the Facilities menu of the Admin UI to import the root CA at resources/financial/certs/example.ca.pem as a Client Trust Store. The following client settings can then be imported:

<config xmlns="http://tail-f.com/ns/config/1.0">
    <profiles xmlns="https://curity.se/ns/conf/base">
    <profile>
    <id>token-service</id>
    <type xmlns:as="https://curity.se/ns/conf/profile/oauth">as:oauth-service</type>
        <settings>
        <authorization-server xmlns="https://curity.se/ns/conf/profile/oauth">
        <client-store>
        <config-backed>
            <client>
                <id>spa-client</id>
                <client-name>spa-client</client-name>
                <description>SPA with Financial-grade OAuth Agent</description>
                <mutual-tls>
                    <client-dn>CN="financial-grade-spa, OU=myapp.com, O=Curity AB, C=SE"</client-dn>
                    <trusted-ca>financial_grade_client_ca</trusted-ca>
                </mutual-tls>
                <redirect-uris>https://myapp.com/</redirect-uris>
                <scope>openid</scope>
                <scope>profile</scope>
                <user-authentication>
                    <allowed-post-logout-redirect-uris>https://myapp.com/</allowed-post-logout-redirect-uris>
                </user-authentication>
                <capabilities>
                    <code>
                    </code>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>spa-client</sector-identifier>
                </use-pairwise-subject-identifiers>
                <validate-port-on-loopback-interfaces>true</validate-port-on-loopback-interfaces>
            </client>
            <client>
                <id>api-gateway-client</id>
                <client-name>api-gateway-client</client-name>
                <secret>Password1</secret> <!-- Don't forget to change this -->
                <capabilities>
                    <introspection/>
                </capabilities>
                <use-pairwise-subject-identifiers>
                    <sector-identifier>api-gateway-client</sector-identifier>
                </use-pairwise-subject-identifiers>
            </client>
        </config-backed>
        </client-store>
        </authorization-server>
        </settings>
    </profile>
    </profiles>
</config>

Once the import is complete, you can run the demo SPA against your preconfigured Identity Server. This enables logins with familiar user accounts, using your preferred authentication options and login user experience.

External Identity Server

Deployment Internals

Since the code example deployment is based on the cloud native use case, back end components make HTTP requests inside the cluster and use internal host names. This is more efficient than routing calls outside the cluster, and also reduces the number of endpoints you need to expose to the internet.

Internal HTTP Calls

First, the OAuth Agent makes back channel token requests to the Identity Server before returning tokens to the browser as encrypted HTTP-only cookies. When the SPA makes API requests, the Phantom Token Plugin calls the Identity Server's introspection endpoint to get a JWT, caches it, and forwards it to the example API. Finally, the example API downloads and caches the JWKS token signing keys from the Identity Server, so that it can validate JWT access tokens.
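
To make these internal calls more concrete, the following curl commands sketch how the introspection and JWKS requests could look from inside the Docker Compose network, using the internal host name and the api-gateway-client credentials shown elsewhere in this article. The $OPAQUE_ACCESS_TOKEN variable is assumed to hold an opaque access token received from the SPA's cookie:

# Phantom token introspection: exchange an opaque access token for a JWT
# (the Accept header asks the Curity Identity Server for the JWT form of the response)
curl -s -u 'api-gateway-client:Password1' \
  -H 'Accept: application/jwt' \
  -d "token=$OPAQUE_ACCESS_TOKEN" \
  http://login-internal.example.com:8443/oauth/v2/oauth-introspect

# JWKS download, used by the example API to validate JWT access tokens
curl -s http://login-internal.example.com:8443/oauth/v2/oauth-anonymous/jwks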

OAuth Endpoints

In total, the following main endpoint URLs are used with the standard OAuth Agent. The first and last of these are invoked by the browser and must therefore use external URLs. In a real world cluster environment, such as Kubernetes, these URLs will look considerably different, with unrelated domain names and different TLS certificates for internal and external URLs. The same applies if you have pointed the code example to your own Identity Server.

Endpoint Name | Code Example Default URL
Authorize | http://login.example.com:8443/oauth/v2/oauth-authorize
Token | http://login-internal.example.com:8443/oauth/v2/oauth-token
Introspect | http://login-internal.example.com:8443/oauth/v2/oauth-introspect
JWKS | http://login-internal.example.com:8443/oauth/v2/oauth-anonymous/jwks
User Info | http://login-internal.example.com:8443/oauth/v2/oauth-userinfo
End Session | http://login.example.com:8443/oauth/v2/oauth-session/logout

GitHub Repository

The deployment logic is provided in a separate repository from the main SPA code example, so that deployment resources can be managed and extended separately from the application code. If you clone the repo via this page's download link, you can then inspect the files for the standard deployment scenario:

Deployment Repo

In order to understand the deployment, these are the main files to browse:

File | Usage
build.sh | Called from the build.sh script of the SPA code example, to do the main downloading and building of resources for token handler components
deploy.sh | Called from the deploy.sh script of the SPA code example, to manage configuration updates and then run docker compose
docker-compose.yml | Expresses each component to be deployed, with its configuration values, some of which are calculated at runtime

The deploy.sh script calculates the OAuth endpoints, which involves a metadata lookup when pointing to an external Identity Server. Some components use configuration files instead, in which case the envsubst command produces the final file from a template file. The following environment variables are then exported from deploy.sh to the Docker Compose file:

export BASE_DOMAIN
export WEB_DOMAIN
export API_DOMAIN
export INTERNAL_DOMAIN
export IDSVR_BASE_URL
export IDSVR_INTERNAL_BASE_URL
export AUTHORIZE_ENDPOINT
export TOKEN_ENDPOINT
export INTROSPECTION_ENDPOINT
export JWKS_ENDPOINT
export USERINFO_ENDPOINT
export LOGOUT_ENDPOINT
export ENCRYPTION_KEY
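
As a minimal sketch of the two techniques mentioned above, the endpoints could be looked up from OpenID Connect metadata with curl and jq, and a templated configuration file could be rendered with envsubst. The jq dependency and the kong.template.yml file name are assumptions for illustration, and not necessarily what the repo's script does:

# Look up OAuth endpoints from the Identity Server's OpenID Connect metadata (requires jq)
METADATA=$(curl -s "$EXTERNAL_IDSVR_ISSUER_URI/.well-known/openid-configuration")
export AUTHORIZE_ENDPOINT=$(echo "$METADATA" | jq -r '.authorization_endpoint')
export TOKEN_ENDPOINT=$(echo "$METADATA" | jq -r '.token_endpoint')
export INTROSPECTION_ENDPOINT=$(echo "$METADATA" | jq -r '.introspection_endpoint')
export JWKS_ENDPOINT=$(echo "$METADATA" | jq -r '.jwks_uri')

# Render a configuration file from a template, substituting the exported variables
envsubst < ./reverse-proxy/kong.template.yml > ./reverse-proxy/kong.yml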

OAuth Agent Configuration

The Docker Compose file provides configuration values for the OAuth endpoints, and the OAuth Agent uses the environment variables exported from the deploy.sh script:

oauth-agent:
    image: oauthagent-standard:1.0.0
    hostname: oauthagent-${INTERNAL_DOMAIN}
    environment:
      PORT: 3001
      TRUSTED_WEB_ORIGIN: 'http://${WEB_DOMAIN}'
      AUTHORIZE_ENDPOINT: '${AUTHORIZE_ENDPOINT}'
      TOKEN_ENDPOINT: '${TOKEN_ENDPOINT}'
      USERINFO_ENDPOINT: '${USERINFO_ENDPOINT}'
      LOGOUT_ENDPOINT: '${LOGOUT_ENDPOINT}'
      CLIENT_ID: 'spa-client'
      REDIRECT_URI: 'http://${WEB_DOMAIN}/'
      POST_LOGOUT_REDIRECT_URI: 'http://${WEB_DOMAIN}/'
      SCOPE: 'openid profile'
      COOKIE_DOMAIN: '${API_DOMAIN}'
      COOKIE_NAME_PREFIX: 'example'
      COOKIE_ENCRYPTION_KEY: '${ENCRYPTION_KEY}'

Reverse Proxy

The example deployment uses Kong Open Source as the reverse proxy hosted in front of APIs. Its deployment consists of some plugin files and a YAML configuration file containing API routes:

reverse-proxy:
    image: kong:2.6.0-alpine
    hostname: reverseproxy
    ports:
      - 3000:3000
    volumes:
      - ./reverse-proxy/kong.yml:/usr/local/kong/declarative/kong.yml
      - ./kong-phantom-token-plugin/plugin:/usr/local/share/lua/5.1/kong/plugins/phantom-token
      - ./oauth-proxy-plugin/plugin/plugin.lua:/usr/local/share/lua/5.1/kong/plugins/oauth-proxy/access.lua
      - ./oauth-proxy-plugin/plugin/handler.lua:/usr/local/share/lua/5.1/kong/plugins/oauth-proxy/handler.lua
      - ./oauth-proxy-plugin/plugin/schema.lua:/usr/local/share/lua/5.1/kong/plugins/oauth-proxy/schema.lua
      - ./oauth-proxy-plugin/plugin/kong-oauth-proxy-1.0.0-1.rockspec:/usr/local/share/lua/5.1/kong/plugins/oauth-proxy/oauth-proxy-1.0.0-1.rockspec
    environment:
      KONG_DATABASE: 'off'
      KONG_DECLARATIVE_CONFIG: '/usr/local/kong/declarative/kong.yml'
      KONG_PROXY_LISTEN: '0.0.0.0:3000'
      KONG_LOG_LEVEL: 'info'
      KONG_PLUGINS: 'bundled,oauth-proxy,phantom-token'

The OAuth Proxy Plugin translates secure cookies from the SPA to opaque access tokens, and the Phantom Token Plugin translates opaque access tokens to JWT access tokens. All of this keeps the security plumbing out of the example API's code. For further details on how these components work, see the separate tutorials for each plugin.
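
As an illustration of this request flow, the sketch below shows a browser-style call to the example API through the reverse proxy. The cookie name example-at is an assumption based on the example cookie_name_prefix, and the comments describe what each plugin does to the request:

# 1. The browser sends the SPA's encrypted cookie to the reverse proxy
#    (the cookie name 'example-at' is assumed from the 'example' prefix)
curl -i 'http://api.example.com:3000/api' \
  -H 'Origin: http://www.example.com' \
  -H 'Cookie: example-at=<encrypted cookie value>'

# 2. The oauth-proxy plugin decrypts the cookie and forwards the opaque access token
# 3. The phantom-token plugin introspects the opaque token and replaces it with a JWT,
#    so the example API only ever receives: Authorization: Bearer <jwt access token>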

API Routes

Routes are expressed in the generated kong.yml file. The first route simply forwards requests to the OAuth Agent:

- name: oauth-agent
  url: http://oauthagent-$INTERNAL_DOMAIN:3001/oauth-agent
  routes:
  - name: oauth-agent-api-route
    paths:
    - /oauth-agent

The route to the example API must do the work of getting from an incoming secure cookie to the JWT access token that the API expects, so both plugins are configured on it:

- name: business-api
  url: http://api-$INTERNAL_DOMAIN:3002
  routes:
  - name: business-api-route
    paths:
    - /api

  plugins:
  - name: oauth-proxy
    config:
      cookie_name_prefix: example
      encryption_key: $ENCRYPTION_KEY
      trusted_web_origins:
      - http://$WEB_DOMAIN
      cors_enabled: true
  - name: phantom-token
    config:
      introspection_endpoint: $INTROSPECTION_ENDPOINT
      client_id: api-gateway-client
      client_secret: Password1
      token_cache_seconds: 900
      trusted_web_origins:
      - http://$WEB_DOMAIN

The repo's deploy.sh script creates a new encryption key on every deployment. This is used later to protect cookies returned to the browser, via AES256 encryption. The openssl tool creates a compliant 32 byte key, which is represented as 64 hex characters. This hex string is then deployed to both the OAuth Agent and the OAuth Proxy.

ENCRYPTION_KEY=$(openssl rand 32 | xxd -p -c 64)
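
As a quick sanity check, not part of the repo's script, you can confirm that the generated value has the expected length of 64 hex characters, representing a 32 byte key:

echo -n "$ENCRYPTION_KEY" | wc -c   # prints 64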

If the Docker based deployment is re-run while the browser still contains a cookie from the previous deployment, the example SPA deals with this reliably, by handling 401 errors from the token handler API and prompting the user to re-authenticate. Even if you do not plan to renew the cookie encryption key frequently, we recommend coding this logic into your SPAs.

Designing your Deployment Pipeline

In a real company setup you will follow continuous delivery best practices for your platform. This will always involve building code once, then deploying down a pipeline, e.g. from DEV to STAGING to PRODUCTION, with different configuration each time. To add token handler components to your process and support your SPAs, you need to take the following main steps:

  • Understand the architecture both inside and outside the cluster
  • Identify each API and OAuth URL used in the end-to-end solution
  • Identify the settings for the OAuth Agent, the OAuth Proxy Plugin and the Phantom Token Plugin
  • Configure these values in template files or via environment variables, as sketched after this list
  • Use your deployment tools to push token handler components down the pipeline
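
The following sketch, which is not part of the code example, shows one way stage-specific values could be selected via an ENVIRONMENT variable supplied by your deployment tooling; all names and values are illustrative:

# Select stage-specific values before rendering templates and deploying
case "$ENVIRONMENT" in
  DEV)        export BASE_DOMAIN='dev.myapp.com' ;;
  STAGING)    export BASE_DOMAIN='staging.myapp.com' ;;
  PRODUCTION) export BASE_DOMAIN='myapp.com' ;;
esac

export API_DOMAIN="api.$BASE_DOMAIN"
export IDSVR_BASE_URL="https://login.$BASE_DOMAIN"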

Conclusion

The spa-deployments code example is a reference implementation to show how to deploy the moving parts of an SPA that uses the token handler pattern, and this tutorial has explained the key behavior. The deployment is only done to a demo level, using Docker Compose. In a real company setup, some areas, such as secret management, would need hardening in line with best practices for your platform.

Once you have made the investment to deploy token handler components, you will have a very clean separation of concerns. This will enable your apps to use an optimal SPA architecture, with the best browser security, simple code and an architecture that can be scaled to many SPAs and APIs.