Integrating with OpenResty

Overview

The OpenResty reverse proxy is derived from the main NGINX product and has proven popular due to its out-of-the-box Lua support. We will show how to quickly integrate an API plugin that implements Curity’s Phantom Token Pattern, to securely manage access tokens. We will demonstrate a working deployment and then run the following workflow:

  • An OAuth Client will get an opaque access token
  • The opaque access token will be sent to an API via OpenResty
  • The opaque token will be introspected by an OpenResty plugin, to get a JWT
  • The plugin will forward the JWT access token to the API
  • The API will use the JWT to implement its authorization

Run OpenResty

As a prerequisite, first ensure that Docker Desktop is installed locally. Next, create a minimal docker-compose.yml file:

version: '3.8'
services:
  openresty:
    image: openresty/openresty:1.19.9.1-2-bionic
    ports:
    - 8080:80

Then run OpenResty with the following command, which will download the docker image:

docker compose up --force-recreate

At this stage you can browse to http://localhost:8080 locally to see the standard OpenResty home page:

OpenResty Welcome

Build a Custom Image

Start by cloning the Lua Phantom Token Plugin code repository with the following command:

git clone https://github.com/curityio/lua-nginx-phantom-token-plugin

Then create a custom Dockerfile to copy in the phantom token plugin and download its dependencies:

FROM openresty/openresty:1.19.9.1-2-bionic

RUN luarocks install lua-resty-http
RUN luarocks install lua-resty-string
RUN luarocks install lua-resty-jwt

COPY lua-nginx-phantom-token-plugin/plugin/phantom-token-plugin.lua /usr/local/openresty/lualib

We can then build the custom Docker image with the following command:

docker build -f Dockerfile -t custom_openresty:1.19.9.1-2-bionic .

Configure OpenResty

Next create a file called default.conf that defines the HTTP behavior OpenResty will use and integrates the Lua plugin:

error_log logs/error.log info;
lua_shared_dict phantom-token 10m;

server {

    server_name localhost;
    listen 8080;

    location / {
        root   /usr/local/openresty/nginx/html;
        index  index.html index.htm;
    }

    location /api {

        resolver 127.0.0.11;

        rewrite_by_lua_block {

            local config = {
                introspection_endpoint = 'http://curityserver:8443/oauth/v2/oauth-introspect',
                client_id = 'introspection-client',
                client_secret = 'Password1',
                cache_name = 'phantom-token',
                time_to_live_seconds = 900
            }

            local phantomTokenPlugin = require 'phantom-token-plugin'
            phantomTokenPlugin.execute(config)
        }

        proxy_pass http://host.docker.internal:3000/api;
    }
}

Note that OpenResty will send requests to two development URLs, and the Docker embedded DNS server is used to resolve these host names:

Base URL | Component
http://curityserver:8443 | Curity Identity Server, running within the Docker network
http://host.docker.internal:3000 | An example API, which will be run on the host computer
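To make the plugin's runtime behavior concrete, the following Node.js sketch builds the kind of introspection request the rewrite phase sends for each opaque access token. It is an illustration of the logic, not the plugin's actual Lua code; one assumption worth noting is that sending an `Accept: application/jwt` header asks the Curity Identity Server to return the JWT itself rather than a JSON introspection result.

```javascript
// Illustrative sketch (not the plugin's actual Lua code) of the
// introspection request sent for each opaque access token
function buildIntrospectionRequest(config, opaqueToken) {

    // The introspection client authenticates with HTTP basic credentials
    const credentials = Buffer.from(
        `${config.client_id}:${config.client_secret}`).toString('base64');

    return {
        url: config.introspection_endpoint,
        method: 'POST',
        headers: {
            'authorization': `Basic ${credentials}`,
            'content-type': 'application/x-www-form-urlencoded',

            // Asking for application/jwt causes the Curity Identity Server
            // to return the JWT directly rather than a JSON result
            'accept': 'application/jwt',
        },
        body: `token=${encodeURIComponent(opaqueToken)}`,
    };
}

// Build a request using the same settings as the default.conf example
const request = buildIntrospectionRequest({
    introspection_endpoint: 'http://curityserver:8443/oauth/v2/oauth-introspect',
    client_id: 'introspection-client',
    client_secret: 'Password1',
}, '8ebb9c9d-4085-43e4-a406-4c4ab5da18fc');
```

On a successful introspection response the plugin caches the JWT in the shared dictionary, so repeated calls with the same opaque token avoid a network round trip.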

Deploy Components

Next update the Docker Compose file to use the custom Docker image and the OpenResty configuration. If required, also deploy the Curity Identity Server, as in the example below:

version: '3.8'
services:
  custom_openresty:
    image: custom_openresty:1.19.9.1-2-bionic
    hostname: openrestyserver
    ports:
    - 8080:8080
    volumes:
    - ./default.conf:/etc/nginx/conf.d/default.conf

  curity-idsvr:
    image: curity.azurecr.io/curity/idsvr:6.5.0
    hostname: curityserver
    ports:
     - 6749:6749
     - 8443:8443
    volumes:
     - ./license.json:/opt/idsvr/etc/init/license/license.json
    environment:
      PASSWORD: 'Password1'

Then re-run the deployment command:

docker compose up --force-recreate

Next run a command to call the API via OpenResty, to verify that the plugin is running:

curl http://localhost:8080/api

This will result in a 401 unauthorized response, since the request does not contain a valid access token, so it is not forwarded to the API:

{
    "code":"unauthorized",
    "message":"Missing, invalid or expired access token"
}

If you are deploying a fresh instance of the Curity Identity Server you will then need to run the basic setup wizard, as summarized in the First Configuration page. For this tutorial you can accept all default settings.

Run an API on the Host

You can run any API of your choice, but this tutorial will provide a default Node.js API that runs on port 3000, with just enough code to verify that a JWT is correctly received:

const http = require('http');
const port = 3000;

const server = http.createServer((req, res) => {

    const auth = req.headers['authorization'];
    let jwt = '[NONE]';
    if (auth && auth.startsWith('Bearer ')) {
        jwt = auth.substring(7);
    }

    const message = `API Received JWT: ${jwt}`;
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({message}));
});

server.listen(port, () => {
    console.log(`Server listening on port ${port}`);
});

Save the above code to a file called api.js and then run it with the following command:

node api.js

Configure OAuth Clients

A simple Client Credentials client will be used to get an opaque access token with which to call the API. An additional introspection client also needs to be configured, and is used by the phantom token module inside the OpenResty container.

Both of these need to be configured as clients within the Curity Identity Server. The following XML can be directly imported / merged into an instance of the Curity Identity Server if the default profile name is used for the token service:

<config xmlns="http://tail-f.com/ns/config/1.0">
  <profiles xmlns="https://curity.se/ns/conf/base">
    <profile>
      <id>token-service</id>
      <type xmlns:as="https://curity.se/ns/conf/profile/oauth">as:oauth-service</type>
      <settings>
        <authorization-server xmlns="https://curity.se/ns/conf/profile/oauth">
          <scopes>
            <scope>
              <id>read</id>
            </scope>
          </scopes>
          <client-store>
            <config-backed>
              <client>
                <id>test-client</id>
                <client-name>test-client</client-name>
                <secret>Password1</secret>
                <scope>read</scope>
                <capabilities>
                  <client-credentials/>
                </capabilities>
                <use-pairwise-subject-identifiers>
                  <sector-identifier>test-client</sector-identifier>
                </use-pairwise-subject-identifiers>
              </client>
              <client>
                <id>introspection-client</id>
                <client-name>introspection-client</client-name>
                <secret>Password1</secret>
                <capabilities>
                  <introspection/>
                </capabilities>
                <use-pairwise-subject-identifiers>
                  <sector-identifier>introspection-client</sector-identifier>
                </use-pairwise-subject-identifiers>
              </client>
            </config-backed>
          </client-store>
        </authorization-server>
      </settings>
    </profile>
  </profiles>
</config>

Test the End-to-End Flow

Act as the test client and authenticate by getting an opaque access token, using the following simple curl request (alternatively, you could use OAuth Tools as the test client):

curl -k -u 'test-client:Password1' -X POST http://localhost:8443/oauth/v2/oauth-token \
-d grant_type=client_credentials \
-d scope=read

Then make an API call, using the access token from the response to the client credentials request:

curl -H "Authorization: Bearer 8ebb9c9d-4085-43e4-a406-4c4ab5da18fc" http://localhost:8080/api

The API request is routed via OpenResty, and the phantom token plugin introspects the opaque access token, then forwards a JWT to the API. The example API simply echoes the JWT back, whereas a real API would continue by validating the JWT, then working with scopes and claims to implement the API’s authorization logic.

{
    "message": "API Received JWT: eyJraWQiOiIxMzQ2OT..."
}
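To sketch that next step, the following Node.js fragment decodes a JWT payload to read its claims. This only base64url-decodes the payload for illustration; a real API must first verify the token's signature, typically with a JWT library and the authorization server's JWKS endpoint. The token built here is a hypothetical, unsigned example.

```javascript
// Illustrative sketch: decoding a JWT payload to read scopes and claims.
// A real API must verify the signature before trusting any of these values.
function decodeJwtPayload(jwt) {

    const parts = jwt.split('.');
    if (parts.length !== 3) {
        throw new Error('Malformed JWT received');
    }

    // The payload is the base64url-encoded middle section of the JWT
    const payloadJson = Buffer.from(parts[1], 'base64url').toString('utf8');
    return JSON.parse(payloadJson);
}

// Build a hypothetical unsigned token just to demonstrate the decoding
const encode = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');
const exampleJwt = [
    encode({ alg: 'RS256' }),
    encode({ sub: 'test-client', scope: 'read' }),
    'signature',
].join('.');

const claims = decodeJwtPayload(exampleJwt);
```

The API can then authorize the request based on fields such as `claims.scope` and `claims.sub`.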

Plugin Settings

Additional optional plugin settings are available and all settings are summarized in the following table:

Setting | Required? | Description
introspection_endpoint | Yes | The path to the Curity Identity Server’s introspection endpoint
client_id | Yes | The ID of the introspection client configured in the Curity Identity Server
client_secret | Yes | The secret of the introspection client configured in the Curity Identity Server
cache_name | Yes | The name of the Lua shared dictionary in which introspection results are cached
time_to_live_seconds | Yes | The maximum time for which each introspection result is cached
scope | No | One or more scope values required for a location; if any of these are missing during an API request, the client receives a 403 forbidden response
trusted_web_origins | No | Trusted origins for browser clients, so that phantom token plugin error responses include CORS headers that enable JavaScript to read the response
verify_ssl | No | Can be used to temporarily disable SSL trust checks, which can be useful in initial development setups
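To make the optional scope setting concrete, the check it describes can be sketched as follows. This is a Node.js illustration of the logic, not the plugin's Lua implementation: every configured scope must appear in the token's space-separated scope string, otherwise the client receives a 403 forbidden response.

```javascript
// Illustrative sketch of the check implied by the optional `scope` setting:
// all configured scopes must be present in the token's scope string
function hasRequiredScopes(configuredScopes, tokenScopes) {
    const granted = new Set((tokenScopes || '').split(' '));
    return configuredScopes.split(' ').every((scope) => granted.has(scope));
}

// A token with extra scopes passes, a token missing a scope is rejected
const allowed = hasRequiredScopes('read write', 'openid read write'); // true
const denied = hasRequiredScopes('read write', 'read');               // false
```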

The following example configuration includes all of the optional properties in an SSL based setup:

local config = {
    introspection_endpoint = 'https://localhost:8443/oauth/v2/oauth-introspect',
    client_id = 'introspection-client',
    client_secret = 'Password1',
    cache_name = 'phantom-token',
    time_to_live_seconds = 900,
    scope = 'read write',
    trusted_web_origins = {
        'https://www.example.com'
    },
    verify_ssl = true
}

Conclusion

The phantom token pattern can be quickly integrated with OpenResty by using the Curity Lua plugin. Once this is working on a development computer, it is then easy to use Docker in the same way to publish to deployed environments. Using the phantom token approach results in a secure solution, where no sensitive token details are exposed to internet clients.