You might have read before about the Phantom Token Approach, a privacy-preserving token usage pattern for securing APIs and microservices that combines the security of opaque tokens with the convenience of JWTs. The idea is to use a pair of tokens: one by-reference and one by-value. The by-value token (a JWT) is obtained by dereferencing its by-reference equivalent (an opaque token) through introspection. The client never sees the JWT, which is why we call it the Phantom Token.
The Phantom Token approach takes the burden of token introspection off the API microservices and puts it on the API gateway. This limits network traffic, especially when many services handle a single request, as is often the case with the microservices pattern. Still, there are setups where even with this approach the traffic between the API Gateway and the Token Service can be substantial, for example when the API Gateway is spread across many instances around the world while the Token Service is not. There are also APIs where latency is an important factor. In such situations, the additional request to the Token Service can be a problem.
The Split Token Approach can help here.
The Split Token approach is based on the same principles as the Phantom Token approach - the client still gets an opaque token and the API gets a JWT. In this approach, however, the API Gateway does not need to exchange the opaque token for a JWT. What is more, the JWT is not cached in its entirety in the API Gateway, which helps increase security.
When the Token Service issues a token for the client, it splits the JWT into two parts:
- the signature of the JWT
- the head and body of the JWT
Then it sends the signature part of the JWT back to the client, to be used as the opaque token. At the same time, the Token Service hashes the signature part and sends the hash, together with the second part of the JWT (head and body), to the API Gateway. The gateway then caches the token parts, using the hashed signature as the cache key. The value is cached until the token's expiration time.
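The issuance-time split can be sketched as follows. This is an illustrative sketch only, not any product's API; the function and cache names are made up, and a plain dict stands in for the gateway's cache.

```python
import hashlib

def split_and_cache(jwt: str, cache: dict) -> str:
    """Split a serialized JWT at issuance time (sketch).

    Returns the signature part, which is handed to the client as the
    opaque access token. The head and body are cached under the
    SHA-256 hash of the signature.
    """
    # A serialized JWT is "head.body.signature"; split off the last segment.
    head_and_body, signature = jwt.rsplit(".", 1)
    cache_key = hashlib.sha256(signature.encode("ascii")).hexdigest()
    # In a real deployment the entry would also carry the token's
    # expiration time so the cache can evict it once it expires.
    cache[cache_key] = head_and_body
    return signature
```

Note that the cache key is derived from the signature, so the opaque token the client holds is never stored verbatim on the gateway side.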
When the client sends a request, the API Gateway takes the signature part sent by the client, hashes it, and looks it up in its cache. It can then glue the token back together - head, body and signature - and forward it to any API service handling the request. Thus, the API gets a whole JWT, ready to be deserialized and used as needed.
- The Token Service sends the client the signature part of the token.
- At the same time, the Token Service sends the API Gateway a hashed signature and the head and body parts of the token.
- The API Gateway caches the token parts.
- The client uses the signature as an opaque access token when sending requests to the API.
- The API Gateway hashes the signature and looks up the token in the cache.
- The head, body and signature of the JWT are glued back together and forwarded to the API service.
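The request-time steps above can be sketched as a single lookup function. Again this is only a sketch under the same assumptions as before (a dict as the cache, hypothetical names):

```python
import hashlib
from typing import Optional

def resolve_token(opaque_token: str, cache: dict) -> Optional[str]:
    """Rebuild the full JWT from the opaque token a client sent (sketch).

    The opaque token is the JWT's signature part. Hash it, look up the
    cached head and body, and glue the pieces back together. Returns
    None when the token is unknown - never issued, expired and evicted,
    or revoked - in which case the gateway should reject the request.
    """
    cache_key = hashlib.sha256(opaque_token.encode("ascii")).hexdigest()
    head_and_body = cache.get(cache_key)
    if head_and_body is None:
        return None  # e.g. respond with 401 Unauthorized
    return f"{head_and_body}.{opaque_token}"
```

A cache miss doubles as a coarse validity check: an attacker probing the gateway with made-up signatures will never produce a reconstructable JWT.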
The Split Token approach further improves the security of your tokens. Neither the client nor the API Gateway's cache holds the full information required to construct a signed JWT usable with the API. Even if someone manages to break into the API Gateway's cache database, the information stored there will not be useful without the original signature part, which is only available to the client. If whole JWTs were cached by the gateway, such a data breach could pose a great danger to your users.
Furthermore, this approach eliminates the need to ask the Token Service for a JWT in exchange for the opaque token, so the API Gateway avoids the overhead of calling a remote service. This can be especially beneficial in setups where the API Gateway operates on numerous instances spread across the world while the Token Service is deployed on just a few.
Less network traffic between the API Gateway and the Token Service also means that fewer resources are needed to operate the Token Service.
The Split Token Approach is compliant with the OAuth 2.0 standard. Neither the client nor the APIs have to implement any proprietary solution for this pattern. This makes the pattern vendor neutral and applicable for any OAuth 2.0 ecosystem.
As with many architectural approaches, there are some considerations that should be taken into account when applying the Split Token approach.
The pattern described in this article uses a hashing function and then uses the hashed value as the key to the cache. This may raise some concerns about possible collisions of the hashes. Remember, though, that each JWT contains a random ID (the jti claim), which means that no two access tokens have the same payload. This fact, together with a good hashing algorithm like SHA-256, means the probability of a hash collision is close to zero.
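The effect of the random jti can be demonstrated with a toy HS256-style signer. Everything here is illustrative - the key, claims, and helper are made up - but it shows why two tokens that are otherwise identical still yield distinct signatures and therefore distinct cache keys:

```python
import base64
import hashlib
import hmac
import json
import uuid

def sign(claims: dict, key: bytes) -> str:
    """Produce the signature part of an HS256-style JWT (toy sketch)."""
    enc = lambda d: base64.urlsafe_b64encode(json.dumps(d).encode()).rstrip(b"=")
    signing_input = enc({"alg": "HS256", "typ": "JWT"}) + b"." + enc(claims)
    mac = hmac.new(key, signing_input, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac).rstrip(b"=").decode()

key = b"demo-secret"  # assumption: a shared demo key, not a real setup
claims = {"sub": "alice", "iss": "https://idsvr.example.com"}
# Same subject and issuer, but each token carries a fresh random jti...
sig_a = sign({**claims, "jti": str(uuid.uuid4())}, key)
sig_b = sign({**claims, "jti": str(uuid.uuid4())}, key)
# ...so the signatures, and hence the SHA-256 cache keys, differ.
assert hashlib.sha256(sig_a.encode()).hexdigest() != \
       hashlib.sha256(sig_b.encode()).hexdigest()
```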
As the Split Token pattern will usually be used together with CDNs serving as the API Gateway, the Gateway and its cache will very often be outside of your infrastructure, e.g. provided by a third party. This means that you should treat the contents of this cache with caution, as you will not be in charge of safeguarding it from poisoning. To maintain a high level of security, it's good to whitelist the issuer of your JWTs (the value of the iss claim) and the algorithm used to sign the JWT (the value of the alg claim found in the JWT's head). Thanks to this you will be sure that even if someone manages to swap the contents of the JWT value kept in the cache, you will still use proper data to verify the token.
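A whitelisting check along those lines might look like the sketch below. The allowed issuer and algorithm values are placeholders for your own configuration, and the helper names are invented for illustration; this checks only the iss and alg values and is not a substitute for full signature verification:

```python
import base64
import json

ALLOWED_ISSUERS = {"https://idsvr.example.com"}  # assumption: your issuer
ALLOWED_ALGS = {"RS256"}                         # assumption: your signing alg

def b64url_json(part: str) -> dict:
    """Decode one base64url-encoded JWT segment into a dict."""
    padded = part + "=" * (-len(part) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def claims_are_trusted(jwt: str) -> bool:
    """Check the alg header and iss claim against whitelists before
    trusting anything rebuilt from the gateway cache (sketch)."""
    head_b64, body_b64, _signature = jwt.split(".")
    head = b64url_json(head_b64)
    body = b64url_json(body_b64)
    return head.get("alg") in ALLOWED_ALGS and body.get("iss") in ALLOWED_ISSUERS
```

Rejecting unexpected alg values also blocks the classic "alg": "none" downgrade, which is exactly the kind of swap a poisoned cache could attempt.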
Another concern might be the data kept together with the token in the cache. If your access tokens contain any sensitive data or Personally Identifiable Information, you might want to consider using encrypted tokens, so that the data in the cache remains safe even if the cache itself is breached.
The cache used by the API Gateway might need to be invalidated if an access token is revoked before its expiration time. If it's not invalidated, the API Gateway may still reconstruct a JWT and forward it to the API, which will believe the token is still valid.
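Invalidation itself amounts to deleting the cache entry keyed by the hashed signature. The sketch below assumes the Token Service can notify gateway instances of a revocation (for example via a webhook or message queue, a hypothetical mechanism not defined by this pattern):

```python
import hashlib

def revoke(opaque_token: str, cache: dict) -> None:
    """Invalidate a revoked token's cache entry so the gateway can no
    longer rebuild a JWT for it (sketch)."""
    cache_key = hashlib.sha256(opaque_token.encode("ascii")).hexdigest()
    # pop with a default keeps this idempotent - it's fine if the
    # entry was already evicted by expiration.
    cache.pop(cache_key, None)
```

Once the entry is gone, the gateway's lookup fails and the request is rejected, just as it would be for a token that was never issued.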
The other thing that should be considered is cache population. Especially in a global setup, population may take some time, and the client might not be able to use a freshly issued token straight away. If that is a concern, you should consider mechanisms to fall back to the classic Phantom Token approach.
- Integrating Curity Identity Server with Apigee Edge using the Split Token Approach
- Apigee Split Token Publisher Event Listener
- Integrating the Curity Identity Server with AWS API Gateway using the Split Token approach
- AWS Split Token Publisher Event Listener