The idea behind OAuth is to enable applications to use tokens to access protected resources like an API. OAuth eliminates the need for applications to store credentials for user authentication. Instead, it adds an authorization layer by introducing the role of the authorization server. The authorization server handles authentication logic, and issues access tokens to the application. Applications then use access tokens in their requests to the resource servers when accessing protected resources on behalf of the user.
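The last step, presenting the access token to a resource server, typically uses the Bearer scheme from RFC 6750. A minimal sketch in Python, with a placeholder API URL and token:

```python
# Sketch of a client attaching an access token to a resource request using
# the Bearer scheme (RFC 6750). URL and token values are placeholders.
import urllib.request

def build_resource_request(url: str, access_token: str) -> urllib.request.Request:
    """Create a request that presents the access token as a Bearer credential."""
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {access_token}"},
    )

req = build_resource_request("https://api.example.com/orders", "eyJhbGciOi...")
print(req.get_header("Authorization"))
```

The resource server then validates the token before serving the protected resource; the client never sees the user's credentials.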
OAuth 2.0 has evolved into a framework of many specifications that extend or update the core protocol from 2012. In this landscape, adopters must study various related documents and aggregate the information to thoroughly understand the protocol. To implement OAuth 2.0 securely, you must understand both the protocol and its recommendations. It poses a risk when implementers don't grasp the technical details, fail to aggregate the information correctly, or simply ignore additions to the core specification. Therefore, it is time to consolidate the updates into a single, current specification that makes it easier to implement OAuth 2.0 securely. This is where OAuth 2.1 comes in.
OAuth 2.1 — as the name indicates — will be an update of OAuth 2.0. The draft incorporates updates, changes, and recommendations from best practices from the last few years. However, the fundamentals remain the same. OAuth 2.1 still defines the roles of a resource owner, a resource server, a client, and an authorization server. The interaction between the roles continues as follows: The resource owner provides an authorization grant to the client. The client can exchange the authorization grant for an access token at the authorization server. It then adds the access tokens in the request to the resource server to gain access to a protected resource.
Within OAuth 2.0, there are different options to obtain an access token. We call these options "OAuth flows." Due to the progression of internet technologies, namely the browser's support for Cross-Origin Resource Sharing (CORS), the implicit flow defined in OAuth 2.0 is now considered obsolete. Other flows have proven insecure, and their use is no longer recommended. OAuth 2.1 takes such considerations into account and will remove the implicit flow and the resource owner password credentials flow. The current draft of OAuth 2.1 adds clarity by consolidating best practices, and it provides an overview of the framework by listing all currently known extensions in the appendix.
Overall, the current state of OAuth 2.1 looks very promising. It has the potential to significantly improve the understanding of the OAuth framework. Below, I'll highlight some of the details.
In OAuth, there are two types of clients: the confidential client and the public client. In the OAuth 2.1 draft, a confidential client is simply defined as a client with credentials. Consequently, it "MUST take precautions to prevent leakage and abuse of [the] credentials." A public client does not have any credentials.
OAuth 2.0 makes some assumptions about the client type depending on its profile. It considers web applications to be confidential clients, and user-agent-based applications (browser-based applications in OAuth 2.1) and native applications to be public clients. The OAuth 2.1 draft does not make any such assumption. For example, a browser-based application may act as a confidential client and, when doing so, should follow a backend for frontend pattern. At Curity, we designed the Token Handler for that purpose. The OAuth 2.1 draft also suggests utilizing the backend for frontend pattern for native applications or using Dynamic Client Registration to issue credentials at runtime.
The only client authentication specified in the current draft of OAuth 2.1 remains the client secret (client password in OAuth 2.0) using the HTTP Basic authentication scheme. However, that does not mean that it is the preferred method. Instead, the draft recommends using public-key-based authentication methods, such as mutual TLS or JWT assertion for client authentication, all of which are extension specifications supported by the Curity Identity Server. When using public-key cryptography, the authorization server does not need to store any (shared) secrets for client authentication which reduces the attack vector.
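To make the client secret method concrete, here is a sketch of the `client_secret_basic` scheme from RFC 6749 section 2.3.1: the client id and secret are form-urlencoded, joined with a colon, and base64-encoded into the Authorization header of the token request. The id and secret below are placeholders.

```python
# Build an HTTP Basic Authorization header for client authentication at the
# token endpoint, as specified in RFC 6749 section 2.3.1. Values are placeholders.
import base64
from urllib.parse import quote

def basic_client_auth_header(client_id: str, client_secret: str) -> str:
    # The spec requires urlencoding id and secret before joining with ":".
    raw = f"{quote(client_id, safe='')}:{quote(client_secret, safe='')}"
    return "Basic " + base64.b64encode(raw.encode("ascii")).decode("ascii")

print(basic_client_auth_header("my-client", "s3cret"))
```

With JWT assertions or mutual TLS instead, no such shared secret needs to be stored on the server side, which is why the draft prefers those methods.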
However, it is up to the authorization server to decide how much trust it places in client authentication. Simply because a client presents valid credentials does not mean it is trustworthy. This is a consequence of the updated definition of a confidential client: confidential clients may differ in how their credentials were issued, registered, or distributed, circumstances that OAuth 2.0 core does not consider. With OAuth 2.1, the authorization server should take such circumstances into account when authenticating clients.
Client Redirection Endpoint
After finishing its interaction with the resource owner, for example as part of user authentication, the authorization server sends its response via the user agent to the client's redirect URI (redirection endpoint). The response may contain authorization credentials, like the authorization code. Therefore, the response must be sent to the correct endpoint. Consequently, the OAuth 2.1 draft specifies some rules:
Clients must register one or more complete redirect URIs. Redirect URIs must use HTTPS (with an exception for loopback interface redirect URIs).
When using private URI schemes (custom URI schemes) for redirect URIs, clients must use schemes based on reverse domain names. The authorization server should enforce that rule.
The authorization server must check that any specified redirect URI exactly matches one of the registered URIs (with some exceptions concerning loopback redirects).
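The exact-matching rule above can be sketched in a few lines, assuming a hypothetical registered set of URIs. Loopback redirects, which may vary the port, are the one exception the draft allows and are not handled here.

```python
# Exact string matching of redirect URIs, as OAuth 2.1 requires: no prefix,
# wildcard, or pattern matching. The registered set is a placeholder.
REGISTERED_REDIRECT_URIS = {
    "https://app.example.com/callback",
}

def redirect_uri_allowed(requested: str) -> bool:
    # Simple string comparison against the registered values, nothing more.
    return requested in REGISTERED_REDIRECT_URIS

assert redirect_uri_allowed("https://app.example.com/callback")
assert not redirect_uri_allowed("https://app.example.com/callback/../evil")
assert not redirect_uri_allowed("https://app.example.com/callback?extra=1")
```

Anything short of exact matching (prefix matching, for instance) has historically enabled open-redirect and code-leakage attacks.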
Some points regarding the redirect endpoint are clearer (and stricter) in the OAuth 2.1 draft than in OAuth 2.0. For example, in OAuth 2.1, the authorization server must perform exact string matching of the values as mentioned above. This is the default behavior of the Curity Identity Server. However, the Curity Identity Server allows for making a tradeoff between security and usability with custom redirect URI policies. After all, what is a secure system if you cannot work with it? Nevertheless, the Curity Identity Server is secure and compatible with OAuth 2.1 by default.
The OAuth 2.1 draft also includes requirements concerning the client. For example, the draft expects the client to prevent open redirects on the redirect endpoint. In other words, clients must not automatically forward any authorization response. Clients must also prevent cross-site request forgery (CSRF) attacks at the redirect URI. They must use the state parameter for CSRF mitigation. The state parameter is recommended but optional in OAuth 2.0. Alternatively, clients may utilize the code_challenge parameter defined in Proof Key for Code Exchange (PKCE) and OAuth 2.1 for CSRF protection.
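A minimal sketch of state-based CSRF protection follows: generate a random state, store it in the user's session, and compare it with constant-time equality against the value echoed back at the redirect URI. The plain dict stands in for real session storage.

```python
# CSRF mitigation with the state parameter: the value is bound to the user's
# session before redirecting, and validated (once) on the way back.
import secrets

def start_authorization(session: dict) -> str:
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state  # sent as the state parameter of the authorization request

def validate_callback(session: dict, returned_state: str) -> bool:
    expected = session.pop("oauth_state", None)  # one-time use: consume it
    return expected is not None and secrets.compare_digest(expected, returned_state)

session = {}
state = start_authorization(session)
assert validate_callback(session, state)        # genuine response passes
assert not validate_callback(session, state)    # replayed state is rejected
```

When PKCE is in use, the `code_challenge` bound to the session gives equivalent protection, which is why OAuth 2.1 accepts it as an alternative.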
According to the current draft of OAuth 2.1, clients must store the authorization server with which they start a flow and associate it with the session. Clients must then validate that an authorization response originates at the same authorization server that the session started with. They should use unique redirect URIs for each authorization server for that purpose. The Curity Identity Server supports RFC9207, the Authorization Server Issuer Identification specification, which makes it easy for the client to identify the authorization server without dedicated redirect endpoints. The client must send any subsequent token requests to the same authorization server as in the session. This is to prevent so-called mix-up attacks.
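The mix-up defense with RFC 9207 can be sketched as follows: the client records the issuer it started the flow with in the session and rejects any authorization response whose `iss` parameter differs. The issuer URLs are placeholders.

```python
# Mix-up attack defense using the iss response parameter from RFC 9207:
# remember which authorization server the session started with, and only
# accept responses (and send token requests) matching that issuer.
def start_flow(session: dict, issuer: str) -> None:
    session["issuer"] = issuer  # the authorization server chosen for this flow

def issuer_matches(session: dict, response_params: dict) -> bool:
    expected = session.get("issuer")
    return expected is not None and response_params.get("iss") == expected

session = {}
start_flow(session, "https://as.example.com")
assert issuer_matches(session, {"iss": "https://as.example.com", "code": "x"})
assert not issuer_matches(session, {"iss": "https://evil.example.com", "code": "x"})
```

Without this check (or per-issuer redirect URIs), a compromised or malicious authorization server could trick the client into sending a code or token request to the wrong party.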
Secure OAuth Flows
As mentioned at the beginning of this blog post, OAuth 2.1 will remove some flows and secure the remaining ones. The flows currently included in the OAuth 2.1 draft are the authorization code flow (which now requires PKCE), the client credentials flow, and the refresh token flow; further grant types, such as the device authorization grant, remain available as extensions.
In OAuth 2.1, the code flow will receive some important updates. When using the code flow, the authorization server responds to the authorization request with an authorization code. The client then exchanges this code for an access token. Even though the authorization code is bound to the client, the code flow specified in OAuth 2.0 is vulnerable to code injection attacks.
In a code injection attack, a malicious actor intercepts an OAuth 2.0 flow, swaps the authorization code, and consequently logs in as a different user. Therefore, it is highly recommended to implement Proof Key for Code Exchange, PKCE. This extension is always enabled in the Curity Identity Server.
OAuth 2.1 incorporates PKCE and requires or recommends it for all OAuth clients running the code flow — confidential as well as public ones. Consequently, conformant implementations are expected to be significantly more secure by default. A client that already supports OAuth 2.0 code flow with PKCE can use the code_challenge parameter for CSRF protection and is thereby already compatible with OAuth 2.1 as outlined above.
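The PKCE mechanics from RFC 7636 with the `S256` method can be sketched briefly: the client creates a random `code_verifier`, sends its SHA-256 hash as `code_challenge` in the authorization request, and later proves possession by sending the verifier with the token request.

```python
# PKCE (RFC 7636), S256 method: code_challenge is the base64url-encoded
# (unpadded) SHA-256 hash of a random code_verifier of 43-128 characters.
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    verifier = secrets.token_urlsafe(64)[:128]  # well within the 43-128 char range
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# challenge goes in the authorization request; verifier in the token request.
```

Because the authorization server recomputes the hash at the token endpoint, an attacker who steals only the authorization code cannot redeem it.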
The authorization server must consider the security implications when issuing tokens and take adequate measures to limit risks related to the exposure of tokens, especially when interacting with unauthenticated clients. Ways to mitigate this include limiting the lifetime of tokens and scopes, applying refresh token rotation (one-time refresh tokens), or using sender-constrained tokens. Also, consider applying a pattern such as the phantom token pattern or the split token pattern for a more secure architecture.
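Refresh token rotation, one of the mitigations mentioned above, can be sketched as follows: each refresh invalidates the presented token and issues a new one, so a replay of a leaked token becomes detectable. The in-memory dicts stand in for the authorization server's token store, and the token values are random placeholders.

```python
# One-time refresh tokens (rotation): a rotated token that is presented again
# signals leakage, and the grant should be revoked.
import secrets

active_refresh_tokens: dict[str, str] = {}   # refresh_token -> subject
revoked_refresh_tokens: set[str] = set()

def refresh(presented: str) -> tuple[str, str]:
    if presented in revoked_refresh_tokens:
        # Reuse of a rotated token: likely stolen, reject (and revoke the grant).
        raise PermissionError("refresh token reuse detected")
    subject = active_refresh_tokens.pop(presented)  # invalidate the old token
    revoked_refresh_tokens.add(presented)
    new_refresh = secrets.token_urlsafe(32)
    active_refresh_tokens[new_refresh] = subject
    access_token = secrets.token_urlsafe(32)  # stand-in for a real access token
    return access_token, new_refresh
```

Sender-constrained tokens (for example via mutual TLS) achieve a stronger property, binding the token itself to the client, at the cost of more infrastructure.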
OAuth 2.1 won't bring any groundbreaking changes, but it will simplify things and clarify how to securely implement the protocol. There is no longer an assumption of what a confidential or public client is capable of. They may run any flow.
However, the authorization server has to consider security implications when issuing tokens. Critical technical changes target the code flow and the related redirect URI that must exactly match a pre-registered value. Using PKCE is recommended or required for all clients. It protects against both authorization code injection and cross-site request forgery threats.
The Curity Identity Server is the product for "OAuth and OpenID done better." It is, therefore, already compliant with the next generation of OAuth because it implements common security best practices like PKCE. OAuth 2.1 just proves that we have always been on the right track. If your client implementations also follow best practices, they will most likely live up to the upcoming requirements listed in OAuth 2.1 without much change.
OAuth 2.1 is what it promises to be: an updated, improved version of OAuth 2.0 and one step closer to a more secure Internet.
If you want to learn more, check out this webinar, available on demand: Next Generation OAuth and OpenID Connect. It provides an overview of the most important revisions and enhancements to the OAuth and OpenID Connect protocols, including OAuth 2.1, FAPI 2.0, SIOPv2, and verifiable credentials. Watch the webinar.