Per-client Debug Logging in Production Environments

It is not uncommon that an integration with a partner or customer has to be debugged in a pre-production environment where sensitive data may be used. In such environments, the Service Level Agreements (SLAs) may be less strict, but the danger of logging too much information still exists. Occasionally, such integrations even have to be debugged in production environments, where security is highest. There, SLAs may carry penalties, and real Personally Identifiable Information (PII) will certainly be sent to the Curity Identity Server during login and token issuance.

The fundamental assumption about logging in the Curity Identity Server is that each log level (TRACE, DEBUG, INFO, etc.) has a different intended audience. The intended audience of DEBUG log messages is developers working for a customer to set up a non-production environment. At the DEBUG level, passwords, cookies, tokens, and other sensitive data are assumed to be logged. This is helpful and necessary for debugging. Once the setup and integration are complete, the configuration is replicated to the production environment through a change control process, and the only alterations made are to environment-specific settings. Consequently, if integration issues arise, the root cause can only be changes to environment-specific settings, which are easy to debug without logs.

In such a production environment, the intended audience of log messages is operators, not developers. This follows from the typical Separation of Duties (SoD) policies applied to production environments. The log level for this new audience is INFO. This means that the only safe log level for a production system (or one where sensitive data is used) is INFO. Operators, however, cannot debug difficult problems; developers are needed for that. Without the logs, these developers will struggle.

How then can a developer debug production environments for certain integrations without compromising security or skirting the SoD restrictions? Given the fundamental assumption about logging, how can the General Data Protection Regulation (GDPR) be upheld and PII be safely treated while debugging difficult integrations in production or other sensitive environments?

These issues, like so many in the Curity Identity Server, can be overcome with just configuration. The remainder of this document explains a technique to implement such a solution and guides customers seeking to solve challenges like those laid out above.

Solution Overview

As stated in the product documentation, logging in the Curity Identity Server is configured using Log4j. This ubiquitous framework is one of the most commonly used logging toolkits in the Java ecosystem. It is highly configurable and featureful. Configuring this library is all that is needed to solve this use case.

To this end, the file located at $IDSVR_HOME/etc/log4j2.xml needs to be updated. By default, this file includes an appender that logs messages to the request log. To make sure that no PII is ever logged to this file, the Pattern element can be updated to this:
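A sketch of the updated layout follows. The header and column order here assume the default request-log configuration, so adjust the conversion patterns to match your installation; only the params column is shown elided (replaced by a literal {}), and the same treatment can be applied to any other column that should never be written:

```xml
<PatternLayout header="#date|request-id|client|method|uri|params|protocol|lang|accept|status|size|content-type|secure|request-headers|response-headers|duration%n">
    <!-- The params column is replaced with the literal value {} -->
    <Pattern>%K{date}|%K{request-id}|%K{client}|%K{method}|%K{uri}|{}|%K{protocol}|%K{lang}|%K{accept}|%K{status}|%K{size}|%K{content-type}|%K{secure}|%K{request-headers}|%K{response-headers}|%K{duration}%n</Pattern>
</PatternLayout>
```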


By doing this, the params column (the one that would contain query string and form parameters at DEBUG or TRACE level) is explicitly elided. It is done in a way that will not break any existing log parsers: the literal value {} is used, which is what the value would always be at INFO level and above (when the parameters are never logged).

The next thing that should be done is to add a new logger:

<AsyncLogger name="" level="DEBUG"> <!-- set name to the request receiver's logger name -->
    <AppenderRef ref="request-log" level="INFO"/>
    <AppenderRef ref="debug-log"/>
</AsyncLogger>
This will capture all messages sent by the component that logs requests, the “request receiver”. Contrary to the fundamental assumption of logging, this sets that component to log at the DEBUG level, regardless of whether the Curity Identity Server is being run in a production or non-production environment. This logger, by itself, is dangerous. We partially address this by only appending to the request log at INFO level; any remaining issues there are handled by the request log pattern shown above. We further address this in the debug log appender described below.

Another small change that should be made is to the root logger. The appender reference to the request log should be removed from it to avoid duplicate logging to the request log, since a new appender will be added for this purpose. The desired root logger should look like this:

<AsyncRoot level="WARN">
    <AppenderRef ref="rewritten-stdout"/>
    <AppenderRef ref="metrics"/>
</AsyncRoot>

Finally, the most important part is a new appender that writes to a debug file after filtering out all requests except those from certain OAuth clients.

Client-Specific Log Appender

This client-specific appender uses a script filter to perform a number of checks. A file appender that uses this filter is shown in the following listing:

<File name="debug-log" fileName="${env:IDSVR_HOME}/var/log/debug.log" append="true">
    <PatternLayout header="#date|request-id|client|method|uri|params|protocol|lang|accept|status|size|content-type|secure|request-headers|response-headers|duration%n">
        <Pattern>%K{date}|%K{request-id}|%K{client}|%K{method}|%K{uri}|%K{params}|%K{protocol}|%K{lang}|%K{accept}|%K{status}|%K{size}|%K{content-type}|%K{secure}|%K{request-headers}|%K{response-headers}|%K{duration}%n</Pattern>
    </PatternLayout>
    <ScriptFilter onMatch="ACCEPT" onMismatch="DENY">
        <Script name="MyGoodFilter" language="groovy"><![CDATA[
            // Log NO log messages from the Curity request logger...
            def result = false

            if (logEvent.getMarker()?.isInstanceOf("REQUEST")) {
                def file = new File("${System.getenv('IDSVR_HOME')}/var/debug-client")

                if (file.canRead()) {
                    // ...except the ones that DO match this client ID
                    def clientId = file.text.trim()

                    if (!clientId?.isEmpty()) {
                        def paramsContextMapEntry = logEvent.message.data["params"] ?: "{}"
                        def params = new groovy.json.JsonSlurper().parseText(paramsContextMapEntry)
                        result = params["client_id"] == [clientId]
                    }
                }
            }

            return result
        ]]></Script>
    </ScriptFilter>
</File>
First, note that the pattern is the original one from the default request log appender configuration. This one logs the query string parameters, request headers, and response headers. Next, take notice of the ScriptFilter. If it matches (i.e., the script returns true), the message will be logged (ACCEPT). If it does not match (i.e., the script returns false), the message will not be logged (DENY). This logic is set by the ScriptFilter tag’s onMatch and onMismatch attributes, respectively.

Installing Groovy

In order to use Groovy in the filter script, it needs to be available on each run-time node. The easiest way to achieve this is to place the groovy-all version 2.4 (or older) JAR into `$IDSVR_HOME/lib`.
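A sketch of that placement is shown below. The jar filename and version are illustrative (use the actual groovy-all jar you downloaded), and a scratch directory stands in for the real installation directory here:

```shell
# Scratch directory standing in for the real product installation.
IDSVR_HOME="$(mktemp -d)"
mkdir -p "$IDSVR_HOME/lib"

# Placeholder for the downloaded groovy-all jar (illustrative filename).
touch groovy-all-2.4.21.jar

# Place the jar on the server's classpath; the node typically needs
# a restart to pick it up.
cp groovy-all-2.4.21.jar "$IDSVR_HOME/lib/"
```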

Whether or not a message matches is determined by the Script embedded within the ScriptFilter. In the example above, the Groovy programming language is used, but others are also possible. This script initially sets the result to false, meaning the message will not be logged. However, if the message has a REQUEST marker, some additional checks are made. The only log messages that will have this marker are request logs; only the request receiver within the Curity Identity Server attaches it. If this appender were accidentally referenced by any other logger besides the request receiver, this filter would block those messages.

When a request log message is found, the filter then checks for the presence of a file. This file is called debug-client and is located in the var directory under the root directory of the product’s deployment (by using the IDSVR_HOME environment variable). If this file exists, its contents are read in. If the contents are not empty, some final processing takes place.

The request log message written out by the request receiver is of type MapMessage. The script filter is given an implicitly defined object of type LogEvent which has a method called getMessage() (which can be accessed in Groovy as the message property). Using this, the script can access the original MapMessage. This map has a number of keys; the key names are the ones used in the %K conversion pattern of the Pattern element. One, in particular, that the script uses is the params key.

The params key, again, is the one that contains any query string or form parameters. It is encoded as JSON by the request receiver, so the script filter parses it into a Groovy Map object. Using this map, the script checks whether the request parameters include a client_id, which they will when performing an OAuth or OpenID Connect authorization request or token request. Because each HTTP request can contain multiple values for the same parameter, the associated object in the map is a collection. Therefore, the final check the script makes is to see if the client_id parameter is equal to a singleton list containing the client ID in the debug file. If it is, the log message will be written out to the debug file; otherwise, it will not.

Example Execution

After making the changes to the log4j2.xml file described in the previous section, a few things should be noted about the execution of the Identity Server:

  • By default, the se.curity and io.curity loggers are set to INFO, upholding the fundamental assumption of logging.
  • No client-specific logging is performed by default.
  • Performance has not been degraded by any significant amount.
  • Whenever client-specific logging is required, it is written to a separate log file which developers may be granted read access to.
  • This debug log need not be shipped to a remote log host, or, if it is, different security and rotation policies can easily be applied.

To see this working, do the following:

  1. Start the server
  2. Tail the request log: tail -F $IDSVR_HOME/var/log/request.log
  3. Tail the debug log: tail -F $IDSVR_HOME/var/log/debug.log
  4. Make a request with two different clients

Note that the server log contains only INFO messages. Also notice that the request log does not contain the client IP address, the request parameters, or any HTTP headers. The debug log has no entries for either client.

Next, create the file $IDSVR_HOME/var/debug-client, and add a single line to it containing the ID of the client that should be debugged. Repeat step 4 above, and notice that the debug log contains a detailed log entry for that client but not the other. Update debug-client with the other client ID. Repeat step 4, and observe how the detailed log entries immediately change from the previous client to the new one. Also, notice how the request log contains elided entries for both clients, and the server log contains only informational messages.
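The toggle itself can be sketched as follows. The client IDs are hypothetical, and a scratch directory stands in for the real $IDSVR_HOME of a deployed node:

```shell
# Scratch directory standing in for the real product installation.
IDSVR_HOME="$(mktemp -d)"
mkdir -p "$IDSVR_HOME/var"

# Enable detailed logging for a hypothetical client "client-one".
echo "client-one" > "$IDSVR_HOME/var/debug-client"

# Switch debugging to a different client; picked up on the next request.
echo "client-two" > "$IDSVR_HOME/var/debug-client"

# Disable client-specific debugging entirely.
rm "$IDSVR_HOME/var/debug-client"
```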

The result is a safe and secure logging setup that can be used in production environments (given certain other safeguards are in place).

Conclusion

Client-specific debug logging was achieved solely with configuration. Using the above technique, the logging subsystem of the Curity Identity Server can be updated to write logs under certain conditions that can be toggled on and off. This allows SoD policies to be maintained and PII to be properly handled. The solution can easily be incorporated into a new or existing deployment, and it is simple to test and verify. Once in place, integrations can more easily be debugged, even in production!

Full example

You can find the full example in Curity's GitHub repository.
