Logging in the Curity Identity Server is primarily configured using the popular Log4j 2 open source toolkit. The configuration file that specifies which loggers are enabled, and at what level they log, can be found in $IDSVR_HOME/etc/log4j2.xml. This file can be changed as needed, allowing customers to control which loggers are enabled, the levels at which they log, where messages are appended, and how they are formatted.
The logging framework is flexible and highly configurable, allowing customers to customize the Curity Identity Server’s logging subsystem to meet their requirements.
The fundamental assumption about logging in the Curity Identity Server is that different log levels have different intended audiences. From most to least verbose, these levels are TRACE, DEBUG, INFO, WARN, and ERROR.
Danger
Given this fundamental assumption, there is no supported way to ensure that sensitive data is not logged when the se.curity and/or io.curity loggers are set to DEBUG or TRACE. Masking log messages can be used to limit this danger, but even this does not guarantee that unintended information will not be logged at the DEBUG and TRACE levels.
Because the configuration options are extensive and thoroughly documented on the Log4j web site, they will not be repeated here. Instead, administrators should refer to the online documentation for full details. By way of introduction, however, some of the basics are described here in the context of the Curity Identity Server.
The log4j2.xml file is marked up in XML; other formats can be used to store the configuration, like YAML, but these are not officially supported by Curity. In the default configuration, all changes to this file are reloaded automatically. This is achieved by the inclusion of the monitorInterval attribute on the root Configuration element, which is set to 30 seconds by default. Any change to this file has to be replicated to each node to have the same effect on the entire cluster.
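As an illustration, a minimal sketch of the root element with automatic reloading enabled might look like the following (the default file contains many more settings):

<Configuration monitorInterval="30">
    <!-- Appenders and Loggers are defined here; changes to this file are picked up every 30 seconds -->
</Configuration>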
The default configuration file consists of two important parts: appenders and loggers. The former define where log messages should be appended to, such as files, databases, etc. For files, appenders are also used to define rollover policies.
By default, the following appenders are defined:
cluster-log
stdout
request-log
audit-log
audit-db
metrics
Appenders can have a layout which formats log messages. One type of layout that Log4j provides is called a ScriptPatternSelector. This should never be used in high-load systems, especially with the Console appender (described in the next subsection).
Warning
Any use of the ScriptPatternSelector (regardless of which JSR-223-compliant language may be used) is explicitly unsupported by Curity in any deployment, even if it appears to function.
This is the most important log (but certainly not the only one). The server uses it as its primary sink for log messages; it is commonly referred to as the “server log”. Because it uses standard out, its output will need to be captured if the Curity Identity Server is running as a service or background job (e.g., in a Docker container or as a systemd service). Without the messages from this logger, debugging complex problems will be very difficult, if possible at all.
By default, this logger includes the log level, thread, logger name, and session ID in each message.
This logger logs in color if a TTY is available; otherwise, it logs without color. In other words, when a person is running the Curity Identity Server interactively, logs are colorized; when not, they are plain black and white. The session ID in the default pattern layout uses SessionId, which is a truncated version of the session ID. If a non-truncated version (one longer than eight characters) is needed, this should be changed to LongSessionId.
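As an illustrative sketch, assuming the converters are referenced in the pattern as %SessionId and %LongSessionId (check the default log4j2.xml for the exact pattern syntax), the change might look like this:

<!-- Truncated session ID (hypothetical pattern, shown for illustration only) -->
<PatternLayout pattern="%d{ISO8601} %-5p [%SessionId] %m%n"/>
<!-- Full session ID -->
<PatternLayout pattern="%d{ISO8601} %-5p [%LongSessionId] %m%n"/>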
As mentioned in the table above, cluster-related log messages are written to the cluster log. This is located in $IDSVR_HOME/var/log/cluster.log. These are messages related to configuration updates received by an admin, validation of configuration, configuration changes, the mode of a particular node, etc. At INFO level on a system that doesn’t undergo many configuration changes, this log will be very small.
The appender for the cluster log is configured by default to roll: it creates at most five backups, which are gzipped, and the log file is rolled (and zipped) when it exceeds 10 MB.
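As a sketch of how such a policy is expressed in Log4j (the actual default appender in log4j2.xml may differ in its layout and attributes):

<RollingFile name="cluster-log"
             fileName="${env:IDSVR_HOME}/var/log/cluster.log"
             filePattern="${env:IDSVR_HOME}/var/log/cluster.log.%i.gz">
    <PatternLayout pattern="%d{ISO8601} %-5p %m%n"/>
    <Policies>
        <!-- Roll (and gzip, per the .gz file pattern) when the file exceeds 10 MB -->
        <SizeBasedTriggeringPolicy size="10 MB"/>
    </Policies>
    <!-- Keep at most five backups -->
    <DefaultRolloverStrategy max="5"/>
</RollingFile>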
The request log captures all requests received by the Web server(s) within the Curity Identity Server. These are, by default, written to $IDSVR_HOME/var/log/request.log. This could be the run-time node’s Web server, the one used to deliver the admin UI, and/or the REST(CONF) API. Like the server log, in its default configuration, it logs messages in color when appropriate. The messages are appended to a log file where each column is separated by a pipe (|).
This file is rolled by default. It creates at most five backups which are gzipped. The log file will be rolled (and zipped) when it exceeds 10 MB.
At DEBUG or TRACE level, this log will include extended request details: HTTP headers, query string parameters, form parameters, and other potentially sensitive data. This is not the case by default. If the se.curity logger is set to TRACE or DEBUG, or if the LOGGING_LEVEL environment variable is set to either of these levels, then extended request logging will be performed.
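For example, extended request logging could be enabled by raising the level of the se.curity logger in log4j2.xml (a sketch; the default file declares this logger with additional attributes and appender references):

<AsyncLogger name="se.curity" level="DEBUG"/>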
This is the appender used to create the fail-safe audit log. It should always be enabled, even if storing audit logs to a database is configured. See Audit for more details.
The metrics appender is a special Prometheus appender that captures the number of messages logged at the various log levels described above.
Tip
See Monitoring for details about monitoring.
This appender will only work if the root Configuration element of the $IDSVR_HOME/etc/log4j2.xml file contains an attribute called packages with a value of io.prometheus.client.log4j2. This attribute and an example of the appender are shown in the following listing:
<Configuration packages="io.prometheus.client.log4j2">
    <Prometheus name="metrics">
        <filters>
            <MarkerFilter marker="REQUEST" onMatch="DENY" onMismatch="NEUTRAL"/>
        </filters>
    </Prometheus>
</Configuration>
By default, the request log messages are filtered out. This behavior can be changed by removing the MarkerFilter. After doing so, the log4j2_appender_total metrics will also include request log messages for the applicable level.
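A sketch of the appender with the filter removed:

<Prometheus name="metrics"/>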
The other main part of the log4j2.xml file in its default form is the loggers, all of which are asynchronous. The most important of these are se.curity and io.curity. The former is used for the core product, whereas the latter is used for open source plug-ins that are provided with the product or as add-ons. All log messages in these loggers follow the fundamental assumption about logging mentioned above; other loggers may not. If a logger is defined that starts with se.curity but includes additional sub-parts, the more specific logger will be used. This is how Log4j is architected.
When a message is written for a logger other than the ones defined, the root logger (AsyncRoot) is used. This has a default log level of WARN. Any component that does not use the logging subsystem and writes to standard out or standard error after the logging subsystem has been initialized is redirected to this logger. Standard error is redirected at INFO level and standard out at DEBUG level. This means that neither will be visible unless the root logger’s log level is changed from its default.
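As an illustrative sketch (not the complete default configuration), the loggers part might look like this:

<Loggers>
    <!-- Core product -->
    <AsyncLogger name="se.curity" level="INFO"/>
    <!-- Open source plug-ins provided with the product or as add-ons -->
    <AsyncLogger name="io.curity" level="INFO"/>
    <!-- Everything else, including redirected standard out and standard error -->
    <AsyncRoot level="WARN">
        <AppenderRef ref="stdout"/>
    </AsyncRoot>
</Loggers>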
Logging of non-compliant cookies can be enabled by adding the org.eclipse.jetty.http.ComplianceViolation logger with a level of DEBUG or more verbose (i.e., TRACE).
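For example, the following logger definition (a sketch) enables these messages:

<AsyncLogger name="org.eclipse.jetty.http.ComplianceViolation" level="DEBUG"/>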
For the most part, log messages do not contain sensitive data like Personally Identifiable Information (PII), tokens, cookie values, etc. At the TRACE and DEBUG levels, however, they may. By default, such data is masked when logged by almost all of the se.curity and io.curity loggers. When a sensitive piece of data is logged, it is redacted by replacing most (if not all) of the value with asterisks. This can make debugging harder, so masking can be turned off for all loggers or only certain ones. It is also possible to mask all parameters of a log message, even data that is not masked by default.

These options are controlled using Log4j’s conversion patterns and a marker. Beyond what Log4j provides itself, the Curity Identity Server includes support for two additional conversion patterns that control whether the parameters of a message are masked or not:
%um (or its equivalent, %unmask): unmasks any parameter that is explicitly masked by the product’s implementation code
%mm (or its equivalent, %mask): masks all parameters, whether masked by default or not
Note
Regular messages that are logged by any of the se.curity and io.curity loggers are masked by default. This means that %msg (and its equivalents, %m and %message) is masked by default for these loggers.
The %um conversion pattern and its equivalent, %unmask, provide a mechanism to unmask any parameter that may be explicitly masked by the product’s implementation code. This provides a way to unmask values for debugging purposes, for instance. As an example, suppose that the following message is logged:
DEBUG {thread1} se.curity.MyGoodLogger The IP address = 192.**********
The parameter, the IP address, is masked. This can make debugging harder, so the message could be unmasked by configuring the logger that produced it to append log events to an appender that uses a conversion pattern of %um. An example of this is shown in the following listing:
<Configuration>
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%um%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="se.curity.MyGoodLogger">
            <AppenderRef ref="stdout"/>
        </Logger>
    </Loggers>
</Configuration>
Note how the logger name in the Logger element matches the one in the log message shown above.
Similarly, all parameters, masked or not, can be explicitly masked using the %mm conversion pattern or its equivalent, %mask. For instance, suppose this message were logged:
DEBUG {thread1} se.curity.MyGoodLogger The IP address = 192.**********, client ID = my-good-client
In this example, the IP address is masked, but the client ID is not. To mask this parameter as well, the same configuration as in the previous listing can be used with a conversion pattern of %mm in the PatternLayout (instead of the %um shown in the listing). This would produce a log message similar to the following:
DEBUG {thread1} se.curity.MyGoodLogger The IP address = 192.**********, client ID = my-go**********
Note the masked client ID caused by the use of the %mm conversion pattern.
All messages that contain parameters that should be masked are marked with the MASK marker. This allows for filtering and formatting any message that contains sensitive data, regardless of which logger produced the event. It also allows SDK plug-ins to mask their messages: this can be done in plug-in code by applying the MASK marker to any message that should be masked. An example is shown in the following listing:
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.slf4j.Marker
import org.slf4j.MarkerFactory

val logger: Logger = LoggerFactory.getLogger(MyGoodClass::class.java)
val MASK_MARKER: Marker = MarkerFactory.getMarker("MASK")

// The parameters of this message will be masked because of the MASK marker
logger.debug(MASK_MARKER, "The IP address = {}, client ID = {}", ipAddress, clientId)
Masking cannot be applied to entries in the Thread Context Map (i.e., the Mapped Diagnostic Context, or MDC) or to a MapMessage. These map entries are used in the request log appender by default and are, thus, not masked.
It is common practice to ship logs to a remote host for persistence and search. For the sake of support, all files in the $IDSVR_HOME/var/log directory should be shipped to the remote log host. If they are not, it will be harder to debug problems, which may prolong the time to resolve a support case. Be especially aware of the fundamental assumption about log levels described above when shipping logs to remote hosts. If misconfigured, sensitive data may be replicated to more machines, scattering it throughout your infrastructure. To limit this danger, be sure that messages are masked (the default).
Log4j supports using scripting languages for advanced features. Since Curity Identity Server 7.0, to enable JavaScript scripting support, the log4j2.Script.enableLanguages=javascript system property must be set when the server is started. To enable other languages, the script manager JARs for those languages must be added to the server’s classpath.
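For example, the property can be passed as a JVM option when the server is started (a sketch; how JVM options are supplied depends on the deployment):

-Dlog4j2.Script.enableLanguages=javascript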
There are certain files that cannot be controlled by the Log4j configuration described above – at least not to the same extent. These include the following:
gc.log*
confsvc*.log
post-commit-scripts.log
The first is the log of the Java garbage collection that takes place within the Curity Identity Server. This is an important file for debugging garbage collection issues that may arise in high-throughput environments. The file is configured in the ${IDSVR_HOME}/bin/idsvr script if the GC_LOG_OPTS environment variable is not set. By default, this log captures details about each garbage collection event.
The file is configured to be rolled whenever it reaches 100 MB, and only one backup log will be preserved. Its format may change from one version of the product to another.
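As a sketch, an equivalent setting expressed with the Java unified logging syntax might look like the following (the exact default options are defined in the idsvr script; this GC_LOG_OPTS value is shown only for illustration):

GC_LOG_OPTS="-Xlog:gc*:file=${IDSVR_HOME}/var/log/gc.log:time:filecount=1,filesize=100m"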
The post-commit-scripts.log file contains messages written when Commit Hooks are run.
The following configuration log files are created in a production system (i.e., one where the se.curity logger is not set to TRACE or DEBUG):
confsvc.log
confsvc-audit.log
confsvc-error.log*
Additional configuration service logs may be created when the se.curity logger is configured in log4j2.xml with a level of TRACE or DEBUG. Other configuration service log files created in a production system are intentionally left undocumented but may be requested by support.
The log level of messages written to confsvc.log is taken from the se.curity logger in log4j2.xml. This file is not rolled, but should not contain many messages.
The confsvc-audit.log file contains information about admin login events. It is not rolled, but should not contain many messages on a production system.
The confsvc-error.log* files are rolled automatically whenever the log reaches 1 MB. Only five log files will be created before they are recycled in a first-in-first-out (FIFO) fashion.