Logging¶
Logging in the Curity Identity Server is primarily configured using the popular Log4j 2 open source toolkit. The configuration file that specifies which loggers are enabled and the level at which they log can be found in $IDSVR_HOME/etc/log4j2.xml. This file can be changed as needed, allowing customers to change:
- What is logged
- What loggers log messages
- The levels of logging
- The format of log messages
- What files log messages are stored in
- How often log files are rotated
- And much, much more.
The logging framework is very flexible and permissive, allowing customers to customize the Curity Identity Server's logging subsystem to meet their requirements.
Log Levels¶
The fundamental assumption about logging in the Curity Identity Server is that different log levels have different intended audiences. These are summarized in the following table.
Log Level | Intended Audience |
---|---|
TRACE | Curity engineers or customers who have been asked by Curity support to enable this level of logging |
DEBUG | Customers who are testing and developing in a non-production environment or an environment free of sensitive data |
INFO | Customers who are running in a production environment or one where sensitive data is in use |
WARN | Customers who are expected to be able to fix the issue via configuration without necessarily contacting support |
ERROR | Curity support, because customers cannot fix these errors on their own |
Danger
Given this fundamental assumption, there is no supported way to ensure that sensitive data is not logged when the se.curity and/or io.curity loggers are set to DEBUG or TRACE. Masking log messages can be used to limit this danger, but even this does not guarantee that unintended information will not be logged at the DEBUG and TRACE levels.
Configuration Overview¶
Because the configuration is very advanced and extensively documented on the Log4j web site, it will not be repeated here. Instead, administrators should refer to the online documentation for full details. By way of introduction, however, some of the basics are described here in the context of the Curity Identity Server.
The log4j2.xml file is marked up in XML; other formats, such as YAML, can be used to store the configuration, but these are not officially supported by Curity. In the default configuration, all changes to this file are reloaded automatically. This is achieved by the inclusion of the monitorInterval attribute on the root Configuration element; by default, its value is 30 seconds. Any change to this file has to be replicated to each node to have the same effect on the entire cluster.
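The automatic-reload setting can be seen at the top of the file. The sketch below shows only the relevant attribute; the element's contents are elided:

```xml
<Configuration monitorInterval="30">
    <!-- Changes to the Appenders and Loggers defined here are picked up
         without a restart; the file is checked every 30 seconds -->
</Configuration>
```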
The default configuration file is comprised of two important parts: appenders and loggers. The former define where log messages are appended: files, databases, etc. For files, appenders are also used to define rollover policies.
Appenders¶
By default, the following appenders are defined:
Appender | Meaning |
---|---|
cluster-log | A rolling file appender that stores log messages particular to clustering |
stdout | The appender that logs the standard output of the Curity Identity Server (i.e., the server log messages) |
request-log | The log of HTTP requests made to the run-time or admin Web server |
audit-log | A file to which all non-administrator-related audit logs are written |
audit-db | An appender that logs messages to any configured database |
metrics | An appender that collects Log4j metrics in a format that the Prometheus monitoring tool can process |
Appenders can have a layout which formats log messages. One type of layout that Log4j provides is called a ScriptPatternSelector. It is recommended never to use this in high-load systems, especially with the Console appender (described in the next subsection).
Warning
Any use of the ScriptPatternSelector (regardless of which JSR-223-compliant language may be used) is explicitly unsupported by Curity in any deployment, even if it may function.
Standard Out¶
This is the most important log (but certainly not the only one). The server uses this as its primary sink for log messages; it is commonly referred to as the “server log”. Because it uses standard out, the output will need to be captured if the Curity Identity Server is running as a service or background job (e.g., in a Docker container or systemd service). Without the messages from this logger, debugging complex problems will be very difficult, if possible at all.
By default, this logger includes the following in each message:
- The date and time
- Log level
- Request ID
- Session ID
- Thread ID
- Logger name
- Log message
This logger logs in color if a TTY is available; otherwise, it logs without color. In other words, when a person is running the Curity Identity Server, logs are colorized; when not, they are logged only in black and white. The session ID in the default pattern layout uses SessionId; this is a truncated version of the session ID. If a non-truncated version (one longer than eight characters) is needed, this should be changed to LongSessionId.
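Switching to the long form means replacing the SessionId converter in the stdout appender's layout with LongSessionId. The fragment below is a sketch only: the session ID converter names come from the text above, but the rest of the pattern is a hypothetical illustration, not the product's actual default:

```xml
<Console name="stdout" target="SYSTEM_OUT">
    <!-- Hypothetical pattern approximating the fields listed above
         (date/time, level, thread, logger name, message). Only the
         LongSessionId converter is taken from the documentation. -->
    <PatternLayout pattern="%d{ISO8601} %-5level %LongSessionId [%t] %c %msg%n"/>
</Console>
```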
Cluster Log¶
As mentioned in the table above, cluster-related log messages are written to the cluster log, located in $IDSVR_HOME/var/log/cluster.log. These are messages related to configuration updates received by an admin, validation of configuration, configuration changes, the mode of a particular node, etc. At INFO level on a system that doesn’t undergo many configuration changes, this log will be very small.
The appender for the cluster log is configured by default to roll. It creates at most five backups which are gzipped. The log file will be rolled (and zipped) when it exceeds 10 MB. By default, the log entries contain:
- The date and time
- Log level
- Logger name
- Log message
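A rollover policy like the one described (at most five gzipped backups, rolling at 10 MB) can be expressed in Log4j as in the following sketch. The file names and pattern layout here are assumptions for illustration, not the exact shipped defaults:

```xml
<RollingFile name="cluster-log"
             fileName="${env:IDSVR_HOME}/var/log/cluster.log"
             filePattern="${env:IDSVR_HOME}/var/log/cluster.log.%i.gz">
    <!-- Hypothetical layout: date/time, level, logger name, message -->
    <PatternLayout pattern="%d{ISO8601} %-5level %c %msg%n"/>
    <Policies>
        <!-- Roll the file when it exceeds 10 MB -->
        <SizeBasedTriggeringPolicy size="10 MB"/>
    </Policies>
    <!-- Keep at most five backups; the .gz suffix makes Log4j compress them -->
    <DefaultRolloverStrategy max="5"/>
</RollingFile>
```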
Request Log¶
The request log captures all requests received by the Web server(s) within the Curity Identity Server. These are, by default, written to $IDSVR_HOME/var/log/request.log. This could be the run-time node’s Web server, the one used to deliver the admin UI, and/or the REST(CONF) API. Like the server log, in its default configuration, it logs messages in color when appropriate. These messages are appended to a log file where each part is separated by a pipe (|). The various columns are these:
- The date and time
- Request ID
- Client IP address (potentially that of a reverse proxy)
- HTTP method
- URI of the request
- HTTP form and/or query string arguments
- HTTP protocol used
- Accepted language of the caller (not included when extended logging is enabled)
- The client’s accepted content types (not included when extended logging is enabled)
- HTTP status code of the resulting request
- Size of the HTTP response body
- Content type of the HTTP response body
- Whether or not the connection was made over HTTPS
- HTTP request headers
- HTTP response headers
- How long the request took to process
- The location from where the client made the request
- Session ID (for persistent sessions only)
This file is rolled by default. It creates at most five backups which are gzipped. The log file will be rolled (and zipped) when it exceeds 10 MB.
At DEBUG or TRACE level, this log will include extended request details. This includes HTTP headers, query string parameters, form parameters, and other potentially sensitive data. This is not the case by default. If the se.curity logger is set to TRACE or DEBUG, or if the LOGGING_LEVEL environment variable is set to either of these, then extended request logging will be performed.
Audit Log¶
This is the appender used to create the fail-safe audit log. It should always be enabled, even if storing audit logs to a database is configured. See Audit for more details.
Metrics¶
The metrics appender is a special Prometheus appender that captures the number of messages logged at the various log levels described above.
Tip
See Monitoring for details about monitoring.
This appender will only work if the Configuration root element of the $IDSVR_HOME/etc/log4j2.xml file contains an attribute called packages with a value of io.prometheus.client.log4j2. This, and an example of the appender, are shown in the following listing:
<Configuration packages="io.prometheus.client.log4j2">
    <Appenders>
        <Prometheus name="metrics">
            <filters>
                <MarkerFilter marker="REQUEST" onMatch="DENY" onMismatch="NEUTRAL"/>
            </filters>
        </Prometheus>
    </Appenders>
</Configuration>
By default, the request log messages are filtered out. This behavior can be changed by removing the MarkerFilter. After doing so, the log4j2_appender_total metrics will also include request log messages for the applicable level.
Loggers¶
The other main part of the log4j2.xml file in its default form is the loggers. These are all asynchronous loggers. The most important of these are se.curity and io.curity. The former is used for the core product, whereas the latter is used for open source plug-ins that are provided with the product or as add-ons. All log messages in these loggers follow the fundamental assumption about logging mentioned above; other loggers may not. If a more specific logger is defined, one whose name starts with se.curity but includes additional sub-parts, that logger will be used instead. This is how Log4j's logger hierarchy is architected.
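For instance, the core product's logging could be raised to DEBUG in a non-production environment by adjusting the se.curity logger. The sketch below assumes the logger references the stdout appender and disables additivity; check the shipped log4j2.xml for the actual defaults:

```xml
<Loggers>
    <!-- Raise the core product's log level (non-production only; see the
         log-level table above regarding sensitive data at DEBUG/TRACE) -->
    <AsyncLogger name="se.curity" level="DEBUG" additivity="false">
        <AppenderRef ref="stdout"/>
    </AsyncLogger>
</Loggers>
```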
When a message is written for a logger other than the ones defined, the root logger (AsyncRoot) is used. This has a default log level of WARN. Any component that does not use the logging subsystem and tries to write to standard out or standard error after the logging subsystem has been initialized is redirected to this logger. Standard error is set to INFO level and standard out is set to DEBUG. This means that neither will be visible unless the root logger’s log level is changed from its default.
Logging Incorrect Cookies¶
Logging of incorrect cookies can be enabled by adding the org.eclipse.jetty.http.ComplianceViolation logger with a level of DEBUG or TRACE.
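In log4j2.xml, that could look like the following sketch; whether a Logger or AsyncLogger element is appropriate should match the style of the rest of the file:

```xml
<Loggers>
    <!-- Log cookie (and other HTTP) compliance violations from Jetty -->
    <AsyncLogger name="org.eclipse.jetty.http.ComplianceViolation" level="DEBUG"/>
</Loggers>
```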
Masking¶
For the most part, log messages do not contain sensitive data like Personally Identifiable Information (PII), tokens, cookie values, etc. At the TRACE and DEBUG levels, however, they may. By default, such data is masked when logged by almost all of the se.curity and io.curity loggers. When a sensitive piece of data is logged, it will be redacted by replacing most (if not all) of the value with asterisks. This can make debugging harder, so masking can be turned off for all loggers or only certain ones. It is also possible to mask all parameters of a log message, even if such data is not masked by default. These options are controlled using Log4j’s conversion patterns and a marker. Beyond what Log4j provides itself, the Curity Identity Server includes support for two additional conversion patterns that control whether parameters of a message are masked or not:
Conversion Pattern | Explanation |
---|---|
%um, %unmask | Message parameters will be unmasked even if explicitly masked in the product |
%mm, %mask | Message parameters will all be masked regardless of whether they were masked in the product |
Note
Regular messages that are logged by any of the se.curity and io.curity loggers are masked by default. This means that %msg (and its equivalents %m and %message) is masked by default for these loggers.
The %um conversion pattern and its equivalent, %unmask, provide a mechanism to unmask any parameter that may be explicitly masked by the product’s implementation code. This provides a way to unmask values for debugging purposes, for instance. As an example, suppose that the following message is logged:
DEBUG {thread1} se.curity.MyGoodLogger The IP address = 192.**********
The parameter, IP address, is masked. This can make debugging harder, so this message could be unmasked by specifying that the logger that logged this message should append log events to an appender that uses a conversion pattern of %um. An example of this is shown in the following listing:
%um pattern converter¶

<Configuration>
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%um%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="se.curity.MyGoodLogger">
            <AppenderRef ref="stdout"/>
        </Logger>
    </Loggers>
</Configuration>
Note how the logger name in the listing matches the one in the log message above.
Similarly, all parameters, masked or not, can be explicitly masked. This can be done using the %mm conversion pattern and its equivalent, %mask. For instance, suppose this message were logged:
DEBUG {thread1} se.curity.MyGoodLogger The IP address = 192.**********, client ID = my-good-client
In this example, the IP address is masked, but the client ID is not. To explicitly mask this parameter as well, the same configuration as in the listing above can be used with a conversion pattern of %mm (instead of the %um shown there). This would produce a log message similar to the following:
DEBUG {thread1} se.curity.MyGoodLogger The IP address = 192.**********, client ID = my-go**********
Note the masked client ID, caused by the use of the %mm conversion pattern.
All messages that contain parameters that should be masked are marked with the MASK marker. This allows for filtering and formatting any message that contains sensitive data, regardless of which logger produced the event. It also allows SDK plug-ins to mask their messages. This can be done in plug-in code by applying the MASK marker to any message that should be masked. An example is shown in the following listing:
import org.slf4j.*

// The MASK marker causes this message's parameters to be masked by default
val logger: Logger = LoggerFactory.getLogger(MyGoodClass::class.java)
val MASK_MARKER: Marker = MarkerFactory.getMarker("MASK")

logger.debug(MASK_MARKER, "The IP address = {}, client ID = {}", ipAddress, clientId)
Warning
Masking cannot be applied to entries in the Thread Context Map (i.e., Mapped Diagnostic Context or MDC) or to MapMessage. These map entries are used in the request log appender by default and are, thus, not masked.
Shipping Logs¶
It is common practice to ship logs to a remote host for persistence and search. For the sake of support, all files in the $IDSVR_HOME/var/log directory should be shipped to the remote log host. If not, it will be harder to debug problems, which may prolong the time to resolve a support case. Be especially aware of the fundamental assumption about log levels described above when shipping logs to remote hosts. If misconfigured, sensitive data may be replicated to more machines, scattering it throughout your infrastructure. To limit this danger, be sure that messages are masked (the default).
Log4j Scripting Languages¶
Log4j supports using scripting languages for advanced features.
Since Curity 7.0, to enable JavaScript scripting support, the log4j2.Script.enableLanguages=javascript system property must be set when the Curity Identity Server is started. To enable other languages, the script engine JARs for such languages must be added to the server’s classpath.
Files Not Configurable by Log4j¶
There are certain files that cannot be controlled by the Log4j configuration described above – at least not to the same extent. These include the following:
gc.log*
confsvc*.log
post-commit-scripts.log
The first is the log of Java garbage collection that takes place within the Curity Identity Server. This is an important file for debugging garbage collection issues that may arise in high-throughput environments. This file is configured in the ${IDSVR_HOME}/bin/idsvr script if the GC_LOG_OPTS environment variable is not set. By default, this log captures the following:
- Garbage collection details
- Garbage collection stop times
- Tenured distribution
- Garbage collection date and time
The file is configured to be rolled whenever it reaches 100 MB, and only one backup log will be preserved. Its format may change from one version of the product to another.
The post-commit-scripts.log file contains messages written when Commit Hooks are run.
Configuration Service Logs¶
The following configuration log files are created in a production system (i.e., one where the se.curity logger is not set to TRACE or DEBUG):
Log File | Purpose |
---|---|
confsvc.log | Log file containing information used to facilitate secure communication between admin and run-time nodes |
confsvc-audit.log | An audit log of logins to the admin UI, REST(CONF) API, and CLI |
confsvc-error.log* | An error log used for internal logging from the configuration service |
Additional configuration service logs may be created when the se.curity logger is configured in log4j2.xml with a level of TRACE or DEBUG. Other configuration service log files created in a production system are intentionally left undocumented but may be requested by support.
The log level of messages written to confsvc.log is taken from the se.curity logger in log4j2.xml. This file is not rolled, but should not contain many messages.
The confsvc-audit.log file contains information about admin login events. It is not rolled, but should not contain many messages on a production system.
The confsvc-error.log* files are rolled automatically whenever the log reaches 1 MB. Only five log files will be created before they are recycled in a first-in-first-out (FIFO) fashion.