Best practices for managing logs and troubleshooting the Curity Identity Server.

Logging Best Practices

Intro

The Curity Identity Server writes a number of different types of logs. In this article we summarize their behavior and describe some best practices for managing them, so that you can operate the Curity Identity Server at scale and troubleshoot effectively. Many companies will already be using similar techniques for their own APIs.

Loggers

Logs for the Curity Identity Server are written using the highly extensible log4j2 framework; details of how to customize it are provided in the Curity Logging Documentation. Multiple loggers are used, each of which represents a different area of data. For technical support purposes, the following logs are most frequently consulted:

| Logger | Data Type |
| --- | --- |
| System Log | Information on system events, such as warnings and errors due to connection or configuration problems |
| Request Log | Information about each HTTP request that the Curity Identity Server handles |
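
As a quick illustration, the sketch below tails both logs on a standalone installation. The file locations are assumptions based on the default layout: request.log matches the lnav example later in this article, while the server.log file name may differ in your installation:

```bash
# Tail both logs on a standalone installation
# (file names under $IDSVR_HOME/var/log are assumed defaults)
tail -f $IDSVR_HOME/var/log/server.log $IDSVR_HOME/var/log/request.log
```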

Log Visualization

The Curity Identity Server has built-in support for visualizing logs in color when required, to integrate with the Logfile Navigator (lnav). See the Curity lnav extensions for details, which enable you to visualize logs via the following type of command:

```bash
$IDSVR_HOME/bin/idsvr | tee -i $IDSVR_HOME/var/log/request.log | lnav
```

Application Specific Logs

If you ever need to drill deeper into a problem for a specific application, you can extend the logging behavior on a per-client basis to capture more verbose details. The Per-client Debug Logging article explores this topic further.

Basic Log Management

A default installation of the Curity Identity Server stores log files locally on servers. The system log is written to standard output (stdout), and cloud platforms will also capture this as a file.
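
On Kubernetes, for example, the stdout system log can be viewed with standard platform tooling; the pod and namespace names below are illustrative:

```bash
# View the system log that a runtime pod writes to stdout
# (pod and namespace names are illustrative)
kubectl logs curity-idsvr-runtime-0 -n curity

# Follow the log in real time
kubectl logs -f curity-idsvr-runtime-0 -n curity
```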

Server Log Files

In a real-world system you may then have many thousands of log entries output per day, so understanding important logging events is a challenge. If logging is left in the default state, the following capabilities will be sub-optimal:

| Area | Description |
| --- | --- |
| Log Access | Logs can only be accessed by IT administrators, and collecting the right logs can take time |
| Log Analysis | Logs are difficult to analyze when the system is under load and data is in a raw file format |
| Troubleshooting Time | Investigating technical issues can take longer due to the above factors |

Log Aggregation Components

For these reasons, it is recommended to use log aggregation for production systems under load, as part of your log management solution. This requires the following types of third-party components:

| Component | Behavior |
| --- | --- |
| Log Shipper | A component that can take log files and send them to a log aggregation API |
| Log Aggregation API | An API that can receive the data securely and then save it |
| Log Aggregation Data Store | A big data store that provides capabilities for managing a large amount of log data |
| Log Query Tools | The ability to sign in authorized users, who can then select and filter data on a field-by-field basis |

In order to integrate with a log aggregation system, the Curity Identity Server needs to be configured accordingly. An example setup for the Kubernetes platform is shown below, and once understood this can also be adapted to other hosting architectures:

[Diagram: example log aggregation setup on the Kubernetes platform]

The diagram shows a number of Kubernetes components, where the log shipper runs in a different container to the Curity Identity Server, and therefore has no impact on its performance:

| Component | Behavior |
| --- | --- |
| Node | Each node is a virtual machine that hosts multiple Curity instances |
| Pod | Each runtime instance of the Curity Identity Server is a Docker container, managed within a pod |
| Log Data | Log files are written to a virtual machine folder to make log shipping straightforward |
| Log Shipper | Log shippers run in their own Docker containers and pick up logs from the virtual machine's file share |

JSON Log Files

In a cloud native setup, logs end up as files on each node, from which a log shipper can easily pick them up. Kubernetes stores these files in the /var/log/containers folder:

[Screenshot: container log files on a node]
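
If you connect to a node, or run a privileged debug pod, you can inspect these files directly; the file names below are illustrative:

```bash
# List container log files on a Kubernetes node (file names are illustrative)
ls /var/log/containers/
# curity-idsvr-runtime-0_curity_idsvr-1a2b3c.log
# curity-idsvr-admin-0_curity_idsvr-4d5e6f.log
```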

By default, only the system logs are written here, though our Helm chart enables you to include other logs by editing the details for the runtime node: set stdout=true, then select the extra logs that should be written to the virtual machine. In the following configuration, the request logs are also being aggregated:

```yaml
runtime:
  role: default
  service:
    type: ClusterIP
    port: 8443
  livenessProbe:
    timeoutSeconds: 1
    failureThreshold: 3
    periodSeconds: 10
    initialDelaySeconds: 120
  readinessProbe:
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 3
    periodSeconds: 10
    initialDelaySeconds: 120
  logging:
    image: "busybox:latest"
    level: INFO
    stdout: true
    logs:
      - request
      # - audit
      # - cluster
      # - confsvc
      # - confsvc-internal
      # - post-commit-scripts
```
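
The settings can then be applied in the usual Helm manner. A minimal sketch, assuming the chart is installed from the Curity Helm repository and using illustrative release, namespace and file names:

```bash
# Apply the logging settings via the Helm chart
# (release name, namespace and values file name are illustrative)
helm upgrade --install curity curity/idsvr \
  --values values.yaml \
  --namespace curity
```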

The result is that Curity runtime nodes are then deployed as multi-container pods, where additional small containers do the job of tailing logs so that the platform can pick them up. This can be visualized by running a kubectl describe command against a runtime pod:

[Screenshot: kubectl output showing multi-container pods]
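
A minimal sketch of these commands, where the pod and namespace names are illustrative:

```bash
# Inspect a runtime pod to see its sidecar log containers
# (pod and namespace names are illustrative)
kubectl describe pod curity-idsvr-runtime-0 -n curity

# Alternatively, print just the container names within the pod
kubectl get pod curity-idsvr-runtime-0 -n curity \
  -o jsonpath='{.spec.containers[*].name}'
```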

The mechanism of logging to files and then tailing logs is high performance and follows best practices described in Using Log4j in Cloud Enabled Applications.

Log Formats

Logs are output using layouts, and a PatternLayout that outputs a line of text is used by default. The JsonLayout usually works best when using log aggregation, since logs are written in a bare JSON format with one logging event per line:

```xml
<JSONLayout compact="true" eventEol="true" properties="true" stacktraceAsString="true">
    <KeyValuePair key="hostname" value="${env:HOSTNAME}" />
    <KeyValuePair key="timestamp" value="$${date:yyyy-MM-dd'T'HH:mm:ss.SSSZ}" />
</JSONLayout>
```

This maintains the structure of each logged row and the type of each field. Note that two additional fields are also added, which can be useful for querying later:

| Custom Field | Description |
| --- | --- |
| hostname | The name of the container within the cluster, to enable us to understand the logging event source |
| timestamp | The UTC time in an ISO8601 format, which Elasticsearch will deserialize correctly |
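
Since each logging event is a single line of JSON, log files are easy to query with standard tools. A minimal sketch using jq, where the hostname value is illustrative:

```bash
# Pretty-print the most recent logging events
tail -n 5 $IDSVR_HOME/var/log/request.log | jq '.'

# Select events from a particular runtime instance (hostname value is illustrative)
cat $IDSVR_HOME/var/log/request.log | jq 'select(.hostname == "idsvr-runtime-0")'
```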

If you need more fine-grained control over the JSON data written to files, the JSON Template Layout provides powerful options and complete control over the log format.

Log Data Sensitivity

In production systems, it is recommended to set the log level for the se.curity, io.curity and root loggers to INFO, which ensures secure logs and good logging performance:

```xml
<AsyncRoot level="INFO">
    <AppenderRef ref="rewritten-stdout"/>
    <AppenderRef ref="request-log"/>
    <AppenderRef ref="metrics"/>
</AsyncRoot>
```

At any log level, data is masked to ensure that no Personally Identifiable Information (PII) is recorded, such as user names, emails or passwords.

Log Schemas

Server Log Fields

The server log contains warning and error events, and exception stack traces will be written here in some cases, such as a JDBC database connection failure.

| Parameter | Description |
| --- | --- |
| date | The UTC date and time of the log message |
| level | The log level, usually INFO, WARN or ERROR |
| request-id | A value that correlates to the request log |
| session-id | A value that correlates to a particular user's authenticated session |
| thread-id | The thread of execution that processed the request |
| logger-name | The logger name, which usually represents a Java class |
| log-message | The message details, which can include an exception stack trace |

Request Log Fields

This log outputs information about each HTTP request that the Curity Identity Server processes:

| Parameter | Description |
| --- | --- |
| date | The UTC date and time |
| request-id | A unique ID for the request |
| client | The client IP address |
| method | The HTTP method |
| uri | The URI path of the request |
| params | Request query parameters |
| protocol | The HTTP protocol used |
| lang | The accepted language of the caller |
| accept | The HTTP accept headers |
| content-type | The content type of the response body |
| status | The HTTP response status code |
| size | The size of the response body |
| secure | Whether or not the connection used HTTPS |
| request-headers | Incoming request headers |
| response-headers | Outgoing response headers |
| duration | How long the request took to process |
| location | The location from where the client made the request |
| session-id | The session ID when persisted sessions are used |
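
The request-id field enables you to correlate entries across the server and request logs. A minimal sketch, with an illustrative request-id value and assumed default file locations:

```bash
# Find all entries for one request across both logs
# (the request-id value and file locations are illustrative)
grep 'a1b2c3d4' $IDSVR_HOME/var/log/server.log $IDSVR_HOME/var/log/request.log
```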

Log Ingestion

When log aggregation systems receive a logging event, it is common to transform the data to ensure that important fields are available for querying later:

| Example Field | Description |
| --- | --- |
| Host name | The instance of the Curity Identity Server where the logging event occurred |
| Log time | The time of each logging event should be usable as a timestamp when searching logs |
| Request Duration | Fields such as HTTP request duration should be numbers so that they sort correctly |
| Message | Detailed fields such as exception details should be searchable via contains queries |
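
As an illustration of why typed fields matter, the sketch below runs an Elasticsearch range query against an index of aggregated request logs; the endpoint, index name and field names are all assumptions:

```bash
# Find slow requests in an aggregated log index, newest first
# (endpoint, index name and field names are illustrative)
curl -s "$ELASTICSEARCH_URL/curity-request-logs/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "query": { "range": { "duration": { "gte": 500 } } },
    "sort": [ { "timestamp": { "order": "desc" } } ]
  }'
```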

Production Log Access

Log aggregation systems support various types of user authentication, so that you can control access by user. Since the Curity Identity Server logs do not contain sensitive data, you should be able to grant technical support staff access to these systems to view production logs.

When technical support staff need to look at logs for the Curity Identity Server, they can then simply log in to this system, with no need for an IT administrator to be involved. Queries can then be issued, enabling the user to focus on an understandable subset of the production logs:

[Screenshot: a cluster-wide log query]

If technical support staff cannot resolve an issue themselves and need to consult Curity, this type of setup will also ensure that the correct logs can be quickly exported.

Modern Troubleshooting

Once the data is aggregated in the right format, support staff can use the log aggregation system's features to perform live analysis in many ways, to troubleshoot effectively when required:

[Screenshot: live analysis in Kibana]

Tutorial Walkthrough

A Kubernetes Logging Tutorial is provided to demonstrate the concepts in this article, using the open source Elastic Stack system. Once understood, the patterns should be straightforward to follow in any similar log aggregation system.

Conclusion

The Curity Identity Server has a sophisticated logging architecture to enable effective technical support. Whatever your logging preferences, this will enable you to support your Identity and Access Management (IAM) system at scale. Aim for a setup where you can issue live, real-time queries across the whole cluster, so that your technical support staff can quickly filter on only the relevant data.
