It is always recommended to operate the Curity Identity Server in a secure environment. To create such an enclave that exposes services to the Internet, a reverse proxy is typically used in front of the Curity Identity Server as a facade. By using a reverse proxy in a Demilitarized Zone (DMZ), the private keys and credentials used by the Curity Identity Server can be kept in a private network. This is important because these keys are what clients trust when they accept tokens from the Curity Identity Server. If these keys are exposed, an attacker can impersonate any user that the Curity Identity Server authenticates. Therefore, operating the Curity Identity Server directly in the DMZ is strongly discouraged. To further manage the risks associated with exposure, the Curity Identity Server includes a feature that allows only certain endpoints to be exposed by a run-time node; only those needed for certain use cases should be exposed (directly or indirectly through a reverse proxy). For more information on how to remove certain endpoints, refer to the endpoint section of the configuration manual.
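As a sketch of that endpoint restriction (the role and endpoint names below are illustrative and mirror the service-role example later in this section; which endpoints exist depends on your profile configuration), a run-time service role could be limited to a subset of endpoints from the CLI before being exposed behind the reverse proxy:
% set environments environment services service-role external endpoints [ oauth-anonymous authorize token ]
% commit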
Important
Clustering a macOS and Linux version of the Curity Identity Server together is not supported even for testing.
In a two node scenario, both nodes provide run-time services, but the first node also includes the admin capability. The second node connects to the first to collect its configuration; changes that are made via the admin service’s north-bound APIs (e.g., the CLI and REST API) are replicated to both run-time instances. When these are fronted by a reverse proxy (as described above), such a deployment will look like that shown in the following figure:
Fig. 23 A two-node deployment behind a DMZ and a Reverse Proxy
This deployment is very typical. For pre-production environments, both instances of the Curity Identity Server can be deployed on the same server machine if hardware is scarce; moving one of these to another machine later is easy and only requires minimal network-related configuration changes.
Alternatively, the admin capability can be run standalone. This can be done by deploying another instance of the Curity Identity Server with all endpoints disabled and started in admin mode (as described below). Each run-time node will connect to this server for configuration data. In a typical deployment where the admin is run standalone like this, there will be at least two run-time instances of the Curity Identity Server which connect to the dedicated admin node. This scenario is depicted in the following figure:
Fig. 24 A three-node deployment where the admin service is run separately
As in the previously depicted topology, each run-time node in this deployment will continually attempt to reconnect to the admin node if it goes down. Therefore, high availability is achieved even if the admin node temporarily goes off-line.
If the admin is assigned a service-role that is not configured, it will simply run as an admin node, not serving any runtime endpoints.
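For example, a dedicated admin node could be started with a role name that is deliberately absent from the service-role configuration (the role and service names here are illustrative):
${IDSVR_HOME}/bin/idsvr --service-role admin --service-name admin1 --admin
Because no service-role named admin is configured, this node only provides the admin service and serves no runtime endpoints.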
A more advanced setup is an asymmetric cluster. It is usually best suited for Kubernetes deployments, where networks are easily formed, but it works well on any infrastructure. The nodes in the cluster have different service-role configurations, which means they serve different endpoints. This makes it possible to segment the Internet-facing parts away from the internal parts: the internal nodes can serve endpoints that issue internal tokens, while the external nodes face the Internet. This is done by configuring different service-roles (for example internal and external) in the configuration and starting each node with the appropriate --service-role or -s flag to tell it which role to operate as.
Fig. 25 An asymmetric cluster setup
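For example, assuming the internal and external roles are defined in the configuration (as shown in the service-role listing later in this section), the nodes could be started like this (service names are illustrative):
${IDSVR_HOME}/bin/idsvr -s internal --service-name internal1 --no-admin
${IDSVR_HOME}/bin/idsvr -s external --service-name external1 --no-admin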
The Curity Identity Server scales linearly with the number of nodes, and there are no runtime dependencies between nodes in the cluster. The factors to consider when designing a cluster are how the data layer should scale and how many requests per second the cluster should handle. Depending on which backend data source is used, different data-cluster layouts may be needed. Curity requires the data layer to be in sync: if a token is written from one node in the cluster to the database layer, it is critical that another node in the cluster can verify the same token within milliseconds of the write, even if that lookup is done on another database node.
The benefit of the Curity cluster is that if throughput needs to increase, nodes can be added to the cluster without any impact on the existing nodes.
Each node is defined in configuration and can be pre-provisioned. This means that the configuration can contain dozens of nodes; when the need arises, simply start new machines, and those nodes will connect and receive the relevant configuration from the admin node.
A cluster shares configuration and can be re-configured transactionally without restarting any of the nodes. All communication in the cluster is transported over mutual TLS, protected with the cluster key.
To create a cluster, all nodes need the cluster configuration: the host of the admin node and the shared cluster keystore. This is part of the configuration and is generated using the UI, the CLI, or the genclust script to produce XML such as this:
<config xmlns="http://tail-f.com/ns/config/1.0">
  <environments xmlns="https://curity.se/ns/conf/base">
    <environment>
      <cluster>
        <host>my-server.example.net</host>
        <keystore>v:S.RjNrUEhZdnRqWnxnQEM6Fl.....3tPAgmbxWKILmz1wwwxr1dJFCeWCY=</keystore>
      </cluster>
    </environment>
  </environments>
</config>
The key is a generated key. To create a new one, either use the UI or the script $IDSVR_HOME/bin/genclust. It is also possible to generate one directly in the CLI with the following command:
% request environments environment generate-cluster host 172.16.1.32
The host parameter needs to be provided; this is the host or IP address of the admin node (i.e., the master node where the configuration service is running).
Optionally, the following may be provided as well (see the example after this list):
port: the port the admin node listens on for cluster communication (default 6789)
admin-listening-host: the address the admin node listens on (default 0.0.0.0)
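A hypothetical invocation with the optional parameters spelled out, assuming they follow the same name-value form as host, could look like this:
% request environments environment generate-cluster host 172.16.1.32 port 6789 admin-listening-host 0.0.0.0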
This will create a cluster key and create configuration using the provided configuration service host, such as this:
<config xmlns="http://tail-f.com/ns/config/1.0">
  <environments xmlns="https://curity.se/ns/conf/base">
    <environment>
      <cluster>
        <host>172.16.1.32</host>
        <keystore>v:S.RjNrUEhZdnRqWnxnQEM6Fl.....3tPAgmbxWKILmz1wwwxr1dJFCeWCY=</keystore>
      </cluster>
    </environment>
  </environments>
</config>
The listening host and port have sensible default values and only need to be changed to restrict the listening host or if the port is not suitable.
If the key needs to be reset, the UI or CLI can be used. The CLI action generate-cluster-keystore can be used for this purpose:
% request environments environment cluster generate-cluster-keystore
Note
The cluster configuration can be part of the regular configuration files; it does not have to be in its own file.
Warning
Do not use the same cluster key for production and non-production environments.
Each node needs to be started with a minimum configuration containing the cluster configuration. It also needs to be told which role to operate as and can optionally be given a name. There are two ways to start a node:
${IDSVR_HOME}/bin/idsvr --service-role $THE_ROLE --service-name $THE_NAME --admin
${IDSVR_HOME}/bin/idsvr --service-role $THE_ROLE --service-name $THE_NAME --no-admin
--service-name: an optional, human-readable name for the node.
--admin: start the node with the admin service enabled.
--no-admin: start the node as a runtime-only node.
It is possible to make Curity nodes report their service ID, a unique identifier based on the service-name, in a header on every HTTP response. See Tracing for details.
The same arguments can be provided in startup.properties or as environment variables.
--service-role: SERVICE_ROLE
--service-name: SERVICE_NAME
--admin / --no-admin: ADMIN (true or false)
Either place these as environment variables or in the file $IDSVR_HOME/etc/startup.properties on each node.
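For illustration, a startup.properties for a runtime-only node might look like the sketch below, assuming the same variable names apply in the file as in the environment (the role and service name values are examples):
SERVICE_ROLE=internal
SERVICE_NAME=runtime1
ADMIN=false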
The role of the node (-s or --service-role on the command line) has special meaning when deploying. It is what the runtime node uses to figure out which configuration to retrieve when connecting to the admin node. If there is no configuration for the name under the environments environment services service-role list, the node will remain idle until the administrator configures an entry in the service-role list with that name.
admin@prod-curity1% show environments environment services service-role
service-role internal {
    location            Lab;
    ssl-server-keystore server-key-1;
    endpoints           [ anonymous assisted-token authenticate authorize introspection oauth-anonymous revoke token ];
}
service-role external {
    location            Lab;
    ssl-server-keystore server-key-1;
    endpoints           [ anonymous assisted-token authenticate authorize introspection introspection-jwt oauth-anonymous oauth-userinfo register revoke token um-admin ];
}
[ok][2019-03-19 23:02:39]
This example shows two roles that can be deployed: internal and external.
The admin node can serve both as admin and as runtime at the same time. By starting it with a role that exists in the service-role list, it will also start its runtime service.
In a running cluster, the admin node keeps a list of all nodes that are connected or have been connected since the last restart of the admin.
To view runtime data in the CLI from configuration mode (%), prepend the command with run. In operational mode (>), omit the run.
admin@prod-curity1% run show environments environment services runtime-service
ID        NAME   ROLE      UPTIME
------------------------------------------------------------------------------
dQkZp940  Node1  internal  0 days, 0 hrs, 0 min, 10 sec
g5Vyd150  Node2  external  0 days, 0 hrs, 0 min, 14 sec
[ok][2019-04-20 15:14:25]
Each node in the cluster boots as a standalone node. It loads any configuration files it finds in the $IDSVR_HOME/etc/init directory; this is where it finds the cluster configuration. If the cluster configuration is present, the node will attempt to connect to the admin node and sync the configuration from there.
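For example, the generated cluster configuration could be distributed to each node before its first start (the file name, host name, and installation path below are illustrative; /opt/idsvr stands in for $IDSVR_HOME):
scp cluster.xml node1.example.net:/opt/idsvr/etc/init/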
If the configuration on the admin node differs from that of the runtime node, the runtime node will re-configure itself to match the admin's configuration and persist this latest configuration to a journal file on disk (i.e., not the XML files).
This means that it is possible to boot all nodes with the same initial configuration and then handle changes from the admin over time.
If a runtime node is restarted for any reason, it will attempt to load the last known configuration that it has journaled on disk; if no such configuration exists, it will load the configuration from $IDSVR_HOME/etc/init. Once it is up, it will again attempt to reconnect to the admin to re-sync the configuration.
If the admin node is unavailable when the runtime node is started or restarted, the runtime node will continue to operate with the configuration it has, and once the admin is back up it will re-sync the configuration.
It is not possible to configure a runtime node using the REST API, the Web UI, or the CLI; it can only be configured through startup configuration. All configuration changes in a cluster must be performed on the admin node.
The cluster key can be generated on the command line with the genclust script or in the Admin UI in the deployments section. However, rotating the cluster key on a running cluster is not recommended, even though it is possible. Instead, rotate the key when upgrading Curity to a new version; this way, every version of the cluster only communicates with nodes of the same version.