Curity has an internal Distributed Service that is used to implement features requiring communication between different Curity nodes in a cluster.
This service is mostly independent of the clustering configuration used by the Configuration Service (see notes below about the cluster key), but can only be configured if the Configuration Service cluster is also configured.
While the Configuration Service cluster cannot be migrated between different Curity versions, the Distributed Service can. This means that nodes being migrated to a newer version of Curity can, and should, communicate with nodes still running the previous version, as this allows them, for example, to move any local data they hold to the newer nodes during the migration.
This is achieved by keeping a stable key for the Distributed Service, separate from the Configuration cluster key. While the Configuration cluster key must be re-generated when a new version of Curity is deployed, the Distributed Service’s key must not be. Instead, it needs to be rotated separately, as explained below.
Note
If the distributed-service is not configured explicitly, it still runs in all Curity deployments where the Configuration cluster is configured. In that case, the Distributed Service uses the Configuration cluster key for communication.
The simplest way to rotate the Distributed Service’s key is to run the generate-distributed-service CLI action.
% request environments environment generate-distributed-service
This action generates a new primary key for the Distributed Service and sets the secondary-key to the previous value of the primary key (or to the Configuration cluster key, if no primary key was configured).
The first time the action runs, it therefore sets the secondary-key of the Distributed Service to the same value as the Configuration cluster key. This allows Curity nodes where the Distributed Service was not configured explicitly to communicate with the nodes that use the newly generated key.
Warning
If the Distributed Service is not explicitly configured, all nodes will lose any state they hold when migrating to a new Curity version. For this reason, if you intend to use the Distributed Service for caching, it is important to configure it explicitly first and to never rotate its key and the Configuration cluster key at the same time.
Running the action again rotates the key.
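To make the effect concrete, here is a sketch of how the keys evolve across runs (the key values C, K1 and K2 are abbreviated placeholders; primary-key and secondary-key are the configuration elements mentioned above):
before the first run: no Distributed Service keys exist; the Configuration cluster key (C) is used
after the first run: primary-key = K1 (newly generated), secondary-key = C
after the second run: primary-key = K2 (newly generated), secondary-key = K1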
Even though it is possible to generate the keys externally and simply configure the Distributed Service to use them, it is highly recommended to use the action as explained above instead, because that guarantees that the keys can be used securely.
For the Distributed Service to work, all nodes must be able to connect to each other over TCP. By default, port 6790 is used, but this can be changed in the configuration.
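As a sketch of how the port could be changed from the idsh CLI, assuming the Distributed Service settings live under the environment in the configuration model (the path and the port leaf name shown here are assumptions, so verify them against your version’s configuration schema):
% configure
% set environments environment distributed-service port 6791
% commit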
When a cluster node starts up, it immediately connects to the Admin node (as configured for the Configuration cluster) using the hostname configured for it. The Admin node does not initiate communication with other nodes. Instead, each Runtime node sends its own hostname to the Admin node, which then informs all other nodes about which nodes are part of the cluster.
A node finds its own hostname or IP address by inspecting the available network interfaces and choosing the first non-loopback interface.
If it does not find a suitable one, the node reads the HOST environment variable, which is set automatically in many deployments. If that is not set either, 0.0.0.0 is used, which cannot work in real deployments but may be sufficient for testing purposes. Before going to production, ensure that the nodes are able to find their own addresses and report them correctly.
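If automatic detection picks an address that other nodes cannot reach, the address can be supplied explicitly when starting the node. A minimal sketch, assuming the node is started with the standard idsvr command and that 10.0.1.5 is the address reachable by the other nodes:
$ env HOST=10.0.1.5 $IDSVR_HOME/bin/idsvr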
While it is not advisable, it is possible to disable the Distributed Service completely by setting the environment variable se.curity.distributed-service.enable to false. In the future, doing this may break important functionality of the Curity Identity Server; currently, it only affects the Admin UI’s ability to display metrics about individual cluster nodes.
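A minimal sketch of starting a node with the service disabled, assuming the standard idsvr startup command; env is used here because the variable name contains dots, which the export builtin of most shells does not accept:
$ env se.curity.distributed-service.enable=false $IDSVR_HOME/bin/idsvr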
The Distributed Service requires a keystore containing a key pair to establish mutual TLS communication between nodes, so communication is both encrypted and authenticated. Even so, it is not advisable to expose the Distributed Service to the outside world unnecessarily; after all, its only purpose is to allow Curity nodes to communicate with each other.
By default, the keystore used is the one from the Configuration Service cluster configuration. As explained in the Rotating the Distributed Service Key section, this is not advisable when the Distributed Service is used to store distributed data. Make sure to follow the advice in that section: generate the Distributed Service configuration and rotate its key periodically (for example, every 90 days) so that the Curity Identity Server can be upgraded without losing distributed data.
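As a sketch of automating that rotation, assuming idsh accepts commands on standard input (so the action can run from a scheduled job), something like the following could be executed every 90 days:
$ echo 'request environments environment generate-distributed-service' | idsh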