Clustering with Docker Compose


Most setups require multiple runtime nodes, and it is often preferable to run them in containers rather than on virtual machines or dedicated hardware. This guide shows how to achieve that using Docker and Docker Compose. In this example we use docker-compose to create a cluster with two nodes: an admin node that is only used as a configuration service, and a runtime node that serves requests from end users.

Docker Compose

Preparing for clustering

Since we are building a clustered environment, we need a cluster configuration in our docker-compose setup. There are several ways to create the XML file that contains the cluster configuration.

  1. Run Curity locally and download one from the Admin UI.
  2. Download and unpack Curity, then run the <idsvr_home>/bin/genclust -c <ADMIN_HOSTNAME_OR_IP> command.
  3. Run this Docker command: docker run -e CONFIG_SERVICE_HOST=<ADMIN_HOSTNAME_OR_IP> --entrypoint="/opt/idsvr/bin/genclust" curity.azurecr.io/curity/idsvr > cluster-conf.xml

In this example, the hostname of the admin node will be admin.
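
For example, with the hostname admin used throughout this guide, option 3 becomes:

bash
docker run -e CONFIG_SERVICE_HOST=admin \
  --entrypoint="/opt/idsvr/bin/genclust" \
  curity.azurecr.io/curity/idsvr > cluster-conf.xml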

Admin node

Start by creating a docker-compose.yml file in your working directory. Add the admin service to the file.

Curity Identity Server Docker Images

Refer to the Curity Identity Server Docker Images page for supported images and tags.

yaml
version: '3.2'
services:
  admin:
    image: curity.azurecr.io/curity/idsvr
    ports:
      - 6749:6749

Here, we have created a service and exposed the port we need (used by the admin UI and the configuration API). Since the communication between the runtime and admin nodes is all internal, there is no need to expose the communication port (default: 6789).

It will still not be possible to start this admin node, however, since we need to add a couple of environment variables. The most important is PASSWORD, the admin user's password for accessing the configuration admin interfaces. Adding the PASSWORD environment variable also instructs the container to run the unattended installer during its first boot, which sets up the admin user's configuration, enables the admin UI, and generates the SSL certificate used by the admin UI.

We also change the server's start command to tell it to run with the service role admin, because we do not want this container to act as a runtime node. Update the service definition in docker-compose.yml with the relevant environment variables:

yaml
version: '3.2'
services:
  admin:
    image: curity.azurecr.io/curity/idsvr
    command: ["sh", "-c", "idsvr -s admin"]
    ...
    environment:
      - PASSWORD=<ADMIN_USER_PASSWORD>
    ...
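
To keep the actual password out of docker-compose.yml, you can use Compose's variable substitution. A minimal sketch, assuming a .env file next to docker-compose.yml and a variable name of our own choosing:

bash
# .env — keep this file out of version control
ADMIN_USER_PASSWORD=change-me

The service definition would then reference it as PASSWORD=${ADMIN_USER_PASSWORD}.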

We can also add the CONFIG_ENCRYPTION_KEY environment variable if we want the sensitive parts of the configuration to be encrypted.
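
One way to generate a random key, assuming a 256-bit hex-encoded value suits your setup (check the Curity documentation for the exact key format expected), is with openssl:

bash
# 32 random bytes, hex-encoded (64 characters)
openssl rand -hex 32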

Finally, we need to include the cluster configuration XML we created previously. Assuming this file is in the same folder as your docker-compose.yml, add the following:

yaml
version: '3.2'
services:
  admin:
    image: curity.azurecr.io/curity/idsvr
    ...
    volumes:
      - ./cluster-conf.xml:/opt/idsvr/etc/init/cluster-conf.xml

Now, you should be able to start the admin service!

bash
docker-compose up admin

Then open your browser and navigate to https://localhost:6749/admin to configure your system.
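
If you prefer to verify from the command line first, a quick check works too; the -k flag accepts the self-signed certificate generated during the first boot:

bash
# any HTTP response indicates the admin service is up
curl -k -I https://localhost:6749/admin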

Runtime nodes

In this section we'll add a service definition for the runtime node.

yaml
version: '3.2'
services:
  ...
  runtime:
    image: curity.azurecr.io/curity/idsvr
    volumes:
      - ./cluster-conf.xml:/opt/idsvr/etc/init/cluster-conf.xml
    environment:
      - SERVICE_ROLE=default
      - CONFIG_ENCRYPTION_KEY="" # (optional)
    ports:
      - 8443:8443
    depends_on:
      - admin

As you can see, there are some differences.

  • The exposed port is 8443, which is the default port for a runtime service.
  • No PASSWORD is set. That is important, because it tells the container that this node doesn't require installation.
  • SERVICE_ROLE=default instructs the service that its role will be default.
  • depends_on: admin makes this service start after the admin node.

Now start the runtime node:

bash
docker-compose up -d runtime

When the service has started, you should be able to access it on https://localhost:8443.
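
As with the admin node, you can verify this from the command line; again, -k accepts the node's self-signed certificate:

bash
# any HTTP response indicates the runtime node is listening
curl -k -I https://localhost:8443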

Full docker-compose.yml

yaml
version: '3.2'
services:
  admin:
    image: curity.azurecr.io/curity/idsvr
    command: ["sh", "-c", "idsvr -s admin"]
    environment:
      - PASSWORD=<ADMIN_USER_PASSWORD>
      - CONFIG_ENCRYPTION_KEY="" # optional
    ports:
      - 6749:6749
    volumes:
      - ./cluster-conf.xml:/opt/idsvr/etc/init/cluster-conf.xml
  runtime:
    image: curity.azurecr.io/curity/idsvr
    volumes:
      - ./cluster-conf.xml:/opt/idsvr/etc/init/cluster-conf.xml
    environment:
      - SERVICE_ROLE=default
      - CONFIG_ENCRYPTION_KEY="" # optional
    ports:
      - 8443:8443
    depends_on:
      - admin

Note that the cluster-conf.xml file is mounted in both services, as it is needed by the admin node in order to enable cluster mode. Normally, that would be part of the full configuration.

Managing configuration

Exporting configuration

bash
docker-compose exec admin idsvr -d > backup-conf.xml

The full configuration will be written to backup-conf.xml.
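
If the exported file comes out garbled with terminal control characters or carriage returns, it may be because docker-compose exec allocates a pseudo-TTY by default; the -T flag disables that:

bash
# -T disables pseudo-TTY allocation, keeping the redirected output clean
docker-compose exec -T admin idsvr -d > backup-conf.xml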

Reloading from backup

Put the backup in the admin node's /opt/idsvr/etc/init directory, and issue this command:

bash
docker-compose exec admin idsvr -f

This method can also be used to reconfigure the system using an XML configuration file.
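
One way to get the backup into the running container, sketched here using docker-compose ps -q to resolve the admin container's ID:

bash
# copy the backup into the admin node's init directory, then reload
docker cp backup-conf.xml "$(docker-compose ps -q admin)":/opt/idsvr/etc/init/
docker-compose exec admin idsvr -f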

Useful volumes

While the above example is all that is needed for the system to run, it can be helpful to add some volumes so the system can be changed from the host filesystem. For instance, these volumes expose the logging settings and the log files on the host OS.

yaml
volumes:
  - ./log4j2.xml:/opt/idsvr/etc/log4j2.xml
  - ./logs/admin-log:/opt/idsvr/var/log

To mount your branding onto all the nodes, these volumes might be useful:

yaml
volumes:
  - ./template-overrides:/opt/idsvr/usr/share/templates/overrides
  - ./message-overrides:/opt/idsvr/usr/share/messages/overrides
  - ./custom-webroot:/opt/idsvr/usr/share/webroot/custom

Conclusion

We have seen the process of adding nodes to a Curity Identity Server cluster. This example uses a fixed number of nodes, but you should be able to take it further and automate the process of adding new nodes. As for the configuration, it does not need to be updated dynamically with new nodes: you can just as well define a large number of nodes and start or stop them as needed. The same goes for the startup properties; they don't have to be created on the fly.
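
As a starting point for such automation, Compose can run several containers from the same runtime definition. A sketch, assuming the fixed 8443:8443 host port mapping is first removed or replaced (for example by fronting the nodes with a load balancer) so the host ports don't collide:

bash
# start three runtime nodes from the same service definition
docker-compose up -d --scale runtime=3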

Resources

More information about running Curity Identity Server locally using Docker.

For more details on Docker see the Docker documentation.

For more details about running the Curity Identity Server in a container see Curity Documentation.
