Clustering Curity using Docker Compose

Introduction

Most production setups require multiple runtime nodes, and it is often desirable to run these in containers rather than on virtual machines or dedicated hardware. This guide shows how to achieve that using Docker and Docker Compose.

Creating a Docker image

First, we create an image containing the Curity distribution and all its dependencies. We don't run the install script in the image, since that would generate keys and user configuration, which we want to keep separate from the image.

To begin with, download the release tarball from the Curity developer portal. Create a separate directory, and put the tarball and this Dockerfile in it.

Dockerfile

FROM ubuntu:16.04

LABEL maintainer="Curity AB <info@curity.se>"

EXPOSE 8443
EXPOSE 6749

RUN \
  apt-get update && \
  apt-get install -y openssl && \
  rm -rf /var/lib/apt/lists/*

ENV IDSVR_HOME /opt/idsvr
ENV JAVA_HOME $IDSVR_HOME/lib/java/jre
ENV PATH $IDSVR_HOME/bin:$JAVA_HOME/bin:$PATH


ARG VERSION
ADD idsvr-$VERSION-linux.tar.gz /opt
RUN ln -s /opt/idsvr-$VERSION/idsvr $IDSVR_HOME

WORKDIR $IDSVR_HOME

CMD ["sh", "-c", "idsvr -s ${SERVER_ROLE} --${MODE:-admin} --config-encryption-key ${CONFIG_ENCRYPTION_KEY}"]

This Dockerfile uses Ubuntu 16.04 as a base and installs openssl onto it. It then adds the release tarball, which Docker unpacks automatically, creates a symbolic link at /opt/idsvr pointing to the unpacked distribution, sets it as the working directory, and defines the start command.

The start command accepts a server role, a mode and a configuration encryption key, which we will set later using Docker Compose.

Build the image

To build the image, run the following command in the directory containing the Dockerfile:

docker build -t curity/tutorial:latest --build-arg VERSION=4.1.0 .

This will build an image from the 4.1.0 tarball and tag it curity/tutorial:latest.
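
At this point you can optionally smoke-test the image with plain docker run, supplying the variables the CMD expects. This is just a sketch; the key value below is a placeholder, and whatever key you pass must match the one used to encrypt your configuration.

# Optional sanity check: run the admin node directly, without Compose.
# CONFIG_ENCRYPTION_KEY is a placeholder value here.
docker run --rm -p 6749:6749 \
  -e MODE=admin \
  -e SERVER_ROLE=admin \
  -e CONFIG_ENCRYPTION_KEY=placeholder-key \
  curity/tutorial:latest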

Docker Compose

To start building our Docker Compose system, start by creating a directory for it. We'll call it the docker-compose directory.

Installation

As stated earlier, we didn't install Curity in the container, so no configuration has been generated. However, we need a minimum configuration to be able to access the admin UI. To achieve that, first make a copy of the release tarball in the current directory and unpack it with:

tar xzvf idsvr-<version>-linux.tar.gz

Then, run the unattendedinstall script locally to generate the minimum configuration:

cd idsvr-<version>
export PASSWORD=<ADMIN_PASSWORD>; ./idsvr/bin/unattendedinstall

This will generate a few new XML files in idsvr/etc/init: key-conf.xml and admin-user-conf.xml. These can be used as a base when building a new system. Copy both into the docker-compose directory.
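
Assuming the docker-compose directory sits next to the unpacked distribution, and that you are still inside idsvr-<version>, the copy step looks something like this:

ls idsvr/etc/init   # should now list admin-user-conf.xml and key-conf.xml
cp idsvr/etc/init/admin-user-conf.xml idsvr/etc/init/key-conf.xml ../docker-compose/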

If you are running this tutorial on a Mac, you will need to download the Darwin release and use that one to run unattendedinstall locally.

Admin node

Now it's time to start adding the Docker Compose configuration.

First, we create a docker-compose.yml file in the docker-compose directory, and add the initial admin service to it:

version: '3.2'
services:
  admin:
    image: curity/tutorial:latest
    ports:
      - 6749:6749

Here, we have created a service and exposed the ports we need. It is still not possible to start this admin node, however, since the environment variables referenced in the CMD of the Dockerfile need to be set. The most important is the mode the server starts in, in this case admin, followed by the role of the server. Update the service definition in docker-compose.yml with the relevant environment variables:

version: '3.2'
services:
  admin:
    image: curity/tutorial:latest
    ...
    environment:
    - MODE=admin
    - SERVER_ROLE=admin
  ...

Almost there! Now the server would start, but since it doesn't have any configuration, it won't be of much use. The solution is to mount the initial configuration files as volumes. We tend to create a folder structure that mimics the structure inside the container, so create the folders admin/etc/init/. If you have configuration ready, just put it in the init folder. If you are building something new, it is recommended that you run the basic configuration wizard in the admin UI after you start the admin node. Also move the admin-user-conf.xml and key-conf.xml created by the "fake" installation into the admin/etc/init folder.
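
From the docker-compose directory, that amounts to something like:

mkdir -p admin/etc/init
mv admin-user-conf.xml key-conf.xml admin/etc/init/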

Then add the volumes to the service definition:

version: '3.2'
services:
  admin:
  ...
    volumes:
      - ./admin/etc/init/admin-user-conf.xml:/opt/idsvr/etc/init/admin-user-conf.xml
      - ./admin/etc/init/key-conf.xml:/opt/idsvr/etc/init/key-conf.xml

Now, you should be able to start the admin service!

docker-compose up admin

Then point your browser to https://localhost:6749/admin to configure your system.
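
If you prefer the command line, a quick way to verify that the admin UI is answering (the server uses a self-signed certificate at this point, hence -k) is:

# Should print an HTTP status code once the admin node is up:
curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost:6749/admin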

Runtime nodes

When the configuration starts to look the way you want it to, it's time to add some runtime nodes. Start by creating a cluster.xml. You can export one from the admin UI by going into "System/Deployments/Cluster" and enabling clustering. Docker Compose creates an internal network for the services defined and resolves names from the service names, so enter admin as Host. The rest can be left at its defaults. Once you are done, commit your changes and press the button to download the cluster.xml file. Also, since the communication between runtime and admin nodes is all internal, there is no need to expose the communication port (default: 6789).

Then we can add a service definition for the runtime node:

version: '3.2'
services:
  ...
  runtime:
    image: curity/tutorial:latest
    volumes:
      - ./runtime/etc/init/cluster.xml:/opt/idsvr/etc/init/cluster.xml
    environment:
      - MODE=no-admin
      - SERVER_ROLE=default
    ports:
      - 8443:8443
    depends_on:
      - admin

As you can see, there are some differences.

  • No keys needed, only the cluster.xml
  • No configuration mounted. The configuration will be received from the admin node.
  • MODE=no-admin instructs the service to not start the configuration service on this node.
  • SERVER_ROLE=default instructs the service that its role will be default.
  • depends_on: admin makes this service start after the admin.

Now start the runtime node:

docker-compose up -d runtime

When the service has started, you should be able to access it on https://localhost:8443.
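
If the runtime does not respond as expected, following its logs is a quick way to confirm that it has connected to the admin node and received its configuration:

# Stream the runtime node's logs (Ctrl-C to stop following):
docker-compose logs -f runtime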

Full docker-compose.yml

version: '3.2'
services:
  admin:
    image: curity/tutorial:latest
    environment:
    - MODE=admin
    - SERVER_ROLE=admin
    ports:
      - 6749:6749
    volumes:
      - ./admin/etc/init/admin-user-conf.xml:/opt/idsvr/etc/init/admin-user-conf.xml
      - ./admin/etc/init/key-conf.xml:/opt/idsvr/etc/init/key-conf.xml
      - ./admin/etc/init/cluster.xml:/opt/idsvr/etc/init/cluster.xml

  runtime:
    image: curity/tutorial:latest
    volumes:
      - ./runtime/etc/init/cluster.xml:/opt/idsvr/etc/init/cluster.xml
    environment:
      - MODE=no-admin
      - SERVER_ROLE=default
    ports:
      - 8443:8443
    depends_on:
      - admin

Note that the cluster.xml file is mounted in both services here, as the admin node also needs it in order to enable cluster mode. Normally, it would be part of the full configuration.

Managing configuration

Exporting configuration

To export the full configuration from the admin node, run:

docker-compose exec admin idsvr -d > backup-conf.xml

The full configuration will be written to backup-conf.xml.

Reloading from backup

Put the backup in admin/etc/init, and issue this command:

docker-compose exec admin idsvr -f

This method can also be used to reconfigure the system using an edited XML configuration file.
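
Note that the example compose file above only mounts individual files from admin/etc/init, so a new file placed in that host folder will not automatically appear in the container. One way around that is to copy the backup into the running container with docker cp; the container name below assumes the default Compose naming for a project called docker-compose, so adjust it to match yours:

# Copy the exported configuration into the running admin container,
# then tell the server to reload from the init files:
docker cp backup-conf.xml docker-compose_admin_1:/opt/idsvr/etc/init/
docker-compose exec admin idsvr -f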

Useful volumes

While the above example is all that is needed for the system to run, it might be helpful to add some volumes so that the system can be changed from the host filesystem. For instance, these volumes expose the log settings and the logs on the host OS:

volumes:
  - ./log4j2.xml:/opt/idsvr/etc/log4j2.xml
  - ./logs/admin-log:/opt/idsvr/var/log

To mount your branding onto all the nodes, these volumes might be useful:

volumes:
  - ./template-overrides:/opt/idsvr/usr/share/templates/overrides
  - ./message-overrides:/opt/idsvr/usr/share/messages/overrides
  - ./custom-webroot:/opt/idsvr/usr/share/webroot/custom

Conclusion

We have seen the process of adding nodes to a Curity Identity Server cluster. This example has a fixed number of nodes, but you should be able to take this further and automate the process of adding new nodes. As for the configuration, it doesn't need to be updated dynamically with new nodes: you can just as well define a large number of nodes and start and stop them as needed. The same goes for the startup properties; they don't have to be created on the fly.
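
As a sketch of that idea, Docker Compose can scale the runtime service directly, assuming you first remove or adjust the fixed 8443 host port mapping so that multiple runtime containers don't collide:

# Start three runtime containers from the same service definition:
docker-compose up -d --scale runtime=3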

Resources

For more details about Docker, see the Docker documentation.

For more details about running Curity in a container, see the Curity documentation.
