Building a Docker Container
This section describes how to build an auto-installable Docker container from a Curity installation package. Curity ships with an auto-installer that can be used for this purpose.
The steps needed to build a container are:
- Prepare the workspace
- Add a Dockerfile
- Optionally add database drivers or plugins
- Build the container
- Run the container
1. Prepare the Workspace
Place the Curity Linux installation package in an empty workspace:
$ mkdir curity
$ cd curity
$ cp ~/downloads/idsvr-4.1.0-linux-release.tar.gz .
2. Add a Dockerfile to the Workspace
Now it’s time to create the Dockerfile. There are no hard restrictions on how it should look, but Curity requires libcrypto to be available. We recommend an Ubuntu or CentOS base image, but that is not required.
The Dockerfile in this example runs the installation of Curity during the image build. This is important, because the cluster keys will be generated at this time, and will thus be the same on all instances of the image.
For more advanced deployments, it’s advisable to build one admin image and one runtime image, and to copy the cluster runtime keys from the admin image into the build of the runtime image; a minimal sketch of this is shown after the port list below.
FROM ubuntu:20.04
ARG RELEASE_VERSION
ARG PASSWORD
ENV IDSVR_HOME=/opt/idsvr
ENV JAVA_HOME=$IDSVR_HOME/lib/java/jre
# Install dependencies
RUN apt-get update && apt-get install -y \
        libssl-dev \
    && rm -rf /var/lib/apt/lists/*
# Copy and extract the installation package
COPY idsvr-${RELEASE_VERSION}-linux-release.tar.gz /tmp/
RUN mkdir -p $IDSVR_HOME && \
    tar -xzf /tmp/idsvr-${RELEASE_VERSION}-linux-release.tar.gz -C /tmp && \
    /tmp/idsvr-${RELEASE_VERSION}/install.sh --auto-install \
        --installation-path $IDSVR_HOME \
        --password $PASSWORD && \
    rm -rf /tmp/idsvr-*
WORKDIR $IDSVR_HOME
# Expose ports
EXPOSE 2024 6789 8443 6749
CMD ["bin/idsvr"]
The ports exposed are the following:
- 2024 = The SSH Port for the Admin CLI
- 6789 = The cluster communication port (only needs to be exposed on the admin node)
- 8443 = The default runtime port for the node
- 6749 = The default admin WebUI and API port
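As mentioned above, a more advanced deployment can build a dedicated admin image and then copy the cluster runtime keys from it into the runtime image. The following is only a rough multi-stage sketch: the image names and the location of the cluster configuration under /opt/idsvr/etc are assumptions, not verified defaults, and should be adjusted to the actual installation layout.

# Hypothetical multi-stage sketch: the admin image name and the path of the
# cluster configuration/keys are assumptions; adjust them to your setup.
FROM your-repo/curity-admin:4.1.0 AS admin

FROM your-repo/curity:4.1.0
# Copy the cluster keys generated when the admin image was built, so that
# runtime nodes built from this image can join the admin node's cluster.
COPY --from=admin /opt/idsvr/etc/cluster.xml /opt/idsvr/etc/cluster.xml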
3. Add Drivers and Resources
To add drivers to the installation, simply add them before the CMD line in the Dockerfile above.
Adding the MySQL driver, for example, would look as follows:
...
WORKDIR $IDSVR_HOME
ADD mysql-connector-java-5.1.45-bin.jar $IDSVR_HOME/lib/plugins/data.access.jdbc/mysql-connector-java-5.1.45-bin.jar
...
Drivers and resources can of course also be added via volume mounts at a later stage; which approach to use is up to the company’s Docker strategy.
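As a rough sketch of the volume-mount alternative, assuming the same driver file and plugin directory as in the example above, the driver could be mounted when starting the container:

$ docker run -it \
    -v "$(pwd)/mysql-connector-java-5.1.45-bin.jar":/opt/idsvr/lib/plugins/data.access.jdbc/mysql-connector-java-5.1.45-bin.jar \
    -p 8443:8443 -p 6749:6749 your-repo/curity:latest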
4. Build the Container
$ docker build --build-arg RELEASE_VERSION=4.1.0 --build-arg PASSWORD=SomeRandomPassword \
    -t your-repo/curity:4.1.0 -t your-repo/curity:latest .
This will produce an image with the tags your-repo/curity:4.1.0 and your-repo/curity:latest.
Try to place the image in a company-local namespace, since official Curity Docker images may become available in the future and could collide with a too generic namespace.
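Assuming your-repo points at the company registry, the tagged images can then be pushed so that other hosts can pull them:

$ docker push your-repo/curity:4.1.0
$ docker push your-repo/curity:latest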
5. Run the Container
$ docker run -it -p 8443:8443 -p 6749:6749 your-repo/curity:latest
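Once the container is up, the runtime node listens on the published port 8443 and the admin Web UI and API on 6749. As a quick illustration (the /admin path is the default admin UI location; adjust if your configuration differs), the UI can be probed with:

$ curl -k https://localhost:6749/admin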
If the node should be clustered across more than one Docker host, the cluster communication port 6789 needs to be published as well.
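For example, an admin node that runtime nodes on other Docker hosts need to reach could be started with the cluster port published as well (port mappings only; the clustering configuration itself is out of scope for this sketch):

$ docker run -it -p 6749:6749 -p 6789:6789 your-repo/curity:latest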
Docker images for some Linux distributions have a very large default value for the “open files” limit. On some systems this may cause Curity to attempt to allocate a large amount of memory during startup, eventually causing the startup to fail with an error similar to sys_alloc: Cannot allocate 34359738368 bytes of memory (of type "db_tabs"). In such cases it is recommended to set the open files limit to a reasonable value (e.g. 1024) in the container definition.
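When starting the container directly with docker run, this can for example be done with the --ulimit flag (1024 is just the value suggested above):

$ docker run -it --ulimit nofile=1024:1024 -p 8443:8443 -p 6749:6749 your-repo/curity:latest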