When upgrading the Curity Identity Server between minor or major versions, the upgrade must be performed as an upgrade campaign. Depending on the changes between the versions, the preparation may differ.
The release notes for each version describe the breaking changes between that version and the closest previous version. In addition, this section describes the changes that are required to move from one version to the next. For upgrading specific versions, see the following subsections:
The Curity binaries should be replaced entirely when upgrading from one version to the next. The only exception is a hotfix release, which contains a very small set of files (usually just one); in that case, those files should replace their existing counterparts. In all other cases, the entire installation should be replaced.
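For illustration only, a full replacement on a Linux node could look like the sketch below; the archive name, the /opt installation layout, and the idsvr systemd service name are assumptions that may not match your environment:
# stop the old instance (service name is an assumption)
sudo systemctl stop idsvr
# unpack the new release next to the old installation (archive name is an assumption)
tar -xzf idsvr-<new-version>-linux-release.tar.gz -C /opt
# keep the old installation for rollback and point a symlink at the new one
sudo ln -sfn /opt/idsvr-<new-version> /opt/idsvr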
The following files may be moved and upgraded between releases:
$IDSVR_HOME/etc/init
$IDSVR_HOME/usr/share/templates/overrides
$IDSVR_HOME/usr/share/templates/template-areas
$IDSVR_HOME/usr/share/messages/overrides
$IDSVR_HOME/usr/share/webroot
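As a sketch of carrying these files over, assuming the old and new installations are unpacked under the hypothetical directories /opt/idsvr-old and /opt/idsvr-new:
OLD=/opt/idsvr-old
NEW=/opt/idsvr-new
# copy the customized files and overrides into the new installation
cp -r "$OLD"/etc/init/. "$NEW"/etc/init/
cp -r "$OLD"/usr/share/templates/overrides/. "$NEW"/usr/share/templates/overrides/
cp -r "$OLD"/usr/share/templates/template-areas/. "$NEW"/usr/share/templates/template-areas/
cp -r "$OLD"/usr/share/messages/overrides/. "$NEW"/usr/share/messages/overrides/
cp -r "$OLD"/usr/share/webroot/. "$NEW"/usr/share/webroot/
Copied templates and messages may still require the manual migration described later in this section.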
Each of these may require migration if the corresponding delivery states that changes to the files are necessary. This is described in the upgrade procedure in the corresponding section of this guide.
From a running system, use the following command to dump the active configuration:
$IDSVR_HOME/bin/idsvr -d > config-backup.xml
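As a quick sanity check, the dump can be verified to be well-formed XML before continuing, for example with xmllint if it is available on the host:
xmllint --noout config-backup.xml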
Warning
Upgrading the binary CDB files is not supported and may or may not work between versions.
See the section in this guide matching your version to walk through the updates.
Note
The dump of the configuration contains all procedures currently loaded into the system. If these are to be handled separately, they need to be removed from the configuration XML before loading it into the new system.
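One way to strip the procedures from the backup is an XPath-based delete. The sketch below uses xmlstarlet and assumes the procedures appear as elements named procedures; that element name is a guess and must be verified against your actual configuration dump:
# delete all elements named 'procedures' (element name is an assumption) and write a new file
xmlstarlet ed -d "//*[local-name()='procedures']" config-backup.xml > config-backup-no-procedures.xml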
Migrate all templates. Curity does not require the templates to match the core version exactly. However, logic templates, such as templates that load JavaScript or run logic, need to be updated if changes have been made to them.
Form elements, such as input field names, need to match the core version of the same template.
New locales may be provided in the new core messages files. If your installation uses languages other than the defaults, these need to be updated with the new message keys for each such language.
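One way to find the new message keys is to diff the default message files of the old and new installations and then apply the additions to your own locales; the old/new paths below are hypothetical:
# lines only present in the new tree are candidates for new message keys
diff -ru /opt/idsvr-old/usr/share/messages /opt/idsvr-new/usr/share/messages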
If the Java SDK version of Curity has changed, plugins need to be recompiled against the new version of the SDK before being deployed in the updated environment. If a plugin uses any of the provided dependencies and those dependencies' versions have changed, the plugin should be updated to use the new versions.
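As a sketch, assuming a Maven-based plugin whose pom.xml reads the SDK version from a hypothetical property named identityserver.sdk.version, the rebuild could look like this (adjust to however your build declares the SDK dependency):
# rebuild the plugin against the new SDK version before deploying it to the upgraded nodes
mvn clean package -Didentityserver.sdk.version=<new SDK version>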
If you are using the built-in JDBC plug-in for database access and have added a JDBC driver (such as MySQL), be sure to copy that driver’s JAR file to $IDSVR_HOME/lib/plugins/data.access.jdbc on each node.
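For example, assuming a MySQL Connector/J JAR in the current directory (the exact file name will differ in your environment):
cp mysql-connector-j-*.jar "$IDSVR_HOME"/lib/plugins/data.access.jdbc/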
Sometimes the database schema needs to be updated in order to make room for new functionality in Curity. Such changes are accompanied by a migration script in $INSTALL_DIR/misc/upgrade/<Version> for each version. Either run this SQL file or manually perform the steps described in the script.
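As a sketch of applying such a script to a MySQL database (the database type, credentials, and script file name are assumptions; inspect the upgrade directory for your version first):
# review the script before running it, then apply it to the Curity database
less "$INSTALL_DIR"/misc/upgrade/<Version>/<script>.sql
mysql -u <db-user> -p <database-name> < "$INSTALL_DIR"/misc/upgrade/<Version>/<script>.sql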
When upgrading a production cluster, it is important to upgrade the system as a campaign. The following section describes a common upgrade procedure that can be used with Curity clusters. The nodes need to be taken out of the cluster and upgraded in order: first the admin node, then the runtime nodes. For this to work, new cluster keys must be used for the upgraded cluster. The procedure works as follows:
First, dump the active configuration from the existing admin node to config-backup.xml as described above, then install the new version on the admin node using new cluster keys and load config-backup.xml into it.
At this point, the currently running runtime nodes won't connect to the new admin node since they are not using the correct keys. They will continue to operate on their current active configuration until replaced.
For each runtime node perform the following operation:
Take the runtime node out of the load balancer and install the new version. Generate a new startup.properties for the node so that it uses the cluster key of the new admin node, for example:
$IDSVR_HOME/bin/genspf -s NEW_NODE_SERVER_ID_1 > startup.properties.node-1
Place the generated file as the node's startup properties in $IDSVR_HOME/etc/startup-properties, then start the node so that it receives the configuration from the admin node and can be put back into the cluster.
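As a sketch of distributing the generated file to a node, assuming SSH access, an installation under /opt/idsvr on the node, and a systemd-managed service named idsvr (all assumptions; adjust to your deployment):
# copy the generated startup properties to the runtime node and restart it
scp startup.properties.node-1 node-1:/opt/idsvr/etc/startup-properties
ssh node-1 'sudo systemctl restart idsvr'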
Fig. 36 Before upgrading
Before upgrading, all nodes are running on version X.
Fig. 37 Admin node is being upgraded, cluster communication is disabled
The admin node is taken out of the load balancer and upgraded to version Y. Since the new admin node is installed with different cluster keys than the existing system, the runtime nodes won't connect. Instead, they keep running, waiting for the admin node to become available again.
Fig. 38 Runtime node is being upgraded
The admin node is put back in the load balancer and the first runtime node is upgraded. After it is installed, the startup.properties of the runtime node uses the cluster key from the new admin node. When the runtime node comes back up, it receives the configuration from the admin node and can be put back into the cluster.
Fig. 39 All nodes are updated and cluster is back online
When the last node is put back in the cluster, the new system is up and operational.
The administration Web UI may have changed in ways that require the browser cache to be purged for the UI to function properly.