2.3. Upgrading Volt in Kubernetes

For customers who run Volt Active Data in Kubernetes, the steps for upgrading the database software are:

  1. Update your copy of the VoltDB Helm repository.

  2. Update the custom resource definition (CRD) for the VoltDB Operator.

  3. Upgrade the VoltDB Operator and software.

The following sections explain how to perform each step of this process, including a full example of the entire process in Example 2.1, “Process for Upgrading the VoltDB Software”. However, when upgrading an XDCR cluster, there is an additional step required to ensure the cluster's schema is maintained during the upgrade process. Section 2.3.5, “Updating VoltDB for XDCR Clusters” explains the extra step necessary for XDCR clusters.

2.3.1. Updating Your Helm Repository

The first step when upgrading VoltDB is to make sure your local copy of the VoltDB Helm repository is up to date. You do this using the helm repo update command:

$ helm repo update
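
If you want to confirm which chart versions the update made available, you can list the repository contents. For example (assuming the repository was added under the name voltdb, as in the examples in this chapter):

$ helm search repo voltdb --versions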

2.3.2. Updating the Custom Resource Definition (CRD)

The second step is to update the custom resource definition (CRD) for the VoltDB Operator. This allows the Operator to be upgraded to the latest version.

To update the CRD, you must first save a copy of the latest chart, then extract the CRD from the resulting tar file. The helm pull command saves the chart as a gzipped tar file and the tar command lets you extract the CRD. For example:

$ helm pull voltdb/voltdb
$ ls *.tgz
voltdb-3.1.0.tgz
$ tar --strip-components=2 -xzf voltdb-3.1.0.tgz  \
     voltdb/crds/voltdb.com_voltdbclusters_crd.yaml

Note that the file name of the resulting tar file includes the chart version number. Once you have extracted the CRD as a YAML file, you can use it to replace the CRD in Kubernetes:

$ kubectl replace -f voltdb.com_voltdbclusters_crd.yaml
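
To verify the replacement took effect, you can ask Kubernetes to display the CRD. For example (assuming the CRD is named voltdbclusters.voltdb.com, as the YAML file name suggests):

$ kubectl get crd voltdbclusters.voltdb.com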

2.3.3. Upgrading the VoltDB Operator and Software

Once you update the CRD, you are ready to upgrade VoltDB. You do this using the helm upgrade command and specifying the new software version you wish to use on the command line. What happens when you issue the helm upgrade command depends on whether you are performing a standard software upgrade or an in-service upgrade.

For a standard software upgrade, you simply issue the helm upgrade command specifying the software version in the global.voltdbVersion property. For example:

$ helm upgrade mydb voltdb/voltdb --reuse-values \
   --set global.voltdbVersion=13.2.1

When you issue the helm upgrade command, the operator saves a final snapshot, shuts down the cluster, restarts the cluster with the new version, and restores the snapshot. For example, Example 2.1, “Process for Upgrading the VoltDB Software” summarizes all of the commands used to upgrade a database release to VoltDB version 13.2.1.

Example 2.1. Process for Upgrading the VoltDB Software

$    # Update the local copy of the charts
$ helm repo update
$    # Extract and replace the CRD
$ helm pull voltdb/voltdb
$ ls *.tgz
voltdb-3.1.0.tgz
$ tar --strip-components=2 -xzf voltdb-3.1.0.tgz  \
     voltdb/crds/voltdb.com_voltdbclusters_crd.yaml
$ kubectl replace -f voltdb.com_voltdbclusters_crd.yaml
$
$    # Upgrade the Operator and VoltDB software
$ helm upgrade mydb voltdb/voltdb --reuse-values \
   --set global.voltdbVersion=13.2.1
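
While the final helm upgrade command runs, the cluster shuts down and restarts with the new software version. If you want to watch the pods cycle during the restart, one option is to follow the pod status from a separate session:

$ kubectl get pods -w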

2.3.4. Using In-Service Upgrade to Update the VoltDB Software

Standard upgrades are convenient and can upgrade across multiple versions of the VoltDB software. However, they do require downtime while the cluster is shut down and restarted. In-service upgrades avoid the need for downtime by upgrading the cluster nodes one at a time, while the database remains active and processing transactions.

To use in-service upgrades, you must have an appropriate software license (in-service upgrades are a separately licensed feature), the cluster must be K-safe (that is, have a K-safety factor of one or more), and the difference between the current software version and the version you are upgrading to must fall within the limits of in-service upgrades. (See the example following this list for one way to check the K-safety factor.) The following sections describe:

  • What versions can be upgraded using an in-service upgrade

  • How to perform the in-service upgrade

  • How to monitor the upgrade process

  • How to roll back an in-service upgrade if the upgrade fails
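
For example, one way to confirm the cluster's K-safety factor before starting is to search the release's computed values for the kfactor setting (assuming the release name mydb used in the examples in this chapter):

$ helm get values mydb --all | grep -i kfactor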

2.3.4.1. The Scope of In-Service Upgrades

There are limits to which software versions can use in-service upgrades. The following rules describe which releases can be upgraded with an in-service upgrade and which cannot.

✔ Patch Releases

You can upgrade between any two patch releases. That is, any two releases where only the third and final number of the version identifier changes. For example, upgrading from V13.1.1 to V13.1.4.

✔ Minor Releases

You can also use in-service upgrades to upgrade between two consecutive minor releases. That is, where the second number in the version identifier differs. For example, you can upgrade from V13.2.0 to V13.3.0. You can also upgrade between any patch releases within those minor releases. For example, upgrading from V13.2.3 to V13.3.0.

You cannot use in-service upgrades to upgrade more than one minor version at a time. In other words, you can upgrade from V13.2.0 to V13.3.0 but you cannot perform an in-service upgrade from V13.2.0 to V13.4.0. To transition across multiple minor releases, your options are to perform consecutive in-service upgrades (for example, from V13.2.0 to V13.3.0, then from V13.3.0 to V13.4.0) or to perform a regular upgrade where all cluster nodes are upgraded at one time.

✖ Major Releases

You cannot use in-service upgrades between major versions of VoltDB. That is, where the first number in the version identifier is different. For example, you must perform a full cluster upgrade when migrating from V13.x.x to V14.0.0 or later.

2.3.4.2. How to Perform an In-Service Upgrade

If your cluster meets the requirements, you can use the in-service upgrade process to automate the software update and eliminate the downtime associated with standard upgrades. The procedure for performing an in-service upgrade is:

  1. Set the property cluster.clusterSpec.enableInServiceUpgrade to true to allow the upgrade.

  2. Set the property global.voltdbVersion to the software version you want to upgrade to.

For example, the following command performs an in-service upgrade from V13.1.2 to V13.2.0:

$ helm upgrade mydb voltdb/voltdb --reuse-values \
  --set cluster.clusterSpec.enableInServiceUpgrade=true \
  --set global.voltdbVersion=13.2.0

Once the upgrade is complete, it is a good idea to reset the enableInServiceUpgrade property to false to avoid accidentally triggering an upgrade during normal operations. For example:
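
$ helm upgrade mydb voltdb/voltdb --reuse-values \
  --set cluster.clusterSpec.enableInServiceUpgrade=false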

2.3.4.3. Monitoring the In-Service Upgrade Process

Once you initiate an in-service upgrade, the process proceeds by itself until completion. At a high level, you can monitor the current status of the upgrade using the @SystemInformation system procedure with the OVERVIEW selector and looking for the VERSION keyword. For example, in the following command output, the first column is the host ID and the last column is the currently installed software version for that host. Once all hosts report the upgraded software version, the upgrade is complete.

$ echo "exec @SystemInformation overview" | sqlcmd | grep VERSION
       2 VERSION                   13.1.2
       1 VERSION                   13.1.2
       0 VERSION                   13.1.3
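
Note that sqlcmd needs a network connection to the database. If you are running it from outside the Kubernetes cluster, one option is the same port forwarding technique shown in Section 2.3.5, “Updating VoltDB for XDCR Clusters” (the pod name assumes the cluster name mydb used throughout this chapter):

$ kubectl port-forward mydb-voltdb-cluster-0 21212 &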

During the upgrade, the Volt Operator reports the stages of the process as events to Kubernetes, so you can monitor the progression of the upgrade in more detail using the kubectl get events command. For example, the following is an abbreviated listing of events you might see during an in-service upgrade. (The messages often contain additional information concerning the pods or the software versions being upgraded from and to.)

$ kubectl get events -w
11m     Normal RollingUpgrade  mydb-voltdb-cluster  Gracefully terminating pod 2
11m     Normal RollingUpgrade  mydb-voltdb-cluster  Gracefully terminated pod 2
11m     Normal RollingUpgrade  mydb-voltdb-cluster  Recycling Gracefully terminated pod mydb-voltdb-cluster-2
9m43s   Normal RollingUpgrade  mydb-voltdb-cluster  Recycled pod 2 has rejoined the cluster
9m42s   Normal RollingUpgrade  mydb-voltdb-cluster  Pod mydb-voltdb-cluster-2 is now READY
9m35s   Normal RollingUpgrade  mydb-voltdb-cluster  Gracefully terminating pod 1
 [ . . . ]

Once the upgrade is finished, the Operator reports this as well:

5m10s   Normal RollingUpgrade  mydb-voltdb-cluster  RollingUpgrade Done. 

2.3.4.4. Recovering if an Upgrade Fails

The in-service upgrade process is automatic on Kubernetes — once you initiate the upgrade, the Volt Operator handles all of the activities until the upgrade is complete. However, if the upgrade fails for any reason — for example, if a node fails to rejoin the cluster — you can roll back the upgrade, returning the cluster to its original software version.

The Volt Operator detects an error during the upgrade whenever the VoltDB server process fails. The failure is reported as a series of events to Kubernetes:

12m   Warning RollingUpgrade  mydb-voltdb-cluster  Rolling Upgrade failed upgrading from... to...
12m   Normal  RollingUpgrade  mydb-voltdb-cluster  Please update the clusterSpec image back to...

In addition to monitoring the events, you may wish to use the kubectl commands get events, get pods, and logs to determine exactly why the node is failing. For example, if pod 2 is the one failing to rejoin (as in the earlier event listing), you might start with commands such as the following:
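
$ kubectl get events
$ kubectl get pods
$ kubectl logs mydb-voltdb-cluster-2

Once you understand why the node is failing, the next step is to cancel the upgrade by initiating a rollback. You do this by resetting the software version to the original version number.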

Invoking the rollback is a manual task. However, once the rollback is initiated, the Operator automates the process of returning the cluster to its original state. Consider the previous example where you are upgrading from V13.1.2 to V13.2.0. Let us assume three nodes had upgraded but a fourth was refusing to join the cluster. You could initiate a rollback by resetting the global.voltdbVersion property to V13.1.2:

$ helm upgrade mydb voltdb/voltdb --reuse-values \
  --set global.voltdbVersion=13.1.2

Once you initiate the rollback, the Volt Operator stops the node currently being upgraded and restarts it using the original software version. After that process completes, the Operator goes through each node that had already been upgraded, one at a time, downgrading them back to the original software. Once all nodes are reset and have rejoined the cluster, the rollback is complete.

Note that an in-service rollback can only occur if you initiate the rollback during the upgrade process. Once the in-service upgrade is complete and all nodes are running the new software version, resetting the version property will force the cluster to perform a standard software downgrade, shutting down the cluster as a whole and restarting with the earlier version.

2.3.5. Updating VoltDB for XDCR Clusters

When upgrading an XDCR cluster, there is one extra step you must pay attention to. Normally, during the upgrade, VoltDB saves and restores a snapshot between versions, so all data and schema information is maintained. When upgrading an XDCR cluster, however, the data and schema are deleted, since the cluster must reload the data from another cluster in the XDCR relationship once the upgrade is complete.

Loading the data is automatic, but loading the schema depends on the schema being stored properly before the upgrade begins.

If the schema was loaded through the YAML properties cluster.config.schemas and cluster.config.classes originally and has not changed, the schema and classes will be restored automatically. However, if the schema was loaded manually or has been changed since it was originally loaded, you must make sure a current copy of the schema and classes is available after the upgrade. There are two ways to do this.

For both methods, the first step is to save a copy of the schema and the classes. You can do this using the voltdb get schema and voltdb get classes commands. For example, using Kubernetes port forwarding you can save a copy of the schema and class JAR file to your local working directory:

$ kubectl port-forward mydb-voltdb-cluster-0 21212 &
$ voltdb get schema -o myschema.sql
$ voltdb get classes -o myclasses.jar

Once you have copies of the current schema and class files, you can either set them as the default schema and classes for your database release before you upgrade the software, or you can set them in the same command as you upgrade the software. For example, the following commands set the default schema and classes first, then upgrade the Operator and server software. Alternately, you could put the two --set-file arguments and the --set argument in a single command, as shown in the combined example at the end of this section.

$ helm upgrade mydb voltdb/voltdb --reuse-values \
   --set-file cluster.config.schemas.mysql=myschema.sql  \
   --set-file cluster.config.classes.myjar=myclasses.jar
$ helm upgrade mydb voltdb/voltdb --reuse-values \
   --set global.voltdbVersion=12.3.1
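
For example, here is the same upgrade combined into a single command:

$ helm upgrade mydb voltdb/voltdb --reuse-values \
   --set-file cluster.config.schemas.mysql=myschema.sql  \
   --set-file cluster.config.classes.myjar=myclasses.jar \
   --set global.voltdbVersion=12.3.1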