Operations on the Alluxio Cluster


This document describes administrative operations on a running Alluxio cluster on Kubernetes, such as upgrading to a new version and adding new workers.

Upgrading to a newer Alluxio version

Upgrade the Operator

  1. Upload the new Docker images corresponding to the new Alluxio operator version to your image registry and unpack the Helm chart of the new operator. Refer to the installation doc for details.
  2. Run the following commands to uninstall the old operator, upgrade the CRDs, and reinstall the operator with the new image. A sketch of the operator-config.yaml referenced below follows the commands.
# uninstall the operator. the operator is independent and the status of the operator won't affect the existing Alluxio cluster
$ helm uninstall operator
release "operator" uninstalled

# check if all the resources are removed. the namespace will be the last resource to be removed
$ kubectl get ns alluxio-operator
Error from server (NotFound): namespaces "alluxio-operator" not found

# run the command in the new helm chart directory to upgrade the CRDs first
$ kubectl apply -f alluxio-operator/crds 2>/dev/null
customresourcedefinition.apiextensions.k8s.io/alluxioclusters.k8s-operator.alluxio.com configured
customresourcedefinition.apiextensions.k8s.io/underfilesystems.k8s-operator.alluxio.com configured

# restart the operator using the same operator-config.yaml, changing only the image tag
$ helm install operator -f operator-config.yaml alluxio-operator
NAME: operator
LAST DEPLOYED: Thu Jun 27 15:47:44 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
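
For reference, the operator-config.yaml used above might look like the following sketch. It assumes the chart exposes image and imageTag values as described in the installation doc; the registry path and tag are placeholders for your environment, and any other values from your original file stay unchanged.

# operator-config.yaml (sketch) - only the image tag changes for the upgrade
image: <YOUR_REGISTRY>/alluxio-operator   # same image repository as before
imageTag: <NEW_OPERATOR_VERSION>          # bump to the new operator release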

Upgrade the Alluxio cluster

Before the operation, you should know:

  • When the upgrade starts, the coordinator and workers perform a rolling upgrade to use the new image. Existing CSI FUSE pods will not be restarted or upgraded; only new pods will use the new image.
  • While the cluster is being upgraded, the cache hit rate may decrease slightly, but it will fully recover once the cluster is running again.

Follow these steps to upgrade the cluster:

  1. Upload the new Docker images corresponding to the new Alluxio version to your image registry. Refer to the installation doc for details.
  2. Update the imageTag fields in alluxio-cluster.yaml to reflect the new Alluxio version. In the following example, the new imageTag is DA-3.2-8.0.0; a sketch of this change follows the command output below.
  3. Run the following command to apply the new changes to the cluster.
# apply the changes to Kubernetes
$ kubectl apply -f alluxio-cluster.yaml
alluxiocluster.k8s-operator.alluxio.com/alluxio configured

# verify the upgrade. you can see the new pods spawning
$ kubectl get pod
NAME                                          READY   STATUS     RESTARTS   AGE
alluxio-coordinator-0                         0/1     Init:0/2   0          7s
alluxio-etcd-0                                1/1     Running    0          10m
alluxio-monitor-grafana-b89bf9dbb-77pb6       1/1     Running    0          10m
alluxio-monitor-prometheus-59b7b8bd64-b95jh   1/1     Running    0          10m
alluxio-worker-58999f8ddd-cd6r2               0/1     Init:0/2   0          7s
alluxio-worker-5d6786f5bf-cxv5j               1/1     Running    0          10m

# check the status of the cluster
$ kubectl get alluxiocluster
NAME      CLUSTERPHASE   AGE
alluxio   Updating       10m

# wait until the cluster is ready again
$ kubectl get alluxiocluster
NAME      CLUSTERPHASE   AGE
alluxio   Ready          12m

# check the pods of the cluster. you can see that the age of the alluxio pods has changed
$ kubectl get pod
NAME                                          READY   STATUS    RESTARTS   AGE
alluxio-coordinator-0                         1/1     Running   0          93s
alluxio-etcd-0                                1/1     Running   0          12m
alluxio-monitor-grafana-b89bf9dbb-77pb6       1/1     Running   0          12m
alluxio-monitor-prometheus-59b7b8bd64-b95jh   1/1     Running   0          12m
alluxio-worker-58999f8ddd-cd6r2               1/1     Running   0          93s
alluxio-worker-58999f8ddd-rtftk               1/1     Running   0          33s

# double check the version string
$ kubectl exec -it alluxio-coordinator-0 -- alluxio info version 2>/dev/null
DA-3.2-8.0.0
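
As a reference for step 2, the imageTag change in alluxio-cluster.yaml might look like the sketch below. The apiVersion, image repository, and overall spec layout are assumptions based on the CRD names shown above and your existing cluster definition; only the imageTag value actually changes.

# alluxio-cluster.yaml (sketch) - bump imageTag to the new release
apiVersion: k8s-operator.alluxio.com/v1   # assumed; keep whatever version your file already uses
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <YOUR_REGISTRY>/<ALLUXIO_IMAGE>  # unchanged
  imageTag: DA-3.2-8.0.0                  # previously the old release tag
  # ... the rest of your spec stays the same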

Scaling the size of the cluster

Scale Up the Workers

Before the operation, you should know:

  • While the cluster is being scaled, the cache hit rate may decrease slightly, but it will fully recover once the cluster is running again.

Follow these steps to scale up the workers:

  1. Change alluxio-cluster.yaml to increase the worker count. In the following example we scale from 2 workers to 3; see the sketch at the end of this section.
  2. Run the following command to apply the new changes to the cluster.
# apply the changes to Kubernetes
$ kubectl apply -f alluxio-cluster.yaml
alluxiocluster.k8s-operator.alluxio.com/alluxio configured

# verify the cluster is scaling up. you should be able to see the new pod spawning
$ kubectl get pod
NAME                                          READY   STATUS            RESTARTS   AGE
alluxio-coordinator-0                         1/1     Running           0          4m51s
alluxio-etcd-0                                1/1     Running           0          15m
alluxio-monitor-grafana-b89bf9dbb-77pb6       1/1     Running           0          15m
alluxio-monitor-prometheus-59b7b8bd64-b95jh   1/1     Running           0          15m
alluxio-worker-58999f8ddd-cd6r2               1/1     Running           0          4m51s
alluxio-worker-58999f8ddd-rtftk               1/1     Running           0          3m51s
alluxio-worker-58999f8ddd-p6n59               0/1     PodInitializing   0          4s

# check if the new instances are ready
$ kubectl get pod
NAME                                          READY   STATUS    RESTARTS   AGE
alluxio-coordinator-0                         1/1     Running   0          5m21s
alluxio-etcd-0                                1/1     Running   0          16m
alluxio-monitor-grafana-b89bf9dbb-77pb6       1/1     Running   0          16m
alluxio-monitor-prometheus-59b7b8bd64-b95jh   1/1     Running   0          16m
alluxio-worker-58999f8ddd-cd6r2               1/1     Running   0          5m21s
alluxio-worker-58999f8ddd-rtftk               1/1     Running   0          4m21s
alluxio-worker-58999f8ddd-p6n59               1/1     Running   0          34s
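
For reference, the worker count change from step 1 might look like this sketch. As above, the apiVersion and spec layout are assumptions based on your existing cluster definition; the only field being edited is the worker count.

# alluxio-cluster.yaml (sketch) - scale the workers from 2 to 3
apiVersion: k8s-operator.alluxio.com/v1   # assumed; keep your existing value
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <YOUR_REGISTRY>/<ALLUXIO_IMAGE>
  imageTag: DA-3.2-8.0.0
  worker:
    count: 3   # previously 2
  # ... the rest of your spec stays the same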