
Upgrade Palette Management Appliance

tech preview

This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. Do not use this feature in production workloads.

Follow the instructions on this page to upgrade the Palette Management Appliance to a chosen target version using a content bundle.

info

The upgrade process will incur downtime for the Palette management cluster, but your workload clusters will remain operational.

Prerequisites

  • A healthy Palette management cluster where you can access the Local UI of the leader node.

    • Verify that your local machine can access the Local UI, as airgapped environments may have strict network policies preventing direct access.
  • If using an external registry, the Palette CLI must be installed on your local machine to upload the content to the external registry. Refer to the Palette CLI guide for installation instructions.

    • Ensure your local machine has network access to the external registry server and you have the necessary permissions to push images to the registry.
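
    As a quick connectivity check, you can query the registry's OCI Distribution API root endpoint from your local machine. This is a generic sketch, not a Palette-specific command; replace <registry-host> with your registry address. An HTTP 200 or 401 response indicates the registry is reachable.

      curl -sk -o /dev/null -w "%{http_code}\n" https://<registry-host>/v2/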
  • Access to the Artifact Studio to download the content bundle for Palette.

    tip

    If you do not have access to Artifact Studio, contact your Spectro Cloud representative or open a support ticket.

  • Check that your upgrade path is supported by referring to the Supported Upgrade Paths.

  • If upgrading from version 4.7.15, you must run an additional script to prepare Palette for the upgrade.

    Click to expand the instructions for the script
    1. Log in to the Local UI of the leader node of your Palette management cluster. For example, https://<palette-leader-node-ip>:5080.

    2. From the left main menu, click Cluster.

    3. On the Overview tab, within the Environment section, click the link for the Admin Kubeconfig File to download the kubeconfig file.

    4. On your local machine, ensure you have kubectl installed and set the KUBECONFIG environment variable to point to the file.

      export KUBECONFIG=/path/to/downloaded/kubeconfig/file
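
      Optionally, confirm that kubectl can reach the cluster using the downloaded kubeconfig before proceeding.

      kubectl get nodes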
    5. Issue the following command to check the PVCs in the zot-system namespace.

      kubectl get pvc --namespace zot-system

      Note the PVC name from the output, for example, zot-pvc. The access mode is expected to be RWX before migration. If it is already RWO, you can skip the remaining steps and proceed with the upgrade.

      Example output
      NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
      zot-pvc   Bound    pvc-6d603d91-d5f6-459a-b600-0a699cbb4936   250Gi      RWX            linstor-lvm-storage   <unset>                 3h58m
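
      Optionally, you can read the access mode directly with a jsonpath query instead of scanning the table. Replace zot-pvc with your PVC name if it differs.

      kubectl get pvc zot-pvc --namespace zot-system --output jsonpath='{.spec.accessModes[0]}'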
    6. Issue the following command to check the deployments in the zot-system namespace.

      kubectl get deploy --namespace zot-system

      Note the deployment name from the output, for example, zot.

      Example output
      NAME   READY   UP-TO-DATE   AVAILABLE   AGE
      zot    1/1     1            1           3h59m
    7. Use the following command to create a script named change-pvc-access-mode.sh.

      The script safely changes the access mode of a Kubernetes PersistentVolumeClaim (PVC) without losing data by temporarily deleting and re-creating the PVC while keeping the underlying PersistentVolume (PV) intact. It scales the associated deployment down during the change and scales it back up afterward. For version 4.7.15, this migration is required for the zot deployment before upgrading.

      cat > change-pvc-access-mode.sh <<'SCRIPT'
      #!/bin/bash

      # Script to change the access mode of a PVC while preserving data in LINSTOR/Piraeus
      # Usage: ./change-pvc-access-mode.sh <namespace> <pvc-name> <deployment-name> <new-access-mode>
      # Example: ./change-pvc-access-mode.sh zot-system zot-pvc zot ReadWriteOnce

      set -e

      # Colors for output
      RED='\033[0;31m'
      GREEN='\033[0;32m'
      YELLOW='\033[1;33m'
      NC='\033[0m' # No Color

      # Check arguments
      if [ "$#" -ne 4 ]; then
        echo -e "${RED}Error: Invalid number of arguments${NC}"
        echo "Usage: $0 <namespace> <pvc-name> <deployment-name> <new-access-mode>"
        echo "Access modes: ReadWriteOnce, ReadWriteMany, ReadOnlyMany"
        exit 1
      fi

      NAMESPACE="$1"
      PVC_NAME="$2"
      DEPLOYMENT_NAME="$3"
      NEW_ACCESS_MODE="$4"

      # Validate access mode
      if [[ ! "$NEW_ACCESS_MODE" =~ ^(ReadWriteOnce|ReadWriteMany|ReadOnlyMany)$ ]]; then
        echo -e "${RED}Error: Invalid access mode. Must be one of: ReadWriteOnce, ReadWriteMany, ReadOnlyMany${NC}"
        exit 1
      fi

      echo -e "${YELLOW}=== PVC Access Mode Migration Script ===${NC}"
      echo "Namespace: $NAMESPACE"
      echo "PVC: $PVC_NAME"
      echo "Deployment: $DEPLOYMENT_NAME"
      echo "New Access Mode: $NEW_ACCESS_MODE"
      echo ""

      # Step 1: Get the name of the PV bound to the PVC
      echo -e "${GREEN}[1/9] Getting PV name...${NC}"
      PV_NAME=$(kubectl get pvc "$PVC_NAME" -n "$NAMESPACE" -o jsonpath='{.spec.volumeName}')
      if [ -z "$PV_NAME" ]; then
        echo -e "${RED}Error: Could not find PV for PVC $PVC_NAME${NC}"
        exit 1
      fi
      echo "PV Name: $PV_NAME"

      # Step 2: Back up the current PVC and PV manifests
      echo -e "${GREEN}[2/9] Backing up current PVC and PV configuration...${NC}"
      kubectl get pvc "$PVC_NAME" -n "$NAMESPACE" -o yaml > "${PVC_NAME}-backup-$(date +%Y%m%d-%H%M%S).yaml"
      kubectl get pv "$PV_NAME" -o yaml > "${PV_NAME}-backup-$(date +%Y%m%d-%H%M%S).yaml"
      echo "Backups created in current directory"

      # Step 3: Set the PV reclaim policy to Retain so the volume survives PVC deletion
      echo -e "${GREEN}[3/9] Setting PV reclaim policy to Retain...${NC}"
      CURRENT_POLICY=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')
      echo "Current reclaim policy: $CURRENT_POLICY"
      if [ "$CURRENT_POLICY" != "Retain" ]; then
        kubectl patch pv "$PV_NAME" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
        echo "Reclaim policy changed to Retain"
      else
        echo "Already set to Retain"
      fi

      # Step 4: Record the current replica count so it can be restored later
      echo -e "${GREEN}[4/9] Getting current deployment replica count...${NC}"
      REPLICA_COUNT=$(kubectl get deployment "$DEPLOYMENT_NAME" -n "$NAMESPACE" -o jsonpath='{.spec.replicas}')
      echo "Current replicas: $REPLICA_COUNT"

      # Step 5: Scale down the deployment to release the volume
      echo -e "${GREEN}[5/9] Scaling down deployment to 0...${NC}"
      kubectl scale deployment "$DEPLOYMENT_NAME" -n "$NAMESPACE" --replicas=0
      echo "Waiting for pods to terminate..."
      kubectl wait --for=delete pod -l app="$DEPLOYMENT_NAME" -n "$NAMESPACE" --timeout=120s 2>/dev/null || true
      sleep 5

      # Step 6: Delete the PVC; the data is preserved in the retained PV
      echo -e "${GREEN}[6/9] Deleting PVC (data preserved in PV)...${NC}"
      kubectl delete pvc "$PVC_NAME" -n "$NAMESPACE"
      echo "Waiting for PV to be Released..."
      sleep 5

      PV_STATUS=$(kubectl get pv "$PV_NAME" -o jsonpath='{.status.phase}')
      echo "PV Status: $PV_STATUS"

      # Step 7: Remove the stale claimRef and update the access mode on the PV
      echo -e "${GREEN}[7/9] Removing claimRef from PV...${NC}"
      kubectl patch pv "$PV_NAME" --type json -p='[{"op": "remove", "path": "/spec/claimRef"}]'

      echo -e "${GREEN}[7/9] Updating PV access mode to $NEW_ACCESS_MODE...${NC}"
      kubectl patch pv "$PV_NAME" -p "{\"spec\":{\"accessModes\":[\"$NEW_ACCESS_MODE\"]}}"

      PV_STATUS=$(kubectl get pv "$PV_NAME" -o jsonpath='{.status.phase}')
      echo "PV Status: $PV_STATUS"

      # Step 8: Re-create the PVC with the new access mode, bound to the same PV
      echo -e "${GREEN}[8/9] Creating new PVC with updated access mode...${NC}"

      # Get original PVC details from the PV
      STORAGE_SIZE=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.capacity.storage}')
      STORAGE_CLASS=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.storageClassName}')
      VOLUME_MODE=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.volumeMode}')

      cat <<EOF | kubectl apply -f -
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: $PVC_NAME
        namespace: $NAMESPACE
        labels:
          app.kubernetes.io/managed-by: Helm
      spec:
        accessModes:
          - $NEW_ACCESS_MODE
        resources:
          requests:
            storage: $STORAGE_SIZE
        storageClassName: $STORAGE_CLASS
        volumeMode: $VOLUME_MODE
        volumeName: $PV_NAME
      EOF

      echo "Waiting for PVC to bind..."
      sleep 5
      kubectl get pvc "$PVC_NAME" -n "$NAMESPACE"

      # Step 9: Restore the original replica count
      echo -e "${GREEN}[9/9] Scaling deployment back to $REPLICA_COUNT replicas...${NC}"
      kubectl scale deployment "$DEPLOYMENT_NAME" -n "$NAMESPACE" --replicas="$REPLICA_COUNT"

      echo ""
      echo -e "${GREEN}=== Migration Complete ===${NC}"
      echo ""
      echo "Verifying final state..."
      kubectl get pvc "$PVC_NAME" -n "$NAMESPACE"
      echo ""
      kubectl get pods -n "$NAMESPACE"
      echo ""
      echo -e "${YELLOW}Note: Wait for pods to be Running and verify your application is working correctly${NC}"
      echo -e "${YELLOW}Backup files created: ${PVC_NAME}-backup-*.yaml and ${PV_NAME}-backup-*.yaml${NC}"
      SCRIPT
    8. Make the script executable.

      chmod u+x change-pvc-access-mode.sh
    9. Execute the script with the following parameters. Replace <pvc-name> and <deployment-name> with the values noted from steps 5 and 6.

      ./change-pvc-access-mode.sh zot-system <pvc-name> <deployment-name> ReadWriteOnce
      Example command
      ./change-pvc-access-mode.sh zot-system zot-pvc zot ReadWriteOnce
      Example output
      === PVC Access Mode Migration Script ===
      Namespace: zot-system
      PVC: zot-pvc
      Deployment: zot
      New Access Mode: ReadWriteOnce

      [1/9] Getting PV name...
      PV Name: pvc-6d603d91-d5f6-459a-b600-0a699cbb4936
      [2/9] Backing up current PVC and PV configuration...
      Backups created in current directory
      [3/9] Setting PV reclaim policy to Retain...
      Current reclaim policy: Delete
      persistentvolume/pvc-6d603d91-d5f6-459a-b600-0a699cbb4936 patched
      Reclaim policy changed to Retain
      [4/9] Getting current deployment replica count...
      Current replicas: 1
      [5/9] Scaling down deployment to 0...
      deployment.apps/zot scaled
      Waiting for pods to terminate...
      [6/9] Deleting PVC (data preserved in PV)...
      persistentvolumeclaim "zot-pvc" deleted from zot-system namespace
      Waiting for PV to be Released...
      PV Status: Released
      [7/9] Removing claimRef from PV...
      persistentvolume/pvc-6d603d91-d5f6-459a-b600-0a699cbb4936 patched
      [7/9] Updating PV access mode to ReadWriteOnce...
      persistentvolume/pvc-6d603d91-d5f6-459a-b600-0a699cbb4936 patched
      PV Status: Available
      [8/9] Creating new PVC with updated access mode...
      persistentvolumeclaim/zot-pvc created
      Waiting for PVC to bind...
      NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
      zot-pvc   Pending   pvc-6d603d91-d5f6-459a-b600-0a699cbb4936   0                         linstor-lvm-storage   <unset>                 5s
      [9/9] Scaling deployment back to 1 replicas...
      deployment.apps/zot scaled

      === Migration Complete ===

      Verifying final state...
      NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
      zot-pvc   Pending   pvc-6d603d91-d5f6-459a-b600-0a699cbb4936   0                         linstor-lvm-storage   <unset>                 7s

      NAME                READY   STATUS    RESTARTS   AGE
      zot-c96cb7b-hspd2   0/1     Pending   0          1s

      Note: Wait for pods to be Running and verify your application is working correctly
      Backup files created: zot-pvc-backup-*.yaml and pvc-6d603d91-d5f6-459a-b600-0a699cbb4936-backup-*.yaml
    10. Verify that the PVC status is Bound and the deployment pods are in the Running state before proceeding with the upgrade.

      kubectl get pvc --namespace zot-system
      Example output
      NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
      zot-pvc   Bound    pvc-6d603d91-d5f6-459a-b600-0a699cbb4936   250Gi      RWO            linstor-lvm-storage   <unset>                 5m30s
      kubectl get pods --namespace zot-system
      Example output
      NAME                READY   STATUS    RESTARTS   AGE
      zot-c96cb7b-hspd2   1/1     Running   0          5m3s
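
      If the pod is not yet Running, you can wait for it to become ready with kubectl. This sketch assumes the zot pods carry the app=zot label, which matches the selector used by the script above.

      kubectl wait --namespace zot-system --for=condition=Ready pod --selector app=zot --timeout=300s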

Upgrade Palette

  1. Navigate to the Artifact Studio in a web browser and log in. Under Install Palette Enterprise, click the drop-down menu and select the version to which you want to upgrade your Palette Management Appliance.

  2. Click Show Artifacts to display the Palette Enterprise Artifacts pop-up window. Click the Download button for the Content bundle (including Ubuntu).

  3. Wait for the content bundle download to complete on your local machine. The bundle is delivered in .tar.zst format alongside a signature file in sig.bin format.

    tip

    Refer to the Artifact Studio guide for detailed guidance on how to verify the integrity of downloaded files using the provided signature file.
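
    As one possible approach, if the signature was produced with Sigstore's cosign, blob verification would resemble the following sketch. The public key file and file names are placeholders; consult the Artifact Studio guide for the authoritative procedure and key.

      cosign verify-blob --key <public-key-file> --signature <signature-file> <content-bundle-file>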

  4. Log in to the Local UI of the leader host of the Palette management cluster. By default, Local UI is accessible at https://<node-ip>:5080. Replace <node-ip> with the IP address of the leader host.

  5. From the left main menu, click Content.

  6. Click Actions in the top right and select Upload Content from the drop-down menu.

  7. Click the Upload icon to open the file selection dialog and select the content bundle file from your local machine. Alternatively, you can drag and drop the file into the upload area.

    The upload process starts automatically once the file is selected. You can monitor the upload progress in the Upload Content dialog.

    Wait for the File(s) uploaded successfully confirmation message or the green check mark to appear next to the upload progress bar.

  8. On the Content page, wait for the content to finish syncing. This is indicated by the Syncing content: (N) items are pending banner that appears to the left of the disk usage indicator. The banner disappears once the sync is complete. Syncing can take several minutes depending on the size of the content bundle and your internal network speed.

  9. From the left main menu, click Cluster and select the Configuration tab.

  10. Click the Update drop-down menu and select Review Changes.

    warning

    Ensure that the configured Zot password matches the password that you used when installing Palette Management Appliance. You cannot access the Zot registry post-upgrade if the passwords do not match.

    If you have forgotten your Zot password, you can connect to your Kubernetes cluster and retrieve it from the Zot secret.
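
    As a hypothetical sketch, the lookup might resemble the following. The secret name and data key are placeholders; list the secrets in the zot-system namespace first to find the actual values.

      kubectl get secrets --namespace zot-system
      kubectl get secret <zot-secret-name> --namespace zot-system --output jsonpath='{.data.<password-key>}' | base64 --decode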

  11. Review the changes for each profile carefully and ensure you are satisfied with the proposed updates. Profiles may change between versions, including the addition or removal of properties.

    warning

    Some upgrade paths require you to re-enter the configuration values you provided during the initial Palette installation. This includes the OCI Pack Registry Password and any other non-default settings.

    Click Confirm Changes once satisfied.

  12. Click Update to start the upgrade process.

During the upgrade process, the Palette system and tenant consoles will be unavailable, and a warning message will be displayed when attempting to log in. You can monitor the upgrade progress in the Overview tab on the Cluster page.

Validate

  1. Log in to the Local UI of the leader host of the Palette management cluster. By default, Local UI is accessible at https://<node-ip>:5080. Replace <node-ip> with the IP address of the leader host.

  2. From the left main menu, click Cluster.

  3. Check that the palette-mgmt-plane pack displays the upgraded version number and is in a Running status.

  4. Verify that you can log in to the Palette system console and no warning message is displayed.

  5. If you have configured a tenant, log in to the tenant console and verify that Palette displays the correct version number.

    Palette version in tenant console