
Scale, Upgrade, and Secure Clusters

Palette has built-in features that help automate Day-2 operations. Upgrading and maintaining a deployed cluster is typically complex because you need to consider any possible impact on service availability. Palette provides out-of-the-box functionality for upgrades, observability, granular Role-Based Access Control (RBAC), backups, and security scans.

This tutorial teaches you how to use the Palette UI to perform scaling and maintenance tasks on your clusters. You will learn how to create Palette projects and teams, import a cluster profile, safely upgrade the Kubernetes version of a deployed cluster, and scale up your cluster nodes. The concepts you learn about in the Getting Started section are centered around a fictional case study company, Spacetastic Ltd.

🧑‍🚀 Back at Spacetastic HQ

The team has been impressed with Palette's capabilities and decides to become a Spectro Cloud customer. The last piece of the puzzle is learning how to handle Day-2 operations, which become increasingly important as the Spacetastic platform matures. They must ensure that their systems are patched, upgraded, scaled, and scanned for vulnerabilities. These maintenance tasks must be automated and applied on a schedule, as the entire team wants to focus on delivering Spacetastic features.

"I've read your report on Palette adoption at Spacetastic." says Meera, who provides the security expertise at Spacetastic. I was impressed with the ability to roll out updates to all clusters using the same cluster profile. This will streamline our system upgrades and cluster patching. Keeping up with security best practices has never been more important, now that we are growing faster than ever!"

"I agree. No matter how safe our coding practices are, we need to periodically review, patch and upgrade our dependencies." says Wren, who leads the engineering team at Spacetastic.

Kai nods, scrolling through the Palette Docs. "Team, Palette has more security and Day-2 operation support than we have explored so far. I will continue their Getting Started section and report back with my findings."

Prerequisites

To complete this tutorial, follow the steps described in the Set up Palette with VMware guide to authenticate Palette for use with your VMware vSphere account.

Follow the steps described in the Deploy a PCG tutorial to deploy a VMware vSphere Private Cloud Gateway (PCG).

Additionally, you should install kubectl locally. Use the Kubernetes Install Tools page for further guidance.
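If kubectl is installed correctly and available in your shell, the command below prints the client version. The exact output depends on the release you installed.

kubectl version --client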

Create Palette Projects

Palette projects help you organize and manage cluster resources, providing logical groupings. They also allow you to manage user access through Role-Based Access Control (RBAC). You can assign users and teams specific roles in specific projects. All resources created within a project are scoped to that project and only available to that project, but a tenant can have multiple projects.

Log in to Palette.

Click on the drop-down Menu at the top of the page and switch to the Tenant Admin scope. Palette provides the Default project out-of-the-box.

Image that shows how to select tenant admin scope

Navigate to the left Main Menu and click on Projects. Click on the Create Project button. The Create a new project dialog appears.

Fill out the input fields with values from the table below to create a project.

| Field | Description | Value |
| --- | --- | --- |
| Name | The name of the project. | Project-ScaleSecureTutorial |
| Description | A brief description of the project. | Project for Scale, Upgrade, and Secure Clusters tutorial. |
| Tags | Add tags to the project. | env:dev |

Click Confirm to create the project. Once Palette finishes creating the project, a new card appears on the Projects page.

Navigate to the left Main Menu and click on Users & Teams.

Select the Teams tab. Then, click on Create Team.

Fill in the Team Name with scale-secure-tutorial-team. Click on Confirm.

Once Palette creates the team, select it from the Teams list. The Team Details pane opens.

On the Project Roles tab, click on New Project Role. The list of project roles appears.

Select the Project-ScaleSecureTutorial from the Projects drop-down. Then, select the Cluster Profile Viewer and Cluster Viewer roles. Click on Confirm.

Image that shows how to select team roles

Any users that you add to this team inherit the project roles assigned to it. Roles are the foundation of Palette's RBAC enforcement. They allow a single user to have different types of access control based on the resource being accessed. In this scenario, any user added to this team will have access to view any cluster profiles and clusters in the Project-ScaleSecureTutorial project, but not modify them. Check out the Palette RBAC section for more details.

Navigate to the left Main Menu and click on Projects.

Click on Open project on the Project-ScaleSecureTutorial card.

Image that shows how to open the tutorial project

Your scope changes from Tenant Admin to Project-ScaleSecureTutorial. All further resources you create will be part of this project.

Import a Cluster Profile

Palette provides three resource contexts. They help you customize your environment to your organizational needs, as well as control the scope of your settings.

| Context | Description |
| --- | --- |
| System | Resources are available at the system level and to all tenants in the system. |
| Tenant | Resources are available at the tenant level and to all projects belonging to the tenant. |
| Project | Resources are available within a project and not available to other projects. |

All of the resources you have created as part of your Getting Started journey have used the Project context. They are only visible in the Default project. Therefore, you will need to create a new cluster profile in Project-ScaleSecureTutorial.

Navigate to the left Main Menu and click on Profiles. Click on Import Cluster Profile. The Import Cluster Profile pane opens.

Paste the following in the text editor. Click on Validate. The Select repositories dialog appears.

{
"metadata": {
"name": "vmware-profile",
"description": "Cluster profile to deploy to VMware.",
"labels": {}
},
"spec": {
"version": "1.0.0",
"template": {
"type": "cluster",
"cloudType": "vsphere",
"packs": [
{
"name": "ubuntu-vsphere",
"type": "spectro",
"layer": "os",
"version": "22.04",
"tag": "22.04",
"values": "# Spectro Golden images includes most of the hardening as per CIS Ubuntu Linux 22.04 LTS Server L1 v1.0.0 standards\n\n# Uncomment below section to\n# 1. Include custom files to be copied over to the nodes and/or\n# 2. Execute list of commands before or after kubeadm init/join is executed\n#\n#kubeadmconfig:\n# preKubeadmCommands:\n# - echo \"Executing pre kube admin config commands\"\n# - update-ca-certificates\n# - 'systemctl restart containerd; sleep 3'\n# - 'while [ ! -S /var/run/containerd/containerd.sock ]; do echo \"Waiting for containerd...\"; sleep 1; done'\n# postKubeadmCommands:\n# - echo \"Executing post kube admin config commands\"\n# files:\n# - targetPath: /usr/local/share/ca-certificates/mycom.crt\n# targetOwner: \"root:root\"\n# targetPermissions: \"0644\"\n# content: |\n# -----BEGIN CERTIFICATE-----\n# MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\n# cm5ldGVzMB4XDTIwMDkyMjIzNDMyM1oXDTMwMDkyMDIzNDgyM1owFTETMBEGA1UE\n# AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMdA\n# nZYs1el/6f9PgV/aO9mzy7MvqaZoFnqO7Qi4LZfYzixLYmMUzi+h8/RLPFIoYLiz\n# qiDn+P8c9I1uxB6UqGrBt7dkXfjrUZPs0JXEOX9U/6GFXL5C+n3AUlAxNCS5jobN\n# fbLt7DH3WoT6tLcQefTta2K+9S7zJKcIgLmBlPNDijwcQsbenSwDSlSLkGz8v6N2\n# 7SEYNCV542lbYwn42kbcEq2pzzAaCqa5uEPsR9y+uzUiJpv5tDHUdjbFT8tme3vL\n# 9EdCPODkqtMJtCvz0hqd5SxkfeC2L+ypaiHIxbwbWe7GtliROvz9bClIeGY7gFBK\n# jZqpLdbBVjo0NZBTJFUCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB\n# /wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBADIKoE0P+aVJGV9LWGLiOhki\n# HFv/vPPAQ2MPk02rLjWzCaNrXD7aPPgT/1uDMYMHD36u8rYyf4qPtB8S5REWBM/Y\n# g8uhnpa/tGsaqO8LOFj6zsInKrsXSbE6YMY6+A8qvv5lPWpJfrcCVEo2zOj7WGoJ\n# ixi4B3fFNI+wih8/+p4xW+n3fvgqVYHJ3zo8aRLXbXwztp00lXurXUyR8EZxyR+6\n# b+IDLmHPEGsY9KOZ9VLLPcPhx5FR9njFyXvDKmjUMJJgUpRkmsuU1mCFC+OHhj56\n# IkLaSJf6z/p2a3YjTxvHNCqFMLbJ2FvJwYCRzsoT2wm2oulnUAMWPI10vdVM+Nc=\n# -----END CERTIFICATE-----",
"registry": {
"metadata": {
"uid": "5eecc89d0b150045ae661cef",
"name": "Public Repo",
"kind": "pack",
"isPrivate": false,
"providerType": ""
}
}
},
{
"name": "kubernetes",
"type": "spectro",
"layer": "k8s",
"version": "1.27.15",
"tag": "1.27.x",
"values": "# spectrocloud.com/enabled-presets: Kube Controller Manager:loopback-ctrlmgr,Kube Scheduler:loopback-scheduler\npack:\n content:\n images:\n - image: registry.k8s.io/coredns/coredns:v1.10.1\n - image: registry.k8s.io/etcd:3.5.12-0\n - image: registry.k8s.io/kube-apiserver:v1.27.15\n - image: registry.k8s.io/kube-controller-manager:v1.27.15\n - image: registry.k8s.io/kube-proxy:v1.27.15\n - image: registry.k8s.io/kube-scheduler:v1.27.15\n - image: registry.k8s.io/pause:3.9\n - image: registry.k8s.io/pause:3.8\n #CIDR Range for Pods in cluster\n # Note : This must not overlap with any of the host or service network\n podCIDR: \"192.168.0.0/16\"\n #CIDR notation IP range from which to assign service cluster IPs\n # Note : This must not overlap with any IP ranges assigned to nodes for pods.\n serviceClusterIpRange: \"10.96.0.0/12\"\n # serviceDomain: \"cluster.local\"\n\nkubeadmconfig:\n apiServer:\n extraArgs:\n # Note : secure-port flag is used during kubeadm init. Do not change this flag on a running cluster\n secure-port: \"6443\"\n anonymous-auth: \"true\"\n profiling: \"false\"\n disable-admission-plugins: \"AlwaysAdmit\"\n default-not-ready-toleration-seconds: \"60\"\n default-unreachable-toleration-seconds: \"60\"\n enable-admission-plugins: \"AlwaysPullImages,NamespaceLifecycle,ServiceAccount,NodeRestriction,PodSecurity\"\n admission-control-config-file: \"/etc/kubernetes/pod-security-standard.yaml\"\n audit-log-path: /var/log/apiserver/audit.log\n audit-policy-file: /etc/kubernetes/audit-policy.yaml\n audit-log-maxage: \"30\"\n audit-log-maxbackup: \"10\"\n audit-log-maxsize: \"100\"\n authorization-mode: RBAC,Node\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n extraVolumes:\n - name: audit-log\n hostPath: /var/log/apiserver\n mountPath: /var/log/apiserver\n pathType: DirectoryOrCreate\n - name: audit-policy\n hostPath: /etc/kubernetes/audit-policy.yaml\n mountPath: /etc/kubernetes/audit-policy.yaml\n readOnly: true\n pathType: File\n - name: pod-security-standard\n hostPath: /etc/kubernetes/pod-security-standard.yaml\n mountPath: /etc/kubernetes/pod-security-standard.yaml\n readOnly: true\n pathType: File\n controllerManager:\n extraArgs:\n profiling: \"false\"\n terminated-pod-gc-threshold: \"25\"\n use-service-account-credentials: \"true\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n scheduler:\n extraArgs:\n profiling: \"false\"\n kubeletExtraArgs:\n read-only-port : \"0\"\n event-qps: \"0\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n protect-kernel-defaults: \"true\"\n rotate-server-certificates: \"true\"\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n files:\n - path: hardening/audit-policy.yaml\n targetPath: /etc/kubernetes/audit-policy.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - path: hardening/90-kubelet.conf\n targetPath: /etc/sysctl.d/90-kubelet.conf\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - targetPath: /etc/kubernetes/pod-security-standard.yaml\n 
targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n content: |\n apiVersion: apiserver.config.k8s.io/v1\n kind: AdmissionConfiguration\n plugins:\n - name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"v1.27\"\n audit: \"baseline\"\n audit-version: \"v1.27\"\n warn: \"restricted\"\n warn-version: \"v1.27\"\n audit: \"restricted\"\n audit-version: \"v1.27\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: [kube-system]\n\n preKubeadmCommands:\n # For enabling 'protect-kernel-defaults' flag to kubelet, kernel parameters changes are required\n - 'echo \"====> Applying kernel parameters for Kubelet\"'\n - 'sysctl -p /etc/sysctl.d/90-kubelet.conf'\n postKubeadmCommands:\n - 'chmod 600 /var/lib/kubelet/config.yaml'\n #- 'echo \"List of post kubeadm commands to be executed\"'\n\n# Client configuration to add OIDC based authentication flags in kubeconfig\n#clientConfig:\n #oidc-issuer-url: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-issuer-url }}\"\n #oidc-client-id: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-client-id }}\"\n #oidc-client-secret: 1gsranjjmdgahm10j8r6m47ejokm9kafvcbhi3d48jlc3rfpprhv\n #oidc-extra-scope: profile,email",
"registry": {
"metadata": {
"uid": "5eecc89d0b150045ae661cef",
"name": "Public Repo",
"kind": "pack",
"isPrivate": false,
"providerType": ""
}
}
},
{
"name": "cni-calico",
"type": "spectro",
"layer": "cni",
"version": "3.27.2",
"tag": "3.27.x",
"values": "# spectrocloud.com/enabled-presets: Microk8s:microk8s-false\npack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/cni:v3.27.2\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/node:v3.27.2\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/kube-controllers:v3.27.2\n\nmanifests:\n calico:\n microk8s: \"false\"\n images:\n cni: \"\"\n node: \"\"\n kubecontroller: \"\"\n # IPAM type to use. Supported types are calico-ipam, host-local\n ipamType: \"calico-ipam\"\n\n calico_ipam:\n assign_ipv4: true\n assign_ipv6: false\n\n # Should be one of CALICO_IPV4POOL_IPIP or CALICO_IPV4POOL_VXLAN \n encapsulationType: \"CALICO_IPV4POOL_IPIP\"\n\n # Should be one of Always, CrossSubnet, Never\n encapsulationMode: \"Always\"\n\n env:\n # Additional env variables for calico-node\n calicoNode:\n #IPV6: \"autodetect\"\n #FELIX_IPV6SUPPORT: \"true\"\n #CALICO_IPV6POOL_NAT_OUTGOING: \"true\"\n #CALICO_IPV4POOL_CIDR: \"192.168.0.0/16\"\n #IP_AUTODETECTION_METHOD: \"first-found\"\n\n # Additional env variables for calico-kube-controller deployment\n calicoKubeControllers:\n #LOG_LEVEL: \"info\"\n #SYNC_NODE_LABELS: \"true\"",
"registry": {
"metadata": {
"uid": "5eecc89d0b150045ae661cef",
"name": "Public Repo",
"kind": "pack",
"isPrivate": false,
"providerType": ""
}
}
},
{
"name": "csi-vsphere-csi",
"type": "spectro",
"layer": "csi",
"version": "3.1.2",
"tag": "3.1.x",
"values": "pack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.28.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.22.9\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.23.5\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.26.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.24.6\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.25.3\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.27.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-attacher:v4.3.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-resizer:v1.8.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/livenessprobe:v2.10.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-provisioner:v3.5.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-snapshotter:v6.2.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-driver:v3.1.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-syncer:v3.1.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-node-driver-registrar:v2.8.0\n\nmanifests:\n #Storage class config\n vsphere:\n\n #Toggle for Default class\n isDefaultClass: \"true\"\n\n #Specifies file system type\n fstype: \"ext4\"\n\n #Allowed reclaim policies are Delete, Retain\n reclaimPolicy: \"Delete\"\n\n #Specifies the URL of the datastore on which the container volume needs to be provisioned.\n datastoreURL: \"\"\n\n #Specifies the storage policy for datastores on which the container volume needs to be provisioned.\n storagePolicyName: \"\"\n\n volumeBindingMode: \"WaitForFirstConsumer\"\n\n #Set this flag to true to enable volume expansion\n allowVolumeExpansion: true\n\n vsphere-cloud-controller-manager:\n k8sVersion: \"{{ .spectro.system.kubernetes.version }}\"\n # Override CPI image\n image: \"\"\n extraArgs:\n - \"--cloud-provider=vsphere\"\n - \"--v=2\"\n - \"--cloud-config=/etc/cloud/vsphere.conf\"\n\n vsphere-csi-driver:\n replicas: 3\n livenessProbe:\n csiController:\n initialDelaySeconds: 30\n timeoutSeconds: 10\n periodSeconds: 180\n failureThreshold: 3\n # Override CSI component images\n csiAttacherImage: \"\"\n csiResizerImage: \"\"\n csiControllerImage: \"\"\n csiLivenessProbeImage: \"\"\n csiSyncerImage: \"\"\n csiProvisionerImage: \"\"\n csiSnapshotterImage: \"\"\n nodeDriverRegistrarImage: \"\"\n vsphereCsiNodeImage: \"\"\n extraArgs:\n csiAttacher:\n - \"--v=4\"\n - \"--timeout=300s\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n csiResizer:\n - \"--v=4\"\n - \"--timeout=300s\"\n - \"--handle-volume-inuse-error=false\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"\n csiController:\n - \"--fss-name=internal-feature-states.csi.vsphere.vmware.com\"\n - \"--fss-namespace=$(CSI_NAMESPACE)\"\n csiLivenessProbe:\n - \"--v=4\"\n - \"--csi-address=/csi/csi.sock\"\n csiSyncer:\n - 
\"--leader-election\"\n - \"--leader-election-lease-duration=30s\"\n - \"--leader-election-renew-deadline=20s\"\n - \"--leader-election-retry-period=10s\"\n - \"--fss-name=internal-feature-states.csi.vsphere.vmware.com\"\n - \"--fss-namespace=$(CSI_NAMESPACE)\"\n csiProvisioner:\n - \"--v=4\"\n - \"--timeout=300s\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"\n - \"--default-fstype=ext4\"\n # needed only for topology aware setup\n - \"--feature-gates=Topology=true\"\n - \"--strict-topology\"\n csiSnapshotter:\n - \"--v=4\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n - \"--timeout=300s\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"",
"registry": {
"metadata": {
"uid": "5eecc89d0b150045ae661cef",
"name": "Public Repo",
"kind": "pack",
"isPrivate": false,
"providerType": ""
}
}
},
{
"name": "lb-metallb-helm",
"type": "spectro",
"layer": "addon",
"version": "0.14.8",
"tag": "0.14.8",
"values": "pack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/controller:v0.14.8\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/speaker:v0.14.8\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/frr:9.1.0\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/kube-rbac-proxy:v0.12.0\n charts:\n - repo: https://metallb.github.io/metallb\n name: metallb\n version: 0.14.8\n namespace: metallb-system\n namespaceLabels:\n \"metallb-system\": \"pod-security.kubernetes.io/enforce=privileged,pod-security.kubernetes.io/enforce-version=v{{ .spectro.system.kubernetes.version | substr 0 4 }}\" # Do not change this namespace, since CRDs expect the namespace to be metallb-system\n spectrocloud.com/install-priority: 0\n\ncharts:\n metallb-full:\n configuration:\n ipaddresspools:\n first-pool:\n spec:\n addresses:\n - 192.168.10.0/24\n # - 192.168.100.50-192.168.100.60\n avoidBuggyIPs: true\n autoAssign: true\n\n l2advertisements:\n default:\n spec:\n ipAddressPools:\n - first-pool\n\n bgpadvertisements: {}\n # external:\n # spec:\n # ipAddressPools:\n # - bgp-pool\n # # communities:\n # # - vpn-only\n\n bgppeers: {}\n # bgp-peer-1:\n # spec:\n # myASN: 64512\n # peerASN: 64512\n # peerAddress: 172.30.0.3\n # peerPort: 180\n # # BFD profiles can only be used in FRR mode\n # # bfdProfile: bfd-profile-1\n\n communities: {}\n # community-1:\n # spec:\n # communities:\n # - name: vpn-only\n # value: 1234:1\n\n bfdprofiles: {}\n # bfd-profile-1:\n # spec:\n # receiveInterval: 380\n # transmitInterval: 270\n\n metallb:\n # Default values for metallb.\n # This is a YAML-formatted file.\n # Declare variables to be passed into your templates.\n\n imagePullSecrets: []\n nameOverride: \"\"\n fullnameOverride: \"\"\n loadBalancerClass: \"\"\n\n # To configure MetalLB, you must specify ONE of the following two\n # options.\n\n rbac:\n # create specifies whether to install and use RBAC rules.\n create: true\n\n prometheus:\n # scrape annotations specifies whether to add Prometheus metric\n # auto-collection annotations to pods. See\n # https://github.com/prometheus/prometheus/blob/release-2.1/documentation/examples/prometheus-kubernetes.yml\n # for a corresponding Prometheus configuration. Alternatively, you\n # may want to use the Prometheus Operator\n # (https://github.com/coreos/prometheus-operator) for more powerful\n # monitoring configuration. If you use the Prometheus operator, this\n # can be left at false.\n scrapeAnnotations: false\n\n # port both controller and speaker will listen on for metrics\n metricsPort: 7472\n\n # if set, enables rbac proxy on the controller and speaker to expose\n # the metrics via tls.\n # secureMetricsPort: 9120\n\n # the name of the secret to be mounted in the speaker pod\n # to expose the metrics securely. If not present, a self signed\n # certificate to be used.\n speakerMetricsTLSSecret: \"\"\n\n # the name of the secret to be mounted in the controller pod\n # to expose the metrics securely. 
If not present, a self signed\n # certificate to be used.\n controllerMetricsTLSSecret: \"\"\n\n # prometheus doens't have the permission to scrape all namespaces so we give it permission to scrape metallb's one\n rbacPrometheus: true\n\n # the service account used by prometheus\n # required when \" .Values.prometheus.rbacPrometheus == true \" and \" .Values.prometheus.podMonitor.enabled=true or prometheus.serviceMonitor.enabled=true \"\n serviceAccount: \"\"\n\n # the namespace where prometheus is deployed\n # required when \" .Values.prometheus.rbacPrometheus == true \" and \" .Values.prometheus.podMonitor.enabled=true or prometheus.serviceMonitor.enabled=true \"\n namespace: \"\"\n\n # the image to be used for the kuberbacproxy container\n rbacProxy:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/kube-rbac-proxy\n tag: v0.12.0\n pullPolicy:\n\n # Prometheus Operator PodMonitors\n podMonitor:\n # enable support for Prometheus Operator\n enabled: false\n\n # optional additionnal labels for podMonitors\n additionalLabels: {}\n\n # optional annotations for podMonitors\n annotations: {}\n\n # Job label for scrape target\n jobLabel: \"app.kubernetes.io/name\"\n\n # Scrape interval. If not set, the Prometheus default scrape interval is used.\n interval:\n\n # \tmetric relabel configs to apply to samples before ingestion.\n metricRelabelings: []\n # - action: keep\n # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'\n # sourceLabels: [__name__]\n\n # \trelabel configs to apply to samples before ingestion.\n relabelings: []\n # - sourceLabels: [__meta_kubernetes_pod_node_name]\n # separator: ;\n # regex: ^(.*)$\n # target_label: nodename\n # replacement: $1\n # action: replace\n\n # Prometheus Operator ServiceMonitors. To be used as an alternative\n # to podMonitor, supports secure metrics.\n serviceMonitor:\n # enable support for Prometheus Operator\n enabled: false\n\n speaker:\n # optional additional labels for the speaker serviceMonitor\n additionalLabels: {}\n # optional additional annotations for the speaker serviceMonitor\n annotations: {}\n # optional tls configuration for the speaker serviceMonitor, in case\n # secure metrics are enabled.\n tlsConfig:\n insecureSkipVerify: true\n\n controller:\n # optional additional labels for the controller serviceMonitor\n additionalLabels: {}\n # optional additional annotations for the controller serviceMonitor\n annotations: {}\n # optional tls configuration for the controller serviceMonitor, in case\n # secure metrics are enabled.\n tlsConfig:\n insecureSkipVerify: true\n\n # Job label for scrape target\n jobLabel: \"app.kubernetes.io/name\"\n\n # Scrape interval. 
If not set, the Prometheus default scrape interval is used.\n interval:\n\n # \tmetric relabel configs to apply to samples before ingestion.\n metricRelabelings: []\n # - action: keep\n # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'\n # sourceLabels: [__name__]\n\n # \trelabel configs to apply to samples before ingestion.\n relabelings: []\n # - sourceLabels: [__meta_kubernetes_pod_node_name]\n # separator: ;\n # regex: ^(.*)$\n # target_label: nodename\n # replacement: $1\n # action: replace\n\n # Prometheus Operator alertmanager alerts\n prometheusRule:\n # enable alertmanager alerts\n enabled: false\n\n # optional additionnal labels for prometheusRules\n additionalLabels: {}\n\n # optional annotations for prometheusRules\n annotations: {}\n\n # MetalLBStaleConfig\n staleConfig:\n enabled: true\n labels:\n severity: warning\n\n # MetalLBConfigNotLoaded\n configNotLoaded:\n enabled: true\n labels:\n severity: warning\n\n # MetalLBAddressPoolExhausted\n addressPoolExhausted:\n enabled: true\n labels:\n severity: alert\n\n addressPoolUsage:\n enabled: true\n thresholds:\n - percent: 75\n labels:\n severity: warning\n - percent: 85\n labels:\n severity: warning\n - percent: 95\n labels:\n severity: alert\n\n # MetalLBBGPSessionDown\n bgpSessionDown:\n enabled: true\n labels:\n severity: alert\n\n extraAlerts: []\n\n # controller contains configuration specific to the MetalLB cluster\n # controller.\n controller:\n enabled: true\n # -- Controller log level. Must be one of: `all`, `debug`, `info`, `warn`, `error` or `none`\n logLevel: info\n # command: /controller\n # webhookMode: enabled\n image:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/controller\n tag: v0.14.8\n pullPolicy:\n ## @param controller.updateStrategy.type Metallb controller deployment strategy type.\n ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy\n ## e.g:\n ## strategy:\n ## type: RollingUpdate\n ## rollingUpdate:\n ## maxSurge: 25%\n ## maxUnavailable: 25%\n ##\n strategy:\n type: RollingUpdate\n serviceAccount:\n # Specifies whether a ServiceAccount should be created\n create: true\n # The name of the ServiceAccount to use. If not set and create is\n # true, a name is generated using the fullname template\n name: \"\"\n annotations: {}\n securityContext:\n runAsNonRoot: true\n # nobody\n runAsUser: 65534\n fsGroup: 65534\n resources: {}\n # limits:\n # cpu: 100m\n # memory: 100Mi\n nodeSelector: {}\n tolerations: []\n priorityClassName: \"\"\n runtimeClassName: \"\"\n affinity: {}\n podAnnotations: {}\n labels: {}\n livenessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n readinessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n tlsMinVersion: \"VersionTLS12\"\n tlsCipherSuites: \"\"\n\n extraContainers: []\n\n # speaker contains configuration specific to the MetalLB speaker\n # daemonset.\n speaker:\n enabled: true\n # command: /speaker\n # -- Speaker log level. 
Must be one of: `all`, `debug`, `info`, `warn`, `error` or `none`\n logLevel: info\n tolerateMaster: true\n memberlist:\n enabled: true\n mlBindPort: 7946\n mlBindAddrOverride: \"\"\n mlSecretKeyPath: \"/etc/ml_secret_key\"\n excludeInterfaces:\n enabled: true\n # ignore the exclude-from-external-loadbalancer label (required for 1-node clusters are all-control-plane clusters)\n ignoreExcludeLB: false\n\n image:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/speaker\n tag: v0.14.8\n pullPolicy:\n ## @param speaker.updateStrategy.type Speaker daemonset strategy type\n ## ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/\n ##\n updateStrategy:\n ## StrategyType\n ## Can be set to RollingUpdate or OnDelete\n ##\n type: RollingUpdate\n serviceAccount:\n # Specifies whether a ServiceAccount should be created\n create: true\n # The name of the ServiceAccount to use. If not set and create is\n # true, a name is generated using the fullname template\n name: \"\"\n annotations: {}\n securityContext: {}\n ## Defines a secret name for the controller to generate a memberlist encryption secret\n ## By default secretName: {{ \"metallb.fullname\" }}-memberlist\n ##\n # secretName:\n resources: {}\n # limits:\n # cpu: 100m\n # memory: 100Mi\n nodeSelector: {}\n tolerations: []\n priorityClassName: \"\"\n affinity: {}\n ## Selects which runtime class will be used by the pod.\n runtimeClassName: \"\"\n podAnnotations: {}\n labels: {}\n livenessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n readinessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n startupProbe:\n enabled: true\n failureThreshold: 30\n periodSeconds: 5\n # frr contains configuration specific to the MetalLB FRR container,\n # for speaker running alongside FRR.\n frr:\n enabled: false\n image:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/frr\n tag: 9.1.0\n pullPolicy:\n metricsPort: 7473\n resources: {}\n # if set, enables a rbac proxy sidecar container on the speaker to\n # expose the frr metrics via tls.\n # secureMetricsPort: 9121\n\n\n reloader:\n resources: {}\n\n frrMetrics:\n resources: {}\n\n extraContainers: []\n\n crds:\n enabled: true\n validationFailurePolicy: Fail\n\n # frrk8s contains the configuration related to using an frrk8s instance\n # (github.com/metallb/frr-k8s) as the backend for the BGP implementation.\n # This allows configuring additional frr parameters in combination to those\n # applied by MetalLB.\n frrk8s:\n # if set, enables frrk8s as a backend. This is mutually exclusive to frr\n # mode.\n enabled: false\n external: false\n namespace: \"\"",
"registry": {
"metadata": {
"uid": "5eecc89d0b150045ae661cef",
"name": "Public Repo",
"kind": "pack",
"isPrivate": false,
"providerType": ""
}
}
},
{
"name": "hello-universe",
"type": "oci",
"layer": "addon",
"version": "1.2.0",
"tag": "1.2.0",
"values": "# spectrocloud.com/enabled-presets: Backend:disable-api\npack:\n content:\n images:\n - image: ghcr.io/spectrocloud/hello-universe:1.2.0\n spectrocloud.com/install-priority: 0\n\nmanifests:\n hello-universe:\n images:\n hellouniverse: ghcr.io/spectrocloud/hello-universe:1.2.0\n apiEnabled: false\n namespace: hello-universe\n port: 8080\n replicas: 1",
"registry": {
"metadata": {
"uid": "64eaff5630402973c4e1856a",
"name": "Palette Community Registry",
"kind": "oci",
"isPrivate": true,
"providerType": "pack"
}
}
}
]
},
"variables": []
}
}

Click on Confirm. Then, click on Confirm on the Import Cluster Profile pane. Palette creates a new cluster profile named vmware-profile.

On the Profiles list, select Project from the Contexts drop-down. Your newly created cluster profile displays. The Palette UI confirms that the cluster profile was created in the scope of the Project-ScaleSecureTutorial.

Image that shows the cluster profile

Select the cluster profile to view its details. The cluster profile summary appears.

This cluster profile deploys the Hello Universe application using a pack. Click on the hellouniverse 1.2.0 layer. The pack manifest editor appears.

Click on Presets on the right-hand side. You can learn more about the pack presets on the pack README, which is available in the Palette UI. Select the Enable Hello Universe API preset. The pack manifest changes accordingly.

Screenshot of pack presets

When using this preset, the pack requires you to replace two values: the authorization token and the database password. Replace these values with your own base64-encoded values. The hello-universe repository provides a token that you can use.
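If you need to produce the base64-encoded values yourself, you can encode them on the command line. The token and password below are placeholders, not values from the hello-universe repository; the -n flag prevents a trailing newline from being encoded.

echo -n "<YOUR_AUTH_TOKEN>" | base64
echo -n "<YOUR_DATABASE_PASSWORD>" | base64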

Click on Confirm Updates. The manifest editor closes.

Click on the lb-metallb-helm layer. The pack manifest editor appears.

Replace the predefined 192.168.10.0/24 IP CIDR listed below the addresses line with a valid IP address or IP range from your VMware environment to be assigned to your load balancer.
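For orientation, the relevant section of the pack values resembles the snippet below. The 192.168.10.0/24 CIDR and the commented-out range are only example values to be replaced with addresses from your own environment.

charts:
  metallb-full:
    configuration:
      ipaddresspools:
        first-pool:
          spec:
            addresses:
              - 192.168.10.0/24
              # - 192.168.100.50-192.168.100.60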

Metallb Helm-based pack.

Click on Confirm Updates. The manifest editor closes. Then, click on Save Changes to save your updates.

Deploy a Cluster

Navigate to the left Main Menu and select Clusters. Click on Create Cluster.

Palette will prompt you to select the type of cluster. Select VMware and click on Start VMware Configuration.

Continue with the rest of the cluster deployment flow using the cluster profile you created in the Import a Cluster Profile section, named vmware-profile. Refer to the Deploy a Cluster tutorial for additional guidance or if you need a refresher of the Palette deployment flow.

Verify the Application

Navigate to the left Main Menu and select Clusters.

Select your cluster to view its Overview tab.

When the application is deployed and ready for network traffic, Palette exposes the service URL in the Services field. Click on the URL for port :8080 to access the Hello Universe application.

Cluster details page with service URL highlighted

Upgrade Kubernetes Versions

Regularly upgrading your Kubernetes version is an important part of maintaining a good security posture. New versions may contain important patches to security vulnerabilities and bugs that could affect the integrity and availability of your clusters.

Palette supports at least three minor Kubernetes versions at any given time: the current release and the three previous minor version releases, also known as N-3. For example, if the current release is 1.29, Palette also supports 1.28, 1.27, and 1.26.

warning

Once you upgrade your cluster to a new Kubernetes version, you will not be able to downgrade.

We recommend using cluster profile versions to safely upgrade any layer of your cluster profile and maintain the security of your clusters. Expand the following section to learn how to create a new cluster profile version with a Kubernetes upgrade.

Upgrade Kubernetes using Cluster Profile Versions

Navigate to the left Main Menu and click on Profiles. Select the cluster profile that you used to deploy your cluster, named vmware-profile. The cluster profile details page appears.

Click on the version drop-down and select Create new version. The version creation dialog appears.

Fill in 1.1.0 in the Version input field. Then, click on Confirm. The new cluster profile version is created with the same layers as version 1.0.0.

Select the kubernetes 1.27.x layer of the profile. The pack manifest editor appears.

Click on the Pack Version dropdown. All of the available versions of the Palette eXtended Kubernetes pack appear. The cluster profile is configured to use the latest patch version of Kubernetes 1.27.

Cluster profile with all Kubernetes versions

The official guidelines for Kubernetes upgrades recommend upgrading one minor version at a time. For example, if you are using Kubernetes version 1.26, you should upgrade to 1.27 before upgrading to version 1.28. You can learn more about the official Kubernetes upgrade guidelines in the Version Skew Policy page.

Select 1.28.x from the version dropdown. This selection follows the Kubernetes upgrade guidelines as the cluster profile is using 1.27.x.

The manifest editor highlights the changes made by this upgrade. Once you have verified that the upgrade changes versions as expected, click on Confirm changes.

Click on Confirm Updates. Then, click on Save Changes to persist your updates.

Navigate to the left Main Menu and select Clusters. Select your cluster to view its Overview tab.

Select the Profile tab. Your cluster is currently using the 1.0.0 version of your cluster profile.

Change the cluster profile version by selecting 1.1.0 from the version drop-down. Click on Review & Save. The Changes Summary dialog appears.

Click on Review changes in Editor. The Review Update Changes dialog displays the same Kubernetes version upgrades as the cluster profile editor previously did. Click on Update.

Upgrading the Kubernetes version of your cluster modifies an infrastructure layer. Therefore, Kubernetes needs to replace its nodes. This is known as a repave. Check out the Node Pools page to learn more about the repave behavior and configuration.

Click on the Nodes tab. You can follow along with the node upgrades on this screen. Palette replaces the nodes configured with the old Kubernetes version with newly upgraded ones. This may affect the performance of your application, as Kubernetes swaps the workloads to the upgraded nodes.
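If you have already downloaded your cluster's kubeconfig file and pointed kubectl at it, as described later in the Scale a Cluster section, you can optionally follow the repave from a terminal as well. The command below streams node status changes as old nodes are drained and replaced.

kubectl get nodes --watch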

Node repaves in progress

Verify the Application

The cluster update completes when the Palette UI marks the cluster profile layers as green and the cluster is in a Healthy state. The cluster Overview page also displays the Kubernetes version as 1.28. Click on the URL for port :8080 to access the application and verify that your upgraded cluster is functional.

Kubernetes upgrade applied

Scan Clusters

Palette provides compliance, security, conformance, and Software Bill of Materials (SBOM) scans on tenant clusters. These scans ensure cluster adherence to specific compliance and security standards, as well as detect potential vulnerabilities. You can perform four types of scans on your cluster.

| Scan | Description |
| --- | --- |
| Kubernetes Configuration Security | This scan examines the compliance of deployed security features against the CIS Kubernetes Benchmarks, which are consensus-driven security guidelines for Kubernetes. By default, the test set executes based on the cluster Kubernetes version. |
| Kubernetes Penetration Testing | This scan evaluates Kubernetes-related open ports for any configuration issues that can leave the tenant clusters exposed to attackers. It hunts for security issues in your clusters and increases visibility of the security controls in your Kubernetes environments. |
| Kubernetes Conformance Testing | This scan validates your Kubernetes configuration to ensure that it conforms to CNCF specifications. Palette leverages an open-source tool called Sonobuoy to perform this scan. |
| Software Bill of Materials (SBOM) | This scan details the various third-party components and dependencies used by your workloads and helps to manage security and compliance risks associated with those components. |

Navigate to the left Main Menu and select Clusters. Select your cluster to view its Overview tab.

Select the Scan tab. The list of all the available cluster scans appears. Palette indicates that you have never scanned your cluster.

Scans never performed on the cluster

Click Run Scan on the Kubernetes configuration security and Kubernetes penetration testing scans. Palette schedules and executes these scans on your cluster, which may take a few minutes. Once they complete, you can download the reports in PDF or CSV format, or view the results directly in the Palette UI.

Scans completed on the cluster

Click on Configure Scan on the Software Bill of Materials (SBOM) scan. The Configure SBOM Scan dialog appears.

Leave the default selections on this screen and click on Confirm. Optionally, you can configure an S3 bucket to save your report into. Refer to the Configure an SBOM Scan guide to learn more about the configuration options of this scan.

Once the scan completes, click on the report to view it within the Palette UI. The third-party dependencies that your workloads rely on are evaluated for potential security vulnerabilities. Reviewing the SBOM enables organizations to track vulnerabilities, perform regular software maintenance, and ensure compliance with regulatory requirements.

info

The scan reports highlight any failed checks, based on Kubernetes community standards and CNCF requirements. We recommend that you prioritize the rectification of any identified issues.

As you have seen so far, Palette scans are crucial when maintaining your security posture. Palette provides the ability to schedule your scans and periodically evaluate your clusters. In addition, it keeps a history of previous scans for comparison purposes. Expand the following section to learn how to configure scan schedules for your cluster.

Configure Cluster Scan Schedules

Click on Settings. Then, select Cluster Settings. The Settings pane appears.

Select the Schedule Scans option. You can configure schedules for your cluster scans. Palette provides common scan schedules, or you can provide a custom time. We recommend choosing a schedule when you expect the usage of your cluster to be lowest. Otherwise, the scans may impact the performance of your nodes.
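As a point of reference that you should verify against your Palette version, the custom schedule option typically accepts a standard five-field cron expression. For example, the expression below would run a scan every Sunday at 2:00 AM.

0 2 * * 0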

Scan schedules

Palette will automatically scan your cluster according to your configured schedule.

Scale a Cluster

A node pool is a group of nodes within a cluster that all have the same configuration. You can use node pools for different workloads. For example, you can create a node pool for your production workloads and another for your development workloads. You can update node pools for active clusters or create a new one for the cluster.

Navigate to the left Main Menu and select Clusters. Select your cluster to view its Overview tab.

Select the Nodes tab. Your cluster has a control-plane-pool and a worker-pool. Each pool contains one node.

Select the Overview tab. Download the kubeconfig file.

kubeconfig download

Open a terminal window and set the environment variable KUBECONFIG to point to the file you downloaded.

export KUBECONFIG=~/Downloads/admin.vmware-cluster.kubeconfig

Execute the following command in your terminal to view the nodes of your cluster.

kubectl get nodes

The output reveals two nodes, one for the worker pool and one for the control plane. Make a note of the name of your worker node, which is the node that does not have the control-plane role. In the example below, vmware-cluster-worker-pool-7d6d76b55b-dhffq is the name of the worker node.

NAME                                          STATUS   ROLES           AGE   VERSION
vmware-cluster-cp-xcqlw                       Ready    control-plane   28m   v1.28.13
vmware-cluster-worker-pool-7d6d76b55b-dhffq   Ready    <none>          28m   v1.28.13

The Hello Universe pack deploys three pods in the hello-universe namespace. Execute the following command to verify where these pods have been scheduled.

kubectl get pods --namespace hello-universe --output wide

The output verifies that all of the pods have been scheduled on the worker node you made a note of previously.

NAME                        READY   STATUS    AGE   NODE
api-7db799cf85-5w5l6        1/1     Running   20m   vmware-cluster-worker-pool-7d6d76b55b-dhffq
postgres-698d7ff8f4-vbktf   1/1     Running   20m   vmware-cluster-worker-pool-7d6d76b55b-dhffq
ui-5f777c76df-pplcv         1/1     Running   20m   vmware-cluster-worker-pool-7d6d76b55b-dhffq

Navigate back to the Palette UI in your browser. Select the Nodes tab.

Click on New Node Pool. The Add node pool dialog appears. This workflow allows you to create a new worker pool for your cluster. Fill in the following configuration.

| Field | Value | Description |
| --- | --- | --- |
| Node pool name | worker-pool-2 | The name of your worker pool. |
| Enable Autoscaler | Enabled | Whether Palette should scale the pool horizontally based on its per-node workload counts. The Minimum size parameter specifies the lower bound of nodes in the pool and the Maximum size specifies the upper bound. By default, Minimum size is 1 and Maximum size is 3. |
| CPU | 4 cores | Set the number of CPUs equal to the already provisioned nodes. |
| Memory | 8 GB | Set the memory allocation equal to the already provisioned nodes. |
| Disk | 60 GB | Set the disk space equal to the already provisioned nodes. |

Next, populate the Compute cluster, Resource Pool, Datastore, and Network fields according to your VMware vSphere environment.

Click on Confirm. The dialog closes. Palette begins provisioning your node pool. Once the process completes, your three node pools appear in a healthy state.

New worker pool provisioned

Navigate back to your terminal and execute the following command to view the nodes of your cluster.

kubectl get nodes

The output reveals three nodes, two for worker pools and one for the control plane. Make a note of the names of your worker nodes. In the example below, vmware-cluster-worker-pool-7d6d76b55b-dhffq and vmware-cluster-worker-pool-2-5b4b559f6d-znbtm are the worker nodes.

NAME                                            STATUS   ROLES           AGE   VERSION
vmware-cluster-cp-xcqlw                         Ready    control-plane   58m   v1.28.13
vmware-cluster-worker-pool-2-5b4b559f6d-znbtm   Ready    <none>          30m   v1.28.13
vmware-cluster-worker-pool-7d6d76b55b-dhffq     Ready    <none>          58m   v1.28.13

It is common to dedicate node pools to a particular type of workload. One way to specify this is through the use of Kubernetes taints and tolerations.

Taints provide nodes with the ability to repel a set of pods, allowing you to mark nodes as unavailable for certain pods. Tolerations are applied to pods and allow the pods to schedule onto nodes with matching taints. Once configured, nodes do not accept any pods that do not tolerate the taints.
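For reference only, the commands below show how a taint like the one you will configure later in this section could be applied and removed with kubectl directly. In this tutorial, you apply the taint through the Palette node pool settings instead, and the node name here is a placeholder.

# Apply the taint to a node (placeholder node name).
kubectl taint nodes <worker-node-name> app=ui:NoExecute

# Remove the same taint (note the trailing dash).
kubectl taint nodes <worker-node-name> app=ui:NoExecute-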

The animation below provides a visual representation of how taints and tolerations can be used to specify which workloads execute on which nodes.

Taints repel pods to a new node

Switch back to Palette in your web browser. Navigate to the left Main Menu and select Profiles. Select the cluster profile deployed to your cluster, named vmware-profile. Ensure that the 1.1.0 version is selected.

Click on the hellouniverse 1.2.0 layer. The manifest editor appears. Set the manifests.hello-universe.ui.useTolerations field on line 20 to true. Then, set the manifests.hello-universe.ui.tolerations.effect field on line 22 to NoExecute. This toleration specifies that the UI pods of Hello Universe tolerate the taint with the key app, value ui, and effect NoExecute. The tolerations of the UI pods should be as below.

ui:
  useTolerations: true
  tolerations:
    effect: NoExecute
    key: app
    value: ui

Click on Confirm Updates. The manifest editor closes. Then, click on Save Changes to persist your changes.

Navigate to the left Main Menu and select Clusters. Select your deployed cluster, named vmware-cluster.

Due to the changes you have made to the cluster profile, this cluster has a pending update. Click on Updates. The Changes Summary dialog appears.

Click on Review Changes in Editor. The Review Update Changes dialog appears. The toleration changes appear as incoming configuration.

Click on Apply Changes to apply the update to your cluster.

Select the Nodes tab. Click on Edit on the first worker pool, named worker-pool. The Edit node pool dialog appears.

Click on Add New Taint in the Taints section. Fill in app for the Key, ui for the Value and select NoExecute for the Effect. These values match the toleration you specified in your cluster profile earlier.

Add taint to worker pool

Click on Confirm to save your changes. The nodes in the worker-pool can now only execute the UI pods that have a toleration matching the configured taint.
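To double-check the taint from the command line, you can describe the worker node. The node name below is the example name from the earlier output, so substitute the name of your own worker node.

kubectl describe node vmware-cluster-worker-pool-7d6d76b55b-dhffq | grep -i taints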

Switch back to your terminal. Execute the following command again to verify where the Hello Universe pods have been scheduled.

kubectl get pods --namespace hello-universe --output wide

The output verifies that the UI pods have remained scheduled on their original node named vmware-cluster-worker-pool-7d6d76b55b-dhffq, while the other two pods have been moved to the node of the second worker pool named vmware-cluster-worker-pool-2-5b4b559f6d-znbtm.

NAME                        READY   STATUS    AGE   NODE
api-7db799cf85-5w5l6        1/1     Running   20m   vmware-cluster-worker-pool-2-5b4b559f6d-znbtm
postgres-698d7ff8f4-vbktf   1/1     Running   20m   vmware-cluster-worker-pool-2-5b4b559f6d-znbtm
ui-5f777c76df-pplcv         1/1     Running   20m   vmware-cluster-worker-pool-7d6d76b55b-dhffq

Taints and tolerations are a common way of creating nodes dedicated to certain workloads, once the cluster has scaled accordingly through its provisioned node pools. Refer to the Taints and Tolerations guide to learn more.

Verify the Application

Select the Overview tab. Click on the URL for port :8080 to access the Hello Universe application and verify that the application is functioning correctly.

Cleanup

Use the following steps to remove all the resources you created for the tutorial.

To remove the cluster, navigate to the left Main Menu and click on Clusters. Select the cluster you want to delete to access its details page.

Click on Settings to expand the menu, and select Delete Cluster.

Delete cluster

You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name vmware-cluster to proceed with the delete step. The deletion process takes several minutes to complete.

info

If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force delete, navigate to the cluster’s details page, click on Settings, then select Force Delete Cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.

Once the cluster is deleted, navigate to the left Main Menu and click on Profiles. Find the cluster profile you created and click on the three-dot Menu to display the Delete button. Select Delete and confirm the selection to remove the cluster profile.

Click on the drop-down Menu at the top of the page and switch to Tenant Admin scope.

Navigate to the left Main Menu and click on Projects.

Click on the three-dot Menu of the Project-ScaleSecureTutorial and select Delete. A pop-up box will ask you to confirm the action. Confirm the deletion.

Navigate to the left Main Menu and click on Users & Teams. Select the Teams tab.

Click on the scale-secure-tutorial-team list entry. The Team Details pane appears. Click on Delete Team. A pop-up box will ask you to confirm the action. Confirm the deletion.

Wrap-up

In this tutorial, you learned how to perform very important operations relating to the scalability and availability of your clusters. First, you created a project and team. Next, you imported a cluster profile and deployed a host VMware vSphere cluster. Then, you upgraded the Kubernetes version of your cluster and scanned your clusters using Palette's scanning capabilities. Finally, you scaled your cluster's nodes and used taints to select which Hello Universe pods execute on them.

We encourage you to check out the Additional Capabilities section to explore other Palette functionalities.

🧑‍🚀 Catch up with Spacetastic

After going through the steps in the tutorial, Kai is confident in Palette's upgrade and scanning capabilities.

"What have you found out, Kai?" says Meera walking over to Kai's desk. "Can I rely on Palette when a zero-day vulnerability comes in?"

"Yes, I know how stressful it is when those are reported." says Kai with a sympathetic nod. "I found out that Palette has our security covered through their pack updates and scanning capabilities. Relying on this kind of tooling is invaluable to security conscious engineers like us."

"Excellent! These capabilities will be a great addition to our existing systems at Spacetastic." says Meera with a big grin.

"I'm so glad that we found a platform that can support everyone!" says Kai. "There is so much more to explore though. I will keep reading through the Getting Started section and find out what additional capabilities Palette provides."

"Good thinking, Kai." says Meera, nodding. "We should maximize all of Palette's features now that we have implemented it in production. We've got big ideas and goals on our company roadmap, so let's find out how Palette can help us deliver them."