
Node Labels

Node labels let pods specify which nodes they should be scheduled on. This is useful in scenarios where pods should be co-located or executed on dedicated nodes. A common use case for node labels is ensuring that certain workloads only execute on certain hardware configurations. Labels are optional; if you do not set them, the scheduler automatically places pods across nodes.

tip

You can think of node labels as having the opposite effect to Taints and Tolerations. Taints allow you to mark nodes as not accepting certain pods, while node labels allow you to specify that your pods should only be scheduled on certain nodes.
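As a sketch of the contrast, consider a hypothetical dedicated: gpu key-value pair. A taint on a node repels pods that lack a matching toleration, while a node label attracts pods whose node selector matches it. A pod targeting such a node might declare both:

    # Illustrative example: the key "dedicated" and value "gpu" are hypothetical.
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-workload
    spec:
      tolerations:            # allows scheduling on nodes tainted dedicated=gpu:NoSchedule
        - key: dedicated
          operator: Equal
          value: gpu
          effect: NoSchedule
      nodeSelector:           # requires nodes labeled dedicated: gpu
        dedicated: gpu
      containers:
        - name: app
          image: nginx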

Palette allows you to apply node labels during cluster provisioning. Once the cluster is in a healthy state, labels can be modified on the Nodes tab of the cluster details page.

This guide covers the Palette UI flow.

info

Node labels can also be applied to node pools using our Terraform provider.

Prerequisites

Enablement

  1. Log in to Palette.

  2. Navigate to the left Main Menu and select Profiles.

  3. Create a cluster profile to deploy to your environment. Refer to the Create a Full Profile guide for more information.

  4. Add a manifest to your cluster profile with a custom workload of your choice. Refer to the Add a Manifest guide for additional guidance.

  5. Add a node selector to the pod specification of your manifest. Refer to the Assign Pods to Nodes official documentation page for more details.

    nodeSelector:
      key1: value1

    info

    You can also specify a node by name by using the nodeName: name option on your pod specification. We recommend using a node selector, as it provides a more scalable and robust solution.

    When using packs or Helm charts, the nodeSelector or nodeName options can only be specified if they are exposed for configuration in the values.yaml file.
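    For context, here is a minimal sketch of where the nodeSelector field sits within a Deployment manifest. The names, image, and the key1: value1 pair are illustrative placeholders:

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: example-app          # illustrative name
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: example-app
          template:
            metadata:
              labels:
                app: example-app
            spec:
              nodeSelector:          # pods are only scheduled on nodes carrying this label
                key1: value1
              containers:
                - name: app
                  image: nginx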

  6. Save the changes made to your cluster profile.

  7. Navigate to the left Main Menu and select Clusters.

  8. Click on Add New Cluster.

  9. Fill in the Basic Information for your cluster and click Next.

  10. On the Cluster Profile tab, select the cluster profile you previously created. Click Next.

  11. Select a Subscription, Region, and SSH Key on the Cluster Config tab. Click Next.

  12. On the Nodes Config tab, configure your control plane pool and worker pools by providing the instance type, availability zones, and disk size.

  13. Both the control plane pool and worker pool provide an Additional Labels (Optional) section. Palette accepts labels in the key:value format. Fill in labels that match the node selector values in your pod specification. Click on Next.

    Screenshot of adding node labels during cluster creation

    info

    Node labels can also be updated on a deployed cluster by editing a worker node pool from the Nodes tab of the cluster details page.
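    As a concrete illustration (values hypothetical), the label entered in the Additional Labels (Optional) section must match the node selector in your manifest exactly:

        # Manifest pod specification:
        nodeSelector:
          key1: value1

        # Corresponding entry in the Additional Labels (Optional) section:
        key1:value1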

  14. Accept the default settings on the Cluster Settings tab and click on Validate.

  15. Click on Finish Configuration and deploy your cluster.

    further guidance

    Refer to our Deploy a Cluster tutorial for detailed guidance on how to deploy a cluster with Palette using Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) cloud providers.

Validate

You can follow these steps to validate that your node labels are applied successfully.

  1. Log in to Palette.

  2. Navigate to the left Main Menu and select Clusters.

  3. Select the cluster you deployed, and download the kubeconfig file.

    Screenshot of kubeconfig file download

  4. Open a terminal window and set the environment variable KUBECONFIG to point to the kubeconfig file you downloaded.

    export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig
  5. Confirm the cluster deployment process has scheduled your pods as expected. Remember that pods will only be scheduled on nodes whose labels match their node selectors.

    kubectl get pods --all-namespaces --output wide --watch
    tip

    For a more user-friendly experience, consider using K9s or a similar tool to explore your cluster workloads.
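
  6. Optionally, confirm that the labels landed on the nodes themselves. A sketch of a few read-only commands, reusing the illustrative key1=value1 label from earlier:

    ```shell
    # List all nodes along with their labels
    kubectl get nodes --show-labels

    # List only the nodes carrying the example label
    kubectl get nodes --selector key1=value1

    # Inspect a single node's labels in detail (replace <node-name>)
    kubectl describe node <node-name>
    ```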