Palette supports creating and managing Kubernetes clusters deployed to an Azure subscription. This section guides you on how to create an IaaS Kubernetes cluster in Azure that is managed by Palette.
Azure clusters can be created under the following scopes:
Project Scope - This is the recommended scope.
Be aware that clusters created under the Tenant Admin scope are not visible under the Project scope.
These prerequisites must be met before deploying an AKS workload cluster:
- You need an active Azure cloud account with sufficient resource limits and permissions to provision compute, network, and security resources in the desired regions.
- You must have permission to deploy clusters using the AKS service on Azure.
- Register your Azure cloud account in Palette as described in the Creating an Azure Cloud Account section below.
- You should have a cluster profile created in Palette for AKS.
- Associate an SSH key pair with the cluster worker nodes.
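If you do not yet have an SSH key pair for the worker nodes, one way to generate one is with OpenSSH. The file name and comment below are arbitrary examples, not required values; AKS Linux nodes expect an RSA public key:

```shell
# Generate a 4096-bit RSA key pair for node SSH access.
# The file name and comment are placeholders; choose your own.
ssh-keygen -t rsa -b 4096 -f ./aks-node-key -N "" -C "palette-aks-nodes"

# The public key to paste into Palette:
cat ./aks-node-key.pub
```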
There are additional prerequisites if you want to set up Azure Active Directory integration for the AKS cluster:
- A Tenant Name must be provided as part of the Azure cloud account creation in Palette.
For the Azure client used in the Azure cloud account, these API permissions have to be provided:
- Microsoft Graph Group.Read.All (Application Type)
- Microsoft Graph Directory.Read.All (Application Type)
You can configure these permissions from the Azure cloud console under App registrations > API permissions for the specified application.
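As a sketch, the same permissions can be granted with the Azure CLI. The `<client-id>` placeholder is your application's ID; the GUIDs are Microsoft Graph's well-known API ID and its published app-role IDs for Group.Read.All and Directory.Read.All, which you should verify against your tenant before use:

```shell
# Microsoft Graph well-known API ID
GRAPH_API="00000003-0000-0000-c000-000000000000"

# Add the two Application-type permissions to the app registration
az ad app permission add --id <client-id> --api "$GRAPH_API" \
  --api-permissions 5b567255-7703-4780-807c-7be8301ae99b=Role   # Group.Read.All
az ad app permission add --id <client-id> --api "$GRAPH_API" \
  --api-permissions 7ab1d382-f21e-4acd-a863-ba3e13f7da61=Role   # Directory.Read.All

# Grant admin consent for the tenant
az ad app permission admin-consent --id <client-id>
```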
Palette also enables the provisioning of private AKS clusters via a private cloud gateway (Self-Hosted PCG). The Self-Hosted PCG is an AKS cluster that must be launched manually and linked to an Azure cloud account in the Palette Management Console. Click here for more information.
To create an Azure cloud account, we need:
- A custom Account Name
- Client ID
- Tenant ID
- Client Secret
- Tenant Name (optional)
To link the cloud account to a Self-Hosted PCG, toggle the Connect Private Cloud Gateway option and select the previously created Self-Hosted PCG from the drop-down menu. For an existing cloud account, go to Edit and toggle the Connect Private Cloud Gateway option to select the created gateway from the drop-down menu.
For Azure cloud account creation, we first need to create an Azure Active Directory (AAD) application that can be used with role-based access control. Follow the steps below to create a new AAD application, assign roles, and create the client secret:
- Follow the steps described here to create a new Azure Active Directory application. Note down your Client ID and Tenant ID.
- When creating the application, assign it the minimum required Contributor role. To assign any role, the user must have at least the User Access Administrator role. Follow the Assign Role To Application link to learn more about roles.
- Follow the steps described in the Create an Application Secret section to create the client application secret. Store the Client Secret safely as it will not be available as plain text later.
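The three steps above can also be sketched with the Azure CLI. The display name and the `<client-id>` and `<subscription-id>` placeholders below are hypothetical; substitute your own values:

```shell
# 1. Create the AAD application and service principal; note the appId (Client ID)
az ad app create --display-name "palette-aks-client"
az ad sp create --id <client-id>
az account show --query tenantId -o tsv      # your Tenant ID

# 2. Assign the Contributor role (requires User Access Administrator or higher)
az role assignment create --assignee <client-id> \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>"

# 3. Create the client secret; copy it now, as it is not shown again
az ad app credential reset --id <client-id> --append
```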
The following steps need to be performed to provision a new cluster:
- If you already have a profile to use, go to the Cluster > Add a New Cluster > Deploy New Cluster and select an Azure cloud. If you do not have a profile to use, reference the Creating a Cluster Profile page for steps on how to create one.
- Fill in the basic cluster information such as Name, Description, Tags, and Cloud Account.
- In the Cloud Account dropdown list, select the Azure Cloud account or create a new one. See the Creating an Azure Cloud Account section above.
- Next, in the Cluster profile tab from the Managed Kubernetes list, pick AKS, and select the AKS cluster profile definition.
- Review the Parameters for the selected cluster profile definitions. By default, parameters for all packs are set with values defined in the cluster profile.
Complete the Cluster config section with the information for each parameter listed below.
| Parameter | Description |
| --- | --- |
| Subscription | Select the subscription to use to access Azure Services. |
| Region | Select the Azure region where the cluster should be deployed. |
| Resource Group | Select the resource group in which the cluster should be deployed. |
| SSH Key | The public key used to configure remote SSH access to the nodes. |
| Static Placement | By default, Palette uses dynamic placement: a new VPC with a public and a private subnet is created to place cluster resources for every cluster. These resources are fully managed by Palette and deleted when the corresponding cluster is deleted. |
Turn on the Static Placement option to place resources into preexisting VPCs and subnets. If you select Static Placement, provide the following placement information:
| Parameter | Description |
| --- | --- |
| Virtual Resource Group | The logical container for grouping related Azure resources. |
| Virtual Network | Select the virtual network from the drop-down menu. |
| Control plane Subnet | Select the control plane subnet from the drop-down menu. |
| Worker Network | Select the worker network from the drop-down menu. |
| Update worker pools in parallel | Check the box to update the worker pools concurrently. |
If the Palette cloud account is created with Disable Properties and Static Placement is used, the network information from the user's Azure account is not imported into Palette. In this case, enter the Control Plane Subnet and Worker Network information manually (no drop-down menus will be available).
- Click Next to configure the node pools.
The maximum number of pods per node in an AKS cluster is 250. If you don't specify maxPods when creating new node pools, you receive a default value of 30. You can edit this value at any time in the Kubernetes configuration file by changing the maxPodPerNode value. Refer to the snippet below:

```yaml
managedMachinePool:
  maxPodPerNode: 30
# NOTE: The recommended minimum value for maxPodPerNode is 30. Setting a lower
# value can result in Palette system pods remaining in a Pending state.
```
This section guides you through configuring node pools. As you set up the cluster, the Nodes config section allows you to customize node pools. AKS clusters are comprised of System and User node pools, and all pool types can be configured to use the Autoscaler, which scales pools horizontally based on per-node workload counts.
A complete AKS cluster contains the following:
- A mandatory primary System Node Pool. This pool runs the pods necessary for a Kubernetes cluster, such as the control plane and etcd. A single node is sufficient for a development cluster; for high availability in production clusters, three (3) or more nodes are recommended.
- One (1) or more Worker Node pools, per workload requirements. Worker node pools can be sized down to zero (0) nodes when not in use.
During cluster creation, you start with a single pool by default.
- To add additional pools, click Add Node Pool.
- Provide any additional Kubernetes labels to assign to each node in the pool. This section is optional. Labels use a key:value structure; press the space bar to add additional labels, and click the X to remove unwanted labels.
- To remove a pool, click Remove across from the title for each pool.
- Each cluster requires at least one (1) system node pool. To define a pool as a system pool, check the box labeled System Node Pool.
- Provide a name in the Node pool name text box. When creating a node, it is good practice to include an identifying name that matches the node in Azure.
- Add the Desired size. You can start with three (3) nodes for a multi-node pool.
- Include Additional Labels. This is optional.
- In the Azure Cloud Configuration section, add the Instance type. The cost details are present for review.
- Enter the Managed Disk information and its size.
- To add another node pool to the cluster, click the Add Worker Pool button.
In all types of node pools, configure the following.
Provide a name in the Node pool name text box. When creating a node, it is good practice to include an identifying name.
Note: Windows clusters have a name limitation of six (6) characters.
- Provide how many nodes the pool will contain by adding the count to the box labeled Number of nodes in the pool. Configure each pool to use the autoscaler controller. There are more details on how to configure that below.
- As an alternative to a static node pool count, you can enable the autoscaler controller. Click Enable Autoscaler to switch to the Minimum size and Maximum size fields, which allow AKS to increase or decrease the size of the node pool based on workloads. The smallest size of a dynamic pool is zero (0) and the maximum is one thousand (1000); setting both to the same value is identical to using a static pool size.
- Provide any additional Kubernetes labels to assign to each node in the pool. This section is optional. Labels use a key:value structure; press the space bar to add additional labels, and click the X to remove unwanted labels.
- In the Azure Cloud Configuration section:
- Provide instance details for all nodes in the pool with the Instance type dropdown. The cost details are present for review.
- Provide the disk type via the Managed Disk dropdown and the size in Gigabytes (GB) in the Disk size field.
A minimum allocation of 4Gi of memory is required across all worker nodes.
When you are done setting up all node pools, click Next to go to the Settings page to Validate and finish the cluster deployment wizard.
Note: Keep an eye on the Cluster Status once you click Finish Configuration as it will start as Provisioning. Deploying an AKS cluster does take a considerable amount of time to complete, and the Cluster Status in Palette will say Ready when it is complete and ready to use.
The deletion of an AKS cluster removes all Virtual Machines and associated Storage Disks created for the cluster. Perform the following tasks to delete an AKS cluster:
- Select the cluster to be deleted from the Cluster View page and navigate to the Cluster Overview page.
- Invoke a delete action available on the page: Cluster > Settings > Cluster Settings > Delete Cluster.
- Click Confirm to delete.
The Cluster Status is updated to Deleting while cluster resources are being deleted. Provisioning status is updated with the ongoing progress of the delete operation. Once all resources are successfully deleted, the cluster status changes to Deleted and is removed from the list of clusters.
A cluster stuck in the Deleting state can be force deleted through the User Interface, but only if it has been stuck in that state for a minimum of 15 minutes. Palette enables cluster force delete from both the Tenant Admin and Project Admin scopes.
- Log in to the Palette Management Console.
- Navigate to the Cluster Details page of the cluster stuck in deletion mode.
- If the deletion status has been stuck for more than 15 minutes, click the Force Delete Cluster button from the Settings dropdown.
- If the Force Delete Cluster button is not enabled, wait for 15 minutes. The Settings dropdown will show the estimated time until the Force Delete button is auto-enabled.
Azure Active Directory (AAD) can be enabled while creating and linking the Azure cloud account in Palette, using a simple check box. Once the cloud account is created, you can create the Azure AKS cluster. An AAD-enabled AKS cluster has its Admin kubeconfig file created, which can be downloaded from the Palette UI as the 'Kubernetes config file'. To enable AAD completely, you need to manually create the user's kubeconfig file. The following steps create the custom user kubeconfig file:
- Go to the Azure console to create the Groups in Azure AD to access the Kubernetes RBAC and Azure AD control access to cluster resources.
- After you create the groups, create users in the Azure AD.
- Create custom Kubernetes roles and role bindings for the created users and apply the roles and role bindings, using the Admin kubeconfig file.
- Once the roles and role bindings are created, these roles can be linked to the Groups created in Azure AD.
- The users can now access the Azure clusters with the complete benefits of AAD. To get the user-specific kubeconfig file, run the following command:

```shell
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
```
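Steps 3 and 4 above can be sketched as a standard Kubernetes Role and RoleBinding that grant an Azure AD group read access within a namespace. The names, namespace, and group object ID below are hypothetical placeholders; apply the manifest with the Admin kubeconfig (for example, kubectl apply -f aad-rbac.yaml --kubeconfig admin.kubeconfig):

```yaml
# Hypothetical example; replace the group name with your Azure AD group's object ID.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-reader-binding
  namespace: dev
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: "<azure-ad-group-object-id>"   # Azure AD group object ID
roleRef:
  kind: Role
  name: dev-reader
  apiGroup: rbac.authorization.k8s.io
```

Members of the referenced Azure AD group can then read pods and services in the dev namespace using the user-specific kubeconfig.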