Kubernetes
Support Lifecycle
We support CNCF Kubernetes for N-3 minor versions for a duration of 14 months, which exceeds the official end-of-life (EOL) by four months. Once we stop supporting a minor version, we initiate the deprecation process. Refer to the Kubernetes Support Lifecycle guide to learn more.
Once you upgrade your cluster to a new Kubernetes version, you will not be able to downgrade. We recommend that, before upgrading, you review the information provided in the Kubernetes Upgrades section.
Review the Maintenance Policy to learn about pack update and deprecation schedules.
Versions Supported
- 1.29.x
- 1.28.x
- 1.27.x
The Kubeadm configuration file is where you can do the following:
- Change the default `podCIDR` and `serviceClusterIpRange` values. CIDR IPs specified in the configuration file take precedence over other defined CIDR IPs in your environment. As you build your cluster, check that the `podCIDR` value does not overlap with any hosts or with the service network, and that the `serviceClusterIpRange` value does not overlap with any IP ranges assigned to nodes or pods. See the example after this list.
- Change the default cluster DNS service domain from `cluster.local` to a DNS domain that you specify. You can only change the DNS domain during cluster creation. For more information, refer to Change Cluster DNS Service Domain.
- Add a certificate for the Spectro Proxy pack if you want to use a reverse proxy with a Kubernetes cluster. For more information, refer to the Spectro Proxy guide.
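For example, a minimal sketch of non-overlapping address ranges; the values below are illustrative defaults, not requirements:

```yaml
pack:
  podCIDR: "172.16.0.0/16"               # pod network
  serviceClusterIPRange: "10.96.0.0/12"  # service ClusterIP range
# Node and host networks, for example 10.200.0.0/16, must not overlap with either range.
```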
Change Cluster DNS Service Domain
The `pack.serviceDomain` parameter, with default value `cluster.local`, is not visible in the Kubernetes YAML file, and its value can only be set at cluster creation. To change the value, add the `serviceDomain` parameter to the Kubernetes YAML file when you create a cluster and set it to the service domain you want to use.
```yaml
pack:
  k8sHardening: True
  podCIDR: "172.16.0.0/16"
  serviceClusterIPRange: "10.96.0.0/12"
  serviceDomain: "<your_cluster_DNS_service_domain>"
```
You can specify the service domain only at cluster creation. After cluster creation completes, you cannot update the value. Attempting to update it results in the error `serviceDomain update is forbidden for existing cluster`.
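For example, a sketch assuming a hypothetical custom domain `prod.example.internal`; after the cluster is created with this value, in-cluster Service DNS names resolve under that suffix instead of `cluster.local`:

```yaml
pack:
  serviceDomain: "prod.example.internal"   # hypothetical custom domain
# A Service named "api" in the "backend" Namespace then resolves in-cluster as:
#   api.backend.svc.prod.example.internal
# rather than the default api.backend.svc.cluster.local
```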
For more information about networking configuration with DNS domains, refer to the Kubernetes Networking API documentation.
Configuration Changes
The Kubeadm config is updated with hardening improvements that do the following:
- Meet CIS standards for operating systems (OS).
- Enable a Kubernetes audit policy in the pack. The audit policy is hidden, and you cannot customize the default audit policy. If you want to apply your custom audit policy, refer to the Enable Audit Logging guide to learn how to create your custom audit policy by adjusting the API server flags.
- Replace a deprecated PodSecurityPolicy (PSP) with one that offers three built-in policy profiles for broad security coverage:
  - Privileged: An unrestricted policy that provides wide permission levels and allows for known privilege escalations.
  - Baseline: A policy that offers minimal restrictions and prevents known privilege escalations. As shown in the example below, you can override the default cluster-wide policy to set baseline enforcement by enabling the `PodSecurity` Admission plugin in the `enable-admission-plugins` section of the YAML file. You can then add a custom Admission configuration and set the `admission-control-config-file` flag to the custom Admission configuration file. A sketch of such an admission configuration file follows this list.

    ```yaml
    kubeadmconfig:
      apiServer:
        extraArgs:
          secure-port: "6443"
          anonymous-auth: "true"
          profiling: "false"
          disable-admission-plugins: "AlwaysAdmit"
          default-not-ready-toleration-seconds: "60"
          default-unreachable-toleration-seconds: "60"
          enable-admission-plugins: "AlwaysPullImages,NamespaceLifecycle,ServiceAccount,NodeRestriction,PodSecurity"
          admission-control-config-file: "/etc/kubernetes/pod-security-standard.yaml"
          audit-log-path: /var/log/apiserver/audit.log
          audit-policy-file: /etc/kubernetes/audit-policy.yaml
    ```
  - Restricted: A heavily restricted policy that follows the best practices of Pod hardening. This policy is set to warn and audit and identifies Pods that require privileged access.
You can enforce these policies at the cluster or Namespace level. For workloads that require privileged access, you can relax `PodSecurity` enforcement by adding these labels to the Namespace:

```yaml
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/enforce-version: v1.26
```
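For reference, a minimal sketch of what the admission configuration file referenced by `admission-control-config-file` might contain. It uses the standard Kubernetes `AdmissionConfiguration` and `PodSecurityConfiguration` formats; the defaults and exemptions shown here are illustrative assumptions, not the pack's shipped values.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      # Cluster-wide defaults applied to Namespaces without explicit pod-security labels.
      defaults:
        enforce: "baseline"
        enforce-version: "latest"
        audit: "restricted"
        audit-version: "latest"
        warn: "restricted"
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: ["kube-system"]
```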
Kubeadm Configuration File
The default pack YAML contains minimal configurations offered by the managed provider.
Configure OIDC Identity Provider
You can configure an OpenID Connect (OIDC) identity provider to authenticate users and groups in your cluster. OIDC is an authentication layer on top of OAuth 2.0, an authorization framework that allows users to authenticate to a cluster without using a password.
OIDC requires a Role Binding for the users or groups you want to provide cluster access. You must create a Role Binding to a Kubernetes role that is available in the cluster. The Kubernetes role can be a custom role you created or a default Kubernetes role, such as the `cluster-admin` role. To learn how to create a Role Binding through Palette, refer to Create Role Bindings.
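For example, a minimal ClusterRoleBinding that grants an OIDC group the built-in `cluster-admin` role might look like the following. The group name `platform-admins` is a placeholder; use a group that your OIDC provider actually returns in the groups claim.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-platform-admins
subjects:
  - kind: Group
    name: "platform-admins"    # group name as emitted by the OIDC provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin          # built-in Kubernetes role
  apiGroup: rbac.authorization.k8s.io
```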
Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon Elastic Kubernetes Service (EKS) and Azure AKS.
- Custom OIDC Setup
- Amazon EKS
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to use Azure Active Directory (AAD) to enable OIDC integration. Refer to Enable OIDC in Kubernetes Clusters With Entra ID to learn more. Click the Amazon EKS tab for steps to configure OIDC for EKS clusters.
- Add the following parameters to your Kubernetes YAML file when creating a cluster profile. Replace the `identityProvider` value with your OIDC provider name.

  ```yaml
  pack:
    palette:
      config:
        oidc:
          identityProvider: yourIdentityProviderNameHere
  ```
- Add the following `kubeadmconfig` parameters. Replace the values with your OIDC provider values.

  ```yaml
  kubeadmconfig:
    apiServer:
      extraArgs:
        oidc-issuer-url: "provider URL"
        oidc-client-id: "client-id"
        oidc-groups-claim: "groups"
        oidc-username-claim: "email"
  ```
- Under the `clientConfig` parameter section of the Kubernetes YAML file, uncomment the `oidc-` configuration lines.

  ```yaml
  kubeadmconfig:
    clientConfig:
      oidc-issuer-url: "<OIDC-ISSUER-URL>"
      oidc-client-id: "<OIDC-CLIENT-ID>"
      oidc-client-secret: "<OIDC-CLIENT-SECRET>"
      oidc-extra-scope: profile,email,openid
  ```
Follow these steps to configure OIDC for managed EKS clusters.
- In the Kubernetes pack, uncomment the lines in the `oidcIdentityProvider` parameter section and enter your third-party provider details.

  ```yaml
  oidcIdentityProvider:
    identityProviderConfigName: "Spectro-docs"
    issuerUrl: "issuer-url"
    clientId: "user-client-id-from-Palette"
    usernameClaim: "email"
    usernamePrefix: "-"
    groupsClaim: "groups"
    groupsPrefix: ""
    requiredClaims:
  ```
- Under the `clientConfig` parameter section of the Kubernetes pack, uncomment the `oidc-` configuration lines.

  ```yaml
  clientConfig:
    oidc-issuer-url: "{{ .spectro.pack.kubernetes-eks.managedControlPlane.oidcIdentityProvider.issuerUrl }}"
    oidc-client-id: "{{ .spectro.pack.kubernetes-eks.managedControlPlane.oidcIdentityProvider.clientId }}"
    oidc-client-secret: yourSecretKeyHere
    oidc-extra-scope: profile,email
  ```
- Provide third-party OIDC IDP details.
- Refer to the Access EKS Cluster guide for guidance on how to access an EKS cluster.
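As noted in the Configure OIDC Identity Provider section, OIDC users and groups need RBAC bindings before they can access cluster resources. A namespace-scoped sketch, with placeholder names, might look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oidc-developers-view   # placeholder name
  namespace: dev               # placeholder Namespace
subjects:
  - kind: Group
    name: "developers"         # group name from the OIDC groups claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                   # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```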
Terraform
You can retrieve details about the Kubernetes pack for AWS EKS by using the following Terraform code.
data "spectrocloud_registry" "public_registry" {
name = "Public Repo"
}
data "spectrocloud_pack_simple" "k8s" {
name = "kubernetes-eks"
version = "1.29"
type = "helm"
registry_uid = data.spectrocloud_registry.public_registry.id
}