Packs
The following are common scenarios that you may encounter when using Packs.
Scenario - AWS EKS Cluster Deployment Fails when Cilium is Used as CNI
When deploying AWS EKS clusters using the Cilium pack, worker node provisioning fails because the AWS VPC CNI and the Cilium CNI conflict with each other. This happens because the AWS VPC CNI is installed by default on EKS cluster nodes and its installation cannot be disabled.
To resolve this, you will need to make the following additions and changes:
- Kube-proxy must be replaced with eBPF.
- Specific Cilium configuration parameters must be set.
- An additional manifest must be included with the Cilium pack.
- The `charts.cilium.k8sServiceHost` parameter value must be manually changed to the cluster API server endpoint during deployment.
Use the following debug steps to learn how to make these configuration changes and additions. Before you begin, note the following requirements:
- You must use a pre-created static VPC for EKS deployments using Cilium.
- This workaround has only been validated on Cilium 1.15.3 and above.
Debug Steps
1. Log in to Palette.

2. From the left Main Menu, select Profiles.

3. On the Profiles page, click on your EKS cluster profile that uses Cilium as the network pack.

4. Click on the Cilium pack to view the Edit Pack page.

5. Click on the Presets button to expand the options drawer.

6. Scroll down the presets option menu and enable Replace kube-proxy with eBPF.
7. Review the following parameters and adjust to the required values as needed. Some of these parameters are changed automatically after enabling Replace kube-proxy with eBPF.
   | Parameter | Required Value | Description | Change Required After Enabling Preset? |
   | --- | --- | --- | --- |
   | `charts.cilium.bpf.masquerade` | `false` | Disables eBPF masquerading because AWS handles NAT and IP masquerading through the ENI interface. | True |
   | `charts.cilium.endpointRoutes.enabled` | `true` | Enables per-endpoint routing to allow direct pod-to-pod communication in ENI mode without encapsulation. | True |
   | `charts.cilium.eni.enabled` | `true` | Enables AWS ENI integration for direct networking instead of using an overlay network. | True |
   | `charts.cilium.ipam.mode` | `"eni"` | Uses AWS ENI-based IP address management (IPAM) to allocate pod IPs directly from AWS VPC subnets. | True |
   | `charts.cilium.enableIPv4Masquerade` | `false` | Disables IPv4 masquerading for outgoing packets because AWS ENI mode provides direct pod-to-pod routing without NAT. | True |
   | `charts.cilium.enableIPv6Masquerade` | `false` | Disables IPv6 masquerading for outgoing packets because AWS handles IPv6 routing without the need for masquerading. | True |
   | `charts.cilium.k8sServiceHost` | `auto` | Ensures Cilium correctly connects to the EKS control plane. This value will be changed during cluster deployment. | False |
   | `charts.cilium.k8sServicePort` | `"443"` | Uses port 443 to connect to the Kubernetes API server because EKS API server communication happens over HTTPS. | True |
   | `charts.cilium.kubeProxyReplacement` | `"true"` | Enables eBPF-based kube-proxy replacement because kube-proxy is disabled, and Cilium must handle service load balancing. | False |
   | `charts.cilium.kubeProxyReplacementHealthzBindAddr` | `0.0.0.0:10256` | Binds the health check service to `0.0.0.0:10256` for the kube-proxy replacement. | False |
   | `charts.cilium.autoDirectNodeRoutes` | `false` | Disables automatic direct routing between nodes because AWS ENI mode already manages routing, making additional direct routes unnecessary. | True |
   | `charts.cilium.ipv4NativeRoutingCIDR` | `<POD_SUBNET_CIDR>` | Set this to a CIDR block that covers all AWS VPC subnets where your worker nodes will be deployed. For example, if your worker node subnets are `10.0.64.0/18`, `10.0.128.0/18`, and `10.0.192.0/18`, set this to `10.0.0.0/16` to ensure all ranges are covered. | True |
   | `charts.cilium.routingMode` | `native` | Uses native routing mode because AWS ENI mode supports direct pod-to-pod routing, making encapsulation unnecessary. | False |
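   For reference, the snippet below is a minimal sketch of how these values might appear together in the Cilium pack YAML, assuming the pack nests Helm values under `charts.cilium` as the parameter paths above indicate. It only illustrates the parameters from the table and is not a drop-in replacement for the pack defaults; the `10.0.0.0/16` CIDR is the example value from the table and must be adjusted to cover your own worker node subnets.

   ```yaml
   charts:
     cilium:
       bpf:
         masquerade: false
       endpointRoutes:
         enabled: true
       eni:
         enabled: true
       ipam:
         mode: "eni"
       enableIPv4Masquerade: false
       enableIPv6Masquerade: false
       k8sServiceHost: auto                 # Replaced with the cluster API server endpoint in step 20.
       k8sServicePort: "443"
       kubeProxyReplacement: "true"
       kubeProxyReplacementHealthzBindAddr: "0.0.0.0:10256"
       autoDirectNodeRoutes: false
       ipv4NativeRoutingCIDR: 10.0.0.0/16   # Example only; must cover all AWS VPC worker node subnets.
       routingMode: native
   ```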
8. Click the New manifest option, and provide a name for the manifest, such as `job-fix-cni`. Click the tick button afterwards.

9. Copy the following manifest into the YAML editor. This manifest disables the `kube-proxy` and `aws-node` daemonsets by applying a node selector that does not match any nodes. It also removes existing Cilium, `kube-dns`, and `cert-manager` pods to ensure a clean state for the Cilium deployment.

   ```yaml
   apiVersion: batch/v1
   kind: Job
   metadata:
     name: ds-fix
     namespace: kube-system
   spec:
     template:
       metadata:
         name: ds-fix
       spec:
         serviceAccountName: ds-fix-sa
         hostNetwork: true
         dnsPolicy: ClusterFirstWithHostNet
         initContainers:
           - name: kubectl-init-pod-1
             image: bitnami/kubectl
             args:
               - "-n"
               - "kube-system"
               - "patch"
               - "daemonset"
               - "kube-proxy"
               - "aws-node"
               - --patch={"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}
         containers:
           - name: kubectl-pod-1
             image: bitnami/kubectl
             args:
               - "delete"
               - "pod"
               - "-n"
               - "kube-system"
               - "-l app.kubernetes.io/part-of=cilium"
           - name: kubectl-pod-2
             image: bitnami/kubectl
             args:
               - "delete"
               - "pod"
               - "-n"
               - "kube-system"
               - "-l k8s-app=kube-dns"
           - name: kubectl-pod-3
             image: bitnami/kubectl
             args:
               - "delete"
               - "pod"
               - "-n"
               - "cert-manager"
               - "--all"
         restartPolicy: Never
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: ds-fix-sa
     namespace: kube-system
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: ds-fix-role
     namespace: kube-system
   rules:
     - apiGroups:
         - apps
       resources:
         - daemonsets
       resourceNames:
         - kube-proxy
         - aws-node
       verbs:
         - get
         - patch
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: ds-fix-role
   rules:
     - apiGroups:
         - ""
       resources:
         - pods
       verbs:
         - list
         - delete
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: ds-fix-rolebinding
     namespace: kube-system
   subjects:
     - kind: ServiceAccount
       name: ds-fix-sa
       namespace: kube-system
   roleRef:
     kind: Role
     name: ds-fix-role
     apiGroup: rbac.authorization.k8s.io
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: ds-fix-rolebinding
   subjects:
     - kind: ServiceAccount
       name: ds-fix-sa
       namespace: kube-system
   roleRef:
     kind: ClusterRole
     name: ds-fix-role
     apiGroup: rbac.authorization.k8s.io
   ```

10. Click Confirm Updates after making the required changes.
11. Click Save Changes on the cluster profile page.

12. Click Deploy on the cluster profile page, and click OK in the pop-up window.

13. Provide the basic information for the cluster and click Next.

14. Click Next on the Cluster Profile page.

15. On the Cluster Config page, configure the cluster as required, and ensure you select Enable static placement (Optional) to provide your AWS VPC details. Click Next when complete.

16. Configure the remaining settings as needed, and deploy the cluster. Refer to Create and Manage AWS EKS Cluster if you need guidance on the available options.
17. As soon as it is available, obtain the API server endpoint for the cluster.

   - If using the AWS Console, go to AWS > Clusters > clusterName and view the Overview tab for the cluster. Click the clipboard icon next to the API server endpoint field.

   - If using the AWS CLI, issue the following commands to obtain the API endpoint for the cluster. Replace `<clusterName>` with the name of your cluster, and `<awsRegion>` with your AWS region.

     ```shell
     aws eks update-kubeconfig --region <awsRegion> --name <clusterName>
     aws eks describe-cluster --name <clusterName> --query "cluster.endpoint" --output text
     ```

     Example output.

     ```
     https://MY2567C9923FENDPOINT882F9EXAMPLE.gr7.us-east-1.eks.amazonaws.com
     ```
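   Step 20 requires this endpoint without the `https://` prefix. If you prefer to strip the prefix on the command line, the following one-liner is a convenience sketch that assumes a shell with `sed` available; it is not part of the documented procedure.

   ```shell
   aws eks describe-cluster --name <clusterName> --query "cluster.endpoint" --output text | sed 's|^https://||'
   ```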
18. On your cluster page in Palette, click the Profile tab.

19. Select the Cilium layer and find the `k8sServiceHost` parameter in the YAML editor.
20. Change the value from `auto` to the cluster API server endpoint discovered in step 17, but without the `https://` portion. For example, `"MY2567C9923FENDPOINT882F9EXAMPLE.gr7.us-east-1.eks.amazonaws.com"`.

21. Click Save.
The EKS cluster will now deploy successfully.
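Optionally, once the cluster is deployed and you can access it with kubectl, you can confirm that the workaround took effect. The commands below are a verification sketch rather than part of the documented procedure; the resource names and labels come from the manifest and parameters above.

```shell
# The patch job created by the manifest should show COMPLETIONS 1/1.
kubectl get job ds-fix --namespace kube-system

# DESIRED should be 0 for both daemonsets because the patched nodeSelector
# (io.cilium/aws-node-enabled=true) does not match any nodes.
kubectl get daemonset kube-proxy aws-node --namespace kube-system

# Cilium pods should be in the Running state.
kubectl get pods --namespace kube-system -l app.kubernetes.io/part-of=cilium
```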
Scenario - Control Plane Node Fails to Upgrade in Sequential MicroK8s Upgrades
In clusters that use MicroK8s as the Kubernetes distribution, there is a known issue when using the `InPlaceUpgrade` strategy for sequential Kubernetes upgrades. For example, upgrading from version 1.25.x to version 1.26.x and then to version 1.27.x may cause the control plane node to fail to upgrade. Use the following steps to troubleshoot and resolve the issue.
Debug Steps
1. Execute the first MicroK8s upgrade in your cluster. For example, upgrade from version 1.25.x to version 1.26.x.

2. Ensure you can access your cluster using kubectl. Refer to the Access Cluster with CLI guide for more information.

3. After the first upgrade is complete, issue the following command to delete the pod named `upgrade-pod`.

   ```shell
   kubectl delete pod upgrade-pod --namespace default
   ```

4. Once the pod is deleted, proceed to the next upgrade. For example, upgrade from version 1.26.x to version 1.27.x.

5. Within a few minutes, the control plane node will be upgraded correctly.
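Optionally, you can confirm that the control plane node reports the upgraded Kubernetes version. This check is a sketch and not part of the documented procedure; it assumes you still have kubectl access to the cluster.

```shell
# The control plane node's VERSION column should show the new Kubernetes version.
kubectl get nodes --output wide
```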