
AWS EFS

Versions Supported

Policy Information

You must create a policy that allows you to use EFS from your IAM account. You can use the following JSON to create the policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeAccessPoints",
        "elasticfilesystem:DescribeFileSystems"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["elasticfilesystem:CreateAccessPoint"],
      "Resource": "*",
      "Condition": { "StringLike": { "aws:RequestTag/efs.csi.aws.com/cluster": "true" } }
    },
    {
      "Effect": "Allow",
      "Action": "elasticfilesystem:DeleteAccessPoint",
      "Resource": "*",
      "Condition": { "StringEquals": { "aws:ResourceTag/efs.csi.aws.com/cluster": "true" } }
    }
  ]
}

Storage Class

Palette creates storage classes named spectro-storage-class. You can view a list of storage classes using this kubectl command:

kubectl get storageclass

PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a request made by a pod for a certain amount of storage from the cluster. It acts as a link between the pod and the storage resource, allowing the pod to use the storage. You can view the details of a PVC by using the kubectl describe pvc command, as shown in the following output.

kubectl describe pvc my-efs-volume

Name:          efs
Namespace:     default
StorageClass:  aws-efs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"aws-efs"},"name":"..."}
               volume.beta.kubernetes.io/storage-class: aws-efs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age                 From                         Message
  ----     ------              ----                ----                         -------
  Warning  ProvisioningFailed  43s (x12 over 11m)  persistentvolume-controller  no volume plugin matched
Mounted By:    <none>
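
For reference, a minimal PVC manifest that could produce a request like the one above might look similar to the following sketch. The name, namespace, and storage class are illustrative; replace them with the values used in your cluster.

# Illustrative PersistentVolumeClaim. The metadata.name, namespace, and
# storageClassName values are examples only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-volume
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs
  resources:
    requests:
      storage: 5Gi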

Troubleshooting

If you encounter errors in your pods when mounting an Amazon EFS volume in your Amazon EKS cluster, start by verifying the following:

  • An Amazon EFS file system is created, with a mount target in each of the worker node subnets.
  • A valid EFS storage class definition exists and uses the efs.csi.aws.com provisioner. A sketch of such a storage class follows this list.
  • A valid PersistentVolumeClaim (PVC) definition and PersistentVolume definition exist. This is not necessary if you are using dynamic provisioning.
  • The Amazon EFS CSI driver is installed in the cluster.
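
The following is a minimal sketch of a storage class that uses the efs.csi.aws.com provisioner. The metadata.name, fileSystemId, and directoryPerms values are placeholders; adjust them to match your environment.

# Illustrative StorageClass for dynamic provisioning with the EFS CSI driver.
# Replace fs-12345678 with your EFS file system ID.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-efs
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678
  directoryPerms: "700"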

Common Issues

The following list provides more specific details to help you troubleshoot issues when mounting an Amazon EFS volume.

tip

The CSI driver pod logs are also available to help you determine the cause of the mount failures. If the volume is failing to mount, review the efs-plugin logs to help you debug. Use the following command to view the logs.

kubectl logs --namespace kube-system --selector app=efs-csi-node --container efs-plugin
  • Mount Targets: Verify the mount targets are configured correctly. Be sure to create the EFS mount targets in each Availability Zone where the EKS worker nodes are running.

  • Allow NFS Traffic: Verify the security group associated with your EFS file system and worker nodes allows NFS traffic. The security group that's associated with your EFS file system must have an inbound rule that allows NFS traffic (port 2049) from the CIDR for your cluster's VPC. The security group that's associated with your worker nodes where the pods are failing to mount the EFS volume must have an outbound rule that allows NFS traffic (port 2049) to the EFS file system.

  • Subdirectories: If you are mounting the pod to a subdirectory, verify the subdirectory is created in your EFS file system. When you add sub paths in persistent volumes, the EFS CSI driver does not create the subdirectory path in the EFS file system as part of the mount operation. Subdirectories must be present before you start the mount operation.

  • DNS server: Confirm the cluster's Virtual Private Cloud (VPC) uses the Amazon DNS server.

  • Permissions: Verify the iam mount option is set in the PersistentVolume (PV) definition when using a restrictive file system policy. In some cases, the EFS file system policy is configured to restrict mount permissions to specific IAM roles. In this case, the EFS mount helper in the EFS CSI driver requires the -o iam mount option during the mount operation. Include the spec.mountOptions property in the PersistentVolume (PV) definition to specify the mount options. A complete PV sketch is provided after this list.

    spec:
      mountOptions:
        - iam
  • IAM role: Verify the Amazon EFS CSI driver controller service account is associated with the correct IAM role and that the IAM role has the required permissions. Use the following command to view the service account annotation.

    kubectl describe sa efs-csi-controller-sa --namespace kube-system

    The output should look similar to the following:

    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_EFS_CSI_Driver_Policy
  • Driver Pods: Verify the EFS CSI driver pods are active. Issue the following command to display a list of controller pods and node pods active in your cluster.

    kubectl get all --selector app.kubernetes.io/name=aws-efs-csi-driver --namespace kube-system
  • File System Not Mounting: Verify the EFS mount operation from the EC2 worker node where the pod is failing to mount the file system. Log in to the Amazon EKS worker node where the pod is scheduled. Then, use the EFS mount helper to try to manually mount the EFS file system to the worker node. Use the following command to mount the EFS file system.

    sudo mount --types efs --options tls file-system-dns-name efs-mount-point/
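
As noted in the Permissions item above, the following is a minimal sketch of a static PersistentVolume definition that includes the iam mount option. The file system ID in volumeHandle is a placeholder; replace it with your EFS file system ID.

# Illustrative static PersistentVolume using the EFS CSI driver.
# The volumeHandle below is a placeholder EFS file system ID.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs
  mountOptions:
    - iam
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678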

Check out the Amazon EFS troubleshooting guide for more information.

Terraform

You can reference the AWS EFS pack in Terraform with a data source.

data "spectrocloud_registry" "public_registry" {
name = "Public Repo"
}

data "spectrocloud_pack_simple" "csi-aws-efs" {
name = "aws-efs"
version = "1.7.0"
type = "helm"
registry_uid = data.spectrocloud_registry.public_registry.id
}

References