Prepare Network

warning

The information in this guide is provided as general guidance and example configuration only. It may not meet the specific requirements of your environment or cover every possible scenario. Ensure that you tailor these examples and validation steps to the needs of your networking infrastructure.

Preparing your network for Amazon EKS Hybrid Nodes involves configuring both the AWS and remote network environments. You will also need a secure inter-site connection to enable communication between the edge hosts and your Amazon EKS cluster.

The three main areas you need to configure are:

  1. AWS Network
  2. Remote Network Environment
  3. Inter-Site Connectivity

This guide provides common steps and example configurations for each of these areas. The following diagram provides a high-level example of a networking setup for Amazon EKS Hybrid Nodes.

Example Amazon EKS Hybrid Nodes network architecture

AWS Network

This section provides the steps and example configuration for your AWS network, as described in the AWS documentation.

Prerequisites

  • An AWS account with permissions to view, create, and modify the following resources:

    • VPC
    • Classless Inter-Domain Routing (CIDR) blocks
    • Subnets
    • Route tables
    • Internet gateways
    • Network Address Translation (NAT) gateways
      • Elastic IPs
    • Security groups

    Refer to Amazon VPC policy examples for guidance on Identity and Access Management (IAM) permissions.
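
To confirm that your credentials have the required permissions before you begin, you can issue dry-run requests with the AWS CLI. The following commands are a minimal sketch and assume the AWS CLI is installed and configured; a DryRunOperation response indicates the call would have succeeded, whereas UnauthorizedOperation indicates missing permissions.

    # Confirm which identity the AWS CLI is using.
    aws sts get-caller-identity

    # Dry-run a VPC creation to check EC2 permissions without creating resources.
    aws ec2 create-vpc --cidr-block 10.100.0.0/16 --dry-run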

Configure AWS Network

  1. Create a VPC in the AWS Region where the Amazon EKS cluster will be located, for example, us-east-1. The following table is an example configuration for the VPC.

    Setting         | Example Value
    Name tag        | eks-hybrid-vpc
    IPv4 CIDR block | IPv4 CIDR manual input
    IPv4 CIDR       | 10.100.0.0/16
    IPv6 CIDR block | No IPv6 CIDR block
    Tenancy         | Default
  2. Edit your VPC settings and enable the following options in DNS settings:

    • Enable DNS resolution
    • Enable DNS hostnames
  3. Create two subnets for the VPC that you created. The following table is an example configuration for the subnets.

    Setting                | Example Value for Subnet 1             | Example Value for Subnet 2
    VPC ID                 | vpc-0518d3603257bf85d (eks-hybrid-vpc) | vpc-0518d3603257bf85d (eks-hybrid-vpc)
    Subnet name            | eks-hybrid-subnet-1                    | eks-hybrid-subnet-2
    Availability Zone      | us-east-1a                             | us-east-1b
    IPv4 VPC CIDR block    | 10.100.0.0/16                          | 10.100.0.0/16
    IPv4 subnet CIDR block | 10.100.0.0/24                          | 10.100.1.0/24
  4. If you plan to deploy AWS worker nodes to a public subnet, edit the subnet settings and enable the Enable auto-assign public IPv4 address option.

  5. If you want any of your subnets to be public, create and attach an internet gateway to the VPC that you created.

  6. For any subnets that you want to keep private, create a NAT gateway for those subnets. The following table is an example configuration for the NAT gateway.

    Setting                  | Example Value for Subnet 1
    Name                     | eks-hybrid-private-subnet-gateway
    Subnet                   | subnet-0cdebdb570d3ca783 (eks-hybrid-subnet-1)
    Connectivity type        | Public
    Elastic IP allocation ID | eipalloc-05e4fdafb32b05447

    The NAT gateway requires an Elastic IP address when setting Connectivity type to Public. Ensure you have already allocated one in the same region as your VPC and subnet. Refer to Allocate an Elastic IP address for guidance.

  7. Edit the main route table depending on whether your subnets will be private or public. The main route table is created automatically for the subnets within the VPC.

    If you want one private subnet and one public subnet, follow the steps to edit the main route table for your private subnet first. Then, create a custom route table for your VPC and configure it for your public subnet.


    Example route table
    Destination   | Target                | Status | Propagated
    0.0.0.0/0     | nat-0327bf58440ab78b9 | Active | No
    10.100.0.0/16 | local                 | Active | No
  8. Create a security group for your VPC that contains the necessary rules to allow communication with your remote environment. This security group is added as an additional security group when creating your EKS cluster as described in the Prepare EKS Cluster steps.

    The following table is an example configuration for the security group. An AWS CLI sketch covering these steps is provided after this procedure.

    Setting                | Example Value
    Security group name    | eks-hybrid-remote-rules-sg
    Description (optional) | "EKS Hybrid remote environment communication"
    VPC                    | vpc-0518d3603257bf85d (eks-hybrid-vpc)
    Tags (optional)        | Name = eks-hybrid-remote-rules-sg
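
If you prefer to script this configuration, the following AWS CLI sketch mirrors the example values above. It is a minimal outline rather than a complete implementation; the IDs returned by earlier commands are placeholders, written here as values such as <vpcId>, that you must substitute into later commands.

    # Step 1: Create the VPC with the example CIDR block.
    aws ec2 create-vpc --cidr-block 10.100.0.0/16 \
      --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=eks-hybrid-vpc}]'

    # Step 2: Enable DNS resolution and DNS hostnames on the VPC (one call per attribute).
    aws ec2 modify-vpc-attribute --vpc-id <vpcId> --enable-dns-support '{"Value":true}'
    aws ec2 modify-vpc-attribute --vpc-id <vpcId> --enable-dns-hostnames '{"Value":true}'

    # Step 3: Create the two example subnets.
    aws ec2 create-subnet --vpc-id <vpcId> --cidr-block 10.100.0.0/24 --availability-zone us-east-1a
    aws ec2 create-subnet --vpc-id <vpcId> --cidr-block 10.100.1.0/24 --availability-zone us-east-1b

    # Step 4: Enable auto-assign public IPv4 addresses on a public subnet.
    aws ec2 modify-subnet-attribute --subnet-id <publicSubnetId> --map-public-ip-on-launch

    # Step 5: Create and attach an internet gateway for public subnets.
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id <igwId> --vpc-id <vpcId>

    # Step 6: Allocate an Elastic IP and create a public NAT gateway for the private subnet.
    aws ec2 allocate-address --domain vpc
    aws ec2 create-nat-gateway --subnet-id <publicSubnetId> --allocation-id <eipAllocId> --connectivity-type public

    # Step 7: Route private subnet traffic through the NAT gateway, and public subnet traffic
    # through the internet gateway using a custom route table.
    aws ec2 create-route --route-table-id <mainRouteTableId> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <natGatewayId>
    aws ec2 create-route-table --vpc-id <vpcId>
    aws ec2 create-route --route-table-id <publicRouteTableId> --destination-cidr-block 0.0.0.0/0 --gateway-id <igwId>

    # Step 8: Create the security group for remote environment communication.
    aws ec2 create-security-group --group-name eks-hybrid-remote-rules-sg \
      --description "EKS Hybrid remote environment communication" --vpc-id <vpcId>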

Validate

  1. Log in to the AWS Management Console.

  2. Check that the network resources you created have a State of Available or Attached in the chosen region. The expected resources are as follows:

    • AWS VPC for your Amazon EKS cluster.
    • Two subnets within the AWS VPC.
    • Internet gateways for your public subnets.
    • NAT gateways for your private subnets.
  3. Check that you have created the following additional resources:

    • Route tables for your subnets.
    • Default security group for your VPC and custom security group for your remote environment.
    • Elastic IPs for your NAT gateways, if any.
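
You can also check the same resources from the command line. The following AWS CLI commands are a minimal sketch, assuming the AWS CLI is configured for the region where you created the resources; replace <vpcId> with your VPC ID.

    # Confirm the VPC exists and is available.
    aws ec2 describe-vpcs --filters Name=tag:Name,Values=eks-hybrid-vpc

    # List the subnets, internet gateways, and NAT gateways associated with the VPC.
    aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpcId>
    aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=<vpcId>
    aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=<vpcId>

    # List the route tables and security groups for the VPC.
    aws ec2 describe-route-tables --filters Name=vpc-id,Values=<vpcId>
    aws ec2 describe-security-groups --filters Name=vpc-id,Values=<vpcId>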

Remote Network Environment

This section provides high-level steps and example configuration for your remote network environment as described in the AWS documentation under On-premises networking configuration.

Prerequisites

  • Access to your on-prem/remote core network devices with sufficient privileges to create and modify network configurations. This includes the following:

    • Permissions to define or adjust IP address allocations in the on-prem/remote environment to avoid CIDR overlap with AWS VPCs.

    • Permissions to configure or update firewall rules and NAT settings.

Configure Remote Network

  1. Configure your Virtual Local Area Network (VLAN) or subnet definitions to a suitable IP range for your hybrid nodes. Refer to On-premises node and pod CIDRs for AWS requirements on CIDR blocks in remote networks.

    • Example hybrid node CIDR block = 10.200.0.0/16
      • Example hybrid node subnets = 10.200.0.0/24, 10.200.1.0/24
    • Example pod CIDR block = 192.168.0.0/16
  2. Configure your NAT settings to allow outbound internet access from your hybrid nodes or, at a minimum, access to the necessary AWS services for hybrid node installation and upgrade.

    The following table is an example list of allowed service URLs in the us-east-1 region. All endpoints are accessed with the HTTPS protocol on port 443. The services vary depending on the credential model used for the hybrid nodes.

    Service                              | Endpoint URL
    Amazon EKS                           | https://hybrid-assets.eks.amazonaws.com
    Amazon EKS                           | https://eks.us-east-1.amazonaws.com
    Amazon ECR                           | https://api.ecr.us-east-1.amazonaws.com
    Amazon EKS ECR                       | https://602401143452.dkr.ecr.us-east-1.amazonaws.com
    AWS Systems Manager (SSM)            | https://amazon-ssm-us-east-1.s3.us-east-1.amazonaws.com
    AWS Systems Manager (SSM)            | https://ssm.us-east-1.amazonaws.com
    (Optional) AWS Systems Manager (SSM) | https://ec2messages.us-east-1.amazonaws.com
    (Optional) Amazon CloudWatch Logs    | https://logs.us-east-1.amazonaws.com
    (Optional) Amazon S3                 | https://s3.us-east-1.amazonaws.com
  3. Configure your firewall rules to allow node and pod communication with necessary AWS services as described in Access required for ongoing cluster operations.

    The following table lists example on-prem/remote firewall rules for AWS services.

    Protocols   | Port Range | Source         | Destination    | Description
    TCP         | 10250      | 10.100.0.0/16  | 10.200.0.0/16  | Amazon EKS cluster to hybrid nodes.
    TCP         | 443        | 10.100.0.0/16  | 192.168.0.0/16 | Amazon EKS cluster to hybrid pods.
    TCP, UDP    | 53         | 192.168.0.0/16 | 192.168.0.0/16 | Hybrid pods to CoreDNS.
    TCP, UDP    | 443        | 192.168.0.0/16 | 192.168.0.0/16 | Hybrid pod to hybrid pod application port.
    SSH         | 22         | 10.100.0.0/16  | 10.200.0.0/16  | (Optional) Amazon EKS VPC CIDR to hybrid nodes for SSH access.
    ICMP - IPv4 | All        | 10.100.0.0/16  | 10.200.0.0/16  | (Optional) Amazon EKS VPC CIDR to hybrid nodes for ICMP access.
  4. Configure firewall rules for Cilium operation. Cilium is used as the Container Network Interface (CNI) for hybrid nodes and requires firewall rules to allow health checks, Virtual Extensible Local Area Network (VXLAN) overlay, and etcd access.

    The following table lists example on-prem/remote firewall rules for Cilium and assumes that hybrid nodes act as worker nodes without VXLAN overlay networking.

    Protocol | Port Range       | Source         | Destination    | Description
    TCP      | 4240             | 10.100.0.0/16  | 10.200.0.0/16  | AWS to hybrid node for cilium-health monitoring.
    TCP      | 4240             | 10.200.0.0/16  | 10.200.0.0/16  | Hybrid node to hybrid node for cilium-health monitoring.
    ICMP     | Type 0/8, Code 0 | 10.100.0.0/16  | 10.200.0.0/16  | AWS to hybrid node pings for cilium-health.
    ICMP     | Type 0/8, Code 0 | 10.200.0.0/16  | 10.200.0.0/16  | Hybrid node to hybrid node pings for cilium-health.
  5. Configure firewall rules for Palette SaaS operation, which requires inbound and outbound connectivity to Palette SaaS services and ports. An example Linux firewall sketch covering steps 2 through 4 is provided after this procedure.
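
The exact configuration depends on your on-prem firewall and NAT devices. If those devices are Linux-based and use iptables, the following sketch illustrates how the example NAT setting from step 2 and the firewall rules from steps 3 and 4 might be expressed. The interface name eth0 is an assumption, and the CIDR values are the examples from the tables above; adapt both to your environment.

    # Step 2 (example): NAT outbound traffic from the hybrid node CIDR to the internet.
    iptables -t nat -A POSTROUTING -s 10.200.0.0/16 -o eth0 -j MASQUERADE

    # Step 3 (example): Allow the Amazon EKS cluster to reach hybrid nodes and pods.
    iptables -A FORWARD -p tcp -s 10.100.0.0/16 -d 10.200.0.0/16 --dport 10250 -j ACCEPT
    iptables -A FORWARD -p tcp -s 10.100.0.0/16 -d 192.168.0.0/16 --dport 443 -j ACCEPT

    # Step 3 (example): Allow pod-to-pod DNS and application traffic.
    iptables -A FORWARD -p tcp -s 192.168.0.0/16 -d 192.168.0.0/16 --dport 53 -j ACCEPT
    iptables -A FORWARD -p udp -s 192.168.0.0/16 -d 192.168.0.0/16 --dport 53 -j ACCEPT
    iptables -A FORWARD -p tcp -s 192.168.0.0/16 -d 192.168.0.0/16 --dport 443 -j ACCEPT

    # Step 4 (example): Allow cilium-health checks and pings between AWS and hybrid nodes.
    iptables -A FORWARD -p tcp -s 10.100.0.0/16 -d 10.200.0.0/16 --dport 4240 -j ACCEPT
    iptables -A FORWARD -p tcp -s 10.200.0.0/16 -d 10.200.0.0/16 --dport 4240 -j ACCEPT
    iptables -A FORWARD -p icmp --icmp-type echo-request -s 10.100.0.0/16 -d 10.200.0.0/16 -j ACCEPT
    iptables -A FORWARD -p icmp --icmp-type echo-request -s 10.200.0.0/16 -d 10.200.0.0/16 -j ACCEPT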

Validate

  1. Log in to your on-prem/remote network device or network management tool.

  2. Check that the following network resources have been configured for hybrid nodes.

    • VLAN or subnets defined for appropriate IP ranges for hybrid nodes.
    • NAT settings for outbound access to AWS services.
    • Firewall rules for AWS services and Cilium operations.
  3. (Optional) If you have an available host deployed within the VLAN or subnet, SSH into the host, and verify the host can connect to the required AWS and Spectro Cloud services.

    For example, if you have netcat installed, issue the following command on the Edge host to check whether the eks.us-east-1.amazonaws.com domain is accessible on port 443.

    nc -z -v eks.us-east-1.amazonaws.com 443

    Example output, if successful.

    Connection to eks.us-east-1.amazonaws.com port 443 [tcp/https] succeeded!
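
To check several endpoints at once, you can wrap the same netcat check in a short loop. This is a minimal sketch that assumes netcat is installed on the host and uses the us-east-1 endpoints from the table in step 2 of the configuration section.

    # Check TCP connectivity to the required AWS endpoints on port 443.
    for endpoint in \
      hybrid-assets.eks.amazonaws.com \
      eks.us-east-1.amazonaws.com \
      api.ecr.us-east-1.amazonaws.com \
      602401143452.dkr.ecr.us-east-1.amazonaws.com \
      ssm.us-east-1.amazonaws.com; do
      nc -z -v -w 5 "$endpoint" 443
    done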

Inter-Site Connectivity

This section outlines high-level steps and example configurations for establishing inter-site connectivity between AWS and your on-prem/remote environment.

Inter-site connectivity can be configured using a variety of methods, such as AWS Site-to-Site VPN and AWS Direct Connect.

Refer to Network-to-Amazon VPC connectivity options for guidance on all available options.

important

This section's primary focus is AWS Site-to-Site VPN, although some steps can be adapted for AWS Direct Connect.

Prerequisites

  • An AWS account with permissions to view, create, and modify the following resources:

    • Route tables
    • Transit gateways, virtual private gateways, or both
    • Security groups
    • Customer gateway
    • Site-to-Site VPN or Direct Connect

    Refer to Amazon VPC policy examples for guidance on Identity and Access Management (IAM) permissions.

  • Access to your on-prem/remote core network devices with sufficient privileges to create and modify network configurations. This includes the following:

    • If using a VPN, permissions to configure or update IPsec tunnels/security policies.

    • If using an IPsec VPN or similar encrypted connection, permissions to generate, install, and rotate certificates or keys on the local network equipment.

    • If using a VPN, permissions to update firewall rules for VPN ports.

    • Permissions to adjust or introduce Border Gateway Protocol (BGP) or manage static routes that control traffic to and from AWS.

  • The network connectivity between AWS and the on-prem/remote environment meets the recommended minimum network requirements.

Configure Inter-Site Connectivity

  1. In AWS, create a customer gateway.

    The following table is an example configuration for the customer gateway. Refer to Customer gateway options for your AWS Site-to-Site VPN connection for further guidance on all available options.

    Setting             | Example Value               | Description
    Name tag (optional) | eks-hybrid-remote-gateway-1 | The optional AWS Name tag value for the customer gateway.
    BGP ASN             | 65000                       | The BGP Autonomous System Number (ASN) used to identify your on-prem/remote gateway in BGP route exchanges. It must be distinct from the ASN configured on the AWS target gateway.
    IP address          | 3.232.157.211               | The public IP address of the on-prem/remote gateway used to establish the VPN connection.
    Device (optional)   | eks-hybrid-remote-gateway-1 | An optional identifier for the device, used for reference within the AWS console.
  2. In AWS, create a target gateway and attach it to your VPC. Virtual private gateways attach to a single VPC, whereas transit gateways can connect multiple VPCs.

    important

    If you are planning to use AWS Direct Connect, ensure that you create the gateway in the AWS Direct Connect console. If using AWS Site-to-Site VPN, create the gateway in the AWS VPC console.

    The following table is an example configuration for a virtual private gateway used as the target gateway.

    Setting                        | Example Value                          | Description
    Name tag (optional)            | eks-hybrid-vgw                         | The optional AWS Name tag value for the virtual private gateway.
    Autonomous System Number (ASN) | Amazon default ASN                     | A unique numeric identifier that the AWS gateway uses in BGP route exchanges. This ASN must be distinct from the customer gateway ASN.
    Attach to VPC                  | vpc-0518d3603257bf85d (eks-hybrid-vpc) | The AWS VPC that the virtual private gateway is attached to. In the AWS Console, this step is performed after creating the virtual private gateway.
  3. Configure routing in AWS to enable traffic from your on-prem/remote network.

    info

    If using AWS Direct Connect, you would need to map traffic from your on-prem/remote network to your AWS VPC private subnet CIDRs.
    For example, both remote node and pod CIDRs 10.200.0.0/16 and 192.168.0.0/16 → Private subnet CIDRs 10.100.0.0/24 and 10.100.1.0/24.

    • If using a virtual private gateway, enable route propagation on your subnet route tables.

      • If using static routing, once your IPsec tunnels are established in step 7, the remote node and remote pod CIDR routes should automatically propagate to your subnet route tables.

        Example
        Destination    | Target                | Status | Propagated
        10.200.0.0/16  | vgw-08b7d849217105d6f | Active | Yes
        192.168.0.0/16 | vgw-08b7d849217105d6f | Active | Yes

        If they are not automatically propagated, you will need to define them manually.

    • If using a transit gateway, add two routes to your subnet route tables and your transit gateway route table.

      • For the subnet route tables, add routes that target the transit gateway for traffic destined for the remote node CIDRs and remote pod CIDRs.

        Example
        Destination    | Target                | Status | Propagated
        10.200.0.0/16  | tgw-06e39deb85a158d2e | Active | No
        192.168.0.0/16 | tgw-06e39deb85a158d2e | Active | No
      • For the transit gateway route table, create active static routes for the remote node CIDRs and remote pod CIDRs. These should be attached to the VPN.

        Example
        CIDR           | Attachment ID                | Resource ID                           | Resource type | Route type | Route state
        10.200.0.0/16  | tgw-attach-0b80c5b8aff518ead | vpn-0c3568c2303ac18df (3.232.157.211) | VPN           | Static     | Active
        192.168.0.0/16 | tgw-attach-0b80c5b8aff518ead | vpn-0c3568c2303ac18df (3.232.157.211) | VPN           | Static     | Active
  4. Create a VPN connection in AWS.

    The following table is an example configuration for an AWS Site-to-Site VPN that uses a virtual private gateway as the target gateway. An AWS CLI sketch covering steps 1 through 4 is provided after this procedure.

    Setting                 | Example Value                                       | Description
    Name tag (optional)     | eks-hybrid-sts-vpn                                  | The optional AWS Name tag value for the Site-to-Site VPN.
    Target gateway type     | Virtual private gateway                             | The target gateway type. This should match what you created in step 2.
    Virtual private gateway | vgw-08b7d849217105d6f (eks-hybrid-vgw)              | The virtual private gateway for the Amazon EKS VPC.
    Customer gateway        | Existing                                            | Choose Existing to select the customer gateway created in step 1.
    Customer gateway ID     | cgw-0b4ec7c65c5189d1e (eks-hybrid-remote-gateway-1) | The customer gateway for your on-prem/remote gateway connection.
    Routing options         | Static                                              | Whether to use dynamic or static routing.
    Static IP prefixes      | 10.200.0.0/16, 192.168.0.0/16                       | If using static routing, the remote node CIDR and remote pod CIDR must be added here.
  5. Download the configuration file to help you configure your on-prem/remote gateway device.

    Example download configuration
    Setting     | Example Value
    Vendor      | pfSense
    Platform    | pfSense
    Software    | pfSense 2.2.5+ (GUI)
    IKE version | ikev1
  6. On your on-prem/remote VPN gateway, configure IPsec Phase 1 tunnels with Phase 2 security associations to establish a connection to your AWS VPN. The Phase 2 security associations need to include the following routes:

    • Hybrid node network CIDR to AWS VPC CIDR.

      • This can be split into multiple routes for each hybrid node subnet. If doing so, ensure that the AWS VPN has paired traffic selectors configured. If using an AWS Site-to-Site VPN, this would be configured through the Local IPv4 Network CIDR and Remote IPv4 Network CIDR settings.
    • Hybrid pod network CIDR to AWS VPC CIDR.

    The following screenshot shows an example IPsec tunnel configuration on a pfSense device.

    Example IPsec tunnel configuration in pfSense

  7. Ensure that your IPsec tunnels have a connection established on your on-prem/remote VPN gateway for both phase 1 and phase 2 connections. The following screenshot shows an example of a connected and disconnected tunnel on a pfSense device.

    Example IPsec tunnel status in pfSense

  8. Configure your on-prem/remote firewall rules to allow VPN traffic. You will need to configure rules for each IPsec tunnel.

    The following table describes example firewall rules, where 52.44.108.101 and 3.225.148.144 are example Outside IP Address entries for the AWS Site-to-Site VPN tunnels.

    Protocol    | Port | Source                       | Description
    UDP         | 1194 | 52.44.108.101, 3.225.148.144 | OpenVPN traffic.
    UDP         | 500  | 52.44.108.101, 3.225.148.144 | IKE Phase 1 for IPsec VPN.
    UDP         | 4500 | 52.44.108.101, 3.225.148.144 | NAT-Traversal (NAT-T) for IPsec VPN.
    ESP (IP 50) | N/A  | 52.44.108.101, 3.225.148.144 | Encapsulating Security Payload (ESP) for IPsec data encryption.
  9. Ensure that the appropriate NAT exemptions or policies, such as IPsec passthrough, are configured so that IPsec traffic is not inadvertently translated.

  10. Configure your on-prem/remote router to ensure network traffic to and from AWS reaches the correct hybrid nodes.

    In both BGP and static routing scenarios, a route must exist to send all Amazon EKS VPC-bound traffic through a centralized VPN gateway.

    • Use BGP to share your remote node and pod CIDRs with AWS.

      • If using AWS Direct Connect, this may be all that is required as AWS can route directly to individual on-prem/remote nodes.
    • Automate local route advertisement so your on-prem/remote routers dynamically learn each node’s CIDR, removing the need for manual route management.

    • In VPN setups where AWS routes all traffic to a single on-prem VPN server, rely on BGP to direct traffic to the correct on-prem/remote nodes or set up static routes as needed.

    • Optionally, you can define a unique VPN server IP for each hybrid node as a fallback during the Create Hybrid Node Pool steps.

      • If your on-prem/remote gateway or default gateway does not automatically route traffic bound for the AWS VPC CIDR to the VPN server, even when BGP is used, this feature ensures each node can still reach out to AWS. This is not necessary if the network already has the proper route to the AWS VPC CIDR.
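
If you prefer to create the AWS-side resources from the command line, the following AWS CLI sketch covers steps 1 through 4 for a virtual private gateway with static routing. It is an outline only; IDs returned by earlier commands are placeholders, written here as values such as <vgwId>, that you must substitute into later commands, and the values match the examples in the tables above.

    # Step 1: Create the customer gateway for the on-prem/remote VPN endpoint.
    aws ec2 create-customer-gateway --type ipsec.1 --public-ip 3.232.157.211 --bgp-asn 65000

    # Step 2: Create the virtual private gateway and attach it to the VPC.
    aws ec2 create-vpn-gateway --type ipsec.1
    aws ec2 attach-vpn-gateway --vpn-gateway-id <vgwId> --vpc-id <vpcId>

    # Step 3: Enable route propagation on the subnet route table.
    aws ec2 enable-vgw-route-propagation --route-table-id <routeTableId> --gateway-id <vgwId>

    # Step 4: Create the Site-to-Site VPN connection with static routing.
    aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id <cgwId> \
      --vpn-gateway-id <vgwId> --options '{"StaticRoutesOnly":true}'

    # Step 4: Add the remote node and pod CIDRs as static routes on the VPN connection.
    aws ec2 create-vpn-connection-route --vpn-connection-id <vpnId> --destination-cidr-block 10.200.0.0/16
    aws ec2 create-vpn-connection-route --vpn-connection-id <vpnId> --destination-cidr-block 192.168.0.0/16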

Validate

  1. Log in to the Amazon VPC console.

  2. Check that your AWS Site-to-Site VPN connection has two tunnels with a Status of Up.

  3. If you have an available host deployed within the on-prem/remote VLAN or subnet, SSH into the host, and attempt to reach your AWS VPC gateway.

    Replace <awsVpcGateway> with the IP address of your AWS VPC gateway, for example, 10.100.0.1.

    ping <awsVpcGateway>

    Check that the ping statistics from the output show a healthy connection.

    Example healthy output.

    PING 10.100.0.1 (10.100.0.1) 56(84) bytes of data.
    64 bytes from 10.100.0.1: icmp_seq=1 ttl=64 time=27.5 ms
    64 bytes from 10.100.0.1: icmp_seq=2 ttl=64 time=28.2 ms
    64 bytes from 10.100.0.1: icmp_seq=3 ttl=64 time=29.1 ms
    64 bytes from 10.100.0.1: icmp_seq=4 ttl=64 time=27.9 ms
    --- 10.100.0.1 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3999ms
    rtt min/avg/max/mdev = 27.5/28.2/29.1/0.6 ms
  4. If you have an EC2 instance available that has been deployed in your AWS VPC, SSH into the instance, and attempt to reach an available host deployed within the on-prem/remote VLAN or subnet.

    Replace <hostIpAddress> with the IP address of your on-prem/remote host, for example, 10.200.1.23.

    ping <hostIpAddress>

    Check that the ping statistics from the output show a healthy connection.

    Example healthy output.

    PING 10.200.1.23 (10.200.1.23) 56(84) bytes of data.
    64 bytes from 10.200.1.23: icmp_seq=1 ttl=64 time=27.5 ms
    64 bytes from 10.200.1.23: icmp_seq=2 ttl=64 time=28.2 ms
    64 bytes from 10.200.1.23: icmp_seq=3 ttl=64 time=29.1 ms
    64 bytes from 10.200.1.23: icmp_seq=4 ttl=64 time=27.9 ms
    --- 10.200.1.23 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3999ms
    rtt min/avg/max/mdev = 27.5/28.2/29.1/0.6 ms
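
You can also confirm the tunnel status from the command line. The following AWS CLI command is a minimal sketch; replace <vpnConnectionId> with the ID of your Site-to-Site VPN connection.

    # Show the status of both VPN tunnels; a healthy connection reports two tunnels with Status UP.
    aws ec2 describe-vpn-connections --vpn-connection-ids <vpnConnectionId> \
      --query 'VpnConnections[].VgwTelemetry[].{OutsideIp:OutsideIpAddress,Status:Status}'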

Next Steps

Complete the remaining sections as highlighted in Prepare Environment.