Get Started with Palette and Terraform

This tutorial is based on the Terraform source code. It assumes that you have working knowledge of Terraform and some understanding of how clusters work within Palette.

By the time you complete this tutorial, you will have done the following:

  1. Deployed a cluster on Amazon Elastic Kubernetes Service (EKS) with an infrastructure profile, using the Spectro Cloud Terraform provider.

  2. Modified the cluster profile to include add-ons and redeployed the cluster.

  3. Cleaned up by destroying the cluster.


Tutorial Objectives

  1. Use Terraform to create a Cluster Profile.


  2. Use Terraform to build and destroy a Cluster in Palette.

Prerequisites

To create a cluster on Palette with Terraform, you need the following:

  • A Palette account and an API key for the project you want to deploy into.

  • An AWS account with an access key ID and secret access key that Palette can use to provision EKS resources.

  • An SSH key pair created in the AWS region you intend to deploy to.

  • Terraform installed locally (version 0.14.9 or later, matching the required_version constraint below).


Terraform Configuration File

The configuration is organized into three parts, described in the following sections: Providers (the provider.tf file), Variables (the files in the tfvars directory), and Modules (the main.tf file).

Providers

The required Terraform providers are declared in the provider.tf file. The Spectro Cloud provider is configured with the API host, an API key, and the project name, as illustrated below:


terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
    spectrocloud = {
      source  = "spectrocloud/spectrocloud"
      version = "0.6.1-pre"
    }
  }
  required_version = ">= 0.14.9"
}

provider "spectrocloud" {
  # Configuration options
  host         = var.spectro_host
  api_key      = var.spectro_api_key
  project_name = var.spectro_project_name
}


Variables

You can target multiple environments. Each environment has its variables defined in a file within the ./tfvars directory.


aws_access_key = "To fill"
aws_secret_key = "To fill"
spectro_host = "api.spectro.com"
spectro_api_key = "To fill"
spectro_project_name = "Default"
cluster_name = "my-test-cluster"
cluster_profile = "eks-profile"
cluster_profile_cloud = "eks"
aws_region = "us-east-1"
worker_nodes_count = "5"
worker_instance_type = "t3.large"
worker_disk_size = "30"
aws_ssh_key_name = "To fill"

The example above shows the dev.tfvars file, which defines the variables for a development environment. It contains credentials and cluster specifications, such as the number of worker nodes and the instance type.

Note: An SSH key pair must already exist in the target AWS region; its name is passed through the aws_ssh_key_name variable.
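
Each key in the tfvars file needs a matching variable declaration in the root module. The snippet below is a minimal sketch of what such a declarations file (here assumed to be named variables.tf) could look like; the variable names simply mirror the tfvars keys above, and the sensitive flags are an added suggestion to keep credentials out of CLI output, not part of the original source.


# variables.tf (sketch): declarations that mirror the keys in tfvars/dev.tfvars.
variable "aws_access_key" {
  type      = string
  sensitive = true # assumption: keep the credential out of plan/apply output
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

variable "spectro_api_key" {
  type      = string
  sensitive = true
}

variable "worker_nodes_count" {
  type = number
}

variable "worker_disk_size" {
  type = number
}

# The remaining keys (spectro_host, spectro_project_name, cluster_name,
# cluster_profile, cluster_profile_cloud, aws_region, worker_instance_type,
# aws_ssh_key_name) are plain string variables declared the same way.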


Modules

In the main.tf file, three submodules are called to create resources on Palette:


  • Cloud Account: It creates and configures the AWS cloud account on Palette using the access key ID and secret access key. This account is required for the next steps.


    resource "spectrocloud_cloudaccount_aws" "aws-1" {
      name           = "aws-1"
      aws_access_key = var.aws_access_key
      aws_secret_key = var.aws_secret_key
    }

    terraform {
      required_providers {
        spectrocloud = {
          source  = "spectrocloud/spectrocloud"
          version = "0.6.1-pre"
        }
      }
    }

  • Cluster Profile: It encapsulates the Kubernetes cluster configuration.

    Palette uses packs to define layers such as the OS, the CNI plugin, the Kubernetes version, and the storage interface. These packs are retrieved as data sources in Terraform.

    A Cluster Profile is defined by a set of packs. In the example below, the profile type is cluster because it configures the Kubernetes cluster itself.


    data "spectrocloud_pack" "amazon-linux-eks" {
      name    = "amazon-linux-eks"
      version = "1.0.0"
    }

    data "spectrocloud_pack" "cni" {
      name    = "cni-aws-vpc-eks"
      version = "1.0"
    }

    data "spectrocloud_pack" "k8s" {
      name    = "kubernetes-eks"
      version = "1.2.1"
    }

    data "spectrocloud_pack" "csi" {
      name = "csi-aws"
      # version = "1.0.0"
    }

    resource "spectrocloud_cluster_profile" "profile" {
      name  = var.cluster_profile
      cloud = var.cluster_profile_cloud
      type  = "cluster"

      pack {
        name = data.spectrocloud_pack.amazon-linux-eks.name
        tag  = "1.0.0"
        uid  = data.spectrocloud_pack.amazon-linux-eks.id
      }

      pack {
        name = data.spectrocloud_pack.k8s.name
        tag  = "1.21.x"
        uid  = data.spectrocloud_pack.k8s.id
      }

      pack {
        name = "cni-aws-vpc-eks"
        tag  = "1.0"
        uid  = data.spectrocloud_pack.cni.id
      }

      pack {
        name = "csi-aws"
        tag  = "1.0.0"
        uid  = data.spectrocloud_pack.csi.id
      }
    }


  • Add-on Packs: A Cluster Profile of type add-on, on the other hand, is used to template application configurations, such as Kubeflow, as shown in the example below.


    data "spectrocloud_pack" "kubeflow" {
      name    = "kubeflow"
      version = "1.2.0"
    }

    resource "spectrocloud_cluster_profile" "addon-profile" {
      name        = "kubeflow"
      description = "Kubeflow"
      type        = "add-on"

      pack {
        name   = data.spectrocloud_pack.kubeflow.name
        tag    = data.spectrocloud_pack.kubeflow.version
        uid    = data.spectrocloud_pack.kubeflow.id
        values = data.spectrocloud_pack.kubeflow.values
      }
    }

Building the Infrastructure

Terraform automatically manages the dependencies between the three submodules in the main.tf file. It creates the Cloud Account first, then the Cluster Profile, and finally the Cluster.
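
The cluster resource itself is not shown in the snippets above. The following is a minimal sketch of what it could look like using the spectrocloud_cluster_eks resource, wiring together the cloud account, the cluster profile, and the worker-pool variables from dev.tfvars. The attribute names (cloud_config, machine_pool, and the cluster_profile block) are assumptions based on the Spectro Cloud provider documentation and may differ in the pinned 0.6.1-pre release, so verify them against the docs for your provider version.


# Sketch of the EKS cluster resource; attribute names may vary by provider version.
resource "spectrocloud_cluster_eks" "cluster" {
  name             = var.cluster_name
  cloud_account_id = spectrocloud_cloudaccount_aws.aws-1.id

  # Attach the infrastructure profile created in the Modules section.
  cluster_profile {
    id = spectrocloud_cluster_profile.profile.id
  }

  cloud_config {
    region       = var.aws_region
    ssh_key_name = var.aws_ssh_key_name
  }

  # Worker pool sized from the dev.tfvars values.
  machine_pool {
    name          = "worker-pool"
    count         = var.worker_nodes_count
    instance_type = var.worker_instance_type
    disk_size_gb  = var.worker_disk_size
  }
}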


Deploy the Cluster

To provision all resources with Terraform, execute the following commands:


  1. Initialize the backend and download the required providers and plugins.


    terraform init

    Terraform automatically downloads all of the required providers and their dependencies.


  2. Apply the Terraform code to provision the resources, passing in the environment's variables file.


    terraform apply -var-file tfvars/dev.tfvars

    Terraform describes the changes required to reach the desired resource state. There are three types of actions:


    • Add - A new resource will be created (marked with + in the plan output).

    • Change - An existing resource will be updated in place; one or more attributes will be modified (marked with ~).

    • Destroy - A resource will be deleted (marked with -).




  3. Type yes to confirm the provisioning plan. Provisioning all resources might take several minutes.



  4. Once completed, log in to Palette to view the newly created cluster.




  5. Update the Terraform code to attach the Kubeflow add-on profile to your cluster, as sketched below.


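
    One way to do this is to reference the add-on profile from the cluster resource. The snippet below is a sketch that assumes the spectrocloud_cluster_eks resource accepts multiple cluster_profile blocks; depending on the provider version, a dedicated add-on attachment mechanism may be used instead, so check the provider documentation.


    # Sketch: the cluster resource from the earlier example, now with a second
    # cluster_profile block that attaches the Kubeflow add-on profile.
    resource "spectrocloud_cluster_eks" "cluster" {
      name             = var.cluster_name
      cloud_account_id = spectrocloud_cloudaccount_aws.aws-1.id

      cluster_profile {
        id = spectrocloud_cluster_profile.profile.id
      }

      # New: the add-on profile defined in the Modules section.
      cluster_profile {
        id = spectrocloud_cluster_profile.addon-profile.id
      }

      cloud_config {
        region       = var.aws_region
        ssh_key_name = var.aws_ssh_key_name
      }

      machine_pool {
        name          = "worker-pool"
        count         = var.worker_nodes_count
        instance_type = var.worker_instance_type
        disk_size_gb  = var.worker_disk_size
      }
    }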


Modify the Cluster

  1. Run the terraform apply command again, with the same -var-file flag, so that Terraform detects the change in the desired infrastructure state.




  2. Type yes to create the add-on. Terraform will only create missing resources.




  3. Verify that Kubeflow is successfully added to the cluster on Palette.




Clean Your Lab

Finally, you can tear down all the resources and clean up your lab. To destroy them with Terraform, run the following command:

terraform destroy -var-file tfvars/dev.tfvars


  1. Type yes to confirm that you want to destroy all resources.




  2. Once confirmed, Terraform destroys all resources present in the state file.




  3. Your resources in Palette are cleared.


Conclusion

With this quick introduction, you deployed a cluster on Amazon Elastic Kubernetes Service (EKS) via the Spectro Cloud Terraform provider. You also modified the cluster and added some packs. Go ahead and try adding and removing other packs.

Don't forget to clean up your lab when you're done!