Deploying EKS Auto Mode with Terraform

Intro

In a recent post I talked about how EKS Auto Mode is a game changer when it comes to deploying applications onto a Kubernetes cluster.

By lowering the barrier to entry, this update not only makes it easier than ever to run Kubernetes, it also provides peace of mind: AWS are handling the backend security, compute resources, cost optimisation and more, without you having to worry about any of it.

That means that teams can focus on the application being deployed rather than running the underlying resources. Spending more time in the application layer is of course the desired end result, allowing teams to deploy faster and with more confidence.

Supporting Resources

Before we go ahead and deploy the Kubernetes infrastructure, we need something to deploy into. The GitHub repository that accompanies this article includes all of the required supporting resources, which is essentially a VPC with public and private subnets along with a single NAT Gateway.

It’s a simple setup, designed to allow you to deploy a working example quickly without having to worry about missing dependencies or writing extra Terraform resource configuration. You could of course deploy into existing resources and amend the accompanying code accordingly.
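For context, here’s a minimal sketch of what that supporting configuration might look like, assuming the community terraform-aws-modules/vpc module - the exact name, CIDRs and availability zones in the repo may differ:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  # Hypothetical values for illustration - check the repo for the real ones
  name = "ttl-blog"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-2a", "eu-west-2b", "eu-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # A single NAT Gateway keeps costs down for a demo environment
  enable_nat_gateway = true
  single_nat_gateway = true
}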

Deploying the Example

Prerequisites

You will need the following to deploy this example (a quick sanity check is shown after the list):

  • Access to an AWS account with the relevant permissions to deploy EKS clusters
  • Terraform installed (minimum version 1.3.2)
  • The AWS CLI installed and configured
  • kubectl installed
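If you want to confirm everything is in place before deploying, the following commands will do it (assuming your AWS credentials are already configured):

terraform version
aws --version
aws sts get-caller-identity
kubectl version --client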

You can deploy the example yourself by cloning the GitHub repo here and running the following commands:

cd deploying-eks-auto-mode-with-terraform/terraform/
terraform init
terraform plan
terraform apply

We touched on the supporting resources earlier; now let’s take a look at the Terraform configuration that deploys the EKS cluster.

Here’s the code:

module "eks_auto_mode" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.31.6"

  cluster_name                   = var.service
  cluster_version                = local.cluster_version
  cluster_endpoint_public_access = true

  enable_cluster_creator_admin_permissions = true

  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

}

The first thing to note is that we’re using the community module for EKS, which further simplifies the deployment for us. Once the cluster is deployed, we can run the following command to update our local kubeconfig with the new cluster:

aws eks update-kubeconfig --region eu-west-2 --name ttl-blog
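
Assuming that succeeds, a quick check confirms kubectl can reach the new cluster:

kubectl cluster-info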

Now we can use kubectl to query the node pools. You’ll notice that there are currently 0 nodes - this is because, behind the scenes, Karpenter is being used to right-size the compute resources and ensure cost optimisation. This means our initial deployment will take a couple of minutes longer, but that shouldn’t be an issue.

kubectl get nodepools
NAME              NODECLASS   NODES   READY   AGE
general-purpose   default     0       True    2m58s

With the cluster up, let’s deploy the sample workload included in the repo:

kubectl apply -f ../inflate.yml
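
For reference, inflate is the classic Karpenter test workload: a deployment of pause containers whose CPU requests force node provisioning. The repo’s manifest may differ, but it will look something like this sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          # The pause image does nothing; the CPU request is what drives scaling
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1"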

As that’s applying, we can use the following command to see what is happening behind the scenes as nodes and pods are deployed:

kubectl get events -w --sort-by '.lastTimestamp'

Running kubectl get nodepools again shows that we have a node up - you can obviously also look in the console to see what has been deployed.

NAME              NODECLASS   NODES   READY   AGE
general-purpose   default     1       True    6m46s

Once you’ve finished looking around, we can delete the deployment by running the following command:

kubectl delete -f ../inflate.yml

What’s really interesting here is that the EKS cluster scales back down to 0 nodes once there is no application to run. This goes to show the cost savings to be had when compute is right-sized in line with demand, especially as most people would be inclined to over-spec their EC2 instances rather than under-spec them.
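
If you want to watch the scale-down happen in real time, the watch flag streams node changes as they occur:

kubectl get nodes -w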

Finally, we can run a destroy operation to remove the resources that we’ve just deployed:

terraform destroy

Conclusion and Benefits

This update brings with it a host of benefits, not least of which is taking something that can be very complex and abstracting that heavy lifting away from the end user.

Here is a summary of the main benefits as I see them:

  • Simplicity. This massively reduces the amount of heavy lifting needed to start running applications on Kubernetes.
  • Sensible defaults. AWS have modelled these on the enormous number of customers already running Kubernetes on their platform.
  • Automatic scaling, including scale down to zero. Optimizing the compute and cost dimensions dynamically without any design or running overhead for you is a game changer.
  • Automatic remediation of failed nodes. Identifying failed nodes and proactively replacing them could benefit on-call teams.
  • Bottlerocket. Designed to be secure by default and optimised for container workloads, running Bottlerocket as the node OS ticks a lot of boxes.