šŸš€ Using Karpenter for Efficient Autoscaling on EKS: From Beginner to Pro

As cloud-native workloads scale in complexity, efficient resource usage becomes critical. Kubernetes’ default autoscalers (like Cluster Autoscaler) do the job — but in dynamic environments, you need more speed, cost-awareness, and flexibility.

Enter Karpenter, AWS’s next-gen autoscaler built for Amazon EKS.

This blog will guide you from zero to advanced with Karpenter — covering architecture, setup, best practices, and pro tips for running elastic, cost-optimized clusters on EKS.


šŸ” What is Karpenter?

Karpenter is an open-source autoscaler from AWS that:

  • Provisions nodes quickly based on workload needs
  • Supports multiple instance types, zones, and capacity types
  • Works independently of node groups or ASGs
  • Uses custom scheduling logic for better bin-packing and cost efficiency

šŸ“Œ Unlike Cluster Autoscaler, Karpenter talks directly to EC2, skipping slow autoscaling groups.


🧠 Why Use Karpenter?

| Feature | Benefit |
| --- | --- |
| šŸ”„ Fast Provisioning | Launches new nodes in seconds |
| šŸŽÆ Pod-Aware | Scales based on actual pod needs (CPU/mem) |
| šŸ’° Cost-Aware | Optimizes the Spot and On-Demand balance |
| 🧰 No Node Groups Needed | Works with minimal EKS config |
| ⚔ Multi-AZ & Multi-Type | Picks the best capacity from what's available |
| ✨ Simplified Ops | Declarative, Kubernetes-native, no manual tweaks |

šŸ› ļø Karpenter Architecture

Here’s how Karpenter works:

  1. Pod is scheduled but no node has enough space.
  2. Karpenter sees the unschedulable pod.
  3. It calculates optimal instance type/zone.
  4. Provisions a right-sized EC2 instance (via a launch template).
  5. Schedules the pod when the node becomes Ready.
  6. If nodes become idle → Karpenter de-provisions them.

Think of Karpenter as a real-time, workload-aware EC2 provisioner.


🧰 Prerequisites for Using Karpenter with EKS

  • A running Amazon EKS cluster
  • IAM Roles for Service Accounts (IRSA) enabled
  • Helm installed (used for the chart install below)
  • kubectl, eksctl, and AWS CLI configured

🧪 Step-by-Step: Installing Karpenter on EKS

āœ… 1. Create a Cluster (if needed)

eksctl create cluster --name karpenter-demo \
  --region us-east-1 --zones us-east-1a,us-east-1b \
  --without-nodegroup

Karpenter works without managed node groups by design. Note, however, that the Karpenter controller itself still needs somewhere to run: typically a small managed node group or a Fargate profile, created before Karpenter can provision capacity for everything else.


āœ… 2. Add the Karpenter Helm Repository

helm repo add karpenter https://charts.karpenter.sh
helm repo update

Note: newer Karpenter releases are published as an OCI chart (oci://public.ecr.aws/karpenter/karpenter); the classic repo above matches the v1alpha5 API used in this post.

āœ… 3. Create the Karpenter Node IAM Role

eksctl create iamidentitymapping \
  --cluster karpenter-demo \
  --arn arn:aws:iam::<ACCOUNT_ID>:role/KarpenterNodeRole \
  --username system:node:{{EC2PrivateDNSName}} \
  --group system:bootstrappers,system:nodes
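
This command edits the cluster's aws-auth ConfigMap so that EC2 instances launched by Karpenter can join the cluster as nodes. For reference, the resulting entry looks roughly like this (a sketch; `<ACCOUNT_ID>` and `KarpenterNodeRole` are the same placeholders as in the command above):

```yaml
# Sketch of the aws-auth entry written by the eksctl command above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<ACCOUNT_ID>:role/KarpenterNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```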

āœ… 4. Install Karpenter via Helm

helm install karpenter karpenter/karpenter \
  --namespace karpenter \
  --create-namespace \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::<ACCOUNT_ID>:role/KarpenterControllerRole \
  --set settings.clusterName=karpenter-demo \
  --set settings.clusterEndpoint=https://<YOUR_EKS_ENDPOINT>
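
If you prefer a values file over a stack of --set flags, the same configuration can be expressed declaratively (a sketch using the same placeholders as above):

```yaml
# values.yaml equivalent of the --set flags above (placeholders unchanged)
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/KarpenterControllerRole
settings:
  clusterName: karpenter-demo
  clusterEndpoint: https://<YOUR_EKS_ENDPOINT>
```

Then install with: helm install karpenter karpenter/karpenter --namespace karpenter --create-namespace -f values.yaml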

āœ… 5. Define a Provisioner

This tells Karpenter how to choose nodes.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: "node.kubernetes.io/instance-type"
      operator: In
      values: ["m5.large", "m5.xlarge", "t3.medium"]
  provider:
    subnetSelector:
      karpenter.sh/discovery: karpenter-demo
    securityGroupSelector:
      karpenter.sh/discovery: karpenter-demo
  ttlSecondsAfterEmpty: 60

This sets instance types, subnet/Security Group tags, and allows nodes to terminate after 60s of idleness.

Apply it:

kubectl apply -f provisioner.yaml

šŸš€ Test Karpenter Scaling

āœ… Deploy a workload that exceeds current capacity

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 10
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kube-proxy:v1.21.2-eksbuild.2
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"

Apply it:

kubectl apply -f inflate.yaml

Check the pods:

kubectl get pods

Watch Karpenter in action:

kubectl get nodes -w

šŸ“Š Advanced Configurations & Tips

šŸ” Consolidation

Automatically reschedules pods to bin-pack and remove underutilized nodes.

Enable in the Provisioner:

spec:
  consolidation:
    enabled: true
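
In context, a consolidation-enabled Provisioner looks like this. Note that in the v1alpha5 API, consolidation.enabled and ttlSecondsAfterEmpty are mutually exclusive, so drop the TTL from the earlier example when you switch consolidation on:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  consolidation:
    enabled: true   # replaces ttlSecondsAfterEmpty; the two cannot be combined
  requirements:
    - key: "node.kubernetes.io/instance-type"
      operator: In
      values: ["m5.large", "m5.xlarge", "t3.medium"]
  provider:
    subnetSelector:
      karpenter.sh/discovery: karpenter-demo
    securityGroupSelector:
      karpenter.sh/discovery: karpenter-demo
```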

šŸ’° Use Spot + On-Demand Mix

requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ["spot", "on-demand"]

šŸ—ŗļø Multi-AZ Load Balancing

Set subnetSelector for multiple zones — Karpenter will choose based on capacity availability and pricing.
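
You can also constrain (or widen) the candidate zones explicitly with a topology.kubernetes.io/zone requirement; here is a sketch for the two zones used by this demo cluster:

```yaml
requirements:
  - key: topology.kubernetes.io/zone
    operator: In
    values: ["us-east-1a", "us-east-1b"]
```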


šŸ”’ Taints, Node Affinity, and Labels

Karpenter respects pod tolerations, affinities, and node selectors, so you can steer specialized workloads (e.g., GPU pods) onto dedicated nodes.
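
For example, a Provisioner can taint every node it launches so that only pods with a matching toleration land there (a sketch; the gpu Provisioner name and taint key are illustrative):

```yaml
# Provisioner side: taint every node this Provisioner launches
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: gpu
spec:
  taints:
    - key: nvidia.com/gpu   # illustrative taint key
      value: "true"
      effect: NoSchedule
  provider:
    subnetSelector:
      karpenter.sh/discovery: karpenter-demo
    securityGroupSelector:
      karpenter.sh/discovery: karpenter-demo
---
# Pod spec fragment: a matching toleration lets the pod schedule onto those nodes
tolerations:
  - key: nvidia.com/gpu
    operator: Equal
    value: "true"
    effect: NoSchedule
```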


šŸ“‰ When Not to Use Karpenter

| Situation | Better Alternative |
| --- | --- |
| Predictable, steady workloads | Cluster Autoscaler with ASGs |
| Highly regulated capacity control | Manual node groups |
| Extremely tight cost control | Spot-only clusters with reserved capacity |

šŸ”š Summary: Why Karpenter Rocks on EKS

| Feature | Karpenter |
| --- | --- |
| āœ… Speed | ~60s to scale up |
| āœ… Simplicity | No ASGs or node groups |
| āœ… Flexibility | Any instance type, zone, or capacity type |
| āœ… Cost Efficiency | Spot-aware, with idle-node cleanup |
| āœ… Cloud Native | Deeply integrated with EKS + EC2 |

šŸ Final Thoughts

Karpenter is more than just a ā€œfaster Cluster Autoscaler.ā€
It’s a modern approach to infrastructure elasticity — designed for cost-aware, AI-driven, and real-time workloads.

If you’re building with Amazon EKS in 2025, Karpenter should be part of your cluster design from Day 1.

