Introduction
Amazon Elastic Kubernetes Service (EKS) offers flexibility in managing worker nodes through managed and self-managed node groups. In this guide, we will walk through creating and configuring a self-managed node group with custom max_pods settings, which can be crucial for optimizing resource utilization and cluster performance.
Prerequisites
- AWS CLI configured with appropriate permissions
- kubectl installed and configured
- eksctl installed
- An existing EKS cluster
- AWS Systems Manager Parameter Store access
Understanding max_pods
Before we dive in, it’s important to understand what max_pods means in the context of Kubernetes:
- max_pods determines the maximum number of pods that can run on a single node
- The default value varies based on the instance type
- Setting an appropriate max_pods value is crucial for:
- Network performance
- Resource utilization
- Cluster stability
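For nodes using the Amazon VPC CNI in its default mode, the per-instance default comes from the instance's networking limits: (number of ENIs × (IPv4 addresses per ENI − 1)) + 2. The sketch below shows the arithmetic; the ENI and IP counts for the two example instance types are quoted from AWS's published limits:

```shell
# Sketch: the ENI-based default max pods calculation,
# (ENIs x (IPv4 addresses per ENI - 1)) + 2
max_pods() {
  local enis=$1 ips_per_eni=$2
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

# t3.large: 3 ENIs, 12 IPv4 addresses per ENI
max_pods 3 12   # prints 35
# m5.large: 3 ENIs, 10 IPv4 addresses per ENI
max_pods 3 10   # prints 29
```

Setting `--max-pods=110` (as we do later in this guide) deliberately overrides this ENI-derived default, which only makes sense when the CNI can supply more IPs than the default formula assumes, for example with prefix delegation enabled.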
Creating a Self-Managed Node Group
Step 1: Get Cluster Information
First, we need to retrieve the cluster information that will be used in our bootstrap script:
```bash
# Store cluster name
export CLUSTER_NAME="your-cluster-name"

# Get cluster endpoint
export API_SERVER_URL=$(aws eks describe-cluster \
  --name ${CLUSTER_NAME} \
  --query "cluster.endpoint" \
  --output text)

# Get cluster CA certificate
export B64_CLUSTER_CA=$(aws eks describe-cluster \
  --name ${CLUSTER_NAME} \
  --query "cluster.certificateAuthority.data" \
  --output text)
```
Step 2: Create Node AWS IAM Role
```bash
# Create IAM role for nodes
aws iam create-role \
  --role-name EKSNodeRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "ec2.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }
    ]
  }'

# Attach required policies
aws iam attach-role-policy \
  --role-name EKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
  --role-name EKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy \
  --role-name EKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

# Create the instance profile referenced by the launch template in Step 3
aws iam create-instance-profile \
  --instance-profile-name EKSNodeInstanceProfile

aws iam add-role-to-instance-profile \
  --instance-profile-name EKSNodeInstanceProfile \
  --role-name EKSNodeRole
```
Step 3: Create a Launch Template
Create a launch template that includes our custom max_pods configuration. Note how we disable the ENI-based default with --use-max-pods false and then set the value explicitly through --kubelet-extra-args:
```bash
# Build the node bootstrap user data and base64-encode it
USER_DATA=$(base64 -w 0 << USERDATA
#!/bin/bash
/etc/eks/bootstrap.sh ${CLUSTER_NAME} \
  --b64-cluster-ca ${B64_CLUSTER_CA} \
  --apiserver-endpoint ${API_SERVER_URL} \
  --dns-cluster-ip 10.100.0.10 \
  --use-max-pods false \
  --kubelet-extra-args '--max-pods=110 --node-labels=node-type=self-managed' \
  --container-runtime containerd
USERDATA
)

# Write the launch template definition
cat << EOF > launch-template-data.json
{
  "LaunchTemplateData": {
    "InstanceType": "t3.large",
    "IamInstanceProfile": {
      "Name": "EKSNodeInstanceProfile"
    },
    "BlockDeviceMappings": [
      {
        "DeviceName": "/dev/xvda",
        "Ebs": {
          "VolumeSize": 20,
          "VolumeType": "gp3"
        }
      }
    ],
    "UserData": "${USER_DATA}",
    "TagSpecifications": [
      {
        "ResourceType": "instance",
        "Tags": [
          {
            "Key": "Name",
            "Value": "EKS-Self-Managed-Node"
          }
        ]
      }
    ]
  }
}
EOF

aws ec2 create-launch-template \
  --launch-template-name eks-self-managed-node \
  --version-description 1 \
  --launch-template-data file://launch-template-data.json
```
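Because a malformed UserData string fails silently at boot, it is worth confirming locally that the encoded bootstrap script still carries the custom flags before the launch template is created. A self-contained round-trip sketch with a placeholder cluster name (no AWS calls involved):

```shell
# Sketch: base64 round-trip check of a minimal bootstrap script.
# "demo-cluster" is a placeholder; the quoted heredoc delimiter
# prevents any variable expansion here.
USER_DATA_CHECK=$(base64 -w 0 << 'USERDATA'
#!/bin/bash
/etc/eks/bootstrap.sh demo-cluster --use-max-pods false --kubelet-extra-args '--max-pods=110 --node-labels=node-type=self-managed'
USERDATA
)

# Decode and confirm both max_pods flags survived encoding
echo "${USER_DATA_CHECK}" | base64 -d | grep -q 'use-max-pods false' && echo "use-max-pods OK"
echo "${USER_DATA_CHECK}" | base64 -d | grep -q 'max-pods=110' && echo "kubelet max-pods OK"
```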
Step 4: Create Auto Scaling Group
```bash
# Note: --tags takes multiple space-separated values; repeating the
# --tags flag would keep only the last one. PropagateAtLaunch=true
# copies the tags onto the instances, which the cluster requires.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name eks-self-managed-nodes \
  --launch-template LaunchTemplateName=eks-self-managed-node,Version='$Latest' \
  --min-size 1 \
  --max-size 5 \
  --desired-capacity 3 \
  --vpc-zone-identifier "subnet-xxxxx,subnet-yyyyy" \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned,PropagateAtLaunch=true" \
         "Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true"
```
Step 5: Enable Nodes to Join the Cluster
Create the node authentication ConfigMap to allow nodes to join the cluster:
```bash
# Get the node role ARN
NODE_ROLE_ARN=$(aws iam get-role \
  --role-name EKSNodeRole \
  --query 'Role.Arn' \
  --output text)

# Create auth ConfigMap
cat << EOF > aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${NODE_ROLE_ARN}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

# Apply the ConfigMap. If the cluster already has an aws-auth ConfigMap
# (for example, from a managed node group), append this mapRoles entry
# to the existing one instead of overwriting it.
kubectl apply -f aws-auth-cm.yaml
```
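YAML indentation mistakes in aws-auth are a common reason nodes fail to join, so the generated manifest can be linted locally before touching the cluster. A sketch using a hypothetical account ID in place of the real role ARN:

```shell
# Sketch: generate the aws-auth manifest with a placeholder ARN and
# check it locally. The account ID below is hypothetical.
NODE_ROLE_ARN="arn:aws:iam::123456789012:role/EKSNodeRole"
cat << EOF > aws-auth-cm-check.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${NODE_ROLE_ARN}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

# Nodes cannot join unless both groups and the templated username are present
grep -q 'system:bootstrappers' aws-auth-cm-check.yaml && \
grep -q 'system:nodes' aws-auth-cm-check.yaml && \
grep -q '{{EC2PrivateDNSName}}' aws-auth-cm-check.yaml && echo "aws-auth OK"
```

A dry run against the API server (`kubectl apply --dry-run=server -f ...`) is another low-risk way to validate the manifest before the real apply.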
Verifying the Configuration
After deployment, verify your node group configuration:
```bash
# Check nodes in your cluster
kubectl get nodes

# Verify the pod capacity (reflects the --max-pods kubelet setting;
# `kubectl describe node` reports it as "pods" under Capacity)
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'

# Check node labels
kubectl get nodes --show-labels | grep "node-type=self-managed"

# View pod capacity across all nodes
kubectl get nodes -o json | jq '.items[].status.capacity.pods'
```
Best Practices
- Instance Type Selection
  - Choose instance types based on your workload requirements
  - Consider CPU, memory, and networking requirements
- max_pods Calculation
  - Use Amazon's formula: (Number of ENIs × (IPv4 addresses per ENI − 1)) + 2
  - Consider the network interface limits of your instance type
- Monitoring and Scaling
  - Set up Amazon CloudWatch alarms for node metrics
  - Implement horizontal pod autoscaling
  - Monitor pod scheduling failures
Conclusion
In this guide, we created a self-managed EKS node group with a custom max_pods setting, from the node IAM role and launch template through to node authentication. Remember to monitor your cluster's behavior and adjust these settings based on your specific workload requirements.
Drop a query if you have any questions regarding Amazon EKS and we will get back to you quickly.
About CloudThat
CloudThat is a leading provider of Cloud Training and Consulting services with a global presence in India, the USA, Asia, Europe, and Africa. Specializing in AWS, Microsoft Azure, GCP, VMware, Databricks, and more, the company serves mid-market and enterprise clients, offering comprehensive expertise in Cloud Migration, Data Platforms, DevOps, IoT, AI/ML, and more.
CloudThat is the first Indian Company to win the prestigious Microsoft Partner 2024 Award and is recognized as a top-tier partner with AWS and Microsoft, including the prestigious ‘Think Big’ partner award from AWS and the Microsoft Superstars FY 2023 award in Asia & India. Having trained 650k+ professionals in 500+ cloud certifications and completed 300+ consulting projects globally, CloudThat is an official AWS Advanced Consulting Partner, Microsoft Gold Partner, AWS Training Partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, AWS GenAI Competency Partner, Amazon QuickSight Service Delivery Partner, Amazon EKS Service Delivery Partner, AWS Microsoft Workload Partners, Amazon EC2 Service Delivery Partner, Amazon ECS Service Delivery Partner, AWS Glue Service Delivery Partner, Amazon Redshift Service Delivery Partner, AWS Control Tower Service Delivery Partner, AWS WAF Service Delivery Partner, Amazon CloudFront, Amazon OpenSearch, AWS DMS and many more.
FAQs
1. Can I modify max_pods after the node group is created?
ANS: – You cannot modify max_pods for existing nodes. You will need to:
- Create a new launch template version with updated max_pods
- Update your Auto Scaling Group to use the new template version
- Gradually replace old nodes with new ones
2. How does max_pods affect cluster autoscaling?
ANS: – max_pods influences how the cluster autoscaler makes scaling decisions by:
- Determining the maximum pod capacity per node
- Affecting when new nodes are added based on pending pods
- Impacting resource utilization calculations
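The first two points above reduce to a ceiling division: for a given number of pending pods, max_pods bounds how few nodes the autoscaler can get away with adding. A small illustrative sketch:

```shell
# Sketch: minimum nodes needed for N pending pods at a given max_pods,
# ignoring CPU/memory constraints (which usually bind first in practice).
nodes_needed() {
  local pending=$1 max_pods=$2
  echo $(( (pending + max_pods - 1) / max_pods ))   # ceiling division
}

nodes_needed 250 110   # 250 pending pods at 110 pods/node -> 3 nodes
nodes_needed 100 110   # fits on a single node -> 1
```

In reality the autoscaler simulates full scheduling (CPU, memory, taints, affinities), so this is a lower bound, not the exact scaling decision.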
WRITTEN BY Aditi Agarwal