AWS EKS Project
- Subhabrata Datta

- Jul 15, 2020
- 6 min read
Updated: May 27, 2022
Creating a Kubernetes cluster using EKS in AWS! Launching WordPress & MySQL!
Objective: Create a Kubernetes cluster using AWS EKS and deploy pods with WordPress and MySQL images on the worker nodes.
Introduction
There are two ways of deploying a Kubernetes cluster on AWS: i) launching EC2 instances and setting up a Kubernetes cluster manually, as we usually do on an on-premise Linux server, or ii) using Amazon EKS (Elastic Kubernetes Service).
In this project we are going to use AWS EKS.

Amazon EKS is a fully managed AWS service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane. It is not a free service: the current price for EKS is $0.10 per hour per cluster. The cost of EC2, EFS and any other services you use is extra, so use them wisely. Make sure to delete/destroy the cluster once you complete the project, so that you don't end up being charged hefty fees.
I will be writing this blog post assuming the reader is familiar with Kubernetes and AWS and with related terminology such as nodes, pods, deployments, PVC, VPC, EC2, EBS, etc.
Tools required: i) AWS CLI ii) eksctl iii) kubectl
STEPS INVOLVED
Step#01: Create IAM User with Admin Access
Step#02: Configure aws cli, eksctl
Step#03: Create Kubernetes cluster
Step#04: Configure kubectl
Step#05: Create Security Group to allow NFS access to CIDR range of the cluster vpc
Step#06: Create EFS using AWS console
Step#07: Install amazon-efs-utils in all nodes
Step#08: Create EFS provisioner
Step#09: Grant RBAC permission for resource-provisioning
Step#10: Create StorageClass & PVC
Step#11: Create mysql & wordpress deployment
Step#12: Login to wordpress !
Step#01: Create IAM User with Admin Access
Create an IAM user in AWS with admin access.

We will get the access key ID and secret access key for the IAM user at the end of the user creation process; we will need these to access AWS from the CLI or programmatically.
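If you prefer the CLI over the console, a minimal sketch of equivalent IAM commands is below; the user name eks-admin is just an example, and attaching the AWS managed AdministratorAccess policy corresponds to the admin access used here:
# create the IAM user (user name is an example)
aws iam create-user --user-name eks-admin
# attach the AWS managed AdministratorAccess policy
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# generate the access key ID and secret access key for CLI access
aws iam create-access-key --user-name eks-admin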

Step#02: Configure aws cli, eksctl
Configure the AWS CLI using the aws configure command. Provide the access key ID and secret access key we obtained in the previous step.

eksctl will also use the same configuration details used by aws cli.
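For reference, a typical aws configure session looks roughly like this (the values shown are placeholders, not real credentials):
aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-south-1
Default output format [None]: json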
Step#03: Create Kubernetes cluster
cluster.yml is our manifest file for creating the Kubernetes cluster:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: lwcluster
  region: ap-south-1
nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: ekskey
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
      publicKeyName: ekskey
  - name: ng-mixed
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small", "t3.medium"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: ekskey
Create the Kubernetes cluster using the following command:
eksctl create cluster -f cluster.yml
Our cluster is ready

You will also see in the console that stacks have been created under CloudFormation and that 5 EC2 instances have been launched.
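If you want to double-check from the CLI instead of the console, something like the following should list the cluster and its node groups (region and cluster name as used in cluster.yml):
eksctl get cluster --region ap-south-1
eksctl get nodegroup --cluster lwcluster --region ap-south-1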


Step#04: Configure kubectl
A crucial step here, especially for those who have also created a Kubernetes cluster on their PC using minikube: the kubeconfig file needs to be updated using the command below. lwcluster is the cluster we created using the cluster.yml file. This is required so that we can use the kubectl command to interact with the cluster we have created.
aws eks update-kubeconfig --name lwcluster
Now confirm that the current context is our newly added AWS context using the kubectl command.
kubectl config get-contexts
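As a quick sanity check, the worker nodes should now be visible through kubectl; the node count should match the desiredCapacity values in cluster.yml:
kubectl get nodes -o wide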
Step#05: Create Security Group to allow NFS access to CIDR range of the cluster vpc
Locate the VPC ID for your Amazon EKS cluster. You can find this ID in the Amazon EKS console, or you can use the following AWS CLI command.
aws eks describe-cluster --name lwcluster --query "cluster.resourcesVpcConfig.vpcId" --output text
Locate the CIDR range for your cluster's VPC. You can find this in the Amazon VPC console, or you can use the following AWS CLI command.
aws ec2 describe-vpcs --vpc-ids vpc-exampledb76d3e813 --query "Vpcs[].CidrBlock" --output text
You can get the same details from the console.

Now create a security group that allows inbound NFS traffic for your Amazon EFS mount points.
[This step can be skipped if you use the security group created by EKS that allows all protocols.]
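If you do want a dedicated security group, a rough CLI sketch is shown below; the VPC ID, CIDR range and resulting security group ID are placeholders that you should replace with the values found above:
# create a security group in the cluster VPC (name and description are arbitrary)
aws ec2 create-security-group --group-name eks-efs-nfs --description "Allow NFS from EKS VPC" --vpc-id vpc-exampledb76d3e813
# allow inbound NFS (TCP 2049) from the VPC CIDR range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2049 --cidr 192.168.0.0/16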
Step#06: Create EFS using AWS console
We will be using EFS to provide persistent storage to our pods in Kubernetes. The main difference between EBS and EFS is that an EBS volume can only be attached to a single EC2 instance in a single Availability Zone, while EFS can be mounted by multiple instances across Availability Zones at the same time.
We will create the EFS using the AWS GUI. Be careful here: this step might not show you an error, but if you do it wrongly your efs-provisioner will not launch. The important things are i) to create the EFS in the VPC where our k8s nodes have been created, and ii) to attach the security group that allows NFS access from the CIDR range of the VPC where our cluster is located. Refer: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
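If you would rather script this than click through the console, a hedged CLI sketch is below; the subnet and security group IDs are placeholders, and you need one mount target per subnet/AZ that your nodes run in:
# create the file system (the creation token is an arbitrary idempotency string)
aws efs create-file-system --creation-token lw-efs --region ap-south-1
# list the subnets of the cluster VPC (placeholder VPC ID)
aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-exampledb76d3e813" --query "Subnets[].SubnetId" --output text
# create a mount target in each subnet, attaching the NFS security group
aws efs create-mount-target --file-system-id fs-c2e66d13 --subnet-id subnet-0123456789abcdef0 --security-groups sg-0123456789abcdef0 --region ap-south-1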


Step#07: Install amazon-efs-utils in all nodes
The following commands are used to connect to the instances and install amazon-efs-utils:
ssh -i key_name.pem -l ec2-user Public_IP
sudo yum install amazon-efs-utils -y
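Optionally, you can verify from a node that the EFS is reachable before moving on; this is only a sanity check and assumes the file system ID created in Step#06:
# test-mount the EFS using the efs mount helper installed above, then unmount
sudo mkdir -p /mnt/efs-test
sudo mount -t efs fs-c2e66d13:/ /mnt/efs-test
df -h /mnt/efs-test
sudo umount /mnt/efs-test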
Step#08: Create EFS provisioner
kubectl create -f efs-provisioner.yml
Important things to note here: i) the file system ID and server details must correspond to the EFS created in Step#06, and ii) we will need the provisioner name while creating the StorageClass.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-c2e66d13
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: lw-course/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-c2e66d13.efs.ap-south-1.amazonaws.com
            path: /
Verify that the efs-provisioner has been launched using the kubectl get pods command.
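For example, assuming the labels from the manifest above, the following should show the provisioner pod in Running state, and its logs in case something went wrong:
kubectl get pods -l app=efs-provisioner
kubectl logs deployment/efs-provisioner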

Step#09: Grant RBAC permission for Resource Provisioning
Role-based access control (RBAC) is enabled on EKS by default. We must authorize the efs-provisioner to access resources; otherwise your PVC can't claim a PV from EFS.
kubectl create -f create-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding2
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Step#10: Create StorageClass & PVC
The below yml is used in order to: i) create a StorageClass that provisions storage from the EFS provisioner, ii) create a PVC for wordpress, and iii) create a PVC for mysql.
kubectl create -f storage.yml (the provisioner name should be the one we set in Step#08)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: lw-course/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
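After applying storage.yml, both PVCs should move to Bound status within a short time; a quick check:
kubectl get storageclass aws-efs
kubectl get pvc efs-wordpress efs-mysql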
Step#11: Create mysql & wordpress deployment
mysql.yml =>
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: efs-mysql
wordpress.yml =>
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: efs-wordpress
We will use a kustomization file to apply the two yml files, using the command:
kubectl apply -k .
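The kustomization file itself is not shown above; a minimal sketch of what it might look like is below, following the standard Kubernetes WordPress example. The secretGenerator creates the mysql-pass secret referenced by both deployments (the password literal is a placeholder), and kustomize rewrites the secret name in the manifests automatically:
# kustomization.yaml (sketch; place it next to mysql.yml and wordpress.yml)
secretGenerator:
  - name: mysql-pass
    literals:
      - password=YOUR_PASSWORD   # placeholder, choose your own
resources:
  - mysql.yml
  - wordpress.yml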


Step#12: Log in to WordPress!
We can get the DNS name of the load balancer from the AWS GUI.
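The same DNS name can also be fetched with kubectl; the EXTERNAL-IP column of the wordpress service should show the load balancer hostname:
kubectl get svc wordpress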

Yay! Our site is ready!



Now do not forget to delete your cluster after the project, because EKS is chargeable; otherwise you will end up with a high bill at month-end.
eksctl delete cluster --region=ap-south-1 --name=lwcluster
Secondly, the EFS also needs to be deleted; you can delete it using the web GUI.
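If you prefer to clean up the EFS from the CLI as well, a rough sketch is below; mount targets must be deleted before the file system, and the mount target ID is a placeholder:
aws efs describe-mount-targets --file-system-id fs-c2e66d13 --query "MountTargets[].MountTargetId" --output text
aws efs delete-mount-target --mount-target-id fsmt-0123456789abcdef0
aws efs delete-file-system --file-system-id fs-c2e66d13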
MOST COMMON ERRORS FACED
i) efs-provisioner not launching because the security group is not configured properly ii) PVC stuck in Pending status due to erroneous RBAC permissions
This task/project was given as part of AWS EKS training from Linuxworld India (http://www.linuxworldindia.org/) by my mentor Mr Vimal Daga.


