AWS EKS INTEGRATION WITH PROMETHEUS AND GRAFANA

Rishabh Jain
13 min read · Jul 14, 2020


Aloha! Here comes another awesome AWS service, one that is built on Kubernetes and whose use cases can make your whole system automated.

EKS is a managed Kubernetes service from AWS that lets you assign IAM permissions to your Kubernetes service accounts. It manages the services required for the fault tolerance and automation of your system. It is a great service in that you only have to focus on your work; forget about the backend stuff, as all of it is managed by the AWS EKS CLUSTER.

EKS CLUSTER WITH PROMETHEUS

The IAM role can control access to other containerized services, to AWS resources external to the cluster such as databases and secrets, or to third-party services and applications running outside of AWS.

Amazon EKS is also integrated with many AWS services to provide scalability and security for your applications, including the following:

  • Amazon ECR for container images
  • Elastic Load Balancing for load distribution
  • IAM for authentication
  • Amazon VPC for isolation

Many other things are yet to come in this blog. I am writing this walkthrough with YAML code, but the same setup can also be achieved with TERRAFORM.

PREREQUISITES

  1. KUBERNETES (K8s) RUNNING IN YOUR SYSTEM.
  2. AWS CLOUD ACCOUNT
  3. AWS CLI
  4. SSH CONFIGURATION IN YOUR SYSTEM

If any of these prerequisites is not satisfied, you can refer to my other blogs to fulfil them.

So, time's up. Let's start with it!

Just follow the steps and you will be able to run a multi-tier architecture on this system.

First go to your Amazon cloud account and log in as the root user.

STEP-1 AFTER LOGGING IN JUST CREATE AN IAM USER WITH THE SPECIFIC PERMISSIONS AS SHOWN BELOW.

You can create the IAM USER through my IAM user blog.

After creating the IAM user, attach the specific permissions as shown below. This policy gives the IAM user the same full access that the ROOT user has on the account.

IAM USER ADMINISTRATION ACCESS PERMISSION

STEP-2 Go to the CMD of your system and check the version of the AWS CLI. This is just to confirm whether the AWS CLI is installed in your system or not.

AWS CLI INSTALLED
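
For reference, the check is a single command (the exact version string will differ on your system):

aws --version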

STEP-3 SET UP THE IAM USER IN YOUR AWS CLI AND INSTALL THE EKSCTL TOOL FOR LAUNCHING EKS.

AWS IAM USER SETUP
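
A minimal sketch of configuring the CLI with the IAM user's credentials; the region (ap-south-1, Mumbai) and output format shown here are assumptions, so use your own values:

aws configure
# AWS Access Key ID [None]:     <access key of the IAM user>
# AWS Secret Access Key [None]: <secret key of the IAM user>
# Default region name [None]:   ap-south-1
# Default output format [None]: json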

EKSCTL DOWNLOAD AND INSTALL

  1. Search on Google for eksctl. It is a single executable (an .exe on Windows) through which we will be able to launch the cluster in the AWS Cloud.
EKSCTL DOWNLOAD

2. Just open the link shown above, copy the following command into your terminal and run it.

COMMAND TO INSTALL EKSCTL

3. Add the downloaded file to the Kubernetes folder (a directory on your PATH) as shown below.

EKSCTL ADDED IN PATH

4. To confirm that eksctl has been set up, run the commands shown below.

EKSCTL INSTALLED
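
Something like the following should print the version and the help text if the setup worked:

eksctl version
eksctl --help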

STEP-4 AFTER SETTING UP YOUR EKSCTL, WE ARE NOW GOING TO CREATE THE CLUSTER.

Before this, check whether any cluster already exists with the command shown below.

CHECKING FOR CLUSTER IN AWS

IF THERE IS ANY EXISTING CLUSTER, YOU CAN DELETE IT WITH THE COMMAND SHOWN BELOW.

DELETING EKS CLUSTER
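
A hedged sketch of both commands, assuming the Mumbai region (ap-south-1) and a placeholder cluster name:

eksctl get cluster --region ap-south-1
eksctl delete cluster --name <cluster-name> --region ap-south-1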

AFTER CHECKING, WE ARE GOING TO LAUNCH THE CLUSTER (IT WILL TAKE 10–15 MINUTES) WITH THE COMMANDS SHOWN BELOW.

  1. CREATE THE DIRECTORY AND FILE FOR IT AS SHOWN BELOW.

2. CREATE A FILE AND WRITE THE FOLLOWING CODE.

CODE FOR CLUSTER
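
A minimal sketch of such a cluster file; the cluster name, node-group names, instance types and SSH key name below are placeholders, not the exact values from the screenshot:

# cluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: mykey1
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
      publicKeyName: mykey1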

3. AFTER WRITING THE CODE CREATE THE CLUSTER BY THE COMMAND SHOWN BELOW.

LAUNCHING CLUSTER
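
Assuming the file is saved as cluster.yml (a name I chose above), the launch command is roughly:

eksctl create cluster -f cluster.yml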

4. AFTER IT IS LAUNCHED, YOU CAN VIEW IT IN CLOUDFORMATION ON THE AWS CLOUD.

CLUSTER RUNNING ON AWS CLOUDFORMATION

5. YOU CAN ALSO VIEW THE INSTANCES RUNNING IN EC2.

INSTANCES RUNNING IN EC2 FOR THE LAUNCHED CLUSTER

STEP-5 NOW WE ARE GOING TO CREATE A NAMESPACE AND UPDATE THE CLUSTER CONFIG, SO THAT EVERYTHING FOR THE CLUSTER STAYS IN ONE PLACE (NO CONFUSION).

  1. UPDATE THE LOCAL KUBECONFIG WITH THE CLUSTER RUNNING IN THE AWS CLOUD. JUST RUN THE COMMAND SHOWN BELOW.
UPDATING KUBERNETES WITH AWS
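
A hedged sketch, assuming the placeholder cluster name and region used in the config file above:

aws eks update-kubeconfig --name lwcluster --region ap-south-1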

2. GET THE NODES, WHICH CONFIRMS THAT THE KUBECONFIG IS UPDATED.

NODES RUNNING IN CLUSTER

3. CREATE THE NAMESPACE IN KUBERNETES WITH THE COMMAND SHOWN BELOW.

NAMESPACE CREATED

4. TO CHECK THE NAMESPACE CREATED, USE THE COMMAND SHOWN BELOW.

NAMESPACES IN KUBERNETES
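
A sketch of the two commands; the namespace name lwns is a placeholder:

kubectl create namespace lwns
kubectl get namespaces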

5. NOW SET THIS NAMESPACE AS THE DEFAULT IN THE CLUSTER'S KUBECONFIG CONTEXT WITH THE COMMAND SHOWN BELOW.

NAMESPACE UPDATED IN CLUSTER
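
One way to do this (the namespace name is the placeholder from above), followed by viewing the resulting config:

kubectl config set-context --current --namespace=lwns
kubectl config view --minify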

6. TO CHECK IT, YOU CAN VIEW THE CONFIG FILE AS SHOWN BELOW.

CONFIG VIEW

7. NOW WE WILL CHECK THE CONNECTIVITY BETWEEN KUBERNETES ON THE SYSTEM AND THE CLUSTER IN THE CLOUD USING THE COMMANDS GIVEN BELOW.

CONNECTIVITY WITH CLUSTER

8. AFTER CHECKING CONNECTIVITY, WE WILL CHECK THE NODE GROUPS.

GETTING NODE GROUP
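
A sketch of the connectivity and node-group checks (the cluster name is the placeholder used earlier):

kubectl cluster-info
kubectl get pods --all-namespaces
eksctl get nodegroup --cluster lwcluster --region ap-south-1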

FARGATE CLUSTER

There is also a Fargate cluster option in the AWS cloud, which provides SERVERLESS COMPUTE: the user does not have to maintain the cluster's worker nodes in any aspect or dimension; all of that is managed by the AWS cloud itself. The user does not need to worry about the infrastructure and can instead just focus on the work.

It cannot be launched in the Mumbai region, as Fargate is not supported there, so launch it in another region. In my case I have launched it in the SINGAPORE region, i.e. ap-southeast-1.

So, let's go for it!

  1. Let us check whether any Fargate cluster already exists with the command shown below.
CHECKING THE FARGATE CLUSTER

2. We have checked that there is no Fargate cluster, so we are going to create one using the following code.

FARGATE CLUSTER CODE
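
A minimal sketch of such a Fargate cluster file; the cluster name, profile name and namespace selectors are placeholders:

# fargate-cluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: far-lwcluster
  region: ap-southeast-1

fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default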

3. After writing the code, run the command shown below.

FARGATE CLUSTER LAUNCHING
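
Assuming the file is saved as fargate-cluster.yml (a name I chose above):

eksctl create cluster -f fargate-cluster.yml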

4. As you can see, the Fargate cluster is now launched in the Singapore region.

FARGATE CLUSTER LAUNCHED

NOW LET'S COME BACK TO OUR CLUSTER IN THE MUMBAI REGION.

As I mentioned above, AWS manages everything behind the setup. For demonstration, I have shown here the Docker engine, the kubelet, and the pods running on a worker node.

  1. LOG IN TO ANY OF THE EC2 INSTANCES AS SHOWN BELOW.
LOG IN THE EC2 INSTANCE
DOCKER ENGINE RUNNING
KUBELET RUNNING
PODS RUNNING
DESCRIBED PODS IN K8s
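
A hedged sketch of what that inspection looks like on an Amazon Linux worker node; the key file name and IP are placeholders:

ssh -i mykey1.pem ec2-user@<node-public-ip>
sudo systemctl status docker     # Docker engine running
sudo systemctl status kubelet    # kubelet running
sudo docker ps                   # containers backing the pods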

STEP-6 NOW WE ARE GOING TO CREATE AN EFS FILE SYSTEM ON AWS AND LAUNCH THE EFS-PROVISIONER THAT USES IT.

EFS ON AWS CLOUD
  1. Select the VPC (VIRTUAL PRIVATE CLOUD) of your cluster as shown below.
VPC SELECTION IN EFS

2. Select the security group that is common to the EC2 instances launched by the cluster (YOU CAN CHECK IT IN THE DESCRIPTION OF THE INSTANCES).

SELECTING SECURITY GROUPS

3. Click Next without any change in the further steps, then click Create, and your EFS will be created as shown below.

EFS CREATED
DNS IN EFS

4. Create the EFS-PROVISIONER with the following code. In the EFS-PROVISIONER code shown below, use the file-system ID of the EFS created above, and likewise use the EFS DNS name shown above for the server field in the code.

EFS-PROVISIONER CODE
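
A minimal sketch of such a provisioner deployment, based on the commonly used external-storage efs-provisioner image; the file name (create-efs-provisioner.yaml), provisioner name, file-system ID and DNS name below are placeholders you must replace with your own values:

# create-efs-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-xxxxxxxx                 # your EFS file-system ID
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: lw-course/aws-efs           # must match the storage class below
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-xxxxxxxx.efs.ap-south-1.amazonaws.com   # your EFS DNS name
            path: /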

5. Create the NFS-PROVISIONER using the code shown below.

NFS-PROVISIONER
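
This step typically grants the provisioner's service account the RBAC permissions it needs. A hedged sketch of such a file (nfs-provisioner-rbac.yaml is my assumed name; the binding name, service account and namespace are assumptions, and cluster-admin is broader than strictly necessary):

# nfs-provisioner-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: lwns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io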

6. Now create the storage class with the following code shown below.

STORAGE CODE
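
A minimal sketch of the storage class and a claim bound to it; the provisioner field must match the PROVISIONER_NAME used above, and the file name (create-storage.yaml), class name and claim name are placeholders:

# create-storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: lw-course/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-claim
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi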

7. Now create all the components with the commands shown below.

EFS-PROVISIONER CREATED
NFS-PROVISIONER CREATED
STORAGE CREATED
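
Assuming the three files are named as in the sketches above and the placeholder namespace lwns (all assumptions), the components are created and verified roughly like this:

kubectl create -f create-efs-provisioner.yaml -n lwns
kubectl create -f nfs-provisioner-rbac.yaml
kubectl create -f create-storage.yaml -n lwns
kubectl get pods -n lwns
kubectl get deployments -n lwns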

8. You can now see the components running by using the commands shown below.

POD RUNNING
DEPLOYMENT RUNNING

STEP-7 AFTER CREATING THIS, THE WORDPRESS AND MYSQL MANIFESTS, WITH THEIR PVCs, PODS, SERVICES AND A KUSTOMIZATION FILE, ARE KEPT IN ONE FOLDER, AND ALL OF THEM ARE CREATED BY A SINGLE COMMAND AS SHOWN BELOW.

(THE CODE FOR ALL THE FILES IS ON GITHUB; YOU CAN TAKE IT FROM THERE. THE LINK IS AT THE END OF THIS BLOG.)
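
A minimal sketch of what such a kustomization file looks like; the resource file names here are assumptions (the real ones are in the GitHub repo linked at the end):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml

Everything in the folder is then created with one command run from inside that folder:

kubectl apply -k .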

COMPONENTS CREATED
PV CREATED
PODS OF WORDPRESS AND MYSQL CREATED AND RUNNING
DEPLOYMENT CREATED AND RUNNING
PODS DESCRIPTION AND CONNECTION WITH EC2

AFTER ALL THIS IS CREATED, YOU CAN SEE THAT THE VOLUME AND THE LOAD BALANCER HAVE BEEN CREATED, AS SHOWN BELOW.

LOAD BALANCER CREATED ON AWS CLOUD
VOLUME CREATED ON AWS CLOUD
SERVICE CREATED AND RUNNING

STEP-8 COPY THE EXTERNAL IP OF WORDPRESS (THE LOAD BALANCER) AND PASTE IT INTO YOUR BROWSER AS SHOWN BELOW.

WORDPRESS OPENED

CREATE A USERNAME AND PASSWORD, THEN LOG IN TO YOUR ACCOUNT.

USERNAME AND PASSWORD CREATED
LOGGING IN THE WORDPRESS
WORDPRESS DASHBOARD

WE HAVE NOW CREATED A MULTI-TIER ARCHITECTURE WITH NO DOWNTIME, AS IT SITS BEHIND A LOAD BALANCER, AND EVEN IF SOMETHING IS DELETED OR DATA GETS CORRUPTED, THE DATA STAYS PERSISTENT BECAUSE A PVC IS ATTACHED TO IT.

LET US NOW INTEGRATE MONITORING INTO THIS WHOLE MULTI-TIER ARCHITECTURE.

STEP-9 SEARCH FOR HELM AND TILLER ON GOOGLE. DOWNLOAD THE EXECUTABLES OF BOTH AND ADD THEM TO THE KUBERNETES FOLDER (THE PLACE WHERE THE SYSTEM'S K8S TOOLS ARE SET UP, i.e. ON YOUR PATH).

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.

Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.

YOU CAN DOWNLOAD THE HELM FILE FROM HERE.

YOU CAN DOWNLOAD THE TILLER FILE FROM HERE.

HELM AND TILLER ADDED IN K8S FOLDER

NOW WE ARE GOING TO SETUP HELM.

JUST FOLLOW THE PICTORIAL STEPS (A CONSOLIDATED COMMAND SKETCH FOLLOWS AFTER STEP 6).

  1. Initializing HELM
HELM INITIALIZED

2. ADDING REPO TO HELM

REPO ADDED TO HELM

3. CREATE SERVICE ACCOUNT IN TILLER.

SERVICE ACCOUNT CREATED

4. CREATE PROVISIONER FOR TILLER.

PROVISIONER CREATED

5. UPGRADE THE HELM.

HELM UPGRADED

6. TILLER POD RUNNING IN THE KUBE-SYSTEM NAMESPACE.

TILLER POD RUNNING
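
A consolidated, hedged sketch of the Helm v2 / Tiller setup shown in the screenshots above; the stable repo URL (which has moved over time) and the tiller service-account name are the commonly used ones, not values taken from the screenshots:

helm init
helm repo add stable https://charts.helm.sh/stable
helm repo update
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
kubectl get pods -n kube-system    # the tiller-deploy pod should be Running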

NOW WE CAN INTEGRATE THE ARCHITECTURE WITH PROMETHEUS AND GRAFANA.

  1. SEARCH FOR THE HELM PROMETHEUS STABLE CHART ON GOOGLE.
GOOGLE SEARCH

2. OPEN THE FIRST LINK, COPY THE COMMAND, AND RUN IT AS SHOWN BELOW.

PROMETHEUS INSTALLED
PODS RUNNING OF PROMETHEUS
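
A hedged sketch of the install, assuming the release is named prometheus, goes into a namespace called prometheus, and uses the gp2 storage class (all assumptions; the release/namespace names match the data-source URL used in the Grafana command later):

kubectl create namespace prometheus
helm install stable/prometheus --name prometheus --namespace prometheus \
  --set alertmanager.persistentVolume.storageClass="gp2" \
  --set server.persistentVolume.storageClass="gp2"
kubectl get pods -n prometheus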

3. GET THE SERVICE BY USING THE COMMAND SHOWN BELOW.

SERVICES OF PROMETHEUS

4. NOW, TO REACH THIS PRIVATE SERVICE FROM OUTSIDE, PAT (PORT FORWARDING) IS DONE, i.e. THE SERVICE IS EXPOSED LOCALLY.

PATTING DONE
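
A hedged sketch of exposing the Prometheus UI locally; prometheus-server is the service the stable/prometheus chart typically creates for the release above, and the local port 8888 is an arbitrary choice:

kubectl get svc -n prometheus
kubectl -n prometheus port-forward svc/prometheus-server 8888:80
# then open http://127.0.0.1:8888 in the browser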

5. NOW COPY THE FOLLOWING IP AND PASTE IT INTO YOUR RESPECTIVE BROWSER.

IP COPIED
IP PASTED AND PROMETHEUS OPENED

IF YOU WANT TO STOP IT, GO TO THE CMD AND PRESS Ctrl + C.

6. YOU CAN SEE MANY THINGS THERE; ONE OF THE THINGS SHOWN HERE IS TARGETS.

TARGETS IN PROMETHEUS
KUBE_NODE_INFO IN PROMETHEUS

NOW WE ARE GOING TO INTEGRATE GRAFANA INTO THE ARCHITECTURE. JUST FOLLOW THE STEPS.

  1. CREATE A NAMESPACE WITH THE COMMAND AS SHOWN BELOW.
NAMESPACE CREATED
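
The command is roughly the following; the namespace name matches the --namespace grafana flag used in the install command below:

kubectl create namespace grafana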

2. NOW RUN THE FOLLOWING COMMAND, AS SHOWN BELOW, TO INSTALL GRAFANA AND CONNECT IT WITH PROMETHEUS.

THE HIGHLIGHTED PART SHOWN IN THE PICTURE IS THE NAME OF THE POD.

LAUNCHING GRAFANA

COMMAND -

helm install stable/grafana --namespace grafana \
  --set persistence.storageClassName="gp2" \
  --set adminPassword='GrafanaAdm!n' \
  --set datasources."datasources\.yaml".apiVersion=1 \
  --set datasources."datasources\.yaml".datasources[0].name=Prometheus \
  --set datasources."datasources\.yaml".datasources[0].type=prometheus \
  --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local \
  --set datasources."datasources\.yaml".datasources[0].access=proxy \
  --set datasources."datasources\.yaml".datasources[0].isDefault=true \
  --set service.type=LoadBalancer

IN THE COMMAND WRITTEN ABOVE, THIS IS THE ADMIN PASSWORD (YOU CAN SET YOUR OWN):

’GrafanaAdm!n’

3. Get the service, copy the EXTERNAL-IP as shown below, and paste it into your browser.

COPYING EXTERNAL-IP
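
Roughly:

kubectl get svc -n grafana
# copy the EXTERNAL-IP of the LoadBalancer service and open it in the browser;
# the chart's default admin user is admin, with the adminPassword set in the command above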

THE LOGIN PAGE MAY TAKE SOME TIME TO OPEN, SO WAIT A WHILE (APPROX. 5 MINUTES).

LOGIN IN GRAFANA

4. GO TO ADD YOUR DATA SOURCE AND CONNECT GRAFANA WITH PROMETHEUS AS SHOWN BELOW.

ADDING DATA SOURCE

SELECT PROMETHEUS AS SHOWN BELOW.

PROMETHEUS AS DATA SOURCE

5. Put in the IP/URL of the Prometheus server and check the validity as shown below.

DATA SOURCE WINDOW AFTER SELECTING PROMETHEUS

YOU CAN GET THE IP OF PROMETHEUS FROM HERE.

COPYING IP OF PROMETHEUS
IP OF PROMETHEUS

CLICK ON SAVE & TEST AND THE DATA SOURCE IS VALIDATED.

IT'S WORKING (VALIDATED)

6. Go to grafana.com, open the Dashboards section there, and search for Kubernetes cluster monitoring as shown below.

YOU CAN GO FROM HERE.

GRAFANA.COM DASHBOARD

COPY THE DASHBOARD ID NUMBER AS SHOWN BELOW.

DASHBOARD ID NUMBER IN GRAFANA

IN GRAFANA YOU WILL GET AN IMPORT OPTION AS SHOWN BELOW.

ENTER THE DASHBOARD ID NUMBER HERE AS SHOWN BELOW.

IMPORTING GRAFANA.COM DASHBOARD
PROMETHEUS SELECTED FOR MONITORING

JUST CLICK IMPORT AND THIS WONDERFUL GRAPH APPEARS.

DASHBOARD OF GRAFANA (IMPORTED)

So, this was the whole cluster setup of the AWS ELASTIC KUBERNETES SERVICE with PROMETHEUS AND GRAFANA. After everything is done, do not forget to delete the clusters (the Fargate cluster in Singapore and the EKS cluster in Mumbai) and the ELASTIC FILE SYSTEM, as all of them are chargeable.
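
A hedged sketch of the cluster cleanup, assuming the placeholder config file names used earlier (the EFS file system is deleted from the AWS console):

eksctl delete cluster -f cluster.yml
eksctl delete cluster -f fargate-cluster.yml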

THANKS A LOT !

HERE IS THE GITHUB LINK WHERE YOU WILL GET ALL THE CODE.
