AWS EKS INTEGRATION WITH PROMETHEUS AND GRAFANA
Aloha! Here comes another awesome AWS service, one that is built on Kubernetes and whose amazing use cases can make your full system automated.
EKS is actually a managed Kubernetes service from AWS that also allows you to assign IAM permissions to your Kubernetes service accounts. It manages all the services required for the fault tolerance and automation of your system. It is simply a great service: you just focus on your work and your backend stuff, and forget about everything else, as it will all be managed by the AWS EKS CLUSTER.
The IAM role can control access to other containerized services, to AWS resources external to the cluster such as databases and secrets, or to third-party services and applications running outside of AWS.
Amazon EKS is also integrated with many AWS services to provide scalability and security for your applications, including the following:
- Amazon ECR for container images
- Elastic Load Balancing for load distribution
- IAM for authentication
- Amazon VPC for isolation
Many more things are yet to come in this blog. Although I am building this setup through YAML code, the same can also be achieved through TERRAFORM.
PREREQUISITES
- KUBERNETES (K8s) RUNNING IN YOUR SYSTEM.
- AWS CLOUD ACCOUNT
- AWS CLI
- SSH CONFIGURATION IN YOUR SYSTEM
If any one of the prerequisites is not satisfied, you can refer to my other blogs to fulfil them all.
So, times up and let’s start with it !
So, just follow the steps and you will be able to run a multi-tier architecture through this system.
First, go to your AWS Cloud and log in as the root user.
STEP-1 AFTER LOGGING IN, JUST CREATE AN IAM USER WITH THE SPECIFIC PERMISSIONS SHOWN BELOW.
You can create the IAM USER through my IAM user blog.
Now, after creating the IAM user, attach the specific permissions shown below. This policy actually gives the IAM user the same full access that the ROOT user has in the account.
STEP-2 Go to the CMD of your system and check the version of the AWS CLI. This is just to verify whether the AWS CLI is installed on your system or not.
STEP-3 SET UP THE IAM USER IN YOUR AWS CLI AND INSTALL EKSCTL.
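For reference, these two CLI steps look roughly like this (the access keys come from the IAM user created in Step 1; the region and output format shown here are my assumptions, so use your own):

```shell
# Verify the AWS CLI is installed
aws --version

# Attach the IAM user's credentials to the CLI (interactive prompts)
aws configure
# AWS Access Key ID [None]: <access-key-of-your-IAM-user>
# AWS Secret Access Key [None]: <secret-key-of-your-IAM-user>
# Default region name [None]: ap-south-1
# Default output format [None]: json
```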
EKSCTL DOWNLOAD AND INSTALL
1. Search on Google for eksctl. It is an .exe file through which we will be able to launch the cluster in the AWS Cloud.
2. Just open the link shown above and copy the following command to the AWS CLI and run it.
3. Add the downloaded file to the Kubernetes folder as shown below.
4. To confirm that EKSCTL has been set up, just run the following commands as shown below.
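To confirm the setup, something like this should print a version rather than an error (the exact output varies by release):

```shell
# Both commands should work once eksctl is on your PATH
eksctl version
eksctl --help
```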
STEP-4 AFTER SETTING UP EKSCTL, WE ARE NOW GOING TO CREATE THE CLUSTER.
Before this, just check whether any cluster already exists with the command shown below.
IF THERE IS ANY CLUSTER YOU CAN DELETE IT WITH THE COMMAND SHOWN BELOW.
AFTER CHECKING, WE ARE GOING TO LAUNCH THE CLUSTER (IT WILL TAKE 10–15 MINUTES) THROUGH THE COMMANDS SHOWN BELOW.
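The check-and-delete commands look like this (the cluster name is a placeholder, and I am assuming the Mumbai region, ap-south-1):

```shell
# List any clusters already running in the region
eksctl get cluster --region ap-south-1

# Delete an existing cluster if you need a clean start
eksctl delete cluster --name <old-cluster-name> --region ap-south-1
```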
1. CREATE THE DIRECTORY AND FILE FOR IT AS SHOWN BELOW.
2. CREATE A FILE AND WRITE THE FOLLOWING CODE.
3. AFTER WRITING THE CODE CREATE THE CLUSTER BY THE COMMAND SHOWN BELOW.
4. AFTER IT IS LAUNCHED YOU CAN VIEW IT IN THE CLOUDFORMATION ON AWS CLOUD.
5. YOU CAN ALSO VIEW THE INSTANCES RUNNING IN EC2.
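For reference, here is a minimal sketch of what such an eksctl cluster file can look like (the cluster name, node-group names, instance types, and sizes are my assumptions; adjust them to your setup):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster        # hypothetical cluster name
  region: ap-south-1     # Mumbai

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
```

Then `eksctl create cluster -f cluster.yml` launches it. eksctl drives a CloudFormation stack underneath, which is why the cluster shows up in CloudFormation and its nodes show up in EC2.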
STEP-5 NOW WE ARE GOING TO CREATE A NAMESPACE FOR THE CLUSTER AND UPDATE THE CONFIG, SO THAT EVERYTHING STAYS IN ONE PLACE (NO CONFUSION).
1. UPDATE KUBERNETES ABOUT THE CLUSTER RUNNING IN THE AWS CLOUD. JUST RUN THE COMMAND SHOWN BELOW.
2. GET THE NODES, WHICH WILL CONFIRM THAT THE CONFIG IS UPDATED.
3. CREATE THE NAMESPACE IN KUBERNETES THROUGH THE COMMAND.
4. TO CHECK THE NAMESPACE CREATED, USE THE FOLLOWING COMMAND.
5. NOW SET THIS NAMESPACE IN THE CLUSTER CONFIG WITH THE COMMAND SHOWN BELOW.
6. TO CHECK IT, YOU CAN VIEW IT IN THE CONFIG FILE AS SHOWN BELOW.
7. NOW WE WILL CHECK THE CONNECTIVITY BETWEEN KUBERNETES ON THE SYSTEM AND THE CLUSTER IN THE CLOUD USING THE COMMANDS GIVEN BELOW.
8. AFTER CHECKING CONNECTIVITY, WE WILL CHECK THE NODEGROUP.
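Taken together, the commands for this step look roughly like this (the cluster and namespace names are placeholders of my own):

```shell
# 1-2. Point kubectl at the new EKS cluster and confirm the nodes are visible
aws eks update-kubeconfig --name <cluster-name> --region ap-south-1
kubectl get nodes

# 3-4. Create a dedicated namespace and list the namespaces
kubectl create namespace lwns
kubectl get namespaces

# 5-6. Make the namespace the default for this context, then inspect the config
kubectl config set-context --current --namespace=lwns
kubectl config view

# 7-8. Check connectivity and the node group
kubectl get pods --all-namespaces
eksctl get nodegroup --cluster <cluster-name> --region ap-south-1
```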
FARGATE CLUSTER
There is also a Fargate cluster in the AWS Cloud, which provides SERVERLESS COMPUTING: the user does not have to maintain the cluster in any aspect or dimension, as all of it is managed by the AWS Cloud by itself. The user does not need to worry at all and can instead just focus on the work, while everything else is managed automatically.
It cannot be launched in the Mumbai region, as it is not supported there, so one has to launch it in another region. In my case I have launched it in the SINGAPORE region, i.e. ap-southeast-1.
So, let’s go for it!
1. Let us check for a Fargate cluster, if any, with the following command.
2. As we have checked that there is no Fargate cluster, we are going to create one using the following code.
3. After writing the code, now run the command shown below.
4. As you can see, the Fargate cluster is now launched in the Singapore region.
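A quick way to do this from the CLI (the cluster name is a placeholder; the `--fargate` flag tells eksctl to create a Fargate profile instead of EC2 node groups):

```shell
# Check for existing clusters in Singapore
eksctl get cluster --region ap-southeast-1

# Create a serverless (Fargate) cluster there
eksctl create cluster --name <fargate-cluster-name> --region ap-southeast-1 --fargate
```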
NOW LET’S COME BACK TO OUR CLUSTER IN THE MUMBAI REGION.
As I mentioned above, AWS manages everything behind the setup. For demonstration, I have shown here the Docker Engine, Kubernetes, and the pods.
1. LOG IN TO ANY ONE OF THE EC2 INSTANCES AS SHOWN.
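The login and the peek behind the curtain can be sketched like this (the key file name and the IP are placeholders; I am assuming Amazon Linux worker nodes, which use the ec2-user login):

```shell
# SSH into any one of the worker instances
ssh -i mykey.pem ec2-user@<instance-public-ip>

# See the containers that EKS is running for you on this node
sudo docker ps
```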
STEP-6 NOW WE ARE GOING TO CREATE AND LAUNCH THE EFS-PROVISIONER. EFS (ELASTIC FILE SYSTEM) IS A MANAGED FILE SYSTEM ON AWS.
1. Select the VPC (VIRTUAL PRIVATE CLOUD) of your cluster as shown below.
2. Select the security group common to the EC2 instances launched by the cluster (YOU CAN CHECK IT IN THE DESCRIPTION OF THE INSTANCES).
3. Click Next without any change in the further steps, then click Create, and your EFS will be created as shown below.
4. Create the EFS-PROVISIONER with the following code. Use the File System ID of the EFS created above in the EFS-PROVISIONER code shown below, and likewise use the DNS name of the EFS from the AWS Cloud for the server field in the code.
5. Create the NFS-PROVISIONER by using the following code shown below.
6. Now create the storage with the following code shown below.
7. Now create all the components with the command shown below.
8. You can now see the components running with the command shown below.
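As a rough sketch of what the EFS-PROVISIONER and storage code contain (the file-system ID, region, and provisioner name here are placeholders; replace them with your own values from the EFS console):

```yaml
# efs-provisioner deployment - plug in your File System ID and EFS DNS name
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: <your-file-system-id>
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: lw-course/aws-efs   # hypothetical provisioner name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: <your-file-system-id>.efs.ap-south-1.amazonaws.com
            path: /
---
# storage class that hands PVC requests to the provisioner above
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: lw-course/aws-efs
```

All the components in the folder can then be created in one go with `kubectl create -f . -n <namespace>` and checked with `kubectl get all -n <namespace>`.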
STEP-7 AFTER CREATING THIS, THE WORDPRESS AND MYSQL FILES, WITH THE PVC, POD, SERVICE AND KUSTOMIZATION, ARE ALL IN ONE FOLDER AND ARE CREATED WITH JUST ONE COMMAND AS SHOWN BELOW.
(THE CODE FOR ALL THE FILES IS ON GITHUB; YOU CAN TAKE IT FROM THERE. THE LINK IS AT THE END OF THIS BLOG.)
AFTER ALL THIS IS CREATED, YOU CAN SEE THAT THE VOLUME AND THE LOAD BALANCER ARE CREATED AS SHOWN BELOW.
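A kustomization file that ties the folder together might look like this (the resource file names and the generated secret are my assumptions; the actual code is in the GitHub repo mentioned above):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# generates a Secret that the mysql and wordpress pods consume
secretGenerator:
  - name: mysql-pass
    literals:
      - password=<your-db-password>

resources:
  - mysql-deployment.yml
  - wordpress-deployment.yml
```

Running `kubectl apply -k .` inside the folder then creates the PVCs, pods, and services with that one command.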
STEP-8 COPY THE EXTERNAL IP OF THE WORDPRESS SERVICE (LOAD BALANCER) AND PASTE IT INTO YOUR RESPECTIVE BROWSER AS SHOWN BELOW.
LOG IN TO YOUR ACCOUNT BY CREATING THE USERNAME AND PASSWORD.
WE HAVE NOW CREATED A MULTI-TIER ARCHITECTURE THAT HAS NO DOWNTIME, AS IT HAS A LOAD BALANCER, AND EVEN IF SOMETHING IS DELETED OR THE DATA IS CORRUPTED, THE DATA STAYS PERSISTENT AS A PVC IS ATTACHED TO IT.
LET US NOW INTEGRATE MONITORING INTO THIS WHOLE MULTI-TIER ARCHITECTURE.
STEP-9 SEARCH FOR HELM AND TILLER ON GOOGLE. DOWNLOAD THE .EXE FILE OF EACH AND ADD THEM TO THE KUBERNETES FOLDER (THE PLACE WHERE THE SYSTEM’S K8S IS SET UP).
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.
YOU CAN DOWNLOAD THE HELM FILE FROM HERE.
YOU CAN DOWNLOAD THE TILLER FILE FROM HERE.
NOW WE ARE GOING TO SETUP HELM.
JUST FOLLOW THE PICTORIAL STEPS
1. Initializing HELM
2. ADDING REPO TO HELM
3. CREATE A SERVICE ACCOUNT FOR TILLER.
4. CREATE A CLUSTER ROLE BINDING FOR TILLER.
5. UPGRADE THE HELM.
6. TILLER POD RUNNING IN THE KUBE-SYSTEM NAMESPACE.
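The pictorial steps above boil down to commands like these (the chart repository URL is the current home of the old stable charts; binding the cluster-admin role to Tiller is fine for a demo but too broad for production):

```shell
# 1. Initialize Helm (v2) on the client
helm init

# 2. Add a chart repository
helm repo add stable https://charts.helm.sh/stable
helm repo update

# 3. Create a service account for Tiller
kubectl -n kube-system create serviceaccount tiller

# 4. Give Tiller cluster-wide rights via a role binding
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# 5. Upgrade Helm so Tiller runs under that service account
helm init --service-account tiller --upgrade

# 6. Confirm the Tiller pod is running in kube-system
kubectl get pods -n kube-system
```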
NOW WE CAN INTEGRATE THE ARCHITECTURE WITH PROMETHEUS AND GRAFANA.
1. SEARCH FOR THE HELM PROMETHEUS STABLE CHART ON GOOGLE.
2. OPEN THE FIRST LINK AND COPY THE COMMAND AND RUN IT AS SHOWN BELOW.
3. GET THE SERVICE BY USING THE COMMAND SHOWN BELOW.
4. NOW, TO CONNECT THIS PRIVATE SETUP TO THE PUBLIC NETWORK, PORT FORWARDING IS DONE (THE SERVICES ARE EXPOSED).
5. NOW COPY THE FOLLOWING IP AND PASTE IT INTO YOUR RESPECTIVE BROWSER.
IF YOU WANT TO STOP IT GO TO THE CMD AND PRESS ctrl + c.
6. YOU CAN SEE MANY THINGS THERE; ONE OF THE THINGS SHOWN HERE IS TARGETS.
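Roughly, the install-and-expose sequence is as follows (the storage class and the local port are my assumptions; the exact pod name comes from `kubectl get pods -n prometheus`):

```shell
# Install the stable Prometheus chart into its own namespace
kubectl create namespace prometheus
helm install stable/prometheus --namespace prometheus \
  --set alertmanager.persistentVolume.storageClass="gp2" \
  --set server.persistentVolume.storageClass="gp2"

# See the services the chart created
kubectl get svc -n prometheus

# Expose the Prometheus server on localhost (stop with ctrl + c)
kubectl -n prometheus port-forward <prometheus-server-pod-name> 8888:9090
```

Opening 127.0.0.1:8888 in the browser then shows the Prometheus UI.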
NOW WE ARE GOING TO INTEGRATE GRAFANA INTO THE ARCHITECTURE. JUST FOLLOW THE STEPS.
1. CREATE A NAMESPACE WITH THE COMMAND AS SHOWN BELOW.
2. NOW RUN THE FOLLOWING COMMAND AS SHOWN BELOW TO INSTALL GRAFANA AND CONNECT IT WITH PROMETHEUS.
THE SELECTED PART SHOWN IN PICTURE IS THE NAME OF THE POD.
COMMAND -
helm install stable/grafana \
  --namespace grafana \
  --set persistence.storageClassName="gp2" \
  --set adminPassword='GrafanaAdm!n' \
  --set datasources."datasources\.yaml".apiVersion=1 \
  --set datasources."datasources\.yaml".datasources[0].name=Prometheus \
  --set datasources."datasources\.yaml".datasources[0].type=prometheus \
  --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local \
  --set datasources."datasources\.yaml".datasources[0].access=proxy \
  --set datasources."datasources\.yaml".datasources[0].isDefault=true \
  --set service.type=LoadBalancer
THE PASSWORD SET IN THE COMMAND WRITTEN ABOVE IS GIVEN BELOW. (YOU CAN SET YOUR OWN)
’GrafanaAdm!n’
3. Get the service and copy the External-IP as shown below and paste it to your respective browser.
THE LOGIN PAGE MAY TAKE SOME TIME TO OPEN, SO WAIT FOR A WHILE. (APPROX. 5 MINUTES)
4. GO AND ADD YOUR DATA SOURCE TO CONNECT GRAFANA WITH PROMETHEUS AS SHOWN BELOW.
SELECT FOR THE PROMETHEUS AS SHOWN BELOW.
5. Enter the IP of Prometheus and check its validity as shown below.
YOU CAN GET THE IP OF PROMETHEUS FROM HERE.
CLICK ON SAVE & TEST AND THE VALIDATION IS DONE.
6. Go to grafana.com, open the dashboards section there, and search for kubernetes cluster monitoring as shown below.
YOU CAN GO FROM HERE.
COPY THE DASHBOARD ID AS SHOWN BELOW.
IN GRAFANA YOU WILL GET AN IMPORT OPTION AS SHOWN BELOW.
ENTER THE DASHBOARD ID HERE AS SHOWN BELOW.
JUST CLICK IMPORT AND THIS WONDERFUL GRAPH APPEARS.
So, this was the whole cluster setup of the AWS ELASTIC KUBERNETES SERVICE with PROMETHEUS AND GRAFANA. After everything is done, do not forget to delete the clusters (the Fargate cluster in Singapore and the EKS cluster in Mumbai) and the ELASTIC FILE SYSTEM, as all of them are chargeable.
THANKS A LOT !