AWS INTEGRATION WITH TERRAFORM
Hey curators! Here comes an awesome automated system with which you can experience the power of the cloud and TERRAFORM. You just have to write the code and run it once, and the power of TERRAFORM WITH INTEGRATION OF CLOUD is unleashed.
SO LET'S GO FOR IT!
This is an automated system which, in just one click, creates an instance with a volume attached, makes partitions, pulls code from GitHub, and shows it on the web server. A very beautiful system; just try it once and experience the beauty of automation.
BEFORE STARTING, JUST INSTALL THE AWS CLI. (FOR THIS, JUST VIEW THE FOLLOWING BLOG.)
JUST A SIMPLE STEP TO START THE AUTOMATION: CREATE YOUR OWN PROFILE. JUST FOLLOW THE STEPS BELOW.
OPEN YOUR COMMAND PROMPT
(START => TYPE CMD => CLICK ENTER)
ENTER THE ACCESS KEY ID AND SECRET ACCESS KEY FROM THE CREDENTIALS FILE DOWNLOADED WHEN YOU CREATED AN IAM USER. (FOR THIS, JUST VIEW THE FOLLOWING BLOG.)
NOW YOU CAN START WITH THE CREATION OF AUTOMATED SYSTEM.
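As a sketch of the profile-creation step (the profile name rishabh is the one used in the Terraform code below; the keys come from your own IAM credentials file):

```shell
# Create a named AWS CLI profile; you will be prompted for each value.
aws configure --profile rishabh
# AWS Access Key ID [None]:     <paste your access key id>
# AWS Secret Access Key [None]: <paste your secret access key>
# Default region name [None]:   ap-south-1
# Default output format [None]: json
```

Terraform picks these credentials up through the profile = "rishabh" line in the provider block.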
STEP - 1 CREATE A KEY PAIR THROUGH TERRAFORM CODE. JUST WRITE THE CODE SHOWN BELOW.
provider "aws" {
  region  = "ap-south-1"
  profile = "rishabh"
}
resource "aws_key_pair" "deployer" {
  key_name   = "in2"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
}
AFTER WRITING THE CODE, SAVE IT WITH THE EXTENSION .tf
NOW FOR RUNNING IT:
1. terraform init - To download the plugins for the code.
2. terraform validate - To check whether the code is valid or not.
3. terraform apply - To run the code written.
JUST WRITE yes WHEN ASKED TO ENTER A VALUE, OR YOU CAN RUN terraform apply -auto-approve.
IF THE RESULT SHOWN ABOVE COMES OUT, THEN THE KEY PAIR IS CREATED. YOU CAN CHECK IT IN THE WEB UI OF THE AWS CLOUD.
FOR DESTROYING THE CREATED RESOURCES YOU CAN USE terraform destroy OR terraform destroy -auto-approve.
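Put together, the whole run looks like this from the command prompt (a sketch; the folder name is illustrative, and the .tf file must be inside it):

```shell
cd key-pair-code                   # folder containing the .tf file (name is illustrative)
terraform init                     # download the AWS provider plugin
terraform validate                 # check that the code is valid
terraform apply -auto-approve      # create the resources without the yes prompt
terraform destroy -auto-approve    # tear the resources down again when done
```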
STEP - 2 CREATE A SECURITY GROUP BY FOLLOWING THE CODE WRITTEN BELOW.
provider "aws" {
  region  = "ap-south-1"
  profile = "rishabh"
}
resource "aws_security_group" "sgterraform" {
  name        = "launch-wizard-8"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-a2edf0ca"
  ingress {
    description = "TLS from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "TLS from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "TLS from VPC"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "launch-wizard-8"
  }
}
AFTER WRITING THE CODE, SAVE IT WITH THE EXTENSION .tf
NOW FOR RUNNING IT:
1. terraform init - To download the plugins for the code.
2. terraform validate - To check whether the code is valid or not.
3. terraform apply - To run the code written.
JUST WRITE yes WHEN ASKED TO ENTER A VALUE, OR YOU CAN RUN terraform apply -auto-approve.
IF THE ABOVE RESULT COMES OUT, THEN THE SECURITY GROUP IS CREATED. YOU CAN CHECK IT IN THE WEB UI OF THE AWS CLOUD.
HERE THE FOLLOWING KEYWORDS ARE EXPLAINED. PLEASE READ THEM ONCE.
INGRESS - The inbound rules: the traffic that is allowed to reach your instance.
PORT NO. - The port on which you want to access your data on the web server.
Port no. 80 means it can be accessed over HTTP.
Port no. 22 means it can be accessed over SSH.
CIDR_BLOCKS - Tells which source IP ranges may access the content within the instance on that port.
0.0.0.0/0 means that any device from anywhere can access the content on the web server.
EGRESS - The outbound rules: the traffic the instance is allowed to send to the outside world.
PROTOCOL - The transport protocol (e.g. TCP) used for the communication.
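One note on the code above: the vpc_id is hard-coded, and yours will be different. A way to avoid hard-coding it (a sketch, assuming you want the region's default VPC) is to look the VPC up with a data source:

```hcl
# Look up the default VPC instead of hard-coding its ID.
data "aws_vpc" "default" {
  default = true
}

resource "aws_security_group" "sgterraform" {
  name        = "launch-wizard-8"
  description = "Allow TLS inbound traffic"
  vpc_id      = data.aws_vpc.default.id  # resolved at apply time
  # ... same ingress/egress rules as above ...
}
```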
STEP - 3 CREATE AN INSTANCE BY WRITING THE FOLLOWING CODE WRITTEN BELOW.
provider "aws" {
  region  = "ap-south-1"
  profile = "rishabh"
}
resource "aws_instance" "web" {
  ami                    = "ami-0447a12f28fddb066"
  instance_type          = "t2.micro"
  key_name               = "lwkey"
  vpc_security_group_ids = ["sg-06e0792308c1261f0"]
  subnet_id              = "subnet-1e3f5452"
  tags = {
    Name = "rishu"
  }
}
resource "aws_ebs_volume" "v1" {
  availability_zone = "ap-south-1b"
  size              = 1
  tags = {
    Name = "rishuv1"
  }
}
IF ALL GOES RIGHT AFTER
- terraform init
- terraform validate
- terraform apply
THEN THE INSTANCE WILL BE CREATED ON THE AWS CLOUD AS SHOWN BELOW.
HERE THE FOLLOWING KEYWORDS ARE EXPLAINED. PLEASE READ THEM ONCE.
- PROVIDER - Tells on which provider the service is to be created, whether it is AWS, OpenStack, or any other cloud or service provider.
- TAGS - The name (label) of the instance or resource which you are creating.
- PROFILE - The CLI profile (that we created above) whose credentials are used to create the services.
- KEY_NAME - The key pair used to create the instance, which after creation will help in accessing the instance. (It might be different in your case.) Use only the key pair whose .pem file is downloaded on your PC.
- SECURITY GROUP ID - The ID of the security group created above. (The ID might be different because the group is created afresh; each time you create the security group, the ID changes.) Note that because a subnet_id is given, the IDs go in vpc_security_group_ids; the security_groups argument expects group names instead.
- INSTANCE_TYPE - The type of the instance, which determines the CPU and memory it gets.
- REGION - The region in which your instance is to be provisioned.
- SUBNET_ID - Tells in which subnet (and therefore which data center / availability zone) the instance is created.
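If the key pair and security group from steps 1 and 2 live in the same Terraform configuration, the instance can reference them directly instead of pasting IDs by hand (a sketch under that assumption):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  # Reference the earlier resources so the IDs never go stale:
  key_name               = aws_key_pair.deployer.key_name
  vpc_security_group_ids = [aws_security_group.sgterraform.id]
  subnet_id              = "subnet-1e3f5452"
  tags = {
    Name = "rishu"
  }
}
```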
STEP - 4 CREATE THE VOLUME (ADDITIONAL STORAGE) FOR THE INSTANCE AND ATTACH IT TO THE INSTANCE.
JUST FOLLOW THE CODE WRITTEN AND SHOWN BELOW :
resource "aws_ebs_volume" "v1" {
  availability_zone = "ap-south-1b"
  size              = 1
  tags = {
    Name = "rishuv1"
  }
}
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = "${aws_ebs_volume.v1.id}"
  instance_id  = "${aws_instance.web.id}"
  force_detach = true
}
IF ALL GOES RIGHT AFTER
- terraform init
- terraform validate
- terraform apply
THEN THE VOLUME WILL BE CREATED AND ATTACHED AS SHOWN BELOW (ON AWS CLOUD WEBUI).
HERE THE FOLLOWING KEYWORDS ARE EXPLAINED. PLEASE READ THEM ONCE.
- VOLUME_ID - The ID of the volume which is created in the cloud. Here you can see it is written as "${aws_ebs_volume.v1.id}", an interpolated string which fetches the current ID of the volume every time it is created.
- INSTANCE_ID - The instance ID, written in the same format as the volume ID: "${aws_instance.web.id}". It also fetches the current ID of the created instance.
- FORCE_DETACH - A feature provided in the code so that if the volume needs to be detached from the instance under any condition, it can be forcibly detached, just like the option given when attaching through the AWS cloud web UI.
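A small side note: in Terraform 0.12 and later, the "${...}" wrapper is optional for plain references, so the attachment can equivalently be written as:

```hcl
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.v1.id   # direct reference, no "${...}" needed
  instance_id  = aws_instance.web.id
  force_detach = true
}
```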
STEP - 5 CREATE THE S3 BUCKET AND UPLOAD AN OBJECT INTO THE BUCKET.
JUST FOLLOW THE CODE WRITTEN AND SHOWN BELOW :
provider "aws" {
  region  = "ap-south-1"
  profile = "rishabh"
}
resource "aws_s3_bucket" "b" {
  bucket = "rishu1234"
  acl    = "private"
  tags = {
    Name        = "rishu1234"
    Environment = "Dev"
  }
}
resource "aws_s3_bucket_object" "object" {
  depends_on = [
    aws_s3_bucket.b
  ]
  bucket = "rishu1234"
  key    = "me.jpg"
  source = "D:\\Videos and Pics\\me.jpg"
  etag   = filemd5("D:\\Videos and Pics\\me.jpg")
}
IF ALL GOES RIGHT AFTER
- terraform init
- terraform validate
- terraform apply
YOUR BUCKET WILL BE CREATED AND YOU CAN CONFIRM IT FROM THE WEB UI OF THE AWS CLOUD.
HERE THE FOLLOWING KEYWORDS ARE EXPLAINED. PLEASE READ THEM ONCE.
- BUCKET - The name of the bucket; here rishu1234 is given. (Please choose the name of the bucket carefully because it must be globally unique. If the name is not given, Terraform generates a unique name by itself.)
- OBJECT - The image which is put into the bucket. Here it is me.jpg.
- KEY - The name under which the object is kept after uploading it into the bucket.
- DEPENDS_ON - Used here because, when one piece of code creates the bucket and another uploads the object, Terraform may run the object-upload code first and an error occurs. With depends_on, the object-upload code runs only after the bucket-creation code has run successfully.
- ACL - Tells whether the bucket is to be public or private. (Here it is private because we will access the object through CloudFront.)
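As an alternative to depends_on (a sketch), the object can reference the bucket resource directly; Terraform then infers the dependency by itself:

```hcl
resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.b.bucket   # implicit dependency on the bucket
  key    = "me.jpg"
  source = "D:\\Videos and Pics\\me.jpg"
  etag   = filemd5("D:\\Videos and Pics\\me.jpg")
}
```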
STEP - 6 CREATE A CLOUDFRONT WITH THE CODE GIVEN BELOW THROUGH WHICH WE WILL ACCESS THE S3 OBJECT.
provider "aws" {
  region  = "ap-south-1"
  profile = "rishabh"
}
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "rishu1234.s3.amazonaws.com"
    origin_id   = "S3-rishu1234"
  }
  enabled             = true
  is_ipv6_enabled     = true
  comment             = ""
  default_root_object = "me.jpg"
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-rishu1234"
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "S3-rishu1234"
    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
  price_class = "PriceClass_All"
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  tags = {
    Environment = "production"
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
IF ALL GOES RIGHT AFTER
- terraform init
- terraform validate
- terraform apply
YOUR CLOUDFRONT DISTRIBUTION WILL BE CREATED, AND YOU CAN CONFIRM IT FROM THE WEB UI OF THE AWS CLOUD AS SHOWN BELOW.
ACCESS IT WITH THE DOMAIN NAME, FOLLOWED BY / AND THE NAME OF THE OBJECT WITH ITS EXTENSION. YOU WILL BE ABLE TO ACCESS YOUR IMAGE.
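To save copying the domain name from the web console every time, an output block (a small sketch) can print it after terraform apply:

```hcl
output "cloudfront_domain" {
  # Append /me.jpg to this domain name to reach the image.
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```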
THERE ARE MANY TERMINOLOGIES WHICH CANNOT ALL BE EXPLAINED HERE, BECAUSE THE BLOG WOULD BECOME TOO LONG.
IF YOU ARE INTERESTED, YOU CAN LEARN THEM FROM HERE. JUST SCROLL DOWN AND YOU WILL GET EVERY TERM AND CODE WITH EXPLANATION:
NOW LET’S MOVE TO OUR END TO END AUTOMATION.
HERE WE GO!
NOW WE WILL ACCESS OUR INSTANCE, PERFORM SOME INSTALLATIONS, DOWNLOAD THE CODE FROM GITHUB, AND LAUNCH IT ON THE WEB SERVER.
1. ENTERING THE INSTANCE THROUGH SSH:
resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/risha/Downloads/lwkey.pem")
    host        = aws_instance.web.public_ip
  }
2. PERFORMING INSTALLATIONS IN THE INSTANCE AND DOWNLOADING THE CODE FROM GITHUB.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/rishabhjain1799/multi1.git /var/www/html/"
    ]
  }
}
3. LAUNCHING THE CODE TO THE WEB SERVER.
resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web.public_ip}"
  }
}
IN THE CODE ON GITHUB WE HAVE PUT THE DOMAIN NAME OF THE CLOUDFRONT DISTRIBUTION, AS SHOWN BELOW, TO ACCESS IT ON THE WEB SERVER.
HERE IS THE CODE!!
4. NOW WRITE THE COMPLETE CODE AS SHOWN HERE AND RUN IT AS DONE BEFORE.
resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/risha/Downloads/lwkey.pem")
    host        = aws_instance.web.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/rishabhjain1799/multi1.git /var/www/html/"
    ]
  }
}
resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web.public_ip}"
  }
}
IF ALL GOES RIGHT AFTER
- terraform init
- terraform validate
- terraform apply
YOUR AUTOMATED SYSTEM IS CREATED AS SHOWN BELOW.
ALL THE CODE FOR ALL THE RESOURCES CREATED IS SEPARATELY DRAFTED IN FILES NAMED AFTER THEIR RESPECTIVE SERVICES.