
Hybrid Cloud Task#1 : Terraform + AWS + Git-GitHub + Jenkins

  • Writer: Subhabrata Datta
  • Jun 16, 2020
  • 4 min read

Updated: Jun 18, 2020

This was the first task given by my mentor Mr Vimal Daga (LinuxWorld) as part of the Hybrid Multi-Cloud course program.

Ø Problem Statement.


We have to create/launch an application using Terraform:

1. Create the key and security group which allow the port 80.

2. Launch EC2 instance.

3. In this EC2 instance, use the key and security group which we created in step 1.

4. Launch one EBS volume and mount that volume on /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

*Optional*

1) Those who are familiar with Jenkins or are in DevOps AL have to integrate Jenkins into this task wherever they feel it can be integrated.

2) Create a snapshot of the EBS volume.

The above task should be done using Terraform.

Ø Pre-requisites


1. Set up the AWS profile. Details have been provided in the below link.


2. Download, install, and set up Terraform. Create a working directory (the folder where we will put our code, i.e. the .tf files).
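With Terraform installed, the AWS provider is pointed at the profile we set up. A minimal sketch of this configuration — the profile name "myprofile" and the region are assumptions, not taken from the original post:

```terraform
# provider.tf — minimal sketch; profile name and region are assumptions
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}
```

Running `terraform init` in the working directory downloads the AWS provider plugin; `terraform apply` then builds the infrastructure described in the .tf files.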

All steps involved:

# Step 1: Set up key & security group which allows port 80

# Step 2: Launch AWS instance & connect it with the security group we created. Install httpd & git.

# Step 3: Create EBS volume

# Step 4: Developer uploads code to GitHub repo

# Step 5: Attach EBS volume to the EC2 instance

# Step 6: Mount EBS volume to /var/www/html & download code from the GitHub repository.

# Step 7: Create AWS S3 bucket & S3 bucket object


# Step 1: Set up key & security group which allows port 80


A key can either be created in AWS EC2, or we can import our own public key into AWS.

(refer: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html ) The public key can be generated using a third-party app like PuTTYgen or ssh-keygen. I used ssh-keygen, which is readily available in Windows 10. The following command is used to create the key:

ssh-keygen -t rsa

We need to give the filename where the key is to be stored (here sbdkey.pem). Two files are generated:

sbdkey.pem, which is our private key, and sbdkey.pem.pub, which is our public key.


The public key generated (sbdkey.pem.pub) is used to create the AWS key pair, using the terraform resource "aws_key_pair" (refer: https://www.terraform.io/docs/providers/aws/r/key_pair.html).
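A minimal sketch of this resource — the resource name "keyname" and key name "sbdkey" match what the instance references later, while the public-key file path is an assumption based on the private-key path used in the connection block:

```terraform
# Import our locally generated public key as an AWS key pair
resource "aws_key_pair" "keyname" {
  key_name   = "sbdkey"
  public_key = file("C:/Users/subha/sbdkey.pem.pub")
}
```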


An AWS security group is created using the terraform resource "aws_security_group", to allow SSH connectivity on port 22 and HTTP connectivity on port 80.
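A minimal sketch of such a security group — the group name "allow_port80" matches the one the instance references, but the open 0.0.0.0/0 CIDR ranges and the egress rule are assumptions:

```terraform
# Security group allowing inbound SSH (22) and HTTP (80), and all outbound traffic
resource "aws_security_group" "allow_port80" {
  name        = "allow_port80"
  description = "Allow SSH and HTTP inbound traffic"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```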


# Step 2: Launch AWS instance & connect it with the security group we created. Install httpd & git.


We use the terraform resource "aws_instance" to launch the AWS instance. For key_name, I have used a reference to the key pair resource (we could also directly type the key name here instead, i.e. key_name = "sbdkey").

The 'connection' block is used to connect to the instance using ssh protocol.

Provisioner 'remote-exec' is used to run commands inside the AWS instance to install httpd & git, and then enable the httpd service. Enabling the service makes it start every time the OS is started/rebooted.

resource "aws_instance" "webdatta1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.keyname.key_name
  security_groups = [ "allow_port80" ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/subha/sbdkey.pem")
    host        = aws_instance.webdatta1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
}

# Step 3: Create EBS volume


resource "aws_ebs_volume" "task1vol" {
  availability_zone = aws_instance.webdatta1.availability_zone
  size              = 1
  tags = {
    name = "task1vol"
  }
}

# Step 4: Developer uploads code to GitHub repo



# Step 5: Attach EBS volume to the EC2 instance


To attach the EBS volume to the instance, the resource "aws_volume_attachment" is used.

In the device name (/dev/sdd), AWS internally replaces 's' with 'xv', i.e. the EBS volume device becomes /dev/xvdd.

The option force_detach = true is set so that we can destroy the volume forcefully, even while it is attached to the instance.

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdd"
  volume_id   = aws_ebs_volume.task1vol.id
  instance_id = aws_instance.webdatta1.id
  force_detach = true
}
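The optional part of the problem statement also asks for a snapshot of the EBS volume. A minimal sketch of how that could be added (the resource and tag names are assumptions; this resource was not in the original post):

```terraform
# Optional: take a snapshot of the EBS volume created above
resource "aws_ebs_snapshot" "task1snap" {
  volume_id = aws_ebs_volume.task1vol.id

  tags = {
    Name = "task1snap"
  }
}
```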

# Step 6: Mount EBS volume to /var/www/html & download code from the GitHub repository.


resource "null_resource" "nullremote1" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]
  
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/subha/sbdkey.pem")
    host     = aws_instance.webdatta1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdd",
      "sudo mount  /dev/xvdd  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/SubhabrataDatta/cloudtask1.git /var/www/html/"
    ]
  }
}

# Step 7: Create AWS S3 bucket & S3 bucket object


AWS S3 is meant for persistent object storage. It can be used without attaching an OS to it.

An S3 bucket is like a folder, and S3 bucket objects are the files inside the bucket. In this context, the key is the filename of the S3 bucket object.



resource "aws_s3_bucket" "s3bucket" {
  bucket = "hybridcloud-task1-bucket"

  tags = {
    Name        = "hybridcloud-task1-bucket"
    Environment = "Prod"
  }
}

resource "aws_s3_bucket_object" "s3object" {
  bucket = aws_s3_bucket.s3bucket.bucket
  key    = "task1image.jpg"
  source = "lwindia.JPG"
  acl    = "public-read"
}



FINAL OUTPUT


