Day68 ---> 90DaysOfDevOps Challenge @TWS

Auto Scaling and Load Balancing on AWS with Terraform 🚀

Understanding Scaling and Load Balancing

Scaling is the process of adding or removing resources to match the changing demand on your application. As your application grows, you add resources to handle the increased load; as demand falls, you remove the extra resources to save costs.

Load balancing in AWS refers to the process of distributing incoming network traffic across multiple resources (such as Amazon EC2 instances, containers, or IP addresses) to optimize resource utilization, improve application availability, and ensure high performance.

Terraform makes it easy to scale your infrastructure by providing a declarative way to define your resources. You declare how many resources you need, and terraform apply creates or destroys resources until the real infrastructure matches that declaration.
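As a minimal sketch of this declarative approach (the AMI ID and instance type below are placeholders, not values from this walkthrough): changing one number and re-running terraform apply is all it takes to scale a fleet.

```hcl
# Minimal sketch of declarative scaling: edit count and re-apply.
# "ami-123456789" and "t2.micro" are placeholder values.
resource "aws_instance" "web" {
  count         = 2            # change to 3 and re-apply to scale out
  ami           = "ami-123456789"
  instance_type = "t2.micro"
}
```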

What is an Auto Scaling Group in AWS?

Auto Scaling Groups are used to automatically add or remove EC2 instances based on the current demand.

To create an Auto Scaling Group (ASG) in Terraform, the following prerequisites are typically required:

  1. Launch Configuration or Launch Template: You need to define a launch configuration or launch template that specifies the configuration details for the instances launched by the ASG. It includes attributes such as the AMI ID, instance type, security groups, user data, etc. (Note: AWS now recommends launch templates over launch configurations, which are deprecated; this walkthrough uses a launch configuration for simplicity.)

  2. VPC and Subnets: You should have a Virtual Private Cloud (VPC) set up with one or more subnets where the instances will be launched. The ASG needs to be associated with one or more subnets within the VPC.

  3. Load Balancer or Target Group: If you want the ASG to distribute traffic among instances, you'll need a load balancer or target group. The ASG should be associated with the load balancer or target group to receive traffic.

  4. Security Groups: Specify one or more security groups that control inbound and outbound traffic to the instances launched by the ASG. These security groups should be created beforehand.

  5. Scaling Policies: Decide on the scaling policies that define how the ASG scales based on metrics such as CPU utilization, network traffic, etc. You can specify scaling policies to add or remove instances dynamically based on the defined thresholds.
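The walkthrough below keeps a fixed desired capacity, but a dynamic scaling policy can also be attached to the ASG. A hedged sketch using target tracking on average CPU (the ASG reference assumes the web_server_asg resource defined later in this post; the 50% target is an illustrative value, not a recommendation):

```hcl
# Example target-tracking policy: keep average CPU near 50%.
# Attaches to the ASG defined elsewhere; the threshold is illustrative.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web_server_asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```

With a policy like this attached, the ASG adjusts instance counts between min_size and max_size on its own, instead of you editing desired_capacity by hand.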

In your main.tf file, add the following code to create an Auto Scaling Group:

resource "aws_launch_configuration" "name_of_LC" {
  name                     = "sample"
  image_id                 = "ami-123456789"
  instance_type            = "@@@@@"
  security_groups          = ["@@@@@@"]
  associate_public_ip_address = true

  user_data = "" # Add your web server setup configuration here
}

resource "aws_autoscaling_group" "name_of_asg" {
  name                 = "@@@@@"
  launch_configuration = aws_launch_configuration.name_of_LC.name
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1
  health_check_type    = "EC2"
  target_group_arns    = [aws_lb_target_group.web_server_tg.arn]
  vpc_zone_identifier  = ["@@@@", "@@@@"]

  tag {
    key                 = "Name"
    value               = "@@@@@"
    propagate_at_launch = true
  }
}

resource "aws_alb" "web_server_lb" {
  name            = "web-server-lb"
  internal        = false
  security_groups = ["@@@@@"]
  subnets         = [
    "subnet-@@@@@",
    "subnet-@@@@@"
  ]
}

resource "aws_lb_listener" "web_server_listener" {
  load_balancer_arn = aws_alb.web_server_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type  = "forward"
    target_group_arn = aws_lb_target_group.web_server_tg.arn
  }
}

resource "aws_lb_target_group" "web_server_tg" {
  name        = "web-server-target-group"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = "vpc-@@@@@@"
  health_check {
    path = "/"
  }
}

output "alb_dns_name" {
  value = aws_alb.web_server_lb.dns_name
}

Let's create an Auto Scaling Group with load balancing in an AWS environment --->

provider "aws" {
  region = "us-east-1"
}

resource "aws_launch_configuration" "web_server_lc" {
  name                     = "web-server"
  image_id                 = "ami-053b0d53c279acc90"
  instance_type            = "t2.micro"
  security_groups          = ["sg-0277e2f4d375c8965"]
  associate_public_ip_address = true

  user_data = <<-EOF
              #!/bin/bash

              INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

              echo '<!DOCTYPE html>
              <html>
              <head>
                <title>Instance ID Retrieval</title>
                <style>
                  body {
                    background-color: #000080;
                    color: white;
                    display: flex;
                    justify-content: center;
                    align-items: center;
                    height: 100vh;
                    margin: 0;
                    padding: 0;
                  }
                  h1 {
                    font-size: 48px;
                  }
                  p {
                    font-size: 24px;
                  }
                </style>
              </head>
              <body>
                <div>
                  <h1>You are doing Great! </h1>
                  <p>Auto scaling using Terraform (HTTP server)</p>
                  <p>Instance ID: '$INSTANCE_ID'</p>
                </div>
              </body>
              </html>' > index.html
              sudo python3 -m http.server 80 &
              EOF
}

resource "aws_autoscaling_group" "web_server_asg" {
  name                 = "web-server-asg"
  launch_configuration = aws_launch_configuration.web_server_lc.name
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1
  health_check_type    = "EC2"
  target_group_arns    = [aws_lb_target_group.web_server_tg.arn]
  vpc_zone_identifier  = ["subnet-0fb1b21f41c1408e2", "subnet-07d528f8595558d7e"]

  tag {
    key                 = "Name"
    value               = "Web_Servers"
    propagate_at_launch = true
  }
}

resource "aws_alb" "web_server_lb" {
  name            = "web-server-lb"
  internal        = false
  security_groups = ["sg-0277e2f4d375c8965"]
  subnets         = [
    "subnet-0fb1b21f41c1408e2",
    "subnet-07d528f8595558d7e"
  ]
}

resource "aws_lb_listener" "web_server_listener" {
  load_balancer_arn = aws_alb.web_server_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type  = "forward"
    target_group_arn = aws_lb_target_group.web_server_tg.arn
  }
}

resource "aws_lb_target_group" "web_server_tg" {
  name        = "web-server-target-group"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = "vpc-04d61d6f66d196f03"
  health_check {
    path = "/"
  }
}

output "alb_dns_name" {
  value = aws_alb.web_server_lb.dns_name
}

Run terraform apply to create the Auto Scaling Group.

Test scaling up and scaling down

Edit your main.tf to increase the desired capacity to 2, then apply it.
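Only one line in main.tf needs to change (shown here in context, with the surrounding attributes elided):

```hcl
resource "aws_autoscaling_group" "web_server_asg" {
  # ... other attributes unchanged ...
  min_size         = 1
  max_size         = 3
  desired_capacity = 2   # was 1; Terraform will launch one more instance
}
```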

Wait a few minutes for the new instance to be launched.

Go to the EC2 Instances service and verify that the new instances have been launched.

Access your load balancer URL to verify that it distributes traffic across the two web servers.

Decrease the "Desired Capacity" to 1 and wait a few minutes for the extra instances to be terminated.

Go to the EC2 Instances service and verify that the extra instances have been terminated.

Congratulations🎊🎉 You have successfully scaled up and scaled down your infrastructure with Terraform.

Happy Learning :)

Day 68 task is complete!

90DaysOfDevOps Tasks👇

github.com/Chaitannyaa/90DaysOfDevOps.git

Chaitannyaa Gaikwad | LinkedIn
