
AWS ELB interview questions

1. What Is Elastic Load Balancing in AWS?

Elastic Load Balancing automatically distributes incoming application traffic across multiple
targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can
handle the varying load of your application traffic in a single Availability Zone or across multiple
Availability Zones. Elastic Load Balancing offers three types of load balancers that all feature the
high availability, automatic scaling, and robust security necessary to make your applications
fault tolerant.

Types of AWS Elastic Load Balancers

There are three main types of Amazon load balancers:

 Classic Load Balancer
 Network Load Balancer
 Application Load Balancer

Classic Load Balancer

The Classic Load Balancer in AWS is the previous-generation load balancer and the one used with EC2-Classic
instances. It does not support host-based or path-based routing.

The Classic Load Balancer routes traffic to all registered targets in its enabled Availability Zones without
inspecting what runs on those targets; it routes to every single target. It is mostly used to route traffic to
one single URL.

Routing decisions can be made at the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS).
Currently, Classic Load Balancers require a fixed mapping between a load balancer port and a container
instance port.

Network Load Balancer

The Network Load Balancer in AWS makes routing decisions at the transport layer (TCP/SSL) of the OSI
model and can handle millions of requests per second. It is widely used to load balance TCP traffic and also
supports Elastic (static) IP addresses.

Consider a simple example: you own a video-sharing website with decent traffic every day. One day a video
on your website goes viral, traffic spikes sharply, and you need an immediate solution to handle it. AWS
Network Load Balancer to the rescue!

The AWS Network Load Balancer can be trusted in these situations. Because it works at the connection level,
it can handle millions of requests and sudden spikes of traffic.

Application Load Balancer

An Application Load Balancer in AWS makes routing decisions at the application layer (HTTP/HTTPS) of
the OSI model, hence the name Application Load Balancer. ALB supports path-based and host-based
routing; we will look at both after seeing how the ALB works.

The Application Load Balancer receives requests, inspects them, and then chooses the most suitable
target for each request according to the configured routing rules.

Host-based Routing using ALB

Suppose you have two websites, intellipaat.com and dashboard.intellipaat.com, hosted on different EC2
instances, and you want to distribute incoming traffic between them to keep both highly available.

Normally you would create two load balancers using CLB, but with an ALB this is possible with one, which
also saves money: instead of paying for two ELBs, you pay for a single one.
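
A minimal AWS CLI sketch of a host-based routing rule; the listener ARN, target group ARN, priority, and host
name below are placeholder assumptions, not values from this article:

# Route requests whose Host header is dashboard.intellipaat.com to a dedicated target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/1234567890abcdef/1234567890abcdef \
  --priority 10 \
  --conditions Field=host-header,Values=dashboard.intellipaat.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/dashboard-tg/1234567890abcdef

Requests that match no rule fall through to the listener's default action (for example, the target group
serving intellipaat.com).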

Path-based Routing using ALB

In this type of routing, different URL paths of the website are hosted on different EC2 instances. For example,
consider intellipaat.com and intellipaat.com/tutorial, where these URL paths are hosted on different EC2
instances. If you want to route traffic between these two paths, you can use path-based routing. ALB solves
this problem too: you can route traffic according to the path using just one ALB.
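
A similar hedged sketch for a path-based rule on the same listener; the ARNs, priority, and path pattern are
placeholders:

# Route any request whose path starts with /tutorial to the tutorial target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/1234567890abcdef/1234567890abcdef \
  --priority 20 \
  --conditions Field=path-pattern,Values='/tutorial*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/tutorial-tg/abcdef1234567890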


But how to access a Amazon ELB?

There are multiple ways to that,

 AWS Management Console – Using the AWS web interface you can create load balancers

 AWS Command Line Interface – AWS provides a command line interface which is compatible in
Mac, Windows, and Linux

 AWS SDKs – Language specific APIs are provided and can be used for any function using the
load balancer or other services.

 Query API – This is the most direct way to call load balancers in AWS, but you must only use
low-level API actions like sending HTTPs requests.
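
For example, a couple of read-only AWS CLI calls you can use to inspect your load balancers (the load
balancer names are placeholders):

# List Application/Network Load Balancers and their DNS names
aws elbv2 describe-load-balancers --names my-alb

# List Classic Load Balancers
aws elb describe-load-balancers --load-balancer-names my-classic-lb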
2. How AWS Elastic Load Balancing Works?

The basic working principle is that the Elastic Load Balancer accepts incoming traffic from clients
and routes the requests to its registered targets. If the load balancer detects an unhealthy target,
it stops routing traffic to that target and sends requests to the remaining healthy targets until the
unhealthy target passes its health checks again.
To make an AWS ELB accept incoming traffic, you configure it with one or more listeners. A listener
is a process that checks for connection requests on a configured protocol and port.
Availability Zones – When you enable an Availability Zone for your load balancer, a load balancer
node is created in that Availability Zone. Enable multiple Availability Zones and make sure each of
them has at least one registered target, so the load balancer can route traffic there. Having multiple
Availability Zones and targets lets the load balancer route traffic to the remaining targets when some
targets fail.
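
As a hedged illustration, this is roughly how a listener is attached and how target health can be checked
with the AWS CLI (all ARNs below are placeholders):

# Create an HTTP listener on port 80 that forwards to a target group
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/1234567890abcdef \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/1234567890abcdef

# See which registered targets are currently healthy or unhealthy
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/1234567890abcdef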

3. List a few pros and cons (advantages and disadvantages) of using AWS ELB.

4. What is the difference between auto-scaling and ELB?

Elastic Load Balancing is used to automatically distribute your incoming application traffic
across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage
incoming requests by optimally routing traffic so that no one instance is overwhelmed.
To use Elastic Load Balancing with your Auto Scaling group, you set up a load balancer and then
you attach the load balancer to your Auto Scaling group to register the group with the load
balancer.
Your load balancer acts as a single point of contact for all incoming web traffic to your Auto
Scaling group. When an instance is added to your group, it needs to register with the load
balancer or no traffic is routed to it. When an instance is removed from your group, it must
deregister from the load balancer or traffic continues to be routed to it.
When you use Elastic Load Balancing with your Auto Scaling group, it's not necessary to register
your EC2 instances with the load balancer. Instances that are launched by your Auto Scaling
group are automatically registered with the load balancer. Likewise, instances that are
terminated by your Auto Scaling group are automatically deregistered from the load balancer.
After registering a load balancer with your Auto Scaling group, you can configure your Auto
Scaling group to use Elastic Load Balancing metrics such as the request count per target (or
other metrics) to scale the number of instances in the group as the demand on your instances
changes.
You can also optionally enable Amazon EC2 Auto Scaling to replace instances in your Auto
Scaling group based on health checks provided by Elastic Load Balancing.

Hence, ELB distributes the traffic among the instances, CloudWatch triggers Auto Scaling
whenever the group needs to scale, and Auto Scaling then launches or terminates instances to
keep the right number available.
ELB, Auto Scaling, and CloudWatch all work in sync.
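
A hedged CLI sketch of wiring an Auto Scaling group to a load balancer and scaling on request count per
target; the group name, target group ARN, resource label, and the target value of 100 are assumptions:

# Attach an ALB/NLB target group to the Auto Scaling group
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/1234567890abcdef

# Replace instances that fail ELB health checks
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --health-check-type ELB --health-check-grace-period 300

# Scale so that each target serves roughly 100 requests
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name alb-requests-per-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ALBRequestCountPerTarget","ResourceLabel":"app/my-alb/1234567890abcdef/targetgroup/my-tg/1234567890abcdef"},"TargetValue":100.0}'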

5. List the types of routing techniques used by load balancers.

Amazon ELB supports round robin (RR) routing and sticky sessions; the Application Load Balancer additionally supports a least outstanding requests (LOR) algorithm.

A. Round Robin Algorithm

The round robin (RR) algorithm distributes requests to servers in a circular sequence. There are two variants –
weighted round robin and dynamic round robin. Weighted round robin is used mainly for clusters of dissimilar
servers: each server is assigned a weight based on its capacity, and the load is distributed cyclically in
proportion to those weights. Dynamic round robin forwards requests to servers based on weights that are
calculated in real time.

Least outstanding requests (LOR) algorithm is now available for Application Load Balancer. This
is in addition to the round-robin algorithm that the Application Load Balancer already supports.
Customers have the flexibility to choose either algorithm depending on their workload needs. 

With this algorithm, as a new request comes in, the load balancer sends it to the target with the
least number of outstanding requests. Targets that are processing long-running requests or that have
lower processing capacity are not burdened with more requests, so the load is spread evenly across
targets. This also helps newly added targets take load off overloaded targets.
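
The algorithm is a target group attribute; a hedged example of switching an ALB target group from round
robin to least outstanding requests (the ARN is a placeholder):

# Switch the target group's routing algorithm to least outstanding requests
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/1234567890abcdef \
  --attributes Key=load_balancing.algorithm.type,Value=least_outstanding_requests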

B. What is a Sticky Session?

Session stickiness, also known as session persistence, is a process in which a load balancer creates an affinity
between a client and a specific server for the duration of a session (i.e., the time a specific client spends on a
website). Using sticky sessions can improve the user experience and optimize network resource usage.

With sticky sessions, a load balancer assigns an identifying attribute to a user, typically by issuing a cookie or by
tracking the client's IP details. Based on that tracking ID, the load balancer then routes all of that user's requests
to the same server for the duration of the session.
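
For an Application Load Balancer, stickiness is enabled on the target group using a load-balancer-generated
cookie; a hedged sketch follows (the ARN and the one-hour duration are assumptions):

# Enable duration-based (lb_cookie) stickiness for one hour
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/1234567890abcdef \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=3600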
 

6. What do you mean by a target group in AWS Load Balancing?

A target group tells a load balancer where to direct traffic: EC2 instances, fixed IP addresses, or AWS
Lambda functions, among others. When creating a load balancer, you create one or more listeners and
configure listener rules to direct traffic to a target group.

You can now add more than one target group to the forward action of a listener rule, and
specify a weight for each group. For example, when you define a rule having two target groups
with weights of 8 and 2, the load balancer will route 80% of the traffic to the first target group
and 20% to the other.
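
A hedged example of such a weighted forward action split 80/20 between two target groups (the listener
and target group ARNs are placeholders):

# Send 80% of traffic to blue-tg and 20% to green-tg
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/1234567890abcdef/1234567890abcdef \
  --default-actions '[{"Type":"forward","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue-tg/1234567890abcdef","Weight":8},{"TargetGroupArn":"arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green-tg/abcdef1234567890","Weight":2}]}}]'

Weighted target groups like this are commonly used for blue/green or canary-style rollouts.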

7. What is the difference between a load balancer and Amazon's Route 53?

Both Route53 and ELB are used to distribute the network traffic. These AWS services appear similar but
there are minor differences between them.

1. ELB distributes traffic among Multiple Availability Zone but not to multiple Regions. Route53
can distribute traffic among multiple Regions. In short, ELBs are intended to load balance across
EC2 instances in a single region whereas DNS load-balancing (Route53) is intended to help balance
traffic across regions.
2. Both Route53 and ELB perform health checks and route traffic only to healthy resources.
Route53 weighted routing uses health checks and removes unhealthy targets from its list;
however, DNS responses are cached, so unhealthy targets can remain in visitors' caches for some
time. ELB, on the other hand, is not cached and stops sending traffic to unhealthy targets
immediately.
Use both Route53 and ELB: Route53 provides integration with ELB. You can use both Route53 and ELB
in your AWS infrastructure. If you have AWS resources in multiple regions, you can use Route53 to
balance the load among those regions. Inside the region, you can use ELB to load balance among the
instances running in various Availability Zones.
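
A hedged sketch of pointing a Route53 record at an ELB with an alias record; the hosted zone IDs, record
name, and ELB DNS name are placeholders, and EvaluateTargetHealth makes Route53 honor the load
balancer's health:

# Create/update an alias A record that resolves to the load balancer
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"www.example.com","Type":"A","AliasTarget":{"HostedZoneId":"Z35SXDOTRQ7X7K","DNSName":"my-alb-1234567890.us-east-1.elb.amazonaws.com","EvaluateTargetHealth":true}}}]}'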

8. Can we use multiple AWS Elastic Load Balancers for a single EC2 instance?

In theory, yes you can. Individual EC2 server instances can be attached to multiple ELBs without
any issue. You could then map DNS records to each of them using Route53's health checks.
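
For example, with ALBs the same instance can simply be registered in a target group behind each load
balancer (the instance ID and target group ARNs are placeholders):

# Register the same instance behind two different load balancers' target groups
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/first-alb-tg/1234567890abcdef --targets Id=i-0123456789abcdef0
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/second-alb-tg/abcdef1234567890 --targets Id=i-0123456789abcdef0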

9. Why do AWS Elastic Load Balancers have more than one IP address?

Elastic Load Balancing creates a load balancer node in each subnet (Availability Zone) that you
enable, so the load balancer resolves to one IP address per subnet. When you create an
internet-facing Network Load Balancer, you can optionally specify one Elastic IP address per
subnet. If you do not choose your own Elastic IP addresses, Elastic Load Balancing provides one
per subnet for you. These Elastic IP addresses give your load balancer static IP addresses that
will not change during the life of the load balancer, and you cannot change them after you create
the load balancer.

When you create an internal load balancer, you can optionally specify one private IP address
per subnet. If you do not specify an IP address from the subnet, Elastic Load Balancing chooses
one for you. These private IP addresses provide your load balancer with static IP addresses that
will not change during the life of the load balancer. You cannot change these private IP
addresses after you create the load balancer.
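
For example, a hedged sketch of creating an internet-facing Network Load Balancer with one of your own
Elastic IPs per subnet (the subnet and allocation IDs are placeholders):

# One Elastic IP allocation per subnet gives the NLB fixed, per-AZ addresses
aws elbv2 create-load-balancer \
  --name my-nlb --type network \
  --subnet-mappings SubnetId=subnet-0123456789abcdef0,AllocationId=eipalloc-0123456789abcdef0 \
                    SubnetId=subnet-0fedcba9876543210,AllocationId=eipalloc-0fedcba9876543210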

10. Can you explain NLB in AWS?

Refer to the Network Load Balancer section in Q1 above.

11. How to create an alert for AWS load balancer 'OutOfService' instances?

Instances that fail their health checks are marked OutOfService and are counted in the UnHealthyHostCount
CloudWatch metric, so you can create an alarm on that metric in the same way as the Latency alarm shown
below.

Setting Up a Latency Alarm Using the AWS Management Console

To create a load balancer latency alarm that sends email

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.


2. In the navigation pane, choose Alarms, Create Alarm.
3. Under CloudWatch Metrics by Category, choose the ELB Metrics category.
4. Select the row with the Classic Load Balancer and the Latency metric.
5. For the statistic, choose Average, choose one of the predefined percentiles, or specify a custom
percentile (for example, p95.45).
6. For the period, choose 1 Minute.
7. Choose Next.
8. Under Alarm Threshold, enter a unique name for the alarm (for example, myHighCpuAlarm)
and a description of the alarm (for example, Alarm when Latency exceeds 100s). Alarm names must
contain only ASCII characters.
9. Under Whenever, for is, choose > and enter 0.1. For for, enter 3.
10. Under Additional settings, for Treat missing data as, choose ignore (maintain alarm state) so
that missing data points don't trigger alarm state changes.

For Percentiles with low samples, choose ignore (maintain the alarm state) so that the alarm
evaluates only situations with adequate numbers of data samples.
11. Under Actions, for Whenever this alarm, choose State is ALARM. For Send notification to,
choose an existing SNS topic or create a new one.
To create an SNS topic, choose New list. For Send notification to, enter a name for the SNS topic (for
example, myHighCpuAlarm), and for Email list, enter a comma-separated list of email addresses to be
notified when the alarm changes to the ALARM state. Each email address is sent a topic subscription
confirmation email. You must confirm the subscription before notifications can be sent.
12. Choose Create Alarm.

Setting up a Latency Alarm Using the AWS CLI

To create a load balancer latency alarm that sends email

1. Set up an SNS topic. For more information, see Setting Up Amazon SNS Notifications.
2. Create the alarm using the put-metric-alarm command as follows:
aws cloudwatch put-metric-alarm --alarm-name lb-mon \
  --alarm-description "Alarm when Latency exceeds 100s" \
  --metric-name Latency --namespace AWS/ELB --statistic Average --period 60 \
  --threshold 100 --comparison-operator GreaterThanThreshold \
  --dimensions Name=LoadBalancerName,Value=my-server --evaluation-periods 3 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:my-topic --unit Seconds
3. Test the alarm by forcing an alarm state change using the set-alarm-state command.
a. Change the alarm state from INSUFFICIENT_DATA to OK.
aws cloudwatch set-alarm-state --alarm-name lb-mon --state-reason "initializing" --state-value OK
b. Change the alarm state from OK to ALARM.
aws cloudwatch set-alarm-state --alarm-name lb-mon --state-reason "initializing" --state-value
ALARM
c. Check that you have received an email notification about the alarm.

12. Why does ELB have a 60-second request timeout, and can it be changed?

For each request that a client makes through a Classic Load Balancer, the load balancer maintains two
connections. The front-end connection is between the client and the load balancer. The back-end
connection is between the load balancer and a registered EC2 instance. The load balancer has a
configured idle timeout period that applies to its connections. If no data has been sent or received by
the time that the idle timeout period elapses, the load balancer closes the connection. To ensure that
lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each
idle timeout period elapses, and increase the length of the idle timeout period as needed.

If you use HTTP and HTTPS listeners, we recommend that you enable the HTTP keep-alive option for
your instances. You can enable keep-alive in the web server settings for your instances. Keep-alive,
when enabled, enables the load balancer to reuse back-end connections until the keep-alive timeout
expires. To ensure that the load balancer is responsible for closing the connections to your instance,
make sure that the value you set for the HTTP keep-alive time is greater than the idle timeout setting
configured for your load balancer.
Note that TCP keep-alive probes do not prevent the load balancer from terminating the connection
because they do not send data in the payload.

Configure the idle timeout using the console

By default, Elastic Load Balancing sets the idle timeout for your load balancer to 60 seconds. Use the
following procedure to set a different value for the idle timeout.

To configure the idle timeout setting for your load balancer

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
3. Select your load balancer.
4. On the Description tab, choose Edit idle timeout.
5. On the Configure Connection Settings page, type a value for Idle timeout. The range for the
idle timeout is from 1 to 4,000 seconds.
6. Choose Save.

Configure the idle timeout using the AWS CLI

Use the following modify-load-balancer-attributes command to set the idle timeout for your load
balancer:

aws elb modify-load-balancer-attributes --load-balancer-name my-loadbalancer \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":30}}"

The following is an example response:

{
    "LoadBalancerAttributes": {
        "ConnectionSettings": {
            "IdleTimeout": 30
        }
    },
    "LoadBalancerName": "my-loadbalancer"
}

13. How do we create a VPC load balancer in AWS?

Step 1: Select a load balancer type

Step 2: Define your load balancer


Step 3: Assign security groups to your load balancer in a VPC

Step 4: Configure health checks for your EC2 instances

Step 5: Register EC2 instances with your load balancer

Step 6: Tag your load balancer (optional)

Step 7: Create and verify your load balancer

Step 8: Delete your load balancer (optional)
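
A hedged AWS CLI sketch of the same flow for a Classic Load Balancer in a VPC; the name, subnet,
security group, and instance IDs are placeholders:

# Steps 2-3: define the load balancer in a VPC subnet with a security group
aws elb create-load-balancer \
  --load-balancer-name my-vpc-clb \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80 \
  --subnets subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0

# Step 4: configure health checks
aws elb configure-health-check \
  --load-balancer-name my-vpc-clb \
  --health-check Target=HTTP:80/index.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=10

# Step 5: register EC2 instances
aws elb register-instances-with-load-balancer \
  --load-balancer-name my-vpc-clb --instances i-0123456789abcdef0

# Step 6 (optional): tag the load balancer
aws elb add-tags --load-balancer-names my-vpc-clb --tags Key=project,Value=demo

# Step 8 (optional): delete the load balancer when no longer needed
aws elb delete-load-balancer --load-balancer-name my-vpc-clb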

14. What is a VPC load balancer?

A VPC load balancer is simply an Elastic Load Balancer created inside a Virtual Private Cloud: it is attached
to the VPC's subnets and security groups and distributes incoming traffic among the instances running in
those subnets (Availability Zones), as described in the steps above.
