Pre-Warming the AWS Load Balancer

Is your load balancer overwhelmed by requests during a sudden traffic spike?

Is your load balancer returning 504 errors?

Is your load balancer returning target connection errors?

If you are facing such issues, read on for the solution.

Do you expect a spike in traffic? Say your stakeholders expect a ramp of 20,000 users in the first minutes after your website launches. How do you handle tens of thousands of users arriving that quickly? This is a classic fault-tolerance problem in AWS.

If you want to achieve fault tolerance in AWS, there are a few options:

Use a Load Balancer – placing your instances behind a load balancer is always a good idea because, no matter how much traffic increases, it is balanced across all the healthy instances.

Use an Auto Scaling Group – an Auto Scaling group can scale your instance fleet up or down to match demand, and it integrates directly with the load balancer (see the sketch after this list). This is a really powerful feature of AWS.
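To make the two options above concrete, here is a minimal sketch using boto3. It assumes an existing VPC and an existing Auto Scaling group; the names and IDs ("my-web-targets", "my-web-asg", the VPC ID) are placeholders, not values from this article. It creates an ALB target group and attaches the Auto Scaling group to it, so instances launched by the group register behind the load balancer automatically.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Create a target group that the ALB will route traffic to.
# Name and VpcId below are hypothetical placeholders.
tg = elbv2.create_target_group(
    Name="my-web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/health",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Attach the target group to an existing Auto Scaling group so that
# every instance the group launches is registered with the ALB.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="my-web-asg",  # hypothetical ASG name
    TargetGroupARNs=[tg_arn],
)
```

You would still need an ALB listener rule forwarding to this target group; the point here is only that scaling and load balancing are wired together rather than managed by hand.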

The ELB/ALB is designed to handle large loads of traffic (20kb/sec) without a problem when that traffic increases gradually over a long period of time (several hours). However, when you expect a large increase in traffic over a short period of time, you have a problem.

AWS considers that if traffic increases by more than 50% in less than 5 minutes, it is being sent to the load balancer at a rate that grows faster than the ELB/ALB can scale up to meet it. What can you do in such cases?
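If you want to see how close your traffic is to that 50%-in-5-minutes threshold, a minimal sketch with boto3 can pull the RequestCount metric from CloudWatch in 5-minute buckets and show the change between buckets. It assumes an Application Load Balancer; the dimension value shown is a placeholder for your ALB's CloudWatch identifier ("app/<alb-name>/<hash>").

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholder ALB identifier as it appears in the CloudWatch dimension.
ALB_DIMENSION = "app/my-alb/0123456789abcdef"

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": ALB_DIMENSION}],
    StartTime=start,
    EndTime=end,
    Period=300,              # 5-minute buckets, matching the rule of thumb
    Statistics=["Sum"],
)

# Requests per 5-minute bucket and the percent change between buckets.
points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
for prev, cur in zip(points, points[1:]):
    change = (cur["Sum"] - prev["Sum"]) / prev["Sum"] * 100 if prev["Sum"] else float("inf")
    print(f'{cur["Timestamp"]:%H:%M}  requests={cur["Sum"]:.0f}  change={change:+.1f}%')
```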

Well, you need to contact AWS and request an operation called “pre-warming”. What does that mean? It means that AWS Support will configure the load balancer in advance to have an appropriate level of capacity based on the expected traffic. There is a full list of answers that AWS needs in order to do that, and I share that list below with some notes from when we went through this operation:


1. Traffic delta or request rate expected at surge (in Requests Per Second)

2. Average amount of data passing through the ELB per request/response pair (in bytes)

    This information can be calculated from the load balancer access logs (see the sketch after this list).

3. Rate of traffic increase, i.e. % increase over a time period

4. Are Keep-Alives used on the back-end?

5. Percent of traffic using SSL termination on the ELB

6. Is the back-end scaled to event/spike levels? [Y/N] If no, when will you scale the back-end, and with how many and what type of back-end instances?

7. Start date/time and timezone for elevated traffic patterns

8. End date/time and timezone for elevated traffic patterns

9. A brief description of your use case. What is driving this traffic? (e.g. application launch, event driven like marketing/product launch/sale, etc)
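For questions 2 and 5, here is a minimal sketch that estimates the average bytes per request/response pair and the share of TLS-terminated traffic from the access logs. It assumes Application Load Balancer access logs in the standard space-separated ALB format, already downloaded from S3 to the placeholder path shown (e.g. via `aws s3 sync`); Classic ELB logs use different field positions.

```python
import glob
import gzip

# Placeholder path to locally synced ALB access logs.
LOG_GLOB = "./alb-logs/*.log.gz"

requests = ssl_requests = total_bytes = 0

for path in glob.glob(LOG_GLOB):
    with gzip.open(path, "rt") as fh:
        for line in fh:
            fields = line.split()
            if len(fields) < 12:
                continue
            # ALB log fields: 0=type, 10=received_bytes, 11=sent_bytes
            requests += 1
            if fields[0] in ("https", "h2"):   # requests terminated over TLS
                ssl_requests += 1
            total_bytes += int(fields[10]) + int(fields[11])

if requests:
    print(f"Requests sampled:            {requests}")
    print(f"Avg bytes per request/resp:  {total_bytes / requests:.0f}")
    print(f"Percent SSL-terminated:      {100 * ssl_requests / requests:.1f}%")
```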

Raise a support ticket (or call) with AWS and provide all of the answers above, and the AWS team will pre-warm the load balancer for you.
