August 15, 2016
An often under-appreciated service on AWS is Route 53. One could make the mistake of thinking of AWS Route 53 as just another DNS service. To the contrary, by using AWS Route 53 for global load balancing, you can benefit from improved latency and better availability for your application stack.
How is this done? This article gives you an overview of how it can be set up, and hopefully it will provide a few tips to help you along the way.
If you’re unfamiliar with load balancing or global load balancing, a quick explanation is in order.
Load balancing is a method of distributing application workload across multiple computing resources. This is typically done in a few different ways, most commonly through DNS or through server-side load balancers.
Global load balancing involves routing application traffic to geographically diverse servers or data centers. This can be done with both physical and virtual infrastructure. We'll be discussing both DNS-based and server-side load balancing, using Route 53 and Elastic Load Balancing.
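To make the DNS side of this concrete, here is a minimal Python sketch (standard library only) that resolves a hostname and prints every address the DNS answer contains. A DNS-load-balanced name will typically return multiple or different records depending on the resolver and the client's location; the hostname used here is just a placeholder.

```python
import socket

def resolve_all(hostname, port=443):
    # Resolve the name and collect every address in the answer.
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address itself is the first element of sockaddr.
    results = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    for address in resolve_all("example.com"):
        print(address)
```

Running this against a name served by DNS-based load balancing is an easy way to sanity-check which endpoints clients in your location are actually being handed.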
There are a number of use cases where you can benefit from global load balancing.
Say you want to set up virtual data centers in multiple AWS regions around the globe. While it is possible to globally load balance between both physical and virtual data centers, for the purposes of this discussion, our infrastructure will exist solely on AWS.
In this example, we will be using four different AWS regions to provide application availability to our clients: US-West-1, US-East-1, AP-Northeast-1, and EU-Central-1.
Using these regions will provide reasonable coverage for our theoretical globally distributed web clients. Naturally, for your own application implementation, you will need to assess where your clients are located and which regions are best suited to serve them. You may only have US clients, in which case US-based AWS regions may be more appropriate than the global locations chosen here.
Plan and deploy your application stack in each location. Some things you will need to consider are redundancy, Auto Scaling groups in each Availability Zone, AWS Elastic Load Balancing deployment and cross-AZ load balancing, data replication between AZs and regions, VPC layout, connectivity between regions, application and instance monitoring, and your deployment and configuration management system.
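As one small example of the per-region setup work, the sketch below uses boto3 to enable cross-zone load balancing on a Classic ELB, so traffic is spread evenly across instances in every Availability Zone rather than only the AZ the request happened to land in. The load balancer name and region are placeholders for illustration.

```python
import boto3

# Placeholder region and load balancer name; substitute your own.
elb = boto3.client("elb", region_name="eu-central-1")

elb.modify_load_balancer_attributes(
    LoadBalancerName="app-elb-euc1",
    LoadBalancerAttributes={
        # Distribute requests across instances in all enabled AZs.
        "CrossZoneLoadBalancing": {"Enabled": True},
    },
)
```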
As with everything on this list, be sure that you give careful consideration to your IP addressing scheme when connecting VPCs together. It is easy to overlook this when planning your deployment, and IP address conflicts could result if you simply use the default IP addressing in each VPC. This will make routing and network connectivity between regions difficult, if not impossible.
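A simple way to keep yourself honest here is to write the CIDR plan down and check it programmatically. This is only a sketch with made-up address ranges; the point is that no two VPCs share address space, so routing between regions stays unambiguous.

```python
import ipaddress
from itertools import combinations

# One non-overlapping CIDR block per region/VPC (illustrative values only).
vpc_cidrs = {
    "us-west-1":      "10.10.0.0/16",
    "us-east-1":      "10.20.0.0/16",
    "ap-northeast-1": "10.30.0.0/16",
    "eu-central-1":   "10.40.0.0/16",
}

# Fail loudly if any two planned VPC ranges overlap.
for (region_a, cidr_a), (region_b, cidr_b) in combinations(vpc_cidrs.items(), 2):
    if ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b)):
        raise ValueError(f"CIDR overlap between {region_a} and {region_b}")

print("No overlapping VPC CIDR blocks")
```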
Our primary domain to be resolved by the web clients will be example.com. It is helpful to become familiar with Route 53 routing policies prior to attempting to configure your DNS.
The DNS records will be as follows:
Weighted domains: aws-elb-usw1.example.com, aws-elb-use1.example.com, aws-elb-apne1.example.com, aws-elb-euc1.example.com
The latency-based domains will point to the weighted domains. Each latency-based domain will list all of the weighted domain records given above.
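One way to express the weighted domains above with boto3 is as weighted alias record sets, each pointing at its region's ELB. This is a sketch only: the hosted zone ID, the ELB DNS names, and the ELB alias hosted zone IDs are placeholders that you would pull from your own account (Route 53 console, ELB console, or the describe-load-balancers API).

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "ZEXAMPLE12345"  # placeholder: the example.com hosted zone

# (record name, ELB DNS name, ELB alias zone ID, weight) -- all placeholders.
weighted_records = [
    ("aws-elb-usw1.example.com.",  "app-elb-usw1.us-west-1.elb.amazonaws.com.",       "ZELBUSW1",  100),
    ("aws-elb-use1.example.com.",  "app-elb-use1.us-east-1.elb.amazonaws.com.",       "ZELBUSE1",  100),
    ("aws-elb-apne1.example.com.", "app-elb-apne1.ap-northeast-1.elb.amazonaws.com.", "ZELBAPNE1", 100),
    ("aws-elb-euc1.example.com.",  "app-elb-euc1.eu-central-1.elb.amazonaws.com.",    "ZELBEUC1",  100),
]

changes = []
for name, elb_dns, elb_zone_id, weight in weighted_records:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": name.split(".")[0],  # must be unique per weighted set
            "Weight": weight,
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Regional weighted aliases to ELBs", "Changes": changes},
)
```

Setting EvaluateTargetHealth to True lets Route 53 skip an alias target whose ELB has no healthy instances, which pairs nicely with the latency-based records layered on top.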
Weighted domains give you the ability to route local client traffic to the region of your choice. For example, if you want to take your data center in EU-Central-1 offline for maintenance, you could send that region's traffic to another region, such as US-East-1, by changing the weighting of your records.
In the case above, you might assign a weight of 100 to aws-elb-euc1.example.com and 0 to all of the other domains. When you want to redirect the traffic for your EU clients, you would increase the US-East-1 weight to 100, then reduce the EU-Central-1 weight to 0.
Traffic will then shift to the domain with the heavier weight.
One helpful tip: be sure to increase the weight of your target domain before reducing the weight of the one you want to direct traffic away from. Otherwise, you run the risk of leaving clients with no endpoint to connect to, effectively blackholing the traffic.
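Here is a rough boto3 sketch of that sequence: raise the weight on the target region, wait for Route 53 to report the change as INSYNC, and only then drop the region being taken offline to 0. As before, the hosted zone ID, ELB names, and alias zone IDs are placeholders.

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "ZEXAMPLE12345"  # placeholder hosted zone for example.com

def set_weight(name, set_identifier, alias_zone_id, alias_dns, weight):
    """UPSERT a weighted alias record, then wait until the change has
    propagated to all Route 53 name servers (INSYNC)."""
    response = route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": set_identifier,
                "Weight": weight,
                "AliasTarget": {
                    "HostedZoneId": alias_zone_id,
                    "DNSName": alias_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )
    waiter = route53.get_waiter("resource_record_sets_changed")
    waiter.wait(Id=response["ChangeInfo"]["Id"])

# Shift EU traffic to US-East-1 for maintenance: target region first.
set_weight("aws-elb-use1.example.com.", "aws-elb-use1", "ZELBUSE1",
           "app-elb-use1.us-east-1.elb.amazonaws.com.", 100)
set_weight("aws-elb-euc1.example.com.", "aws-elb-euc1", "ZELBEUC1",
           "app-elb-euc1.eu-central-1.elb.amazonaws.com.", 0)
```

Keep in mind that clients and resolvers may continue using cached answers until their TTLs expire, so the shift is gradual rather than instantaneous.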
You should be cognizant of a few things when setting up your global load balancing infrastructure.
In this article, we’ve covered load balancing, global load balancing, their use cases, AWS Route 53, and a variety of tricks and tips to consider when you look to plan and set up your AWS Route 53 infrastructure.
One topic not covered here is log analytics. Sumo Logic is built on AWS and has deep integrations with AWS services including AWS Lambda, Amazon VPC Flow Logs, CloudTrail, AWS Elastic Load Balancing, and Kinesis, to name a few. Visit the AWS Apps page for more information.
Editor’s Note: Global Load Balancing Using AWS Route 53 is published by the Sumo Logic DevOps Community. If you’d like to learn more or contribute, visit devops.sumologic.com. Also, be sure to visit Sumo Logic Developers for free tools, APIs, and example code that will enable you to monitor and troubleshoot applications from code to production.