Chapter 9: Amazon Route 53



Why is it called this?

AWS supposedly named the service Route 53 because all DNS requests are handled through port 53, and the "route" part is a nod to the historic US Route 66.

The title of Chapter 12, "Domain Name System (DNS) and Amazon Route 53", is a bit confusing. I initially took this to mean that there's a DNS, and then there's Route 53, which was puzzling as I thought Route 53 was the Amazon DNS. This is true. However, the chapter introduces DNS concepts first before talking about Route 53, and it turns out that Route 53 is really "DNS+", as it offers DNS, domain registration and health checking.

The second thing I found odd about this chapter was the discussion of top-level domains (TLDs). There's no mention of geographical TLDs, i.e. country code TLDs (ccTLDs). This is odd, as most people in the world don't actually live in the USA.

Because Route 53 also handles domain registration, this matters: different countries have different rules about what you can register in their ccTLDs, who can register it, and how.

For example, what if you want to register an Australian domain?

Only second-level domains are available: Amazon Route 53 supports the second-level domains .com.au and .net.au.

Route 53 can be used as a DNS service and will work for IP addresses (e.g. EC2), CloudFront, S3 and ELB. But what about other services such as AWS API Gateway, Lambda, etc.?

It looks like it's possible to go from Route 53 to CloudFront to API Gateway.

Resource record sets use hosted zones to do this; there are public and private hosted zones.

Not sure I understand this material very well, and it's hard to find a good overview. TODO

A Hosted Zone is a collection of resource record sets hosted by Route 53, and represents a single domain name. A private hosted zone answers DNS queries from within the VPCs it's associated with; a public hosted zone answers queries from the public internet.
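To make this concrete, here's a small sketch of the parameters you'd hand to boto3's `create_hosted_zone` for a public versus a private hosted zone. No AWS call is made here; the domain names, caller references and VPC ID are invented placeholders.

```python
# Sketch: build the parameters for route53.create_hosted_zone.
# All names and IDs below are placeholder values.

def public_zone_params(domain, caller_ref):
    return {
        "Name": domain,
        "CallerReference": caller_ref,  # any unique string; guards against retries
    }

def private_zone_params(domain, caller_ref, vpc_region, vpc_id):
    # A private hosted zone is associated with one or more VPCs;
    # Route 53 then answers queries for the domain only inside those VPCs.
    return {
        "Name": domain,
        "CallerReference": caller_ref,
        "VPC": {"VPCRegion": vpc_region, "VPCId": vpc_id},
        "HostedZoneConfig": {"PrivateZone": True},
    }

pub = public_zone_params("example.com", "ref-001")
priv = private_zone_params("internal.example.com", "ref-002",
                           "ap-southeast-2", "vpc-12345678")
```

You'd then pass these dicts as keyword arguments (`client.create_hosted_zone(**priv)`), assuming suitable credentials.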


Supported Record Types

The docs are here; a summary:

A Format

The value for an A record is an IPv4 address in dotted decimal notation.

AAAA Format

The value for a AAAA record is an IPv6 address in colon-separated hexadecimal format.

CNAME Format


A CNAME Value element is the same format as a domain name.

MX Format


Each value for an MX resource record set actually contains two values:
  • An integer that represents the priority for an email server
  • The domain name of the email server

NAPTR Format

A Name Authority Pointer (NAPTR) is a type of resource record set that is used by Dynamic Delegation Discovery System (DDDS) applications to convert one value to another or to replace one value with another. 

NS Format

An NS record identifies the name servers for the hosted zone. The value for an NS record is the domain name of a name server. 


PTR Format


A PTR record Value element is the same format as a domain name.

SOA Format

A start of authority (SOA) record provides information about a domain and the corresponding Amazon Route 53 hosted zone. 

SPF Format

SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the record type is SPF. 

SRV Format

An SRV record Value element consists of four space-separated values. The first three values are decimal numbers representing priority, weight, and port. The fourth value is a domain name. 
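A quick sketch of pulling those four fields apart; the example value is made up.

```python
# Sketch: parse the four space-separated fields of an SRV record value:
# priority, weight and port are decimal numbers; the fourth is a domain name.

def parse_srv(value):
    priority, weight, port, target = value.split()
    return {"priority": int(priority), "weight": int(weight),
            "port": int(port), "target": target}

srv = parse_srv("10 5 443 server.example.com.")
# -> {"priority": 10, "weight": 5, "port": 443, "target": "server.example.com."}
```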

TXT Format

A TXT record contains a space-separated list of double-quoted strings. A single string can include a maximum of 255 characters. In addition to the characters that are permitted unescaped in domain names, space is allowed in TXT strings. All other octet values must be quoted in octal form. Unlike domain names, case is preserved in character strings.
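The 255-character-per-string limit matters in practice for long values like DKIM keys: you split the value into multiple quoted strings within one record. A minimal sketch of that chunking (the SPF value is just an illustrative example):

```python
# Sketch: split a long TXT value into the <=255-character quoted
# strings that a single TXT record requires.

def txt_chunks(value, limit=255):
    """Return the value as a list of double-quoted strings, each <= limit chars."""
    return ['"%s"' % value[i:i + limit] for i in range(0, len(value), limit)]

short = " ".join(txt_chunks("v=spf1 include:example.com ~all"))  # one chunk
long_value = " ".join(txt_chunks("x" * 600))  # 600 chars -> 3 quoted chunks
```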


Amazon Route 53 currently supports the following DNS record types:
  • A (address record)
  • AAAA (IPv6 address record)
  • CNAME (canonical name record)
  • MX (mail exchange record)
  • NAPTR (name authority pointer record)
  • NS (name server record)
  • PTR (pointer record)
  • SOA (start of authority record)
  • SPF (sender policy framework)
  • SRV (service locator)
  • TXT (text record)
  • Additionally, Amazon Route 53 offers ‘Alias’ records (an Amazon Route 53-specific virtual record). Alias records are used to map resource record sets in your hosted zone to Amazon Elastic Load Balancing load balancers, Amazon CloudFront distributions, AWS Elastic Beanstalk environments, or Amazon S3 buckets that are configured as websites. Alias records work like a CNAME record in that you can map one DNS name (example.com) to another ‘target’ DNS name (elb1234.elb.amazonaws.com). They differ from a CNAME record in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record.
We anticipate adding additional record types in the future.
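As a sketch of how the plain-record/Alias-record distinction shows up in practice, here is the shape of a ChangeBatch you'd send via boto3's `change_resource_record_sets`. The names, IP, ELB DNS name and hosted-zone ID are placeholders, not real values.

```python
# Sketch: a ChangeBatch that UPSERTs a plain A record and an Alias
# record pointing at an ELB. All names, IPs and zone IDs are
# illustrative placeholders.

def a_record(name, ips, ttl=300):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "TTL": ttl,
            "ResourceRecords": [{"Value": ip} for ip in ips],
        },
    }

def alias_record(name, target_dns, target_zone_id):
    # Alias records carry no TTL of their own; Route 53 resolves the
    # target and returns plain A records (and IPs) to resolvers.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": target_zone_id,  # the *target's* zone, e.g. the ELB's
                "DNSName": target_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

change_batch = {
    "Changes": [
        a_record("www.example.com.", ["203.0.113.10"]),
        alias_record("example.com.",
                     "elb1234.elb.amazonaws.com.",
                     "ZELB-PLACEHOLDER"),
    ]
}
```

Note the Alias entry has an `AliasTarget` instead of `TTL`/`ResourceRecords`, which is exactly the "not visible to resolvers" behaviour described above.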

Routing Policy

The summary from the docs is:






Simple Routing Policy
Use a simple routing policy when you have a single resource that performs a given function for your domain, for example, one web server that serves content for the example.com website. In this case, Amazon Route 53 responds to DNS queries based only on the values in the resource record set, for example, the IP address in an A record.
Weighted Routing Policy
Use the weighted routing policy when you have multiple resources that perform the same function (for example, web servers that serve the same website) and you want Amazon Route 53 to route traffic to those resources in proportions that you specify (for example, one quarter to one server and three quarters to the other). For more information about weighted resource record sets, see Weighted Routing.
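The proportion maths is simple: each record is chosen with probability equal to its weight divided by the sum of the weights for all records with the same name and type. A tiny sketch (server names are placeholders):

```python
# Sketch: how Route 53 splits traffic across weighted record sets.

def traffic_share(weights):
    """Map each record's weight to its fraction of DNS responses."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# "one quarter to one server and three quarters to the other":
shares = traffic_share({"server-a": 1, "server-b": 3})
# shares == {"server-a": 0.25, "server-b": 0.75}
```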
Latency Routing Policy
Use the latency routing policy when you have resources in multiple Amazon EC2 data centers that perform the same function and you want Amazon Route 53 to respond to DNS queries with the resources that provide the best latency. For example, you might have web servers for example.com in the Amazon EC2 data centers in Ireland and in Tokyo. When a user browses to example.com, Amazon Route 53 chooses to respond to the DNS query based on which data center gives your user the lowest latency. For more information about latency resource record sets, see Latency-Based Routing.
Failover Routing Policy (This uses Health Checks!)
Use the failover routing policy when you want to configure active-passive failover, in which one resource takes all traffic when it's available and the other resource takes all traffic when the first resource isn't available. For more information about failover resource record sets, see Configuring Active-Passive Failover by Using Amazon Route 53 Failover and Failover Alias Resource Record Sets. For information about creating failover resource record sets in a private hosted zone, see Configuring Failover in a Private Hosted Zone.
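A sketch of the pair of record sets behind active-passive failover, assuming boto3-style payloads; the health check ID, names and IPs are placeholders. The PRIMARY record answers while its health check passes, and the SECONDARY answers only when the primary is unhealthy.

```python
# Sketch: build the PRIMARY/SECONDARY record-set pair for
# active-passive failover. All values are placeholders.

def failover_pair(name, primary_value, secondary_value, health_check_id):
    common = {"Name": name, "Type": "A", "TTL": 60}
    primary = dict(common,
                   SetIdentifier="primary",
                   Failover="PRIMARY",
                   HealthCheckId=health_check_id,  # this is what triggers failover
                   ResourceRecords=[{"Value": primary_value}])
    secondary = dict(common,
                     SetIdentifier="secondary",
                     Failover="SECONDARY",
                     ResourceRecords=[{"Value": secondary_value}])
    return [primary, secondary]

records = failover_pair("www.example.com.", "203.0.113.10",
                        "198.51.100.20", "hc-placeholder-id")
```

The low TTL (60s here) matters: resolvers holding a cached answer won't fail over until it expires.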
Geolocation Routing Policy
Use the geolocation routing policy when you want Amazon Route 53 to respond to DNS queries based on the location of your users. For more information about geolocation resource record sets, see Geolocation Routing.








Note that Geolocation can get tricky due to overlapping geographic locations, and the fact that not all IPs are mapped to a location (so you need a default resource record set).
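A sketch of what that looks like: a country-specific record plus the catch-all default record (Route 53 uses a "*" country code for the default). Names and IPs are placeholders.

```python
# Sketch: geolocation record sets including the default record that
# answers for IPs Route 53 can't map to a location.

def geo_record(name, value, set_id, country=None, default=False):
    geo = {"CountryCode": "*"} if default else {"CountryCode": country}
    return {
        "Name": name, "Type": "A", "TTL": 300,
        "SetIdentifier": set_id,
        "GeoLocation": geo,
        "ResourceRecords": [{"Value": value}],
    }

records = [
    geo_record("www.example.com.", "203.0.113.10", "au", country="AU"),
    geo_record("www.example.com.", "198.51.100.20", "default", default=True),
]
```

Without the second record, users whose IPs Route 53 can't geolocate would get no answer at all.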

And health checks are used for failover. See:

http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

The book notes that it could take up to 90s to detect a failure and fail over; this is harder to find in the docs. I also wonder how ELB and Route 53 health checks work together, e.g. if you are using both ELB and Route 53?

This deep dive is recommended watching, as are the slides.


I wonder how different policies work together (which they obviously have to). E.g. if you use Geolocation to limit access to regions, combined with failover, and the only region people in a geolocation are allowed to access becomes unhealthy, then what?!

There is another related service, Traffic Flow. It would be nice if there were GUI tool support for this. There is!


Which other Amazon services does Route 53 work with? From the docs:

You can use Amazon Route 53 to route traffic to a variety of AWS resources.
Amazon CloudFront
Amazon EC2
AWS Elastic Beanstalk
Elastic Load Balancing
Amazon RDS
Amazon S3
Amazon WorkMail

More about using ELB with Route 53 (not sure this answers my questions about how the health checks interact).

Ah, maybe the Route 53 health checking only checks the health of ELBs and their associated instances, and then stops using an ELB with no healthy instances?

A blog which mentions the introduction of the Route 53 + ELB service.

Until today, it was difficult to use DNS Failover if your application was running behind ELB to balance your incoming traffic across EC2 instances, because there was no way to configure Route 53 health checks against an ELB endpoint: to create a health check, you need to specify an IP address to check, and ELBs don't have fixed IP addresses.

There are also active-active and active-passive failover options.

Fail Whale


  • The application's fail.domain.com Route 53 record is ALIASed to a CloudFront (Amazon's content delivery service) distribution of an S3 bucket hosting a static "fail whale" version of the application.
  • The application's www.domain.com Route 53 record is ALIASed to prod.domain.com (as primary target) and fail.domain.com (as secondary target) with the Failover routing policy. This ensures www.domain.com routes to the production ELBs if at least one of them is healthy, or to the "fail whale" if all of them appear to be unhealthy.
Also referred to here:

For example, you could create a nice fail whale page with a friendly message to your customers, and perhaps a phone number or email address so that your customers can reach you even though your website is down.

Really? This would work really well for a high traffic/mission critical web site (e.g. Census). NOT.

And why would you need a "Fail Whale" static site on AWS? Surely you would just architect for HA across regions and ensure that the actual dynamic web site was always available? I guess the zombie apocalypse scenario caused by pushing bad code (or bad AWS infrastructure code, perhaps) everywhere at once is plausible (also from the cloudnative blog):


  • Total zombie apocalypse with all application instances failing their health checks (or the slightly more likely case of pushing bad application code globally) - handled by Route 53. Requests are routed to a static "fail whale" version of an application served from AWS edge locations and hosted on S3.
And the original Fail Whale (Twitter). Obviously they weren't using AWS?! No, they weren't. See this blog on Twitter infrastructure and software history.



I guess I've never kept up with Twitter, so missed this (more so now that President Trump has taken over Twitter :-().

My favourite Fail Whale is the Fall Whale from The Hitchhiker's Guide to the Galaxy!



And the bad bit, the Ground...




And finally, something else slightly scary (other than falling whales). It's not about Route 53 but CloudFront, so what's it doing in this chapter?

  • (Optionally) Application's content (both static and dynamic) is served using CloudFront - this ensures the content is delivered to clients from AWS edge locations spread all over the world with minimal latency.
  • Serving dynamic content from CDN, cached for short periods of time like several seconds, takes the load off the application and further improves its latency and responsiveness.
Really????


  • From Edge Location to Origin – The nature of dynamic content requires repeated back and forth calls to the origin server. CloudFront edge locations collapse multiple concurrent requests for the same object into a single request. They also maintain persistent connections to the origins (with the large window size). Connections to other parts of AWS are made over high-quality networks that are monitored by Amazon for both availability and performance. This monitoring has the beneficial side effect of keeping error rates low and window sizes high.

When Content Isn’t So Dynamic

Sometimes content changes infrequently – for example, your favicon probably changes rarely. Blog posts, once written, seldom change. Serving these items from a CDN is still an effective way to reduce load on your webserver and reduce latency for your users. But when things do change – such as updated images, additional comments, or new posts, how can you use CloudFront to serve the new content? How can you make sure CloudFront works well with your updated content?

Object versioning

A common technique used to enable updating static objects is called object versioning. This means adding a version number to the file name, and updating the link to the file when a new version is available. This technique also allows an entire set of resources to be versioned at once, when you create a versioned directory name to hold the resources.
Object versioning works well with CloudFront. In fact, it is the recommended way to update static resources that change infrequently. The alternative method, invalidating objects, is more expensive and difficult to control.
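A minimal sketch of the versioning idea: embed a short content hash in the file name, so each revision gets a new URL and CloudFront fetches the new object rather than serving a stale cached copy. The file names and helper are illustrative, not any particular plugin's API.

```python
# Sketch: object versioning for CDN cache busting. A new content
# hash in the name means a new URL, so the CDN treats each revision
# as a brand-new object and never needs invalidation.
import hashlib

def versioned_name(filename, content):
    digest = hashlib.md5(content.encode()).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

old = versioned_name("styles.css", "body { color: black; }")
new = versioned_name("styles.css", "body { color: navy; }")
# Different content -> different name -> the CDN fetches the new object.
assert old != new
```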

Combining the Above Techniques

You can use a combination of the above techniques to create a low-latency service that caches sometimes-dynamic content. For example, a WordPress blog could be optimized by integrating these techniques into the WordPress engine, perhaps via a plugin. Here’s what you’d do:
  • Create a CloudFront distribution for the site, setting its custom origin to point to the webserver.
  • Poke holes in the distribution necessary for the admin, login, and forms pages.
  • Create new versions of pages, images, etc. when they change, and new versions of the pages that refer to them.
Even though WordPress generates each page via PHP, this collection of techniques allows the pages to be served via CloudFront and also be updated when changes occur. I don’t know of a plugin that combines all these techniques, but I suspect the good folks at W3-EDGE, producers of the W3 Total Cache performance optimization framework I mentioned above, are already working on it.

PS


Recall that last month the CloudFront team announced lowering the minTTL customers can set on their objects, down to as low as 0 seconds to support delivery of dynamic content. 

I.e. you don't want to cache dynamic content. Ever? Well, maybe. In the diagram he shows an example with a 900 (s) minTTL for ad serving (bottom right). Are ads dynamic content? Yes, maybe. 900s looks like it is designed to load once per user session perhaps? 15 minutes is somewhat more than the average session duration (1-3 minutes), but may be ok as a cache refresh policy for ads.



