Amazon Route 53 Releases Auto Naming API for Service Name Management/Discovery (amazon.com)
88 points by MayBeColin on Dec 5, 2017 | 20 comments



This seems like a useful feature, but it would be nice to have a better understanding of how it affects rate limiting.

Historically, Route53 has been pretty useless for service discovery because of the heavy rate limiting: something around 5 updates per second across the whole account, with a 60-second window. I don't know if that's still the case, but it used to be quite painful, especially when you need to scale up the number of instances quickly.


Why would these instances not be behind an ELB?


There are service architectures where putting instances behind an ELB isn't ideal (RabbitMQ, for example, works great with haproxy in front of it, not so much with ELBs). ELBs aren't always a perfect fit, and depending on the connection lifecycle or scaling velocity you need, direct addressing of service instances may be required.


WebSockets. An ELB adds latency and is harder to debug.


Have you considered using NLBs instead?


That was before NLBs. But yeah, NLBs might be a good choice now.


It's not clear how this is different from what is currently possible. I'm not a Route53 guru, but can't you already a) create a subdomain like microservice.mydomain.com, b) create instances, and c) add the instances' IP addresses to an A or AAAA record for the subdomain?

Is it that they didn't have APIs for these operations and now they do?

I know I'm missing something.


If I'm reading the underlying docs correctly, previously you would have called ChangeResourceRecordSets[0] with quite a verbose XML document. It looks like you'd need to first query for the existing RR set, modify it, then update it, and deal with potential race conditions if two service instances start concurrently. Technically possible, but quite a bit of complexity.

Now with auto-naming, you create a service[1], then a service instance calls RegisterInstance[2] on start-up with a much simpler JSON payload.

0: https://docs.aws.amazon.com/Route53/latest/APIReference/API_...

1: https://docs.aws.amazon.com/Route53/latest/APIReference/API_...

2: https://docs.aws.amazon.com/Route53/latest/APIReference/API_...
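For a concrete picture, here's roughly what the two flows look like in boto3. This is just a sketch: the zone/service IDs, names, and IPs are made up, and the exact parameter shapes should be checked against the linked docs.

    import boto3

    # Old flow: read-modify-write the record set yourself via
    # ChangeResourceRecordSets (and handle races between instances
    # starting concurrently if one record holds multiple IPs).
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",              # hypothetical zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.service.internal.",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "10.0.1.23"}],
                },
            }]
        },
    )

    # New flow: each instance simply registers itself against a
    # pre-created auto naming service on start-up.
    sd = boto3.client("servicediscovery")
    sd.register_instance(
        ServiceId="srv-example",                 # hypothetical service ID
        InstanceId="i-0123456789abcdef0",
        Attributes={"AWS_INSTANCE_IPV4": "10.0.1.23"},
    )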


There were APIs to do the operations you mentioned. I think for most services an ELB would do the trick: create an ELB, add instances to it, and create a CNAME or Alias record pointing to it.

The one time I've wanted this is with auto scaling groups for services that don't use ELBs. I haven't found docs on it, but if this could be used to add/remove DNS records based on auto scaling events, that would be useful. It would save you from using lifecycle hooks to trigger a Lambda function.
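For reference, the workaround described above is roughly an auto scaling notification driving a Lambda that keeps the record set in sync. A rough sketch, assuming the event carries the instance ID (the field names, zone ID, and record name here are hypothetical):

    import boto3

    route53 = boto3.client("route53")
    ec2 = boto3.client("ec2")

    HOSTED_ZONE_ID = "Z123EXAMPLE"            # hypothetical
    RECORD_NAME = "api.service.internal."     # hypothetical

    def handler(event, context):
        # Auto scaling launch notification; the field name varies by setup.
        instance_id = event["EC2InstanceId"]
        resp = ec2.describe_instances(InstanceIds=[instance_id])
        ip = resp["Reservations"][0]["Instances"][0]["PrivateIpAddress"]

        # Upsert a record for the new instance. A real handler would merge
        # with the existing record set, and a terminate handler would remove
        # the departing IP.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip}],
                    },
                }]
            },
        )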

Also, this seems to be a larger service discovery play; it just doesn't seem very fleshed out yet.


It's not a long article, just three paragraphs. The complexity is in handling health checks and the like. If one of your endpoints goes down, you want to update the DNS record to remove it, which means you have to build or run software that continuously monitors your endpoints and updates DNS accordingly. Now Route 53 will do those health checks for you automatically.
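Concretely, the health check is part of the service definition. Something along these lines with boto3 (a sketch; the namespace ID and resource path are hypothetical, and the parameter shapes may have shifted since launch):

    import boto3

    sd = boto3.client("servicediscovery")
    sd.create_service(
        Name="api",
        DnsConfig={
            "NamespaceId": "ns-example",         # hypothetical namespace
            "DnsRecords": [{"Type": "A", "TTL": 60}],
        },
        # Route 53 runs this check against each registered instance and
        # only returns healthy ones in DNS answers.
        HealthCheckConfig={
            "Type": "HTTP",
            "ResourcePath": "/health",
            "FailureThreshold": 1,
        },
    )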


When I first read the article I was under the impression that one would now be able to connect a zone to an autoscaling group (and, as you mentioned, avoid allocating internal ELBs), but it looks like it's really just some sugar on top of the existing API.

Am I right?


Only 8 records per answer? I wonder why. Many of us run services comprising hundreds of endpoints or more.


It seems to me it's up to 8 records (i.e. 8 IPs) per endpoint (e.g. "api.mysuperwebsite.com" would return up to 8 IPs that can serve requests for that endpoint). That would mean you can have hundreds of endpoints, but each of them would return at most 8 records. I do wonder how the returned IPs are selected (besides the health check) when there are more than 8 candidates, but that's a different question, I guess.


Considering this is DNS, there's the historical 512-byte limit on responses. That's not usually a hard limitation these days, but with hundreds of answers, each of which takes several bytes, you're really pushing past the ideal packet size. High chance of packets being dropped.
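Rough back-of-envelope, assuming compressed names, plain A records, and no EDNS0 (the per-record and question sizes are approximations):

    def answer_size(n):
        # 12-byte header + ~30-byte question + ~16 bytes per A record
        # (2-byte name pointer + type + class + TTL + rdlength + IPv4)
        return 12 + 30 + n * 16

    print(answer_size(8))    # ~170 bytes: comfortably under 512
    print(answer_size(100))  # ~1642 bytes: needs EDNS0 or TCP fallback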


> Considering this is DNS there has been a historical limit of 512 bytes.

Only with UDP transport; longer responses are told to requery via TCP.


These days EDNS0 allows bigger UDP responses in many cases, which may mean some fragment re-assembly. Unfortunately there are a staggering number of networks and firewalls that don't open TCP 53, and also ones that don't permit UDP fragments. So if you want DNS to work reliably /everywhere/, sadly it's wise to stay below the 512 limit.


We're talking about service discovery here. This is internal DNS traffic within AWS, where the issues you're referring to don't exist.


That means 8 IPs will be returned in a DNS query.


For folks who are using Consul or Zookeeper, would you consider replacing service discovery with this?


Not yet. I think this is just the first step towards an AWS-native service mesh solution. I suspect there will be future announcements over the coming months that continue to put all the pieces together.



