AWS data center latencies, visualized (benjdd.com)
529 points by orliesaurus 20 days ago | 209 comments



It would be really cool if it didn't just show the ping, but how much worse it is compared to the theoretical optimum (speed of light in fiber optic medium, which I believe is about 30% slower than c).

I raise this because I've been in multiple system architecture meetings where people were complaining about latency between data centers, only to later realize that it was pretty close to what is theoretically possible in the first place.


I'm under the impression that within the hyperscalers (and probably the big colo/hosting firms, too), this is known. It's important to them and to customers, especially when a customer is trying to architect an HA or DR system and needs to ensure they don't inadvertently choose a region (or even a zone that isn't physically in the same place as other zones in the same region) that has "artificially" high latency (which can exist for all kinds of legitimate reasons) from the primary zone.

This is not an uncommon scenario. My current employer specializes in SAP migrations to cloud and this is now a conversation we have with both AWS & GCP networking specialists when pricing & scoping projects... after having made incorrect assumptions and being bitten by unacceptable latency in the past.


Doesn't look like this is a ping[0]! Which is good. Rather it is a socket stream connecting over tcp/443. Ping (ICMP) would be a poor metric.

[0] https://github.com/mda590/cloudping.co/blob/8918ee8d7e632765...


ping is synonymous with echo-request, which is largely transport agnostic.

but you're right


why 443? are you assuming ssl here? serious question, I'm not sure. But if it is, wouldn't it be hard to disregard the weight of SSL in the metric?


The code closes the connection immediately after opening a plain TCP socket, so no SSL work is done. Presumably 443 is just a convenient port to use.


tcp/443 is likely an open port on the target service (Dynamodb based on the domain name). TLS is not involved.

ICMP ECHO would be a bad choice as it is deprioritized by routers[0].

[0] https://archive.nanog.org/sites/default/files/traceroute-201...


The script connects to the well-known 'dynamodb.' + region_name + '.amazonaws.com' server, which expects HTTPS.
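For the curious, a rough sketch of that style of measurement (not the actual cloudping code; the host and details are just illustrative). It times only the TCP connect to port 443 and closes the socket before any TLS handshake happens:

  const net = require('net');

  // Time a bare TCP connect (SYN / SYN-ACK / ACK) and close immediately,
  // so no TLS handshake or HTTP request is involved in the number.
  function tcpConnectMs(host, port = 443) {
    return new Promise((resolve, reject) => {
      const start = process.hrtime.bigint();
      const sock = net.connect(port, host, () => {
        const ms = Number(process.hrtime.bigint() - start) / 1e6;
        sock.destroy();
        resolve(ms);
      });
      sock.on('error', reject);
    });
  }

  // e.g. tcpConnectMs('dynamodb.eu-west-1.amazonaws.com').then(ms => console.log(ms.toFixed(1)));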


You would have to map out the cables to do that.

Light in fiber optic cable travels at roughly 70% of the speed of light, about 210,000 km/s. Earth's circumference is ~40,000 km, so a direct route to the opposite side of the Earth (~20,000 km) would take roughly 100 milliseconds one way, 200 ms round trip.


It’s pretty trivial to do this, any big fiber company will provide you with Google Earth KMZ files (protected by NDA) when considering a purchase. This is absolutely necessary when designing a redundant network or if you want lower latency.


Since light travels at 100% of the speed of light in a vacuum (by definition), I have wondered if latency over far distances could be improved by sending the data through a constellation of satellites in low earth orbit instead. Though I suspect the set of tradeoffs here (much lower throughput, much higher cost, more jitter in the latency due to satellites constantly moving around relative to the terrestrial surface) probably wouldn't make this worth it for a slight decrease in latency for any use case.


Hollow core fiber (HCF) is designed to substantially reduce the latency of normal fiber while maintaining equivalent bandwidth. It's been deployed quite a bit for low latency trading applications within a metro area, but might find more uses in reducing long-haul interconnect latency.


Absolutely! The distance to LEO satellites (like SpaceX's or Kuiper's) is low enough that you would beat the latency of fiber paths once the destination is far enough away.

In the past we just had line of sight microwave links all over the US instead.

I think it's just too damn expensive for your average webapp to cut out ten milliseconds from backend latency.


Yes. There are companies that sell microwave links over radio relay towers to various high frequency traders.


I am pretty sure this was one of the advertised strengths of Starlink. Technically the journey is a bit longer, but because you can rely on the full speed of light you still come out ahead.


Cable mapping would be nice, but 100 ms is a meaningfully long amount of time, enough to make even a straight-line comparison worthwhile.


clicking around that map, I don't see any examples where the latency is a long way out of line with the distance.

Obviously it's theoretically possible to do ~40% better by using hollow fibers and as-the-crow-flies fiber routing, but few are willing to pay for that.


The 'practical' way to beat fiber optics is to use either

(i) a series of overground direct microwave connections (often used by trading firms)

(ii) a series of laser links between low altitude satellites. This would be faster in principle for long distances, and presumably Starlink will eventually offer this service to people that are very latency sensitive


Low-bandwidth/low-latency people tend to also demand high reliability and consistency. A low-orbit satellite network might be fast but, because sats move too quickly, cannot be consistent in that speed. Sats also won't ever connect data centers, other than perhaps for administrative stuff. The bandwidth/reliability/growth potential just isn't there compared to bundles of traditional fiber.


> Low-bandwidth/low-latency people tend to also demand high reliability and consistency.

For trading applications, people will absolutely pay for a service that is hard down 75% of the time and has 50% packet loss the rest, but saves a millisecond over the fastest reliable line. Because otherwise someone else will be faster than you when the service is working.

They can get reliability and consistency with a redundant slower line.


Can you provide a source for this statement? The redundancy needed to transmit at desirable reliability with 50% packet loss would, I imagine, very quickly eat into any millisecond gains -- even with theoretically optimal coding.

Someone more familiar with Shannon than I could probably quickly back-of-the-napkin this.
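One crude back-of-the-napkin: a trade order fits in a single packet, so instead of clever coding you can just send duplicates down the fast link. With 50% independent packet loss, sending the same order 4 times gives a 1 - 0.5^4 ≈ 94% chance that at least one copy arrives, and whichever copy lands first still keeps the full millisecond advantage.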


Financial companies have taken over and upgraded/invested in microwave links because they can be a comparatively economical way to get "as the crow flies" distances between sites:

https://www.latimes.com/business/la-fi-high-speed-trading-20...

https://arstechnica.com/information-technology/2016/11/priva...

https://en.wikipedia.org/wiki/TD-2#Reemergence

I'm not sure about the high packet loss statement, but it wouldn't surprise me if it's true, as long as the latency is low enough to take advantage of arbitrage opportunities often enough to justify the cost.


Traders wouldn't use redundancy etc. Whenever a packet with info arrives, they would trade on that info (e.g. "$MSFT stock is about to go down, so sell before it drops!"). If there is packet loss, then some info is lost, and therefore some profitable trading opportunities are missed. But that's okay.

There are thousands of such opportunities each second - they can come from consumer 'order flow', i.e. information that someone would like to buy a stock tells you the price will slightly rise, so go buy ahead of them and sell after them in some remote location.


There is also a market for stocks that trade on different exchanges, resulting in fleeting differences in price between exchanges. Those who learn of price moves first can take advantage of such differences. In such cases, all you need to transmit is the current stock price. The local machine can then decide to buy or sell.


There's definitely a few billion a year in revenue for Starlink if they sell very low latency, medium bandwidth connections between Asia, the US, Europe and Australia to trading firms. Even if the reliability is much worse than fiber.


Starlink latencies sadly aren't competitive due to the routing paths it uses. And sadly there are currently no competitors to starlink.


The routing paths traveling via ground stations, you mean? My understanding is that they were experimenting with improvements to this, they just haven't deployed anything yet.


A radio will beat Starlink on ping times. Even a simple ham bouncing a signal off the ionosphere can win out over an orbiting satellite, at least for the very small amounts of data needed for a trade order. The difficulty in such schemes is reliability, which can be hit-or-miss depending on a hundred factors.


No, even with proposed inter-satellite routing paths, they are too slow. The trading industry has very much done the math on this.

The comparison is against radio and hollow-core fiber, not conventional fiber.


Laser links between satellites have been active since late 2022, or was there some additional improvement you're referring to?


I haven't kept track of that, but there is no other improvement. Even with the straightest possible laser links in space, they are too slow.


> sats move too quickly, cannot be consistent

Satellites in geostationary orbit are a (very common) thing.


Geostationary is so much farther away than LEO though, so the latency is worse.


AU <-> South Africa & South America is way less than distance.


Author here - Interesting. Someone on X also gave this idea to me. Any good resources for how to accurately compute this?


The theoretical best latency would be something like great_circle_distance_between_regions / speed_of_light_in_fiber, both of which are pretty easy to find. The first you can compute from the coordinates of each region pair, and the second is a constant you can look up.
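A minimal sketch of both pieces (rough coordinates just for illustration; ~200 km/ms is the usual approximation for light in fiber):

  // Great-circle (haversine) distance between two points, in km.
  function greatCircleKm(lat1, lon1, lat2, lon2) {
    const toRad = d => d * Math.PI / 180;
    const R = 6371; // mean Earth radius, km
    const dLat = toRad(lat2 - lat1);
    const dLon = toRad(lon2 - lon1);
    const a = Math.sin(dLat / 2) ** 2 +
              Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
    return 2 * R * Math.asin(Math.sqrt(a));
  }

  // Best-case round trip at ~200 km/ms, ignoring routing, switching and cable slack.
  const bestCaseRttMs = km => 2 * km / 200;

  // e.g. roughly Virginia (us-east-1) to Frankfurt (eu-central-1):
  const km = greatCircleKm(38.9, -77.5, 50.1, 8.7);
  console.log(km.toFixed(0) + ' km, ' + bestCaseRttMs(km).toFixed(1) + ' ms best-case RTT');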


That's what we did as well, via Wolfram Alpha; i.e. we were too lazy to look everything up ourselves and just asked it straight up how long a round trip would be between two destinations via fiber. We checked one result and it was spot on. This was six years ago though.


IIRC about 125 miles per ms


I have red-green color blindness, which makes it hard/impossible for me to distinguish between the <100ms and >200ms lines.

This affects about 8% of the male population btw; maybe you can add a color-blind mode. Very nice visualization otherwise!


As a quick workaround, you can set a CSS filter on the whole page: Either use dev tools to put a rule `filter: hue-rotate(60deg);` on the `body` element, or simply run `javascript:void(document.body.style.filter='hue-rotate(60deg)')` from the url bar.


Nice hack, thank you! :-)


Can also use Chrome extensions like Colorblindly to change all the colors. Tested it on the webpage now (I am not colorblind) and I see that the colors change: https://chromewebstore.google.com/detail/colorblindly/flonia...


You can also use ublock origin, it has a section for your own filters:

https://gist.github.com/aclarknexient/c39c83f2f97c3c6b1c307c...


benjdd.com##html:style(filter:hue-rotate(45deg))

Tested with uBlock Origin on Firefox Mobile.


In case you are not aware, you can put this sort of thing in a bookmark on the bookmark bar (both FF and Chrom{e|ium}, I assume other browsers too) for easy access. If you don't have the bookmark bar visible hit [ctrl][shift][B] to flip it on (and the same to flip it back off later if you don't want to keep it).


let i = 0; setInterval(() => document.body.style.filter=`hue-rotate(${i++}deg)`, 16);

Disco mode!

(better to use requestAnimationFrame but I'm lazy atm)


There you go:

  let i = 0;
  function bump() {
      document.body.style.filter = `hue-rotate(${i += 2}deg)`;
      requestAnimationFrame(bump);
  }
  requestAnimationFrame(bump);


Author here - Thanks for the suggestion Alex. From your perspective, what are some of the best ways you've seen people solve for this in the past? If you have links, please share.


Hi! Thanks for getting back to me, appreciate it. To be honest, I'm not an expert at all in this topic. I'd imagine choosing a colorblind-friendly palette (see: https://davidmathlogic.com/colorblind/ ) would be an easy fix. Alternatively, or in addition, you could use dotted/dashed/solid lines to visualize the latency buckets (rough sketch below). Might make for an interesting effect?

Also, it's common to hide this "colorblind mode" behind a checkbox somewhere, so you don't have to uglify your product. :-)
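Something like this is roughly what I mean; purely a sketch, the attribute and bucket names are made up and the palette is just one red/green-safe option:

  // Style latency lines by bucket with both color and dash pattern.
  // 'data-latency-bucket' and the bucket names are hypothetical markup.
  const styles = {
    low:  { stroke: '#1A85FF', dash: 'none' },  // blue, solid
    mid:  { stroke: '#FFB000', dash: '8 4'  },  // amber, dashed
    high: { stroke: '#DC267F', dash: '2 4'  },  // magenta, dotted
  };
  document.querySelectorAll('path[data-latency-bucket]').forEach(p => {
    const s = styles[p.dataset.latencyBucket];
    if (!s) return;
    p.setAttribute('stroke', s.stroke);
    p.setAttribute('stroke-dasharray', s.dash);
  });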


Cool, thank you for the input.


not op but this is one of the classic dataviz color palette pickers https://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3

https://venngage.com/tools/accessible-color-palette-generato... also seems nifty


I don't see any lines at all. Just blue dots representing the data centers. Very confusing.


You have to click one of the data centres


Tap a blue dot.


Click dots.


Do you have some kind of an accessibility tool for this? Maybe a whole screen filter that changes colors in a specific way so you can distinguish them?


No I don't. It's actually not a big deal in day-to-day life. People often go "But how the hell can you drive if you can't distinguish red from green at the stoplight?"... in reality it's more nuanced. As another comment already mentioned, perception varies even among colorblind people. I find it hard to distinguish R/G if the colors are not fully saturated or in low-light situations. Also the brain knows that "red is on top" and "green is at the bottom" at the stoplight and thereby improves the contrast for me. ;-)

My comment was meant to raise awareness of this issue with the author of the tool. Many video games, especially the ones with some kind of HUD, minimap, etc. these days have a color-blind mode.


Yeah I’ve seen colorblind modes in a lot of apps. It is great for those affected. And probably not a huge hassle to implement anyway.

But I was curious if one needs to rely on the application developers to deliver a solution or if there was a generalized filter or whatever that would work always. Maybe like screen readers, those work fine if the app does not do something horrible. But with some help from apps, they perform much better.


Color-blind man here. While I think it’s important to consider color blindness when choosing colors, it’s not actually 8% of men who would have trouble distinguishing the two colors. That number is somewhat lower. Perception of color varies even across colorblind people so just because someone says it works for them doesn’t mean it will work for someone else, and vice versa.


This is such an easy thing to overlook for those of us who aren't affected. Red/green tends to be a default selection, perhaps because of traffic lights?

I started putting myself in the shoes of a family member who is in the 8% and now I spend more time trying to pick better color schemes.


FWIW, anyone reading on a monochrome e-ink device will have similar issues.

Those are becoming somewhat more prevalent these days.


There are some Chrome extensions for colorblindness. They might be helpful to you; please check them out.


It's sad that this is the top comment for the post. Many people have stopped posting their crappy work online due to harsh comments like yours. There's no easy reply to your comment.

Maybe we should be less critical, especially with "make it fit my workflow" type comments, and even more so when something is built by some random guy in their free time and not, say, a project that is asking for money.


I think this is an uncharitable take – the parent comment is just proposing an improvement that would really help them given their colour-blindness (they also say they like the visualisation). Personally I find part of the reason for putting things on the internet is to allow other people to use them and obtain their feedback.


Every product has flaws which are outside of the design scope. Pointing that out is unnecessary. If I want feedback on my quick and dirty project, I want it within the scope of the design, not on missing features, bad accessibility, etc.

Especially since the HN crowd is very susceptible to sympathizing with accessibility comments. The return of "think of the poor kids in Africa".


> If I want feedback on my quick and dirty project, I want it within the scope of the design, not on missing features, bad accessibility, etc.

This wasn't posted with that directive.

And if your "design is great" but your implementation sucks then maybe the design sucks too.


Maybe we should be building accessible UIs by default rather than treating an actual disability as a 'my workflow' problem.


This was not a harsh criticism. Accessibility on the web is important, especially if you want people to actually engage with what you have published.

Color blindness is nothing new; there are freely available colorblind-friendly color palettes. Pointing out to the author that they could make a small tweak to make their work more accessible is good feedback and should continue to be given.


Sorry it came across that way, that was not my intent at all… it was meant as a simple suggestion for a potential low-hanging fruit improvement that would benefit people like me. Clearly you did not perceive it that way.


I totally understand being frustrated about people demanding workflow changes or huge accessibility features, but this is literally just a color swap that can be done with a touch of CSS; it's really not a big deal.


Oh, calm down. Some people aren’t aware of this, someone pointed it out.


The easy reply is "thanks, I learned something today!"


Random fact: I did some planning around this for a client a while ago. While measuring the AWS latencies I found I could get approximate latencies (within 10%) by measuring the rough undersea cable length (km) and dividing by 150.

While not overly surprising, it was very consistent.

Edit: I think it was actually 155


That reminds me of the story of the 500 mile email (https://www.ibiblio.org/harris/500milemail.html)


I read this yeaaaars ago. I'm about to re-read this, but before I do, I think this was the article that installed a little goblin in my brain that screams "TTS" in instances like this. I will edit this if the article confirms/denies this goblin.

EDIT: mostly, probably, sort of.


Funny story. He must thank the department of statistics for the quick turn around.


I think this is because of the velocity of light in the medium.

"Through LabVIEW the speed of light in the optical fiber is calculated to be ~ 2.054 x 10^8 m/s corresponding to a refractive index of n ≈ 1.4606 which is a typical value" https://web.phys.ksu.edu/posters/2009/juma-Adv-Lab-S09.pdf


There's a surprising amount of real-world modelling that can be done to satisfactory precision with just multiplication and addition.


This page is such a well executed interactive map. Really enjoyed it

Is the math-planation of your random fact basically

(thanks to https://news.ycombinator.com/user?id=Hikikomori for correcting the lightspeed in fibre medium from 3e5 to 2e5 !)

- lightspeed in fibre is 2e5 km/s ~ 2e2 km/ms, so

- latency (ms) ~ K × length (km) / 200 (km/ms), i.e.

- latency (ms) ~ K' × length (km)

Where K is approximately 1.3 (so K' is 1/155) and factors in things like:

- non straight line distance

- networking overhead / switching

- both ways / measurement error

Basically?


Speed of light in a medium like fiber is about 200 000km/s.


Oh shit! Thanks. Good point. That actually makes it more plausible, as K is smaller.


1/2 c in circuit boards (FR-4), 1/3 c in cables, two useful numbers to remember.


Thanks, nice! But wait - so we have

1/2 c ~ 150 km/ms in circuit board.

1/3 c ~ 100 km/ms in cable. And...

2/3 c ~ 200 km/ms in fiber?

I'm a bit confused about difference between cable and fiber heh :)


Sorry, it was a typo. I meant 2/3 (including common cables and fiber optics), not 1/3.


Depends on what kind of cable? As twisted pair network cable is at 2/3.


What an embarrassing typo! I was thinking of 0.66, and somehow I thought 0.66 = 1/3 (must've been distracted by the "2" in 1/2). I should've written 0.66 or 2/3.


It is possible I was measuring latency in a single direction, rather than round-trip-time. My memory is a little hazy now.


No I think you had it right. I was off on the speed. Anyway, it could have matched accounting K for other factors heh :)


Right, looking at the visualization most (all?) of the red lines are the longer ones - eg North America to South Africa.


Interesting. If you click on one of the blue circles representing a data center, it shows latencies to the other data centers. This took me a second to figure out — maybe consider adding a note along the lines of “click to select a data center” on the site?


These aren't even data centers, but aggregates. They're regions, composed of many different bits of networking and compute in various levels of abstraction - dc, edge installation, whatever.

Within these regions there's a lot of variation from zone to zone, so the methodology matters.


Author here. This is great feedback, thanks.


I appreciate the effort to collect the data, but I think the rotating globe is an idea that looks cool, but makes the visualization harder to use. If I click on us-east-1, there's a 229ms line to...somewhere that I can't see. Meanwhile, I can't see the latency between us-east-1 and us-east-2.

Perhaps if you selected a datacenter, and it switched to a 2-d projection with that datacenter at the center of the map, it would be better?

Or perhaps augment the visualization with a table?


Author here - You can see the raw data as a table here: https://www.cloudping.co. Sometimes visualizations like this are a careful act of balancing practicality with cool-factor.


Winkel Tripel projection would mitigate this nicely.

(One of several options, though the best IMO.)

<https://en.wikipedia.org/wiki/Winkel_tripel_projection>


Idea: select a data center by default (i.e. us-east-1) to make it more clear.

Bonus: select the nearest data center based on the user’s IP :)


Nitpick detail: us-east-1 (like all other regions) is also not a single datacenter by definition. Even individual availability zones can span several.


AWS provides latency numbers between regions, between AZs, and within an AZ in Network Manager. Useful to have as a latency baseline and to see if they have any issues.

https://docs.aws.amazon.com/network-manager/latest/infrastru...


> AWS provides latency numbers between regions, between AZs, and within an AZ in Network Manager.

AWS also provides dashboards that show what regions/services are down, and history tells us those are not to be trusted for precisely the same reasons.


Afaik it also requires someone to manually set it to be down on that page. Pretty sure that nobody is entering latency numbers manually every second, but maybe they have a team for that.


Cool visualization and concept. I do wish the colors were on a ramp instead of bucketed. The reason is that it makes 100ms look much worse than 99ms, but equal to 200ms. If you click on us-east-1, for example, the latency to the data centers in Western Europe looks quite varied: eu-central-1 and eu-south-1 look completely different even though the latency differs by only around 9ms, while eu-north-1 and ap-south-1 look the same even though there's about an 88ms difference!

There are some comments here also wondering about the best possible latency at the speed of light vs. what these measurements are. The problem with this is that c isn't the propagation velocity of information through fiber; it's some velocity well under c that depends on a number of different factors, many of which are unknowable, such as repeater latency and so on. In practice, the best theoretical value is no better than about 70% of c, since c is the speed of light in a vacuum and light in a medium travels slower.
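If the site already uses d3, a continuous ramp is nearly a one-liner; purely a sketch under that assumption, with an arbitrary domain:

  // Map latency (ms) to a color on a continuous scale instead of 3 buckets.
  // Assumes d3 (d3-scale + d3-scale-chromatic); the 0-350ms domain is arbitrary.
  const color = d3.scaleSequential(d3.interpolateViridis).domain([0, 350]);
  // Now color(9) and color(18) are nearly identical,
  // while color(99) and color(200) are clearly different.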


Author here - The ramping is a really good idea. The current visualization makes a 90ms latency look "good" when in reality, that's totally unacceptable for many applications, especially for things where multiple round-trips need to happen to fulfill a request.


How did you choose which datacenters to include? For example, eu-south-2 (Spain) is missing.

The reason I know is because I worked on a project that required latency to be under 30ms between datacenters, and we had to use eu-west-1 (Ireland) and eu-south-2.

Turns out that latency is closer to 42ms, mainly because there are no undersea cables between Ireland and the continent (they only go to England, then they have to route across England to get to a cable to the continent).


> How did you choose which datacenters to include? For example, eu-south-2 (Spain) is missing.

At the bottom of the page it says: «Data scraped from CloudPing», with the CloudPing dataset linked through. If you click through to CloudPing, you won't find «eu-south-2» in the dataset.


Author here - I just used what was available on https://www.cloudping.co, which is certainly missing a few. The CloudPing GitHub repo has not had a code change in 4 years. Maybe a few new regions have popped up since it was last actively worked on.


How do you know there aren't any cables between Ireland and the main European continent? I'm genuinely curious where this is published.


You can search google for [undersea cable map] but this one is the best:

https://www.submarinecablemap.com

This will show you everything connected to Ireland:

https://www.submarinecablemap.com/country/ireland


There’ll be one to France as of 2026 (as part of this project: https://en.wikipedia.org/wiki/Celtic_Interconnector)


There are a good few DCs missing from that atlas.


Israel (il-central-1) is also missing.


Yeah the new(ish) Melbourne region is missing too


The data is really useful, and the globe is visually impressive, but it feels like it'd be more practically useful to have a flat world map that shows all the data centers at once and makes it easier to read the lines without them getting excessively close to each other.



A 2D world may not give you the perception of how far some of these locations really are. I think an option to switch between the two would be better.



Draw them as great circles. And in any case, yes, switching between the cool 3D projection and a "show everything at once" 2D map would work.


The lines between points can be drawn to show curvature, like an airplane route map.
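If the map were drawn with d3-geo, the curved arcs mostly come for free, since projections resample a LineString along the great circle. A sketch (the projection and coordinates are just examples):

  // Draw a latency line as a great-circle arc on a flat map.
  const projection = d3.geoEquirectangular().fitSize([960, 480], { type: 'Sphere' });
  const path = d3.geoPath(projection);
  const arc = {
    type: 'LineString',
    coordinates: [[-77.5, 38.9], [151.2, -33.9]]  // ~us-east-1 to ~ap-southeast-2, as [lon, lat]
  };
  d3.select('svg').append('path')
    .attr('d', path(arc))       // adaptively resampled along the great circle
    .attr('fill', 'none')
    .attr('stroke', '#666');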


Agreed! It looks cool, but it's not the best visualisation to actually read the data.


Author here - Cool and useful is a careful balancing act.


Completely valid. The 3D globe is cool, it's just awkward to get the data out of.


Obviously the biggest contribution to latency is distance. But there are also some close-ish regions with poor latency because there's no fiber running directly between them (for example, over the poles).

Are there any examples of regions which dramatically violate the triangle inequality? (That is, where the A--C latency is much worse than the best A--B + B--C latencies)?

Just as a curiosity, could you use that idea to "infer" which data centers are most likely directly connected by fiber, and show only the likely fiber connections?
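A rough sketch of how that inference could start, given a full RTT matrix like the cloudping one (the object shape and region keys are assumptions): flag every pair whose direct latency loses to a detour through a third region.

  // rtt[a][b] = measured RTT in ms between regions a and b.
  // Report pairs where going via some region b beats the direct path by more
  // than slackMs; those direct paths are the likeliest to lack direct fiber.
  function findDetours(rtt, slackMs = 5) {
    const regions = Object.keys(rtt);
    const hits = [];
    for (const a of regions) {
      for (const c of regions) {
        if (a === c) continue;
        for (const b of regions) {
          if (b === a || b === c) continue;
          const viaB = rtt[a][b] + rtt[b][c];
          if (viaB + slackMs < rtt[a][c]) {
            hits.push({ from: a, to: c, via: b, direct: rtt[a][c], detour: viaB });
          }
        }
      }
    }
    return hits;
  }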


I can't speak for AWS specifically, but in my PhD thesis [1] I found a bunch of such examples by using RIPE Atlas probes. Essentially looking for pairs of probes where the RTT between probe A and probe C is larger than probe A-B + B-C.

Now there are some issues with this methodology (all common issues with ICMP/RTT measurements + traffic was not really routed through the "relay" probe), but such pairs do exist.

[1] https://theses.hal.science/tel-03666771/document (see page 84 for an example; if you can read French :-))


> Are there any examples of regions which dramatically violate the triangle inequality? (That is, where the A--C latency is much worse than the best A--B + B--C latencies)?

I don't think this would happen at a significant scale, due to how routing works. If taking the "detour" through B is how the ICMP packets get there cheapest, that's the path they will take.

If anything, we could look at where A–C is nearly equal to A–B + B–C and find where such a thing has happened. I suppose it could happen for reasons other than lack of fiber: financially better peering agreements, etc?


There are very few, and poor, cables in the area between Russia/Mongolia/India.

AFAIK, the latency from Mumbai to southern Russia (not that far in distance) is surprisingly high. Much higher than from e.g. Frankfurt to Moscow. Don't know if it's enough to violate the triangle inequality between Frankfurt-Moscow-Mumbai.


You can just look at the map of fiber optic cables around the world: https://www.submarinecablemap.com/

It's highly unlikely there are any non-disclosed undersea ones since they cost rather a lot to lay down.


> It's highly unlikely there are any ..

Going back decades, when a billion US dollars was real money, the original NSA (No Such Agency) that essentially no one had ever heard of (including most of the US houses and much of the defence committees that had clearance, but not that clearance) had a $4 billion+ budget for "off-book" satellites.

Black cables are a damn sight cheaper than black satellites.


Yes, although I think it would also clean up the visualization, since you wouldn't have nearly so many lines connecting data centers which actually aren't connected; and it would therefore also be explanatory


There are some regions that have notoriously bad networking with higher packet loss, for example South America and South Asia are pretty bad overall.


I was just using this the other day: https://aws-latency-test.com/


Author here - yeah, I came across that when I was digging for data I could use for this. It's cool.


I thought Australia sucked until I checked the latencies for South America and South Africa. Not a single "good" latency link =)


Cool, but information about what links were used would be nice. I assume it's latencies for default AWS links, which you likely won't use if you _care_ about the latency.


Author here - The data used was scraped from https://cloudping.co. You can find more info on the GitHub repo: https://github.com/mda590/cloudping.co.


I worked at a military company and we made a SIEM tool for use at government facilities. Our director, who was an ex-colonel in military IT, decided the login screen was too plain and that we needed a spinning world. So we implemented the end-of-Terminator-3 nukes-over-the-globe screen in WebGL to please him. This reminds me of it.

Anyways, although this looks cool, it'd be much more easily understandable as a 2D map instead of a rotating one.


It is insane to see this and conceptualize that you can send data across the world and back in under 500ms. Imagine telling someone that 100 years ago.


Actually it's been just about 100 years since we've been able to do this. Someone 100 years ago who you told this to would probably respond "I know, isn't it impressive!"


Author here - it is quite cool.


It's a bit sad: we have one of the actual physical Swedish AWS buildings in my town, but of course the traffic does not exit here; it is instead aggregated between the different sites spread around cities regionally. So no sub-ms latencies for me towards that center. I think the traffic basically went a couple hundred km before turning back here.


I presume your town is too small for it since you called it a town, but AWS also does these things called "wavelength zones" which are as close as possible to certain cellular networks (designed to ride the 5G M2M hype train - self-driving vehicles, etc). Not sure if they do something similar for fixed networks.

Of course, it doesn't actually matter since friends don't let friends use AWS.


Does your ISP peer directly with AWS?


No, which is a possible solution, but I suspect they simply don't peer locally.


Even if they did, it's very unlikely they would peer in your town.

Geographic proximity isn't the main factor for peering, and even then your session may still get terminated at a border network gateway in a big city, so your traffic has to travel the same way.


Some AWS data center latencies, there are many missing: https://aws.amazon.com/about-aws/global-infrastructure


Technically speaking those aren’t datacenters, they’re regions. AWS regions can have multiple AZs, and each AZ can have multiple datacenters spread around a city, each with different latency characteristics. This is completely opaque to customers, you don’t really get to choose which one you’re in. There are ways to gain info about it though.

This is way outdated now but gives a rough idea: https://wikileaks.org/amazon-atlas/document/AmazonAtlas_v1/A...

Source: did some latency work for a market maker.


I did a graphviz visualization of cross-az latency for all azs here https://xkyle.com/Measuring-AWS-Region-and-AZ-Latency/ (gosh, 4 years old now)


For inter region latency the difference is negligible.


This jives with measurements I've done before. I ended up running a ping setup for a few months from every region to every other region to get these timings. I was using it to calculate what our GQL latencies would look like if the backing servers were in another region or in the same region as a way to start regionalization work. Sadly we had to depend on those latencies so much that it was deemed a non-starter of an approach. Even us-west-2 (home) to us-east-2 took us from p99 300ms to p99 2.4 seconds. That sweet sweet latency reduction.


This is really cool, wish there were something like it for Azure as well.


Author here - If you have a resource that provides good data on Azure or GCP latencies, please send them my way.


Wow! This is so cool. Thank you so much for sharing!

I used my own tool Livedocs to visualize the same and here are the results. https://livedocs.com/livedocs/aws-latencies-3f2fefd5-f45d-48...


As we operate 720+ servers running ping and traceroute continuously, I will try to see if we can publish intra-provider or intra-ASN latency data like this but on a massive scale. We have a ton of measurements internally, but publishing them as a product or even as a web dashboard is tricky, as it is hard to gauge what the interest would be.


It always shocks me how few main regions they have in the US. I know they have the mini-region things, but still.


Note that latencies between regions are subject to change: individual links can go down or become overloaded, resulting in traffic needing to take alternate (longer) paths. Unless you have a contract specifying a specific latency, you should still be prepared for things to slow down on occasion.


interesting

the "globe" visualization is good ... but we cna only see half the world at a time ... can we have an option of a flat projection as well ... so I can see all latencies for a region at a glance?

ap-south-2 (Asia Pacific - Hyderabad,India) opened in Nov 2022 seems to be missing from the list?


All of the datacenters are colour-coded as blue, which is not on the legend. What does this mean?


Since all of them are the same color it's not really color coding, and hence not on the legend; it's just OP's choice of color for the points.


No lines were working for me, so the only feature was the blue colour of the dots, hence thinking they must be colour coded in line with the legend.


click on one


Ah, this was what I tried first, and it didn't do anything, but now it does seem to be doing something. That makes sense now, thanks.


Same... didn't work until I moved the globe a bit. I thought the site was broken, or getting the HN hug of death.


Is this just fiber distance between each datacenter? The coloring makes it seem significant, but from the distances it kinda just looked like everything < _km (100ms) was green, everything between _km (100ms) and _km (200ms) was orange, and everything over was red.


I'm not sure what you're wondering here. Of course physical distance is going to be a dominating factor, but this is measuring packet transit times. The speed of light over half a great-circle is only 67ms or so, or maybe 100ms considering the velocity factor in fiber, so clearly there's more to it than just distance. We can talk about what those other things are, but we both know they exist, right?


Basically yes, as distance is the most important factor when it comes to latency.


Maybe a question with an obvious answer, but why are there not yet more data centers in Africa?


I believe it's a combination of a lack of customers and lack of infrastructure. It's a big continent to cover with the necessary fibre capacity, and the market is much smaller for nearby services.

Also what you don't see in things like this, or even a list of datacenter locations, is the relative sizes of the datacenters. After US east/west coasts and Europe, datacenter capacity rapidly tails off. Parts of Asia have plenty but not on the same scale I believe (although I don't know about the Chinese market). The difference in size can be quite a few orders of magnitude between different regions.


Smaller market, less reliable power grids, more challenging heat management, less political stability in many African countries. Also, given their pricing, big cloud vendors like AWS are a luxury many local businesses would probably not even consider.


This isn't just an issue for cloud providers. It's not easy to find colocation space either.

My best guess is that it is a combination of lower demand (vs the rest of the world) and infrastructure availability (connectivity + power).

I can imagine a bunch of secondary factors too, but this to me sounds like the key broad reasons.


Hah, CloudPing is awesome. I just wrote a TUI in Rust for exactly the same thing: https://github.com/obviyus/pong

I found myself going to CloudPing often enough to make a CLI for it


Just curious, why is there no us central or us texas region? It could maybe be useful.


Azure has Ohio and Texas, but you'll probably not be able to provision all the machines you want so it doesn't really matter.

I think a lot of people are sleeping on the benefits of hosting workloads in these regions. Many finance, banking & insurance companies have already taken advantage. Most of your credit card transactions are handled by data centers that live ~barycentric to the continental US. Much of small US banking tech happens in places like Missouri.


US-East-2 is also Ohio


and why are there hardly any in South America and Africa?


Pedantic but important: These are latencies between regions not data centers.


Very roughly 1/3 of the speed of light. One third is lost to the physical medium, it seems. What accounts for the remaining third, and how much could plausibly be shaved off of it?


Undersea cable routing. For example there are no undersea cables from Ireland to the European continent. All that traffic has to land in England first and go through a few routers to get sent on its way.

South America and Africa are even worse in that regard. Very few if any direct links.


I think the lines the data goes through are unfortunately not quite as straight as they appear in this visualization. Nor is it light all the way.


newbie here, you basically loaded d3.js to draw that globe. Is there a tutorial you followed to create those lines dynamically on the globe? Mind sharing some info on how you made this?


Is there any oss software which replicates that ping grid/table where you can have multiple sensors feeding information back to get an overview of latency on your own network?


It's cool you can look at 1/2 of the earth and see only one data center and one link. ap-southeast-2's only link is too far away to show.


It shows links to wherever you selected. Select this DC to see the latency between it and everywhere else.


Would be cool if there was a mode that showed YOUR latency to all the datacenters. That is what I assumed this would do initially.


There is an official tool to do that https://aws-latency-test.com/


This is cool, but per footer "This is not an official AWS project"


Great visualisation and way of presenting info!


Author here - Thank you for the compliment.


So, there's no good fallback for `sa-east-1`...

us-east-1 is hard to beat.

Close to the network centrality of the internet, low latency to both the west coast and Europe.


> Close to the network centrality of the internet

Most of the Internet is fractured even though technically publicly routable. E.g., for someone living in China the US isn't anywhere near "network centrality".

If an internet centrality exists, it is somewhere in France or the Netherlands - usually cross-continent traffic goes through there; they have dedicated interchanges for that.


I did some research about this a while back: https://utdemir.com/posts/choosing-cloud-regions.html#:~:tex...

You are right, the best region that optimises median latency against all internet users over the world is `eu-west-3`, which is Paris, I believe. Likely because it has much better latency towards Asia, where the majority of internet users are.

I also investigated which two regions to choose for a multi-region setup, which ends up being London and Japan.


Although in my experience all the traffic from East Asia and Oceania (Australia, Japan, Korea, Hong Kong, Singapore, etc) to Europe goes through the US. So network-wise, the US is more central.


Agreed the network is much denser in Europe than the USA. This is obvious if you've really tried looking for network infrastructure services. The USA is just where a lot more high-level services are, like social media, due to the peculiarities of capitalism. There's no shortage of infrastructure there either, of course.

Data caps are apparently illegal here. This is good for the quality of infrastructure.

And if you have customers somewhere else you want to be in that place, or close to it network-wise.


My knee-jerk reaction was to comment that this is America-centric thinking (I live in eastern Europe, us-east is not that great), but... after consulting the map, it really looks better than the other options (of course assuming you care mostly about Europe and the Americas, and don't want to piss off Asia too much).


It would be great to have all major providers on here. With a history chart to compare all providers.


Is there something similar for GCP?


Be the change you want to see. How much does it cost to set up a VM in every availability zone for an hour?

Late edit: it would also be cool to see inter-cloud latency.


Additional - I see the app here just shows good/moderate/bad latency. An actual data table would be useful to many people but not as pretty. Maybe there should be a distributed latency measurement network project.


This is not quite the same thing, but if you want to see the latency from your computer to GCP, there is this website:

https://gcping.com/


It is amazing how many people code without thinking about the speed of light.


Excellent work! Would be very interested in something similar for Azure.


There are three certainties in life: death, taxes and network latencies.


And off-by-one errors =P


Not sure why you/the source are missing ca-west-1?


Author here - The data came from https://cloudping.co, and the GitHub repo has not been updated in 4 years. See https://github.com/mda590/cloudping.co. Is that a newer data center? The CloudPing site is definitely missing a few.


really cool tool, thanks for building this.


Author here - thank you!


Houston, we need an AWS data center.


where did you get this 3D world model from?

is it part of some common js lib?





