It would be really cool if it didn't just show the ping, but also how much worse it is compared to the theoretical optimum (the speed of light in a fiber optic medium, which I believe is about 30% slower than c).
I raise this because I've been in multiple system architecture meetings where people were complaining about latency between data centers, only to later realize that it was pretty close to what is theoretically possible in the first place.
I'm under the impression that within the hyperscalers (and probably the big colo/hosting firms, too), this is known. It's important to them and to customers, especially when a customer is trying to architect an HA or DR system and needs to ensure they don't inadvertently choose a region (or even a zone that isn't physically in the same place as other zones in the same region) that has "artificially" high latency from the primary zone (which can exist for all kinds of legitimate reasons).
This is not an uncommon scenario. My current employer specializes in SAP migrations to cloud and this is now a conversation we have with both AWS & GCP networking specialists when pricing & scoping projects... after having made incorrect assumptions and being bitten by unacceptable latency in the past.
Light in a fiber optic cable travels at roughly 70% of the speed of light, or ~210,000 km/s.
Earth's circumference is ~40,000 kilometers.
A direct route from one side of Earth to the other (~20,000 km) would therefore take roughly 100 milliseconds one way, or 200 ms round trip.
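For anyone who wants to play with those numbers, here's the same back-of-the-napkin arithmetic as a quick JavaScript sketch (the constants are the rough figures above, not measured values):

```js
// Light in fiber: roughly 70% of c in vacuum.
const C_VACUUM_KM_S = 299792;               // km/s
const C_FIBER_KM_S = 0.7 * C_VACUUM_KM_S;   // ~210,000 km/s

// Antipodal distance: half of Earth's ~40,000 km circumference.
const halfCircumferenceKm = 40000 / 2;

const oneWayMs = (halfCircumferenceKm / C_FIBER_KM_S) * 1000;
console.log(`one way: ~${oneWayMs.toFixed(0)} ms`);          // ~95 ms
console.log(`round trip: ~${(2 * oneWayMs).toFixed(0)} ms`); // ~191 ms
```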
It's pretty trivial to do this; any big fiber company will provide you with Google Earth KMZ files (protected by NDA) when you're considering a purchase. This is absolutely necessary when designing a redundant network or if you want lower latency.
Since light in a vacuum travels at 100% of the speed of light (by definition), I have wondered if latency over long distances could be improved by sending the data through a constellation of satellites in low Earth orbit instead. Though I suspect the set of tradeoffs here (much lower throughput, much higher cost, more jitter in the latency due to satellites constantly moving relative to the terrestrial surface) probably wouldn't make it worth it for a slight decrease in latency for any use case.
Hollow core fiber (HCF) is designed to substantially reduce the latency of normal fiber while maintaining equivalent bandwidth. It's been deployed quite a bit for low latency trading applications within a metro area, but might find more uses in reducing long-haul interconnect latency.
Absolutely! The distance to LEO satellites (like SpaceX's or Kuiper's) is low enough that you would beat the latency of fiber paths once the destination is far enough away.
I am pretty sure this was one of the advertised strengths of Starlink. Technically the journey is a bit longer, but because you can rely on the full speed of light, you still come out ahead.
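To put rough numbers on when LEO wins, here's a hedged back-of-the-envelope sketch. The constants are assumptions for illustration (a ~550 km shell, inter-satellite links at c, fiber at 0.7c with a 1.2x route detour), not Starlink's actual parameters, and it ignores switching and queueing delays entirely:

```js
const C = 299792;                 // km/s, speed of light in vacuum
const FIBER_SPEED = 0.7 * C;      // light in fiber
const ALTITUDE_KM = 550;          // assumed LEO shell altitude
const ROUTE_INFLATION = 1.2;      // assumed fiber path detour factor

// One-way time over terrestrial fiber.
const fiberMs = (groundKm) => (groundKm * ROUTE_INFLATION / FIBER_SPEED) * 1000;

// One-way time via satellites: up to the shell, across at c, back down.
const leoMs = (groundKm) => ((2 * ALTITUDE_KM + groundKm) / C) * 1000;

for (const d of [1000, 3000, 5000, 10000]) {
  console.log(`${d} km: fiber ${fiberMs(d).toFixed(1)} ms, LEO ${leoMs(d).toFixed(1)} ms`);
}
// Under these assumptions the crossover lands around ~1,500 km;
// below that, the two vertical hops dominate and fiber wins.
```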
Clicking around that map, I don't see any examples where the latency is a long way out of line with the distance.
Obviously it's theoretically possible to do ~40% better by using hollow fibers and as-the-crow-flies fiber routing, but few are willing to pay for that.
The 'practical' way to beat fiber optics is to use either
(i) a series of overground direct microwave connections (often used by trading firms)
(ii) a series of laser links between low altitude satellites. This would be faster in principle for long distances, and presumably Starlink will eventually offer this service to people who are very latency sensitive.
Low-bandwidth/low-latency people tend to also demand high reliability and consistency. A low-orbit satellite network might be fast but, because sats move too quickly, cannot be consistent in that speed. Sats also won't ever connect data centers, other than perhaps for administrative stuff. The bandwidth/reliability/growth potential just isn't there compared to bundles of traditional fiber.
> Low-bandwidth/low-latency people tend to also demand high reliability and consistency.
For trading applications, people will absolutely pay for a service that is hard down 75% of the time and has 50% packet loss the rest, but saves a millisecond over the fastest reliable line. Because otherwise someone else will be faster than you when the service is working.
They can get reliability and consistency with a redundant slower line.
Can you provide a source for this statement? The redundancy needed to transmit at desirable reliability with 50% packet loss would, I imagine, very quickly eat into any millisecond gains -- even with theoretically optimal coding.
Someone more familiar with Shannon than I could probably quickly back-of-the-napkin this.
Financial companies have taken over and upgraded/invested in microwave links because they can be comparatively economical for getting "as the crow flies" distances between sites:
I'm not sure about the high packet loss statement, but it wouldn't surprise me if it's true, provided the latency is low enough to take advantage of arbitrage opportunities often enough to justify the cost.
Traders wouldn't use redundancy etc. Whenever a packet with info arrives, they would trade on that info (e.g. "$MSFT stock is about to go down, so buy before it drops!"). If there is packet loss, then some info is lost, and therefore some profitable trading opportunities are missed. But that's okay.
There are thousands of such opportunities each second. They can come from consumer "order flow", i.e. the information that someone would like to buy a stock tells you the price will slightly rise, so you buy ahead of them and sell after them in some remote location.
There is also a market for stocks that trade on different exchanges, resulting in fleeting differences in price between exchanges. Those who learn of price moves first can take advantage of such differences. In such cases, all you need to transmit is the current stock price. The local machine can then decide to buy or sell.
There's definitely a few billion a year in revenue for Starlink if they sell very low latency, medium bandwidth connections between Asia, the US, Europe and Australia to trading firms. Even if the reliability is much worse than fiber.
The routing paths traveling via ground stations, you mean? My understanding is that they were experimenting with improvements to this, they just haven't deployed anything yet.
A radio will beat Starlink on ping times. Even a simple ham radio bouncing a signal off the ionosphere can win out over an orbiting satellite, at least for the very small amounts of data needed for a trade order. The difficulty in such schemes is reliability, which can be hit-or-miss depending on a hundred factors.
The theoretical best latency would be something like great_circle_distance_between_regions/speed_of_light_in_fiber, both of which are pretty easy to find. The first you can compute from the coordinates of each region pair, and the second is a constant you can look up.
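As a sketch of that computation (the coordinates are rough, made up for illustration; this assumes a perfect great-circle fiber run at ~70% of c with zero routing detours or equipment delay):

```js
const EARTH_RADIUS_KM = 6371;
const FIBER_KM_PER_MS = 0.7 * 299792 / 1000; // ~210 km per ms

// Haversine great-circle distance between two lat/lon points.
function greatCircleKm(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Example: us-east-1 (N. Virginia) to eu-west-1 (Dublin), rough coords.
const km = greatCircleKm(38.9, -77.4, 53.3, -6.3);
const oneWayMs = km / FIBER_KM_PER_MS;
console.log(`${km.toFixed(0)} km, best case ~${oneWayMs.toFixed(0)} ms one way`);
```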
That's what we did as well, via Wolfram Alpha. I.e. we were too lazy to look everything up ourselves and just asked it straight up how long a round trip would be between two destinations via fiber. We checked one result and it was spot on. This was six years ago though.
As a quick workaround, you can set a CSS filter on the whole page: Either use dev tools to put a rule `filter: hue-rotate(60deg);` on the `body` element, or simply run `javascript:void(document.body.style.filter='hue-rotate(60deg)')` from the url bar.
In case you are not aware, you can put this sort of thing in a bookmark on the bookmark bar (both FF and Chrom{e|ium}, I assume other browsers too) for easy access. If you don't have the bookmark bar visible hit [ctrl][shift][B] to flip it on (and the same to flip it back off later if you don't want to keep it).
Author here - Thanks for the suggestion Alex. From your perspective, what are some of the best ways you've seen people solve for this in the past? If you have links, please share.
Hi! Thanks for getting back to me, appreciate it. To be honest, I'm not an expert at all in this topic. I'd imagine choosing a colorblind-friendly palette (see: https://davidmathlogic.com/colorblind/ ) would be an easy fix. Alternatively, or in addition, you could use dotted/dashed/solid lines to visualize the latency buckets. Might make for an interesting effect?
Also it's common to hide this "colorblind mode" behind a checkbox somewhere. So you don't have to uglify your product. :-)
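For what it's worth, that kind of toggle can be tiny if the latency colors live in CSS custom properties. A minimal sketch (the element ID, property names, and hex values here are made up for illustration; the alternative palette is in the spirit of the linked page, not copied from it):

```js
// Swap the latency palette when a "colorblind mode" checkbox changes.
const PALETTES = {
  default:    { '--lat-low': '#2ecc71', '--lat-mid': '#f1c40f', '--lat-high': '#e74c3c' },
  colorblind: { '--lat-low': '#1a85ff', '--lat-mid': '#ffc20a', '--lat-high': '#d41159' },
};

document.querySelector('#colorblind-toggle').addEventListener('change', (e) => {
  const palette = PALETTES[e.target.checked ? 'colorblind' : 'default'];
  for (const [name, value] of Object.entries(palette)) {
    document.documentElement.style.setProperty(name, value);
  }
});
```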
Do you have some kind of an accessibility tool for this? Maybe a whole screen filter that changes colors in a specific way so you can distinguish them?
No I don't. It's actually not a big deal in day-to-day life. People often go "But how the hell can you drive if you can't distinguish red from green at the stoplight?"... in reality it's more nuanced. As another comment already mentioned, perception varies even among colorblind people. I find it hard to distinguish R/G if the colors are not fully saturated or in low-light situations. Also the brain knows that "red is on top" and "green is at the bottom" at the stoplight and thereby improves the contrast for me. ;-)
My comment was meant to raise awareness of this issue with the author of the tool. Many video games these days, especially ones with some kind of HUD, minimap, etc., have a color-blind mode.
Yeah I’ve seen colorblind modes in a lot of apps. It is great for those affected. And probably not a huge hassle to implement anyway.
But I was curious whether one needs to rely on the application developers to deliver a solution, or if there is a generalized filter or whatever that always works. Maybe like screen readers: those work fine if the app does not do something horrible, but with some help from apps, they perform much better.
Color-blind man here. While I think it’s important to consider color blindness when choosing colors, it’s not actually 8% of men who would have trouble distinguishing the two colors. That number is somewhat lower. Perception of color varies even across colorblind people so just because someone says it works for them doesn’t mean it will work for someone else, and vice versa.
It's sad that this is the top comment for the post. Many people have stopped posting their crappy work online due to harsh comments like yours. There's no easy reply to your comment.
Maybe we should be less critical, especially with "make it fit my workflow" type comments, and more so when something is built by some random guy in their free time, and not, say, a project which is asking for money.
I think this is an uncharitable take – the parent comment is just proposing an improvement that would really help them given their colour-blindness (they also say they like the visualisation). Personally I find part of the reason for putting things on the internet is to allow other people to use them and obtain their feedback.
Every product has flaws which are outside of its design scope. Pointing that out is unnecessary. If I want feedback on my quick and dirty project, I want it within the scope of the design, not about missing features, bad accessibility etc.
Especially the HN crowd is very susceptible to feeling for accessibility comments. The return of "think of the poor kids in Africa".
This was not a harsh criticism. Accessibility on the web is important, especially if you want people to actually engage with what you have published.
Color blindness is nothing new, there are freely available color blind friendly color plates. Pointing out to the author that they could make a small tweak to make their work more accessible is good feedback and should continue to be given.
Sorry it came across that way, that was not my intent at all… it was meant as a simple suggestion for a potential low-hanging fruit improvement that would benefit people like me.
Clearly you did not perceive it that way.
I totally understand being frustrated about people demanding workflow changes or huge accessibility features, but this is literally just a color swap that can be done with a touch of CSS; it's really not a big deal.
Random fact: I did some planning around this for a client a while ago. While measuring the AWS latencies, I found I could get approximate latencies (within 10%) by taking the rough undersea cable length in km and dividing by 150.
While not overly surprising, it was very consistent.
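In code, the heuristic is just (assuming the ~150 figure means kilometers of cable per millisecond of latency):

```js
// Approximate latency from undersea cable length: ~150 km per ms,
// i.e. roughly half of c -- fiber speed plus real-world overhead.
const approxLatencyMs = (cableKm) => cableKm / 150;
console.log(approxLatencyMs(6000)); // ~40 ms for a ~6,000 km cable
```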
I read this yeaaaars ago. I'm about to re-read this, but before I do, I think this was the article that installed a little goblin in my brain that screams "TTS" in instances like this. I will edit this if the article confirms/denies this goblin.
What an embarrassing typo! I was thinking of 0.66, and somehow I thought 0.66 = 1/3 (must've been distracted by the "2" in 1/2). I should've written 0.66 or 2/3.
Interesting. If you click on one of the blue circles representing a data center, it shows latencies to the other data centers.
This took me a second to figure out — maybe consider adding a note along the lines of “click to select a data center” on the site?
These aren't even data centers, but aggregates. They're regions, composed of many different bits of networking and compute in various levels of abstraction - dc, edge installation, whatever.
Within these regions there's a lot of variation from zone to zone, so the methodology matters.
I appreciate the effort to collect the data, but I think the rotating globe is an idea that looks cool, but makes the visualization harder to use. If I click on us-east-1, there's a 229ms line to...somewhere that I can't see. Meanwhile, I can't see the latency between us-east-1 and us-east-2.
Perhaps if you selected a datacenter, and it switched to a 2-d projection with that datacenter at the center of the map, it would be better?
Or perhaps augment the visualization with a table?
Author here - You can see the raw data as a table here: https://www.cloudping.co. Sometimes visualizations like this are a careful act of balancing practicality with cool-factor.
AWS provides latency numbers between regions, between AZs, and within an AZ in Network Manager. Useful to have as a latency baseline and to see if they have any issues.
> AWS provides latency numbers between regions, between AZs, and within an AZ in Network Manager.
AWS also provides dashboards that show which regions/services are down, and history tells us those are not to be trusted, for precisely the same reasons.
AFAIK it also requires someone to manually set it to be down on that page. Pretty sure nobody is entering latency numbers manually every second, but maybe they have a team for that.
Cool visualization and concept. I do wish the colors were on a ramp instead of bucketed, because bucketing makes 100ms look much worse than 99ms, but equal to 200ms. If you click on us-east-1, for example, the latencies to the data centers in Western Europe look quite different: eu-central-1 and eu-south-1 get completely different colors even though their latency differs by only around 9ms, while eu-north-1 and ap-south-1 look the same even though there's about an 88ms difference!
There are some comments here also wondering about the best possible speed-of-light latency vs what these measurements show. The problem with this is that c isn't the propagation velocity of information through fiber; the actual velocity is well under c and depends on a number of different factors, many of which are unknowable, such as repeater latency and so on. In practice, the best theoretical propagation speed is no higher than about 70% of c, just from measuring the velocity of light in the medium, since c is the speed of light in a vacuum.
Author here - The ramping is a really good idea. The current visualization makes a 90ms latency look "good" when in reality that's totally unacceptable for many applications, especially for things where multiple round trips need to happen to fulfill a request.
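Not the site's actual code, but a minimal sketch of what a continuous ramp could look like with d3 (assuming d3-scale and d3-scale-chromatic; the 0-300ms domain is an arbitrary choice):

```js
import { scaleSequential } from 'd3-scale';
import { interpolateRdYlGn } from 'd3-scale-chromatic';

// Reversed domain so low latency maps to the green end of the ramp.
const latencyColor = scaleSequential(interpolateRdYlGn)
  .domain([300, 0])
  .clamp(true);

console.log(latencyColor(9));   // near-green
console.log(latencyColor(99));  // nearly the same color as...
console.log(latencyColor(100)); // ...100ms -- no hard bucket cliff
```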
How did you choose which datacenters to include? For example, eu-south-2 (Spain) is missing.
The reason I know is because I worked on a project that required latency to be under 30ms between datacenters, and we had to use eu-west-1 (Ireland) and eu-south-2.
Turns out that latency is closer to 42ms, mainly because there are no undersea cables between Ireland and the continent (they only go to England, and then traffic has to route across England to get to a cable to the continent).
> How did you choose which datacenters to include? For example, eu-south-2 (Spain) is missing.
At the bottom of the page it says: «Data scraped from CloudPing», with the CloudPing dataset linked through. If you click through to CloudPing, you won't find «eu-south-2» in the dataset.
Author here - I just used what was available on https://www.cloudping.co, which is certainly missing a few. The CloudPing GitHub repo has not had a code change in 4 years. Maybe a few new regions have popped up since it was last actively worked on.
The data is really useful, and the globe is visually impressive, but it feels like it'd be more practically useful to have a flat world map that shows all the data centers at once and makes it easier to read the lines without them getting excessively close to each other.
Obviously the biggest contribution to latency is distance. But there are also some close-ish regions with poor latency because there's no fiber running directly between them (for example, over the poles).
Are there any examples of regions which dramatically violate the triangle inequality? (That is, where the A--C latency is much worse than the best A--B + B--C latencies)?
Just as a curiosity, could you use that idea to "infer" which data-centers are most likely directly connected by fiber, and show only the likely fiber connections?
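A sketch of how one might mine a latency matrix (like the CloudPing table) for such detours; the matrix shape here is hypothetical, and a real version would need to account for measurement noise and asymmetric routes:

```js
// rtt is assumed to look like { regionA: { regionB: ms, ... }, ... }.
// Flag pairs where relaying through some B beats the direct path by
// more than slackMs -- candidates for "no direct fiber between A and C".
function findDetours(rtt, slackMs = 10) {
  const regions = Object.keys(rtt);
  const detours = [];
  for (const a of regions) {
    for (const c of regions) {
      if (a === c) continue;
      for (const b of regions) {
        if (b === a || b === c) continue;
        const viaB = rtt[a][b] + rtt[b][c];
        if (viaB + slackMs < rtt[a][c]) {
          detours.push({ a, via: b, c, direct: rtt[a][c], relayed: viaB });
        }
      }
    }
  }
  return detours;
}
```

Inverting the same idea, edges where no relay comes close to the direct RTT are the likely direct fiber connections.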
I can't speak for AWS specifically, but in my PhD thesis [1] I found a bunch of such examples by using RIPE Atlas probes. Essentially looking for pairs of probes where the RTT between probe A and probe C is larger than probe A-B + B-C.
Now there are some issues with this methodology (all common issues with ICMP/RTT measurements + traffic was not really routed through the "relay" probe), but such pairs do exist.
> Are there any examples of regions which dramatically violate the triangle inequality? (That is, where the A--C latency is much worse than the best A--B + B--C latencies)?
I don't think this would happen at a significant scale, due to how routing works. If taking the "detour" through B is how the ICMP packets get there cheapest, that's the path they will take.
If anything, we could look at where A–C is nearly equal to A–B + B–C and find where such a thing has happened. I suppose it could happen for reasons other than lack of fiber: financially better peering agreements, etc?
There are very few, and poor, cables in the areas between Russia/Mongolia/India.
AFAIK, the latency from Mumbai to southern Russia (not that far in distance) is surprisingly high. Much higher than from e.g. Frankfurt to Moscow. Don't know if it's enough to violate the triangle inequality between Frankfurt-Moscow-Mumbai.
Going back decades, when a billion US dollars was real money, the original NSA (No Such Agency) that essentially no one had ever heard of, including most of the US houses of Congress and much of the defence committees that had clearance (but not that clearance), had a $4 billion+ budget for "off-book" satellites.
Black cables are a damn sight cheaper than black satellites.
Yes, although I think it would also clean up the visualization, since you wouldn't have nearly so many lines connecting data centers which actually aren't directly connected; and it would therefore also be more explanatory.
Cool, but information about what links were used would be nice. I assume it's latencies for default AWS links, which you likely won't use if you _care_ about the latency.
I worked at a military company and we made a SIEM tool for use at government facilities. Our director, who was an ex-colonel in military IT, decided the login screen was too plain and that we needed a spinning world. So we implemented the end-of-Terminator-3 nukes-over-the-globe screen in WebGL to please him. This reminds me of it.
Anyway, although this looks cool, it'd be much more easily understandable on a 2D map instead of a rotating globe.
It is insane to see this and conceptualize that you can send data across the world and back in under 500ms. Imagine telling someone that 100 years ago.
Actually it's been just about 100 years since we've been able to do this. Someone 100 years ago who you told this to would probably respond "I know, isn't it impressive!"
It's a bit sad: we have one of the actual physical AWS buildings in Sweden in my town. But of course the traffic does not exit here; it is instead aggregated between the different sites spread around cities regionally. So no sub-ms latencies for me to that center. I think the traffic basically traveled a couple hundred km before turning back here.
I presume your town is too small for it since you called it a town, but AWS also does these things called "wavelength zones" which are as close as possible to certain cellular networks (designed to ride the 5G M2M hype train - self-driving vehicles, etc). Not sure if they do something similar for fixed networks.
Of course, it doesn't actually matter since friends don't let friends use AWS.
Even if they did it's very unlikely they would peer in your town.
Geographic proximity isn't the main factor for peering, and even if they did peer there, your session may still get terminated at a border network gateway in a big city, so your traffic has to travel the same way.
Technically speaking those aren't datacenters, they're regions. AWS regions can have multiple AZs, and each AZ can have multiple datacenters spread around a city, each with different latency characteristics. This is completely opaque to customers; you don't really get to choose which one you're in. There are ways to gain info about it, though.
This jibes with measurements I've done before. I ended up running a ping setup for a few months from every region to every other region to get these timings. I was using it to calculate what our GQL latencies would look like with the backing servers in another region versus the same region, as a way to start regionalization work. Sadly we depended on those latencies so much that it was deemed a non-starter of an approach. Even us-west-2 (home) to us-east-2 took us from p99 300ms to p99 2.4 seconds. That sweet sweet latency reduction.
As we operate 720+ servers running ping and traceroute continuously, I will try to see if we can publish intra-provider or intra-ASN latency data like this, but on a massive scale. We have a ton of measurements internally, but publishing them as a product or even as a web dashboard is tricky, as it is hard to gauge what the interest would be.
Note that latencies between regions are subject to change: individual links can go down or become overloaded, resulting in traffic needing to take alternate (longer) paths. Unless you have a contract specifying a specific latency, you should still be prepared for things to slow down on occasion.
the "globe" visualization is good ... but we cna only see half the world at a time ... can we have an option of a flat projection as well ... so I can see all latencies for a region at a glance?
ap-south-2 (Asia Pacific - Hyderabad,India) opened in Nov 2022 seems to be missing from the list?
Is this just fiber distance between each datacenter? The coloring makes it seem significant, but from the distances it kinda just looked like everything < _km (100ms) was green, everything between _km (100ms) and _km (200ms) was orange, and everything over was red.
I'm not sure what you're wondering here. Of course physical distance is going to be a dominating factor, but this is measuring packet transit times. Light over half a great circle takes only 67ms or so, or maybe 100ms considering the velocity factor in fiber, so clearly there's more to it than just distance. We can talk about what those other things are, but we both know they exist, right?
I believe it's a combination of a lack of customers and lack of infrastructure. It's a big continent to cover with the necessary fibre capacity, and the market is much smaller for nearby services.
Also what you don't see in things like this, or even a list of datacenter locations, is the relative sizes of the datacenters. After US east/west coasts and Europe, datacenter capacity rapidly tails off. Parts of Asia have plenty but not on the same scale I believe (although I don't know about the Chinese market). The difference in size can be quite a few orders of magnitude between different regions.
Smaller market, less reliable power grids, more challenging heat management, less political stability in many African countries. Also, given their pricing, big cloud vendors like AWS are a luxury many local businesses would probably not even consider.
Azure has Ohio and Texas, but you'll probably not be able to provision all the machines you want so it doesn't really matter.
I think a lot of people are sleeping on the benefits of hosting workloads in these regions. Many finance, banking & insurance companies have already taken advantage. Most of your credit card transactions are handled by data centers that live ~barycentric to the continental US. Much of small US banking tech happens in places like Missouri.
Very roughly 1/3 of the speed of light. One third is lost to the physical medium, it seems. What accounts for the remaining third, and how much could plausibly be shaved off of it?
Undersea cable routing. For example there are no undersea cables from Ireland to the European continent. All that traffic has to land in England first and go through a few routers to get sent on its way.
South America and Africa are even worse in that regard. Very few if any direct links.
Newbie here: you basically loaded d3.js to draw that globe. Is there a tutorial you followed to create those lines dynamically on the globe? Mind sharing some info on how you made this?
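Not the author, and this isn't their actual code, but the standard d3 technique looks roughly like this: an orthographic projection gives you the globe, and d3-geo renders a GeoJSON LineString between two [lon, lat] points as a great-circle arc, clipping the hidden hemisphere automatically (the coordinates below are rough, for illustration):

```js
import { geoOrthographic, geoPath } from 'd3-geo';

// Orthographic projection = globe view; rotate to center near Virginia.
const projection = geoOrthographic().rotate([77, -39]);
const path = geoPath(projection);

// One latency line: us-east-1 -> eu-west-1, as [longitude, latitude].
const arc = {
  type: 'LineString',
  coordinates: [[-77.4, 38.9], [-6.3, 53.3]],
};

// path(arc) yields an SVG path string for the `d` attribute of a <path>;
// d3 resamples along the great circle, so the line curves with the globe.
console.log(path(arc));
```

Spinning the globe is then just animating the projection's rotate() and redrawing.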
Is there any OSS software which replicates that ping grid/table, where you can have multiple sensors feeding information back to get an overview of latency on your own network?
> Close to the network centrality of the internet
Most of the Internet is fractured, even though it's technically publicly routable.
E.g., for someone living in China the US isn't anywhere near "network centrality".
If an internet centrality exists, it is somewhere in France or the Netherlands - cross-continent traffic usually goes through there, and they have dedicated interchanges for that.
You are right, the region that optimises median latency against all internet users over the world is `eu-west-3`, which is Paris, I believe. Likely because it has much better latency towards Asia, where the majority of internet users are.
I also investigated which two regions to choose for a multi-region setup, which ends up being London and Japan.
Although in my experience all the traffic from East Asia and Oceania (Australia, Japan, Korea, Hong Kong, Singapore, etc) to Europe goes through the US. So network-wise, the US is more central.
Agreed, the network is much denser in Europe than in the USA. This is obvious if you've ever really tried shopping for network infrastructure services. The USA is just where a lot more high-level services are, like social media, due to the peculiarities of capitalism. There's no shortage of infrastructure there either, of course.
Data caps are apparently illegal here. This is good for the quality of infrastructure.
And if you have customers somewhere else you want to be in that place, or close to it network-wise.
My knee jerk reaction was to comment that this is an America-centric thinking (I live in eastern Europe, us-east is not that great), but... After consulting the map, it really looks better than the other options (of course assuming you care mostly about Europe, Americas, and don't want to piss off Asia too much)
Additionally: I see the app here just shows good/moderate/bad latency. An actual data table would be useful to many people, but not as pretty. Maybe there should be a distributed latency measurement network project.