Latency numbers every programmer should know (samwho.dev)
237 points by iamwil 7 months ago | 128 comments



```
for (const { children } of document.getElementsByClassName("latency-container")) {
  console.log(`${children[0].innerText.padEnd(35, " ")} = ${children[1].innerText}`);
}
```

L1 cache reference = 1ns

Branch mispredict = 3ns

L2 cache reference = 4ns

Mutex lock/unlock = 17ns

Send 1K bytes over 1 Gbps network = 44ns

Main memory reference = 100ns

Compress 1K bytes with Zippy = 2us

Read 1 MB sequentially from memory = 3us

Read 4K randomly from SSD = 16us

Read 1 MB sequentially from SSD = 49us

Round trip within same datacenter = 500us

Read 1 MB sequentially from disk = 825us

Disk seek = 2ms

Send packet CA->Netherlands->CA = 150ms

Can we discuss the actual material now.


Thanks. Your UI is much better than the one on the site. There are two problems there:

1. The vertical text is difficult to read despite its size, because it's vertical.

2. When we click on it a large part of the text disappears below the bottom margin of the page.

Problem number 1 is not so bad but the combination with 2 kills the UX. The text in the clicked bar should appear somewhere on screen, horizontally.

Edit: if anybody, like me, wonders what Zippy is: it's a C++ compression library from Google. It's now called Snappy [1].

[1] https://en.wikipedia.org/wiki/Snappy_(compression)


1. I didn't expect people to have such a negative reaction to sideways text. It doesn't bother me personally, but it seems some people really can't work with it. I'll likely avoid it in everything else I do going forward.

2. I feel a big part of the problem here is that it's not obvious how to get it back once it's gone. I could certainly try making the text visible after the bar is gone.


I guess the most readable form would be a static logarithmic plot with colored dots/bars and a legend in the corner (or on tap/hover). Everyone interested in these numbers likely knows how to read it.


Point 1, we're used to sideways text because of books on a shelf but here it's compounded by the text almost disappearing after the click. The only way to get it back is clicking multiple times in the empty space above the bar. The only hint to click there is in one of the steps in the text box on the left, which probably nobody reads. Something to click above the bar (an arrow up?) would probably remove the need for the help text. Other hints could remove the need for any help text and free the box to display the content of the clicked bar.


It seems like there's some potential here, but not quite nailed yet.

I'd already seen cost model numbers like these before, but this interactive visualization still seemed to obscure the information as I was taking a first look.

I wonder whether it would be more useful adapted to a visualization/calculator for specific numbers, maybe for multiple operations in an algorithm, and the alternatives for implementing each? (And the click-to-scale is for selecting N for each operation, and maybe somehow constants?)


FWIW I don't mind the sideways text, but the ant's-eye-view histogram is one of the strangest user experiences I've had in data science.


Not to diminish your art or anything, but if you just want to present some numbers, a <table> with two columns is fine. We can infer scale.


The table already exists, it’s linked to from what I made. I wanted to try and remix it a bit :)


I disagree. Nano vs. milli vs. micro is not in the least intuitive compared to seeing how much longer something is.

That’s why we have graphs and charts in the first place.


My intuition was that scrolling would increase the y-axis maximum. (Effectively, scrolling would “zoom out”)

And that scrolling horizontally would pan me through the content.

Browsing on mobile, I should clarify.

But I’ll add that I also got the hang of scrolling back “in” fairly quickly. After I had zoomed out a couple times, then finally stopped to read the instructions.


Mmm.. you could rotate both text and bars, right? Like, horizontal bars.


It also has very poor contrast. I can turn my phone on the side to read the vertical text but the white text is impossible to read on the yellow and orange backgrounds.


On an iPad most of text is obscured and I can’t read some of the bars in landscape or portrait mode. The sideways text is also hard to read.


Another problem is that on low resolution screens (or small browser windows) the boxes on the top left hide the text on the bars behind. I had to zoom out to 50% for it to be readable, which then put other bars behind the boxes.


The original source linked from this post [0] is using models that assume exponential growth of bandwidths over time (see the JavaScript at the bottom of the page): this is fun, but these figures are real things that can be measured, so I think it’s very misleading for the site in this link to present them without explaining they’re basically made up.

The 1Gb network latency figure on this post is complete nonsense (I left another comment about this further down); looking at the source data it’s clear that this is because this isn’t based on a 1Gb network, but rather a “commodity NIC” with this model, and the quoted figure is for a 200Gb network:

    function getNICTransmissionDelay(payloadBytes) {
        // NIC bandwidth doubles every 2 years
        // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
        // TODO: should really be a step function
        // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
        // 125*10^6 = a*b^x
        // b = 2^(1/2)
        // -> a = 125*10^6 / 2^(2003.5)
        var a = 125 * Math.pow(10, 6) / Math.pow(2, shift(2003) * 0.5);
        var b = Math.pow(2, 1.0 / 2);
        var bw = a * Math.pow(b, shift(year));
        // B/s * s/ns = B/ns
        var ns = payloadBytes / (bw / Math.pow(10, 9));
        return ns;
    }

[0] https://colin-scott.github.io/personal_website/research/inte...


It's very surprising to me that a main memory reference takes longer than sending 1K over a gigabit network.


Because they're comparing two different things. The main memory reference is latency, the 1K is a throughput measurement.

In other words they're not saying "if you send only 1K of data it will take this long". They're saying "if you send 1 GB, then the total time divided by 1 million is this much".


The figure quoted on this website is completely wrong: the serialisation delay of 1KiB on a 1Gb link is much higher than that; it's actually closer to 10us.

This is a transcription error from the source data, which as it turns out is based on a rough exponential model rather than real data, but first let’s consider the original claim:

If there’s a buffer on the send side, then assuming the buffer has enough space, the send is fire and forget, and costs a 1KiB memcpy regardless of the link speed.

If there’s no buffer, or the buffer is full, then you will need to wait the entire serialisation delay, which is orders of magnitude higher than 44ns.

One might further make assumptions on the packet size and arrival rate distributions, and compute an expected wait time, but otherwise the default assumption for a figure like this would be to assume the link is saturated, and the sender has to wait the whole serialisation delay.

> They're saying "if you send 1 GB, then the total time divided by 1 million is this much".

This would take ~8s to serialise, neglecting L1 overheads, dividing that by 1MM gives you 8us (my ~10us figure above), which is ~200x higher than 44ns.
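
The arithmetic as a quick sketch (raw serialisation delay only; real links add preamble, inter-frame gaps, and protocol headers on top):

    // Raw serialisation delay: payload bits divided by the link rate.
    function serialisationDelayNs(payloadBytes, linkBitsPerSec) {
        return (payloadBytes * 8) / linkBitsPerSec * 1e9;
    }

    console.log(serialisationDelayNs(1024, 1e9));      // 8192 ns ~= 8.2 us, not 44 ns
    console.log(serialisationDelayNs(1e9, 1e9) / 1e9); // 1 GB at 1 Gb/s: 8 s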

Looking at the source data [0], it says “commodity network”, not 1Gb, so based on the presented data, they must be talking about a 200Gb network, which is increasingly common (although rare outside of very serious data centres), not a 1Gb network like the post claims.

Interestingly the source data quotes an even smaller number of 11ns when first loaded, which jumps back to 44ns if you change the year away from 2020 (the default when it loads) and back again.

That implies 800Gb: there is an 800GbE spec (802.3df), but it’s very recent, and probably still too specialised/niche to be considered “commodity”.
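
Running the same arithmetic backwards gives the link rate each quoted figure implies (a sketch, assuming the same 1024-byte payload):

    // Implied link rate if 1 KiB really serialises in the quoted time;
    // bits per nanosecond is numerically equal to Gb/s.
    function impliedGbps(payloadBytes, delayNs) {
        return (payloadBytes * 8) / delayNs;
    }

    console.log(impliedGbps(1024, 44)); // ~186 Gb/s -> nearest standard rate: 200GbE
    console.log(impliedGbps(1024, 11)); // ~745 Gb/s -> nearest standard rate: 800GbE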

Digging further, we see that the source data is computed from models that assume various bandwidths grow exponentially over time, not from any real measurements, so these data are extremely rough, given these are real figures that can actually be measured:

    function getNICTransmissionDelay(payloadBytes) {
        // NIC bandwidth doubles every 2 years
        // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
        // TODO: should really be a step function
        // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
        // 125*10^6 = a*b^x
        // b = 2^(1/2)
        // -> a = 125*10^6 / 2^(2003.5)
        var a = 125 * Math.pow(10, 6) / Math.pow(2, shift(2003) * 0.5);
        var b = Math.pow(2, 1.0 / 2);
        var bw = a * Math.pow(b, shift(year));
        // B/s * s/ns = B/ns
        var ns = payloadBytes / (bw / Math.pow(10, 9));
        return ns;
    }


[0] https://colin-scott.github.io/personal_website/research/inte...


Yeah it makes no sense given they're saying that a 1 Gbps link is somehow getting faster...??


They’re saying that a “commodity NIC” doubles in bandwidth every 2 years, and extrapolating forward given that 1Gb was (supposedly) standard in 2003; the website in the post transcribed this incorrectly and put 1Gb in the description of the datapoint, but we can see from first principles that the figure is clearly that of a 200Gb link.


I think VME buses were extended using high-speed serial links in order to send data faster than you could using the 32-bit address/data bus.


> Send 1K bytes over 1 Gbps network = 44ns

Doubt.


A few others:

40ms - average human thinks the operation is instant.

15s - user gets frustrated and closes your app or website.


I don't think I'd wait even 15 seconds. Maybe on average across all users because a lot of users have slower connections or devices so they're more patient. But I'd expect to have something in 3 or 4 seconds. Even that I consider slow. At probably 8 or 10 I'm gone.


I say this without hate: it's absolutely fascinating how bad this UX is. Having said this, I am sure that I have committed worse UX crimes in my career; when the curse of knowledge hits you, only your users can see the problems. But luckily samwho has the HN community, which is not shy about criticizing ;-).

I think it's really interesting and instructional to think about why the UX feels so bad. My ideas are:

- The page has one main job: presenting latency numbers to the viewer.

- This job is easy enough. There are many ways to get it done. So people expect the main job to be done at least as well as with these other ways.

- I hypothesize that the page prioritizes other jobs before the main job. It tries to make the relationships between those numbers fun to discover.

- Users are foremost interested in the main job, but here it is done poorly because you don't see all latency numbers in one view (maybe after clicking a few times in the right places, but for such an easy task that is way too much work).

- It's very difficult to grasp the mental model of the UI just by using it. You click somewhere and things happen. Even now that I have used it for a few minutes, I have no idea what it does or is supposed to do. I found it very interesting how much it frustrated me that repeated clicks are not idempotent and made the UI "diverge". It makes you somehow feel lost and worry about breaking things.

- The user must read the help text. But users don't do this. At least I didn't until I was very frustrated. Then this help text changes. And changes again. I don't want to learn a new application only to read a simple list of numbers.

These are my main points, I think. To me, it was very interesting. Thanks for that, samwho, and kudos for sharing this publicly :-)


No hate taken. The art is not the artist, etc. :)

I'm in the middle of writing up a self-reflective post about this and I just wrote the following:

"Ultimately, the way I'm presenting the data is egregious and unnecessary. I can see why people are annoyed about it. The extra visuals and interactions get in the way of what's being shown, they don't enhance it. Tapping around feels fun to me, but it isn't helping people understand. This experiment prioritised form way more than it prioritised function."

We've come to some of the same conclusions, though you in more detail than me. The idea about clicks not being idempotent wasn't something I ever noticed, but now you've said it I can't not.

If you're willing, I'd love to connect with you 1:1 and talk a bit more about this. My contact details are on my homepage.


Great attitude :) I'll try to get in contact but don't be mad if I forget 8-|


I mostly agree, and certainly a list of numbers or maybe a log plot would be better if the goal was communicating the raw data. Certainly the click-about jumpy interface is pretty janky. However there’s one thing I think this does better than a list of numbers would: Most people (me included) have a hard time getting an intuitive feel for things like just how much smaller 1ns is compared to 1ms or truly how much a billion dollars is. SI prefixes or a log scale can give the wrong /feeling/ even when they’re giving the right /information/.

Sometimes, the inconvenience of a linear scale is the point.

Pages that I think use this technique to really good effect:

https://xkcd.com/1732/

https://mkorostoff.github.io/1-pixel-wealth/


Log scale on a static graph is far easier to visualize and understand without a complex, UX-unfriendly interaction that doesn't make sense.


The title is missing "Latency" at the start, which would surface many more results when searching. My go-to is this one [0] because it's plain text and includes "Syscall" and "Context switch".

  Latency numbers every programmer should know

  L1 cache reference ......................... 0.5 ns
  Branch mispredict ............................ 5 ns
  L2 cache reference ........................... 7 ns
  Mutex lock/unlock ........................... 25 ns
  Main memory reference ...................... 100 ns             
  Syscall on Intel 5150 ...................... 105 ns
  Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
  Context switch on Intel 5150 ............. 4,300 ns  =   4 µs
  Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
  SSD random read ........................ 150,000 ns  = 150 µs
  Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
  Round trip within same datacenter ...... 500,000 ns  = 0.5 ms
  Read 1 MB sequentially from SSD* ..... 1,000,000 ns  =   1 ms
  Disk seek ........................... 10,000,000 ns  =  10 ms
  Read 1 MB sequentially from disk .... 20,000,000 ns  =  20 ms
  Send packet CA->Netherlands->CA .... 150,000,000 ns  = 150 ms

  Assuming ~1GB/sec SSD
[0] https://gist.github.com/nelsnelson/3955759


I added the word "latency" into the title of the page. Sorry for the confusion.


I don't get how expressing these numbers in time units is useful.

I've been a developer for embedded systems in the telecom industry for nearly two decades now, and until today I had never met anyone using anything other than "cycles" or "symbols"... Except obviously for the mean RTT US<->EU.


> I've been a developer for embedded systems in the telecom industry for nearly two decades now

On big computers, cycles are squishy (HT, multicore, variable clock frequency, so many clock domains) and not what we're dealing with.

If we're making an architectural choice between local storage and the network, we need to be able to make an apples to apples comparison.

I think it's great this resource is out there, because the tradeoffs have changed. "RAM is the new disk", etc.


Then why not just use qualifiers, from slowest to fastest? You might not know this, but you can develop bare-metal solutions for HPC that are used in several industries, like telecommunications. Calculations based on cycles are totally accurate whatever the number of cores...


> Then why not just use qualifiers, from slowest to fastest?

Because whether something is 5x slower or 5000x slower matters. Is it better to wait for 10 IOs, make 10,000 random memory accesses, or do a network transaction? We can figure out the cost of the memory, memory bandwidth, etc., but we also need to consider latency.
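
For instance, a rough sketch plugging in the figures from the table above, treating each option as a purely serial cost:

    // Back-of-envelope costs for the three options, using the table's numbers.
    const SSD_RANDOM_READ_NS = 16_000; // read 4K randomly from SSD: 16 us
    const MEM_REF_NS = 100;            // main memory reference: 100 ns
    const DC_ROUND_TRIP_NS = 500_000;  // round trip within same datacenter: 500 us

    console.log(10 * SSD_RANDOM_READ_NS); // 10 IOs:          160,000 ns (160 us)
    console.log(10_000 * MEM_REF_NS);     // 10k memory refs: 1,000,000 ns (1 ms)
    console.log(DC_ROUND_TRIP_NS);        // 1 network trip:    500,000 ns (500 us)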

I've done plenty of work counting cycles; but it's a lot harder and less meaningful now. Too many of the things here happen in different clock domains. While it was a weekly way to look at problems for me a couple of decades ago, now I employ it for far less: perhaps once a year.

> Calculations based on cycles are totally accurate whatever the number of cores...

No, they're not, because cores contend for resources. We contend for resources within a core (hyperthreading, L1 cache). We contend for resources within the package (L2+ cache lines and thermal management). And we contend for memory buses, I/O, and networks. These things can sometimes happen in parallel with other work, and sometimes we have to block for them, and often this is nondeterministic. In turn, the cycle counts for doing anything within the larger system are really nondeterministic.

Counting cycles works great to determine execution time on a small embedded system or a 1980s-1990s computer, or for a trivial single-threaded loop running by itself on a 2020s computer. But most of the time now we need to account for how much of some other scarce resource we're using (cache, memory bandwidth, network bandwidth, a lock, power dissipated in the package, etc.), and think about how various kinds of latencies measured in different clock domains compose.


Not to take away from your point, but I'd argue that counting cycles is usually misleading even for small embedded systems now. It's very difficult to build a system where cycles aren't equally squishy these days.


Depends on how small we're looking at.

Things like Cortex-M-- stuff's deterministic. Sure, we might have caches on the high end (M55/88), and contention for resources with DMA, but we can reason about them pretty well.

A few years ago I was generating NTSC overlay video waveforms with SPI from a cortex-M4 while controlling flight dynamics and radio communications on the same processor. RMS Jitter on the important tasks was ~20 nanoseconds-- 3-4 cycles, about a factor of 100x better than the requirement.

But I guess you're right: you could also consider something like a dual-core Cortex-A57 quite small, where all the above complaints are true.


Because it's something very different. I was expecting standalone numbers that would hint to the user something is wonky if they showed up in unexpected places - numbers like 255 or 2147483647.


It gives you a rough understanding of how many you can do in a second.
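
For example (a rough upper bound, ignoring pipelining and parallelism):

    // Upper bound on ops/second for one thread doing nothing but this operation.
    const opsPerSecond = (latencyNs) => 1e9 / latencyNs;

    console.log(opsPerSecond(100)); // main memory reference: ~10 million/s
    console.log(opsPerSecond(2e6)); // disk seek (2 ms):      ~500/s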


I bring criticism: The first few bars on my screen cannot be read, as the text is hidden behind the floating HUD. If I click on the next few bars, to bring those below the box, then the bar becomes too small and the text is cropped, so I cannot read it either.

It is also a bit uncomfortable to read 90° text. It's fun to click the bars and play with the UI, but not to actually read what they say. It's a nice visualization, but it suffers from form over function! I can't comfortably use it to learn about the numbers I should know :(


I appreciate the feedback! I'm trying to get better and comments like this genuinely help.

Are you reading on a landscape tablet? I know the sizes of stuff are wrong on that form factor. Desktop and mobile shouldn't have the first couple of bars obscured.

The sideways text is meant to be a subtle nod to the fact the page scrolls sideways, but I agree it's not as nice to read as it would be were the text the right way around.


>> Desktop and mobile shouldn't have the first couple of bars obscured.

I am on a desktop with a huge monitor in ultra high res. It is pretty bad.

>> The sideways text is meant to be a subtle nod to the fact the page scrolls sideways, but I agree it's not as nice to read as it would be were the text the right way around.

Then the subtle nod is lost on me... why not turn the text when I click, or have hover text, or make the whole page rotated 90 degrees?

Like the original response, it was fun for one second, then I was like: I can't read this stuff, it's painful.


I'm reading on a 1080p desktop. Although accounting for the browser chrome (bookmarks, tabs on the side), my window.inner{Width,Height} comes out as 1583x950


For me (pretty default FullHD desktop screen in landscape) the first bar is not obscured, but the next two are covered by the floating UI.


I’m on iOS and can’t see the bottom of any of the bars. They’re hidden behind the Safari controls at the bottom of the screen.


Add some padding to the bottom and use horizontal text for the "compressed" columns.

It is big but unreadable on a 4K 32-inch screen.



Don’t forget Grace Hopper (1906 – 1992), American computer scientist, mathematician, and United States Navy rear admiral.

> Hopper became known for her nanoseconds visual aid. People (such as generals and admirals) used to ask her why satellite communication took so long. She started handing out pieces of wire that were just under one foot long—11.8 inches (30 cm)—the distance that light travels in one nanosecond. She gave these pieces of wire the metonym "nanoseconds." She was careful to tell her audience that the length of her nanoseconds was actually the maximum distance the signals would travel in a vacuum, and that signals would travel more slowly through the actual wires that were her teaching aids. Later she used the same pieces of wire to illustrate why computers had to be small to be fast. At many of her talks and visits, she handed out "nanoseconds" to everyone in the audience, contrasting them with a coil of wire 984 feet (300 meters) long, representing a microsecond. Later, while giving these lectures while working for DEC, she passed out packets of pepper, calling the individual grains of ground pepper picoseconds.

https://en.wikipedia.org/wiki/Grace_Hopper


Why not just make them 27cm and then you get the true distance signals travel?


Maybe because speed of light is more approachable to a general audience, and maybe because she’s making a point about absolute upper bounds for all possible signals and didn’t want to reference something that could be improved, or because signal speed depends on the medium and maybe she didn’t feel like rat-holing on materials to make a point about speed of light? Light signals travel at light speed through space. 27cm works for electrical signals but not optical signals in fiber or space, nor other signal types. 30cm as a bound always works, and happens to be a nice round number too, more memorable… I see a lot of reasons why not just. :P
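
For reference, the arithmetic behind both lengths (the ~0.9 velocity factor that 27cm implies is roughly right for some coax, but too fast for twisted pair or typical PCB traces):

    const C = 299792458;            // speed of light in vacuum, m/s
    const cmPerNs = C * 1e-9 * 100; // ~29.98 cm: Hopper's "nanosecond" of wire
    console.log(cmPerNs);
    console.log(27 / cmPerNs);      // 27 cm implies a velocity factor of ~0.9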


Great content from Jeff Dean's slides. Since they date from 2009, I wonder which of these numbers he has since changed his mind about.


a well documented rabbit hole


Some of these have always been quite counterintuitive to me, particularly the networking ones. Google Stadia was always an exercise in the edge cases of my expectations about these numbers.

It felt weird that a gaming computer in a datacenter could be "faster" than a computer on my network, but one frame takes ~16ms to render, bandwidth is big enough to stream, network latency might only be another ~frame, and suddenly the image is on my machine within 2 or 3 frames. However, there were unexpectedly slow parts! The controller actually ran over WiFi directly, so that inputs went straight to the server rather than via Bluetooth; compared with Xbox Cloud on a Bluetooth controller, this made a huge difference, but that makes sense because Bluetooth's latency might be 1-2 frames itself. It's counterintuitive to me that the latency from my controller to my computer, less than 1m, might be higher than the latency from my computer, to my router, to my ISP, to Google's DC, and to a server. Similarly, the latency on HDMI from a computer to my TV is in the same ballpark of a few frames because of all the processing my cheap TV does to look good.


Man, I had such high hopes for Stadia. I was an SRE at Google when it was being built and knew some of the traffic folks working on the networking parts of it. Some of the absolute best people. Such a shame.

I’d never have considered adding WiFi to the controller to _reduce_ latency, that’s absolutely wild. Thanks for sharing!


I'm not sure why you'd find it wild. Any gamer with decent tech knowledge never buys Bluetooth wireless devices (mouse, keyboard, headset, etc.) for gaming precisely for this reason. Sites like rtings measure latency for the same reason.


Gamers know about bluetooth latency when specifically compared to wired peripherals, and in that case I think that's intuitive. The counterintuitive part is that WiFi – a much more complex spec, plus all the IP stack, connecting to web services from a small low powered device, etc – is faster than a "simple" bluetooth connection designed for such devices.


I'm on Firefox mobile. I can make neither head nor tail of what this is meant to demonstrate.


The meaning of the number with the + and - is completely escaping me. It looks like a year but goes into the future.


It is indeed a year. The latencies are based on the calculations done by Colin Scott in https://github.com/colin-scott/interactive_latencies and support projecting out into the future. Sorry it's not as obvious as it could be.


The main memory one stays 100ns for every year?


It’s indeed been about 100ns for a long time. Part of this is that memory is larger, though, so there may be more decisions to make to look up a line (and those are made faster). And throughput has improved. Some consumer high-end desktop hardware (think gaming rather than workstation) can have lower-latency RAM.


Slightly mysterious title. I thought it would be about 16, 256, 65536, 16777216, 4294967296, etc.


It's a remix of something from quite a long time ago by the same name. There's another comment in here somewhere that links to the full lineage.


It's missing "Latency" at the beginning of the title, then it's very familiar to those who've seen them listed before.


The 1MB sequential read from disk should be closer to 4ms (250MB/s). Disk streaming rates (on 7200rpm drives) have not improved significantly, based on the published "sustained transfer rate OD" metrics from the three drive manufacturers.
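
A quick check of that figure, taking the ~250MB/s sustained rate as given:

    // Time to stream 1 MB from a 7200rpm disk at a sustained 250 MB/s.
    const seconds = 1e6 / 250e6;      // bytes / (bytes per second)
    console.log(seconds * 1e3, "ms"); // 4 ms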


The time to send over a 1Gbps network looks very wrong. Each bit takes 1ns (by definition), so sending 1K bytes must take at least 8192ns.


Yes, I think that's a transcription error


Holy smokes, this design is terrible and the site is unusable (on mobile at least).


Do you have any specific feedback about why it's unusable? I'd like to get better at this.


First of all, I hope you don't take the feedback here personally. It's great that you tried your design skills in a risky, unconventional presentation form. With all that said, I still think this page is unusable. The colors, fonts are nice though.

Just open your site on mobile and imagine that you don't know the dataset by heart.

Can you glance at the values, can you easily compare different values? When you scroll half a screen to the right, are you completely lost? I know I am. When you select the largest or smallest item, what do you see? What if you then scroll to the other end of the spectrum? Can you read what the smallest item stands for? L1 C something? Can you change the scale so that you can improve what you see? Does scrolling up and down behave intuitively?

All in all, it's just impossible to extract useful information and context using this design. It looks great, you could post it on Behance and it will get positive feedback, but when someone actually wants to use it to discover what the data says, it's a very frustrating user experience.


I honestly don't understand what people are talking about. Looks fine to me.


Are we to believe that in 2030, sending 1K over a gig network will be faster than a CPU-internal missed branch prediction?

Seems highly unlikely for a wide variety of reasons.
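
For what it's worth, you can see how the source's model gets there (a sketch using the quoted formula's assumptions: 1Gb/s in 2003, doubling every 2 years):

    // Extrapolated NIC bandwidth under the doubling-every-2-years model.
    const bwBitsPerSec = (year) => 1e9 * Math.pow(2, (year - 2003) / 2);
    const ns = (1024 * 8) / bwBitsPerSec(2030) * 1e9;
    console.log(ns); // ~0.7 ns, below the 3 ns branch mispredict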


The UX on this needs to be seriously reconsidered; I don't read with my head tilted sideways. Function over form if your target is technical ppl.


Do you think it would look better done vertically and scroll down?


This is great. Lots of feedback and info sharing from everyone's collective knowledge base. Lots of feedback from the greybeards around in the retro era, e.g. 1980-1990, and those who are on the bleeding edge inventing the future. Good job. Keep going.

1) If a technology didn’t exist. I’d make the bar black / grey. The pedants will hound you to death

2) A means to see the trend of a given feature on log plot if necessary. Eg Memory access 1980-2030 is interesting.

3) An info icon on the bar (i in a circle) to get details about the measurement. Disk seek, for example, is pegged at 1ms because that's where mechanical disks hit their limits. If so, is it track-to-track or full-stroke seek?


All really great feedback, thank you <3


reading sideways sucks


Very cool little thing. The interface initially isn't very intuitive but once you figure out the correlation between the above/below bar clicks it is kind of cool.


Agreed, on my second view I noticed the tutorial box, totally missed it the first time.

It's neat. OP, have you considered adding a toggle to switch to logarithmic scale? You could retain the time travel feature, but also show all values simultaneously.


Thank you!

I hadn’t, but it would be very easy to add.


I didn't spend quite as long on this as I would my "proper" posts, so I completely agree the interface could be better. To help me learn and get better at this, I'd love to hear what you specifically had problems with trying to use it <3


Every click changes the text below the year selection and explains how to use the UI.


I made this! Happy to answer any questions.

Can’t stress enough how grateful I am to Colin Scott for doing the work and open sourcing the calculations for the latencies over time.


I think it’s a nice visualization, and I really appreciate your open welcoming attitude in the comments here to all manner of critique.

How much work was this and what are the trickiest parts? I see a lot of discussion of different platforms & browsers, which has always been surprisingly tough.

FWIW, I think you’re close to a nice UX and it wouldn’t take much to eliminate most complaints, though I’m speculating, and I know (from experience developing UIs) that everyone’s got different expectations and opinions about their UIs so it can be hard to find the maxima…

I don’t mind the sideways text much, but it would be cool if the bars were narrower so they all fit on-screen at once. (I’m on iPad btw.) It would be nice if the text didn’t move with the bar, but always stayed fixed and visible while the bar changed sizes.

It would be nice if the credit dialog could be minimized/toggled. Or maybe positioned as a title banner, and the data didn’t overlap? I don’t know why but overlapping things give me anxiety.

At first I didn’t understand how the bigger/smaller controls on a data bar worked, I thought it was a toggle, so I thought I had a hard time getting the L1 number to come back up, and didn’t realize I needed to click further up near the title overlay. A toggle might be more intuitive? (i.e. click above bar to make it scale to top of screen, click on bar to set scale so that the bar to the right is exactly top of screen…) Might be neat if the bars were draggable - are they already?


If I tap almost anything it will select that text and bring up a search bar.

You should disable pointer events / text selection etc.

I'm on Android (Samsung Galaxy S23+)


If I open this on iPhone and tap a bar, they eventually shrink down until they look like this https://ibb.co/B2PJXD7 and stay that way. What’s the point?


If you tap above a bar you can make them larger again. The bars are scaled to always stay relative to each other, the point is that you can explore back and forth to see the relative time it takes to complete various operations for a computer.


Why should I want to tap the bars in the first place?

Why can I tap the largest bar so much that it can become as small as others in the end?

Why can't bars just auto-resize when I scroll the page horizontally?

You're the designer here, so it's up to you to shape user experience to deliver your point. Instead you allow users to ruin it for themselves and don't even try to control the narrative of the page. Sorry, don't know what you wanted to do here but I don't think it's working.


The design doesn’t work on safari on iPad, the navigation blocks the content and it’s not possible to read half the columns.

A simple ASCII blog post would have been better


How is this something "every programmer should know"? I mean, 95% of us work with software that is not performance critical, where security/auditing/monitoring/stability/maintainability/etc. are more important than raw performance.


I keep reading that fragmentation doesn’t matter for SSDs. But the latency difference between a sequential read and a random read is absolutely huge? When are we going to accept that fragmentation really does matter?


Interesting demo. Also, scaling logarithmically would look cool, as most bars disappear most of the time. (Maybe you could add horizontal bars to the background and scale the distances between them as well.)


DC round trip at a constant 500 mics is a bit surprising to me. I guess it’s just a hard number for Scott to get historical values for, and depends a lot on how networking in a datacentre is set up.


That's what I guessed, too. It almost certainly hasn't stayed constant over the last few decades.


Every programmer should know how to represent "the numbers". Without reasonable representation your numbers are just useless crap.


Is there any explanation for why programmers need to know these numbers? I don't know any of these and I, semi-successfully, write some code (admittedly using nodejs).


Site is broken in mobile safari. Can't scroll at all.


You should be able to scroll horizontally, but not vertically. Is working for me in mobile Safari.


> Is working for me in mobile Safari.

Apologies for the directness, but your definition of working doesn’t align with others then. If you mean “working as intended” where the intention is a hard to read and clumsy ux, then yep it’s working. As a proposal to how to make it less clumsy and easier to read, maybe consider having the text not inside the bars?


I did try a version with the text outside of the bars but struggled to make it work in a way I was happy with.

Do you have any other feedback? What is it specifically about the UX that you find clumsy? I'm still quite new to this sort of thing and do want to improve.


I think that the UX could work really well for mobile with one significant tweak.

A typical design pattern on mobile is that if information is obscured for some reason, you click on it to expand it. Consider a drop-down text box on a blog: there's a little arrow and cut-off text with an ellipsis (...). When you click on the arrow, the cut-off text expands to fill the screen and allows you to read the rest of it. In contrast to what other users have said, this doesn't need to be idempotent. Tapping again hides the box.

To apply this design pattern to your site, simply make it so that tapping anywhere on a bar brings the UI to a known state, eg with the bar in the center with the text at a readable scale. This would work either horizontally or vertically.

Benefits:

- Your idea of the UI rescaling is preserved, and you can preserve the animations between states. I think the "rescaling bars" idea is fun.

- Cause and effect is preserved. If I want to read the text on the bar, I should not have to click on some arbitrary point above the current location of the bar.

- Further, the user does not have to hunt for the correct spot to click on a bar to make the text visible. Instead, clicking on a bar immediately and always makes all the information on that bar fully visible, by design.

I think this tweak would significantly improve the experience of interacting with the website.




Nice idea, but I found the UI awkward.

And I'm probably not the only one who took a few minutes to realize the number referred to the year.


It was a small experiment with some data I found interesting, I didn't expect it to get around as much as it has. I'm trying to get better at aspects of the web I'm not very good at.

It has given me lots to think about and learn from.


Good on you for putting it out there. Feedback helps you grow, even if it is sometimes painful. Don't be discouraged!


It’s a nice visualization. Maybe move the label to above the bar if the bar becomes too short to fit it inside of it?



Agree.

New UI is just a cool animation, but it is unreadable on a 4K 32-inch screen.

The old UI is intuitive, doesn't require 10 pages of instructions, and you can actually see numbers and labels all the time.


If you're willing, I'd love a screenshot of what you see on a 4K 32-inch screen. You can find my contact details on my homepage.



What aspects of this are unreadable? Is it all a bit too big? Does there need to be a way to identify the columns after they’re off screen that doesn’t involve tapping on them again?


My screen is big enough to show labels of those numbers.

It is impossible to see times and text labels at the same time.

I am not going to remember what that 100ms was when I am changing years.


Sorry, want to dig in a little bit here to help me improve for next time. I really appreciate your replies.

When you say it’s impossible to see times and text labels at the same time, are you aware you can move each bar to any height on the screen? They move to the height of the cursor when you click. It’s extremely unobvious, I know, I’m sorry.


If I move the bar to the top of the screen so I can see its text label, then I cannot see how many ms the bar represents, because the bar is at 100% height.

I have to make the bar smaller (hiding the text label) to see the time, but then I cannot see the text label.

I want to compare something in the middle (memory read) and the last bar (CA->NL) at the same time.

https://imgur.com/a/7lBCQob


Ahh yes, that makes sense. Thanks for helping me understand that, definitely reduces the utility of the visualisation.


It always frustrated me a little that it's hard to use on mobile, so I had a go at making something a bit different. Did my best to give credit to Colin. :)


I really wanted to understand the numbers but the content does not help


The design is aesthetic and all, but it does not convey the information well.

Instead, just show an isometric graph with dates moving right-down, and each type, right-up, if that makes sense. Then, a single static 3d-ish image would show the different latencies and how they've changed over the years. No need for all the interactive gadgetry, that adds nothing, and in fact just obscures the big picture.


If you click through to the Colin Scott version I link to you more or less get what you’re looking for.


I mean something like this:

https://camo.githubusercontent.com/6e7f6707a2532cca1a5bf4ffb...

Unless using log scales, you might need some way to adjust the scales so that everything fits - maybe click on a 'column', or even an individual bar, to normalize it, and everything else scales relative, somewhat like your site.


Repeatedly clicking the same bar keeps shrinking it... weird


The top 3 speeds have not changed in 15 years.

Love the site.


Thank you


Where are these 1Gbps networks in 1980?


The bar represents where I would put my 1Gbps network in 1980, if I had one.


Some of those numbers are very weird.


Wait, a packet CA -> NL -> CA was 150 ms in 1980?



