I had a CS professor who used to hold up a length of string about that long and talk about how that's how far a bit of data can travel at the speed of light during a clock cycle, or something like that. Honestly, I don't remember the point he was trying to make.
I'm sure that's what it was. I probably should have remembered that, but it was such a small part of one of his lectures it didn't resonate as deeply as it should have.
That's a different thing: roughly the distance a signal travels in a nanosecond. This is about the 21 cm RF wave that glows from the sky - https://en.wikipedia.org/wiki/Hydrogen_line. One of the (hyper)finest names in nerddom: "hyperfine transition".
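If anyone wants the two numbers side by side, here's a quick back-of-the-envelope in Python (the hydrogen line frequency below is the standard published value, rounded; close enough for this):

```python
# Two different "roughly 20-30 cm" numbers that are easy to mix up.
c = 299_792_458  # speed of light in vacuum, m/s

# 1. Distance light travels in one nanosecond (the famous "nanosecond wire"):
light_ns = c * 1e-9
print(f"light-nanosecond: {light_ns * 100:.1f} cm")  # ~30.0 cm

# 2. Wavelength of the hydrogen hyperfine transition (the 21 cm line):
f_hydrogen = 1_420_405_751.77  # Hz
wavelength = c / f_hydrogen
print(f"hydrogen line: {wavelength * 100:.1f} cm")   # ~21.1 cm
```

Similar length scale, completely unrelated physics.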
I suppose it's interesting to think about. At today's clock rates, the distance between the CPU and RAM actually adds a small but still significant delay.
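Rough numbers, just to illustrate (the 10 cm trace length, the ~0.5c propagation speed in a PCB trace, and the 5 GHz clock are all assumed for the sketch, not measured from any real board):

```python
c = 299_792_458            # speed of light in vacuum, m/s
trace = 0.10               # assumed CPU-to-DIMM trace length: 10 cm
v = 0.5 * c                # assumed signal speed in a PCB trace (~half of c)

round_trip_ns = 2 * trace / v * 1e9   # out to RAM and back
cycle_ns = 1e9 / 5e9                  # one cycle at an assumed 5 GHz clock

print(f"round trip:  {round_trip_ns:.2f} ns")   # ~1.33 ns
print(f"clock cycle: {cycle_ns:.2f} ns")        # 0.20 ns
print(f"cycles spent in flight: {round_trip_ns / cycle_ns:.0f}")  # ~7
```

So even before the DRAM itself does any work, just the wire eats several clock cycles each way.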
It's ultimately what killed putting the memory controller on the northbridge of a motherboard. Having the CPU talk to a separate chip just to reach the RAM simply added too much latency to the whole process.
And it may be what makes CAMM2 the next standard. The physical layout of the chips on the module means the traces can be shorter, leading to lower latency and better signal integrity.
I really hope CAMM2 takes off. It'd be a rare standard that could be used for both laptops and desktops. Having upgradable memory in a laptop again would be great, and using the same standard as a desktop would make it easy to find modules as time goes on.
The point was how fast computers are, and why you have to make them smaller to make them faster. Think about the bus between the CPU and GPU; it's not much shorter than that string. Information cannot travel faster than the speed of light, so there is a hard constraint on how quickly the GPU can respond to commands. The same is true for RAM, and even within the CPU itself, signals take time to propagate across the die. The total length of circuitry a signal has to cross in a single clock cycle can't be longer than about 21 cm if that's how far light travels in one cycle.
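To put numbers on that last bit: 21 cm happens to be one light-cycle at about 1.4 GHz, and at today's higher clocks the budget is even smaller.

```python
c = 299_792_458  # speed of light in vacuum, m/s

for ghz in (1.4, 3.0, 5.0):
    per_cycle_cm = c / (ghz * 1e9) * 100
    print(f"{ghz} GHz: light travels {per_cycle_cm:.1f} cm per clock cycle")
# 1.4 GHz: ~21.4 cm | 3.0 GHz: ~10.0 cm | 5.0 GHz: ~6.0 cm
```

And that's the best case in vacuum; real signals in copper traces and silicon move noticeably slower than c, so the practical budget is tighter still.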