While the preamble allows the devices to sync their clocks, the gap allows them to reset.
The assumption is that there will be some drift between the clocks over the course of a transmission. A period of electrical silence makes it clearer when the last packet ends and when the next preamble begins.
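For the curious, the preamble is just a fixed alternating bit pattern that gives the receiver something to lock onto. A minimal sketch in Python (the 0x55/0xD5 byte values are the classic 802.3 framing; the payload placeholder is illustrative):

```python
# Classic Ethernet framing: seven 0x55 preamble bytes, which come out
# as 10101010... on the wire (Ethernet sends the LSB first), then the
# start-frame delimiter 0xD5, whose final bits mark where the real
# frame begins.
PREAMBLE = bytes([0x55] * 7)
SFD = bytes([0xD5])

on_wire = PREAMBLE + SFD + b"<destination MAC, source MAC, ...>"
print(on_wire[:8].hex(" "))  # 55 55 55 55 55 55 55 d5
```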
I think of it like shouting in a canyon. Shout "Hello" and it's intelligible, but shout a whole sentence and it's hard to make out among the echoes and wind and birds. With some time between words for the noise to settle down, the signal-to-noise ratio improves and everyone can resync with each word.
It’s not a big deal, and the gaps get smaller as line speeds increase. 10BASE-T has big fat 9.6-microsecond gaps, while 400G Ethernet has gaps that seem impossibly small.
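The arithmetic is straightforward, since the nominal gap has stayed at 96 bit times across generations; a quick back-of-the-envelope sketch (rates only, ignoring per-standard details like deficit idle counting):

```python
# The inter-packet gap is nominally 96 bit times regardless of speed,
# so its wall-clock duration shrinks in proportion to the line rate.
IPG_BIT_TIMES = 96

for name, rate_bps in [("10BASE-T", 10e6),
                       ("1000BASE-T", 1e9),
                       ("400G", 400e9)]:
    gap_ns = IPG_BIT_TIMES / rate_bps * 1e9
    print(f"{name:10s} IPG = {gap_ns:10.2f} ns")

# 10BASE-T   IPG =    9600.00 ns   (9.6 microseconds)
# 1000BASE-T IPG =      96.00 ns
# 400G       IPG =       0.24 ns
```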
I suspect it has (had) more to do with the fact that Ethernet used to be a multiple-access protocol and you were giving someone else a chance to talk. In general, having read this, it's not clear how much of it is legacy and how much is modern. Certainly there doesn't need to be a rest on a dedicated channel, and most other protocols leave the clock synced between peers (e.g., with 4B/5B idle tokens).
Yeah the article skips past the multiple access (hub) era for sure, and I haven’t thought about it in many years myself (!).
I don’t know for sure how directly the IPG was related to multiple access. Early Ethernet would wait for silence, à la carrier-sense multiple access, but it also used a collision detection and backoff mechanism. If someone else was talking when you tried to transmit, both parties would detect the collision and each would wait a semi-random period of time before trying again, repeating with incrementally larger delays until the frame went through without a collision.
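That “semi-random, incrementally larger” delay is truncated binary exponential backoff. A toy sketch using the classic 10 Mb/s parameters (the 51.2 µs slot time, the cap at 10 doublings, and the 16-attempt limit are from the original CSMA/CD spec; the loop at the end is just a demo):

```python
import random

SLOT_TIME_US = 51.2  # one slot = 512 bit times at 10 Mb/s
BACKOFF_CAP = 10     # the window stops doubling after 10 collisions
MAX_ATTEMPTS = 16    # after 16 collisions the frame is dropped

def backoff_slots(collision_count: int) -> int:
    """After the nth collision, wait a uniformly random number of
    slot times in [0, 2**min(n, 10) - 1] before retransmitting."""
    n = min(collision_count, BACKOFF_CAP)
    return random.randint(0, 2**n - 1)

# How the expected wait grows as collisions pile up:
for n in (1, 2, 5, 10, 15):
    mean_us = (2**min(n, BACKOFF_CAP) - 1) / 2 * SLOT_TIME_US
    print(f"after collision {n:2d}: mean backoff = {mean_us:8.1f} us")
```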
When there were only hubs, or when everything was on one coaxial cable, collisions happened all the time. The more nodes on a segment, the more collisions would impede traffic. You’d design a network with gateways and routers and expensive little store-and-forward switches at what we’d now call the core, to partition everything into smallish segments and try to keep the collision domains small.
Cheap switching at what we’d now call the access level fixed all this by making all links effectively point to point.
Ethernet standards from 10G onward don’t even bother with multiple access (point to point only, meaning switched), but they still retain the inter-packet gap. In some places it’s referred to as a “guard interval.” So I do think it’s mostly to provide opportunities for resyncing.
Much of this applies in the RF domain as well. Before MIMO, Wi-Fi was directly analogous to an Ethernet hub: CSMA, one collision domain, incremental backoff. Some access points still let you specify the guard interval manually. MIMO and things like beamforming help reduce the collision domains by breaking the RF into cells and allowing something pretty close to a point-to-point link between the node and the AP. RF is its own dark art, but in terms of signaling, the problems and their solutions are much the same.
So you’re correct that the guard interval or IPG would create a bit of silence for someone else to jump into. Everyone would probably have to resolve the collision in that case, but Ethernet accommodates that scenario as well.
I wonder if they left it in because they just wanted to be really, really sure there wouldn't be any getting stuck in bad states like some I2C devices can.
It's still used to allow for timing discrepancies between the physical layer and higher layers. PHYs will insert and delete symbols in the IPG when the FIFOs between layers have too few or too many entries. This allows systems with ever-so-slightly different clocks to talk to each other without any packet loss.
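A toy model of that elastic-buffer mechanism, in case it helps: here the write side runs 200 ppm fast, so the FIFO slowly fills during frames, and the read side re-centers it by deleting IDLE symbols during each gap (the target depth, frame size, and symbol names are made up for illustration):

```python
from collections import deque

PPM = 200e-6          # write clock is 200 ppm faster than read clock
TARGET = 8            # desired FIFO depth
FRAME, GAP = 988, 12  # symbols per frame, idle symbols per gap

fifo = deque(["I"] * TARGET)
credit = 0.0

for cycle in range(200_000):
    in_gap = cycle % (FRAME + GAP) >= FRAME
    credit += 1 + PPM  # the writer is slightly fast...
    while credit >= 1:
        fifo.append("I" if in_gap else "D")  # ...so it sometimes writes twice
        credit -= 1
    fifo.popleft()     # the reader consumes exactly one symbol per cycle

    # Elastic adjustment: only trailing IDLEs in the gap may be dropped,
    # so no data symbol is ever lost.
    while in_gap and len(fifo) > TARGET and fifo[-1] == "I":
        fifo.pop()

print("final FIFO depth:", len(fifo))  # hovers at TARGET despite the skew
```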
In the Gemini spacecraft this was almost true. The RAM they used would overheat if accessed too frequently, so the assembly coders had to be sure to add no-ops or find other things to do while the RAM cooled down.
This one left me laughing hard.