It certainly improves the practical outcome; replicating layers between nodes in a rack/datacenter is way faster than pulling them down 20x from one external upstream.
Sure, but latency is not the same as size. You're talking about a latency optimization via local caching, and IPFS is a heavy solution compared to a simple cache; it solves a totally different problem.
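Roughly what I mean by "simple cache", sketched with the stock registry:2 image in pull-through mode (host name and port are placeholders):

    # one pull-through cache per rack, using the registry image's proxy mode
    docker run -d --restart=always --name registry-mirror \
      -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # then point each node's daemon at it, e.g. in /etc/docker/daemon.json:
    # { "registry-mirrors": ["http://cache-host:5000"] }

Each layer crosses the external link once and every other node in the rack pulls it from the mirror.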
We see Alpine used in production less often than most people think. People usually feel more comfortable writing a Dockerfile based on Debian or Ubuntu because that's what they're familiar with. In most cases it's worth the one-time cost of pulling the bigger base image, but it all depends on the use case.
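For concreteness, the two routes look something like this (the packages are just an example; the Alpine base is roughly an order of magnitude smaller than the Debian slim one):

    # Dockerfile, Debian route: familiar apt tooling, bigger base image
    FROM debian:bookworm-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends curl ca-certificates \
        && rm -rf /var/lib/apt/lists/*

    # Dockerfile, Alpine route: much smaller base, but musl libc and apk instead
    FROM alpine:3.19
    RUN apk add --no-cache curl ca-certificates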
Another common reason for large images is the Nvidia ecosystem. Most of the images Nvidia publishes get close to 1GB. However, these should only need to be pulled to each device once, until you need to upgrade the base image.
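If it helps, pinning the base by digest keeps that behaviour predictable: the big layer set lands in each device's cache once and only changes when you deliberately bump the pin. The tag below is illustrative and the digest is a placeholder:

    # Dockerfile: pinning by digest means the ~1GB base only changes when this line does
    FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04@sha256:<digest>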
That seems like a really big image? Alpine-based images in my experience tend to be under 100MB.
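Easy to sanity-check locally if you're curious (tags are just examples; swap in whatever Nvidia base is being used):

    docker pull alpine:3.19 && docker pull ubuntu:24.04
    # prints the unpacked on-disk size of each image in bytes
    docker image inspect -f '{{index .RepoTags 0}}: {{.Size}} bytes' alpine:3.19 ubuntu:24.04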
But I would be fascinated to see p2p used for distributing image layers; it feels like that shouldn't be too hard?
EDIT: it's been done, apparently :) https://blog.bonner.is/docker-registry-for-ipfs/
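Makes sense that it's doable: layers are already content-addressed blobs listed by digest in the manifest, so the publish side is roughly the sketch below (not the linked project, just the idea, run against a plain unauthenticated v2 registry with placeholder names):

    REG=localhost:5000; IMG=myimage; TAG=latest   # placeholders
    curl -s "http://$REG/v2/$IMG/manifests/$TAG" \
         -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
      | jq -r '.layers[].digest' \
      | while read -r digest; do
          # every layer blob gets a CID that any peer can fetch and verify
          curl -s "http://$REG/v2/$IMG/blobs/$digest" -o layer.tar.gz
          echo "$digest -> $(ipfs add -Q layer.tar.gz)"
        done

The harder part, which is presumably what a registry-on-IPFS has to solve, is the reverse direction: serving those blobs back to a normal docker pull.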