
> It's not a trivial problem to collect bandwidth and buffer size estimates and provide them to every server without delaying the connection, but it would be fun to build such a system.

Tons of fun. Sadly, I don't have access to enough clients to do it anymore.

But here's a napkin architecture. Collect per-connection stats and report them on connection close (you can get a lot from tcp_info, or the QUIC equivalent). That goes into some big map/reduce data pipeline.
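The collection side could look something like this rough sketch, assuming Linux; report_stats() is just a placeholder for whatever ships the sample into the pipeline:

  #include <netinet/tcp.h>
  #include <stdio.h>
  #include <sys/socket.h>

  static void report_stats(const struct sockaddr_storage *peer,
                           const struct tcp_info *ti)
  {
      /* placeholder: serialize peer (for per-network aggregation)
         and the stats, then hand off to the pipeline */
      printf("cwnd=%u mss=%u rtt_us=%u retrans=%u pmtu=%u\n",
             ti->tcpi_snd_cwnd, ti->tcpi_snd_mss, ti->tcpi_rtt,
             ti->tcpi_total_retrans, ti->tcpi_pmtu);
  }

  /* Call right before close(fd) on a TCP connection. */
  void sample_connection(int fd)
  {
      struct tcp_info ti;
      socklen_t ti_len = sizeof(ti);
      struct sockaddr_storage peer;
      socklen_t peer_len = sizeof(peer);

      if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &ti_len) != 0)
          return;
      if (getpeername(fd, (struct sockaddr *)&peer, &peer_len) != 0)
          return;
      report_stats(&peer, &ti);
  }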

The pipeline ends up with a recommended initial segment limit and an MSS suggestion [1]; you can probably fit both of those into 8 bits. For IPv4, you could probably just put them into a 16 MB lookup table: shift off the last octet of the address and that's your index into the table. For IPv6 it's trickier since the address space is too big, but there are techniques.
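A minimal sketch of the IPv4 table; the 4/4 bit split and the names are made up, the point is just that one byte per /24 covers the whole space in 16 MB:

  #include <stdint.h>

  /* one byte per /24, filled offline from the pipeline output */
  static uint8_t hint_table[1u << 24];

  struct conn_hints {
      uint8_t init_segs;   /* recommended initial segment limit */
      uint8_t mss_option;  /* which MSS choice to use, see [1] */
  };

  struct conn_hints lookup_hints(uint32_t ipv4_host_order)
  {
      uint8_t packed = hint_table[ipv4_host_order >> 8];  /* drop last octet */
      struct conn_hints h = {
          .init_segs  = packed >> 4,
          .mss_option = packed & 0x0f,
      };
      return h;
  }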

At Google scale, they could probably regenerate this data hourly, but weekly would be plenty.

[1] This is its own rant (and hopefully it's outdated), but the MSS on a SYN+ACK should really start at the lower of what you can accept and what the client said it can accept. Instead, the consensus has been to always send what you can accept. But path MTU discovery doesn't always work, so a lot of services just send a reduced MSS. If you have the infrastructure, it's actually pretty easy to tell whether clients can send you full-MTU packets or not... with per-network data, you have four reasonable options: reflect the sender's MSS, reflect it minus 8 (PPPoE), reflect it minus 20 (IPIP tunnel), or reflect it minus 28 (IPIP tunnel plus PPPoE). If you have no data for a network, pick one at random.
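A hypothetical version of that selection; choose_synack_mss() and net_overhead_index() are made-up names, and the per-network data would come out of the same pipeline as above:

  #include <stdint.h>
  #include <stdlib.h>

  /* reflect sender minus: nothing, pppoe, ipip, ipip+pppoe */
  static const uint16_t overhead[] = { 0, 8, 20, 28 };

  /* stand-in for the per-network data; -1 means no data for this network */
  static int net_overhead_index(uint32_t client_ip)
  {
      (void)client_ip;
      return -1;
  }

  uint16_t choose_synack_mss(uint32_t client_ip,
                             uint16_t client_mss, uint16_t local_mss)
  {
      int idx = net_overhead_index(client_ip);
      if (idx < 0)
          idx = rand() % 4;  /* no data for this network: random select */
      uint16_t reflected = client_mss - overhead[idx];
      return reflected < local_mss ? reflected : local_mss;
  }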


