
Fair point that terms like this can change meanings.

But this version absolutely does not fit the original, which had nothing to do with amounts of traffic.

It was a problem with kernel APIs: if N threads sleep waiting for an event on a socket (i.e. a thread pool), there was no way to wake up only one of them when a single message arrived. They would all wake, all check whether there was work to do, then all but one would go back to sleep. This is a behavioral flaw even with next to no traffic (though it may not have a material effect).

The problem has been solved with better kernel APIs.
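
One of those newer APIs (not necessarily the one meant here) is Linux's EPOLLEXCLUSIVE flag for epoll, added in kernel 4.5. A rough sketch, with worker_loop and listen_fd as illustrative names and error handling omitted: each thread in the pool watches the same shared listening socket through its own epoll instance, and the flag tells the kernel to wake at most one of those waiters per event.

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Run by each thread in the pool. listen_fd is the shared listening
     * socket; each thread has its own epoll instance watching it. */
    void worker_loop(int listen_fd)
    {
        int epfd = epoll_create1(0);
        struct epoll_event ev = {
            .events = EPOLLIN | EPOLLEXCLUSIVE,  /* wake at most one waiter */
            .data.fd = listen_fd,
        };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            struct epoll_event out;
            if (epoll_wait(epfd, &out, 1, -1) == 1) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) {
                    /* ... handle the connection ... */
                    close(conn);
                }
            }
        }
    }

Without the flag, every thread's epoll_wait() would return for that one connection and all but one would find nothing to accept, which is exactly the herd described above.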

I initially heard of "thundering herd" in reference to the pattern where a server handling N requests for the same data may only need to run the data handling logic once to fulfill all of those requests.

For example, if I have a server with an endpoint that needs to make a request to a different service for some data, I don't want to make that request 10 times when my server receives 10 requests while the first request is being handled; all 10 of those incoming requests can be fulfilled by 1 outgoing request to the secondary service.

In that sense, it's very similar to what you described, but it's still likely one process handling the requests.
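
As a rough sketch of that coalescing (assuming C with pthreads, a single fixed key, and a stub fetch_from_backend() standing in for the real outgoing request): the first caller performs the fetch while concurrent callers block on a condition variable and share its result.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-in for the real outgoing request to the secondary service. */
    static const char *fetch_from_backend(void)
    {
        /* ... expensive network call ... */
        return "payload";
    }

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
    static bool        in_flight = false;
    static const char *result    = NULL;

    /* Coalesce concurrent requests: one caller makes the outgoing
     * request, everyone else waits for (and shares) its result. */
    const char *get_data(void)
    {
        pthread_mutex_lock(&lock);
        while (in_flight)                /* someone is already fetching */
            pthread_cond_wait(&done, &lock);
        if (result) {                    /* fetched earlier or while we waited */
            const char *r = result;
            pthread_mutex_unlock(&lock);
            return r;
        }
        in_flight = true;                /* we are the first caller */
        pthread_mutex_unlock(&lock);

        const char *r = fetch_from_backend();  /* the single outgoing request */

        pthread_mutex_lock(&lock);
        result = r;
        in_flight = false;
        pthread_cond_broadcast(&done);   /* release the other waiters */
        pthread_mutex_unlock(&lock);
        return r;
    }

A real implementation would key this per request and expire the cached result; the sketch keeps one result forever for brevity. This is roughly what single-flight / request-coalescing libraries do per cache key.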

I'll admit that the author seemed to use "thundering herd" in reference to your server just suddenly receiving a lot of traffic, which is also different from the usage I was familiar with.
