
std::deque will never beat your hand-rolled ring buffer. It's a general-purpose container, so it spends time managing memory every time you walk off the end of a chunk. If it's implemented right*, it will hold onto a chunk that has fallen off and reuse it, allocating only twice. If it's implemented wrong*, it will allocate after every `chunksize` pushes and free after every `chunksize` pops (which your allocator might handle gracefully). Either way it still has to shuffle those chunks around, and that overhead is pure waste when a plain ring is a perfect fit for your application: you consume the value you're popping every time you push.

* for your purpose! Therein lies the challenge of writing standard libraries... choices must be made.
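
A minimal sketch of the kind of hand-rolled ring buffer being described, assuming a fixed capacity known up front and the push-consumes-the-popped-value pattern (all names here are illustrative):

    #include <cstddef>
    #include <vector>

    // Fixed-capacity ring: one allocation at construction, then every push
    // overwrites the oldest slot and hands back the value it evicts.
    template <typename T>
    class RingBuffer {
    public:
        explicit RingBuffer(std::size_t capacity) : buf_(capacity), head_(0) {}

        // Overwrite the oldest slot with `value` and return what was there.
        T push(const T& value) {
            T evicted = buf_[head_];
            buf_[head_] = value;
            head_ = (head_ + 1) % buf_.size();
            return evicted;
        }

    private:
        std::vector<T> buf_;
        std::size_t head_;
    };

No chunk management and no allocation after construction, which is the whole point of the comparison above.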



> std::deque will never beat your hand-rolled ring buffer.

std::deque may never beat a hand-rolled, fixed-size ring buffer that just maintains a couple of cursors, but it will beat a hand-rolled linked-list implementation.

The typical implementation will generally exhibit better cache locality than a linked list, and will outperform it for random access as required by the specification (amortized constant, vs linear).

(Per cppreference, the typical implementation is a sequence of individually allocated fixed-size arrays, although I don't think that layout is formally required by the spec.)
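
To make the random-access point concrete, here is a rough sketch (not the real std::deque, just the shape of a chunked sequence) of why indexing is a couple of pointer hops rather than a list walk; ChunkSize and the member names are made up:

    #include <cstddef>
    #include <vector>

    // Chunked sequence: elements live in fixed-length arrays, and an index
    // splits into (chunk number, offset within chunk), so operator[] is O(1).
    template <typename T, std::size_t ChunkSize = 512>
    class ChunkedSequence {
    public:
        void push_back(const T& value) {
            if (size_ % ChunkSize == 0)
                chunks_.emplace_back();          // start a fresh chunk
            chunks_.back().push_back(value);
            ++size_;
        }

        T& operator[](std::size_t i) {
            return chunks_[i / ChunkSize][i % ChunkSize];
        }

        std::size_t size() const { return size_; }

    private:
        std::vector<std::vector<T>> chunks_;     // the "map" of fixed-length arrays
        std::size_t size_ = 0;
    };

(This sketch ignores push_front and the front offset a real deque has to carry, but the indexing arithmetic is the part that matters for cache locality versus a linked list.)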


After some thought, I figured out a way to implement a chunked array of the sort specified by the C++ standard library which does not require reallocation as long as the queue length stays below the chunk size. It involves storing a cyclic offset for each chunk.

So what I wrote above isn't quite correct about needing to fiddle with memory regardless. But there is still overhead: even with well-optimized code for the single-chunk case, the container has to check for that condition.
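
A sketch of that single-chunk fast path, assuming one fixed-length chunk and a per-chunk cyclic start offset (all names hypothetical, and the spill-over to a second chunk is omitted):

    #include <cstddef>
    #include <optional>
    #include <vector>

    // While the queue fits in one chunk, push/pop just move offsets within it:
    // no allocation, no freeing, no chunk shuffling.
    template <typename T, std::size_t ChunkSize = 512>
    class CyclicChunkQueue {
    public:
        CyclicChunkQueue() : slots_(ChunkSize) {}

        bool push(const T& value) {
            if (count_ == ChunkSize) return false;   // would need a second chunk; not shown
            slots_[(start_ + count_) % ChunkSize] = value;
            ++count_;
            return true;
        }

        std::optional<T> pop() {
            if (count_ == 0) return std::nullopt;
            T value = slots_[start_];
            start_ = (start_ + 1) % ChunkSize;       // advance the cyclic offset
            --count_;
            return value;
        }

    private:
        std::vector<T> slots_;    // one fixed-length chunk
        std::size_t start_ = 0;   // cyclic offset of the logical front
        std::size_t count_ = 0;
    };

The residual overhead mentioned above is the `count_ == ChunkSize` style check: even on the fast path, the container has to decide whether it can stay in the single-chunk regime.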


Thanks for this!



