
I agree, although I have never had a performance bottleneck related to std::list that required a move to std::vector. Do you know at what point std::set becomes more optimal (in some dimension) than std::vector?



Bjarne Stroustrup showed in a presentation a while back that std::list never becomes more optimal than std::vector, regardless of size.

Edit: Sorry, I misread your comment. Instead of std::set I would use a sorted vector without a second thought for anything up to a thousand elements or so. Beyond that it's probably worth profiling, though a vector will likely still do well quite a way past that point, depending on your use case.
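
For concreteness, here is a minimal sketch of the sorted-vector idiom described above (the function names are illustrative, not from any particular library): keep the elements sorted, use std::lower_bound for lookup, and shift on insert. Only operator< is assumed, matching lower_bound's comparison model.

    #include <algorithm>
    #include <vector>

    // Sorted vector used as a set: binary-search lookup, shift-on-insert.
    template <typename T>
    bool contains(const std::vector<T>& v, const T& x) {
        auto it = std::lower_bound(v.begin(), v.end(), x);
        return it != v.end() && !(x < *it);   // *it is equivalent to x
    }

    template <typename T>
    void insert_unique(std::vector<T>& v, const T& x) {
        auto it = std::lower_bound(v.begin(), v.end(), x);
        if (it == v.end() || x < *it)         // skip duplicates, set-like
            v.insert(it, x);                  // O(n) shift, but contiguous memory
    }

The trade-off is the usual one: lookups touch contiguous, cache-friendly memory, while each insert pays an O(n) shift, which is why profiling matters once the container grows large or insert-heavy.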


> Bjarne Stroustrup showed in a presentation a while back that std::list never becomes more optimal than std::vector, regardless of size.

Until memory fragments and you can't allocate a contiguous block for a big vector.


64bit address spaces are pretty hard to fragment to that degree, but it could still be a concern on smaller embedded systems.


If memory serves me right, even on 64bit, the standard malloc won't allow you to grab a chunk larger than about 2G. You can still have as many chunks as you like, but you can't have a single chunk larger than that. Of course, you can always use a different memory manager.


Depends on your platform, libc version, compiler, etc. Anything recent should default to just passing through to mmap for allocations that are a small multiple of page size or larger, and on 64bit systems you can happily mmap huge regions.
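
As an illustration of the "happily mmap huge regions" point, here is a small sketch assuming Linux on a 64-bit system. Mapping a large contiguous range only reserves virtual address space; with PROT_NONE nothing is committed yet, and an allocator would mprotect() and touch ranges as it actually needs them.

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main() {
        const std::size_t len = 64ull << 30;   // 64 GiB of address space
        void* p = mmap(nullptr, len, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            std::perror("mmap");
            return 1;
        }
        std::printf("reserved %zu GiB at %p\n", len >> 30, p);
        munmap(p, len);
        return 0;
    }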


It's the physical memory that fragments, not the virtual address space. So it's about the amount of physical memory available, not address sizes.


He said big vectors. You're talking about sub-page fragmentation.

Big vectors inherently avoid sub-page fragmentation because they're allocated directly via mmap. The only thing that matters is having unreserved address space, and 64bit gives you a lot.

What you say is true for a worst-case load of millions of vectors that are each larger than half of the mmap threshold size (typically roughly page size or a small multiple of it). So this might just be a semantic disagreement, but I don't think of millions of 3KB vectors as 'big' compared with a smaller number of MB- or GB-sized vectors.
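
A small sketch of the threshold behaviour being discussed, assuming glibc: mallopt(M_MMAP_THRESHOLD, ...) pins the cutoff above which malloc serves a request from its own mmap (whose pages go straight back to the OS on free) rather than carving it out of the heap arena. glibc normally adapts this threshold at runtime, so it is fixed here only to make the example predictable; the sizes are arbitrary.

    #include <malloc.h>   // glibc-specific: mallopt, M_MMAP_THRESHOLD
    #include <cstdio>
    #include <cstdlib>

    int main() {
        mallopt(M_MMAP_THRESHOLD, 128 * 1024);

        void* small = std::malloc(4 * 1024);         // below threshold: heap arena
        void* big   = std::malloc(8 * 1024 * 1024);  // above threshold: dedicated mmap
        std::printf("small=%p big=%p\n", small, big);

        std::free(big);    // unmapped, pages returned to the OS
        std::free(small);  // stays in the arena for reuse
        return 0;
    }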



