I agree, although I have never had a performance bottleneck related to std::list that required a move to std::vector. Do you know at what point std::set becomes more optimal (in some dimension) than std::vector?
Bjarne Stroustrup showed in a presentation a while back that std::list never outperforms std::vector, regardless of size.
Edit: Sorry, I read your comment incorrectly. Instead of std::set I would use a sorted vector without a second thought for anything up to a thousand elements or so. After that it's probably worth profiling, though a vector will likely still hold up quite a way past that point, depending on your use case.
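For illustration, a minimal sketch of the sorted-vector-instead-of-set idea (the helper names here are just made up for the example):

    #include <algorithm>
    #include <vector>

    // A sorted std::vector<int> standing in for std::set<int>: lookups are
    // O(log n) via binary search, inserts shift elements but stay
    // cache-friendly at small to moderate sizes.
    bool contains(const std::vector<int>& v, int x) {
        return std::binary_search(v.begin(), v.end(), x);
    }

    void insert_unique(std::vector<int>& v, int x) {
        auto it = std::lower_bound(v.begin(), v.end(), x);
        if (it == v.end() || *it != x)   // preserve set semantics: no duplicates
            v.insert(it, x);
    }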
If memory serves me right, even on 64bit, the standard malloc won't allow you to grab a chunk larger than about 2G. You can still have as many chunks as you like, but you can't have a single chunk larger than that. Of course, you can always use a different memory manager.
Depends on your platform, libc version, compiler, etc. Anything recent should default to just passing through to mmap for allocations that are a small multiple of page size or larger, and on 64bit systems you can happily mmap huge regions.
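A quick sanity check of the "huge single chunk" case (a hedged sketch: it assumes a 64-bit Linux box with a recent glibc and default overcommit, so the mostly-untouched pages cost almost nothing):

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Ask malloc for 8 GiB in a single chunk, well past the old ~2 GB
        // limit. On a 64-bit system with a recent allocator this is
        // serviced by one big mmap of anonymous memory.
        std::size_t size = 8ULL * 1024 * 1024 * 1024;
        char* p = static_cast<char*>(std::malloc(size));
        if (!p) {
            std::puts("allocation failed");
            return 1;
        }
        p[0] = 1;          // touch the first and last byte to show the
        p[size - 1] = 1;   // whole range is really usable
        std::printf("got %zu bytes at %p\n", size, static_cast<void*>(p));
        std::free(p);
        return 0;
    }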
He said big vectors. You're talking about sub-page fragmentation.
Big vectors inherently avoid sub-page fragmentation because they're allocated directly via mmap. The only thing that matters is having unreserved address space, and 64bit gives you a lot.
What you say is true for a worst-case load of millions of vectors that are larger than half the mmap threshold size (typically roughly page size or a small multiple of it). So this might just be a semantic disagreement, but I don't think of millions of 3 KB vectors as 'big', compared with a smaller number of MB- or GB-sized vectors.
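For what it's worth, on glibc specifically the threshold in question is the tunable M_MMAP_THRESHOLD (128 KiB by default, adjusting upward dynamically). A glibc-only sketch of pinning it, just to make the mechanism concrete (the chosen sizes are arbitrary):

    #include <malloc.h>    // glibc-specific: mallopt, M_MMAP_THRESHOLD
    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Pin the mmap threshold at 64 KiB (this also disables glibc's
        // dynamic adjustment of it). Requests at or above the threshold
        // bypass the heap and get their own mmap'd region.
        mallopt(M_MMAP_THRESHOLD, 64 * 1024);

        void* small = std::malloc(3 * 1024);    // stays in the brk/arena heap
        void* big   = std::malloc(256 * 1024);  // serviced by a dedicated mmap
        std::printf("small=%p big=%p\n", small, big);

        std::free(big);     // munmap'd and returned to the OS immediately
        std::free(small);   // just goes back on the heap's free lists
        return 0;
    }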