Hacker News

64-bit address spaces are pretty hard to fragment to that degree, but it can still be a concern on smaller embedded systems.



If memory serves me right, even on 64bit, the standard malloc won't allow you to grab a chunk larger than about 2G. You can still have as many chunks as you like, but you can't have a single chunk larger than that. Of course, you can always use a different memory manager.


Depends on your platform, libc version, compiler, etc. Anything recent should default to just passing through to mmap for allocations that are a small multiple of page size or larger, and on 64bit systems you can happily mmap huge regions.


It's the physical memory that fragments, not the virtual address space. So it's about the amount of physical memory available, not address sizes.


He said big vectors. You're talking about sub-page fragmentation.

Big vectors inherently avoid sub-page fragmentation because they're allocated directly via mmap. The only thing that matters is having unreserved address space, and 64bit gives you a lot.

What you say is true for a worst-case load of millions of vectors that are larger than half of the mmap threshold but still below it (128 KiB by default in glibc). So this might just be a semantic disagreement, but I don't think of millions of ~100 KB vectors as 'big' vs a smaller number of MB or GB vectors.



