Ideally you just preallocate what you're going to need.

There are circumstances where you genuinely need a growable buffer, but often you can determine either the exact size or a reasonable maximum.
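A minimal sketch of that idea in C, assuming a hypothetical record type and a count that is known before the buffer is filled: size the allocation once from the known count instead of growing it as you go.

    #include <stdlib.h>

    /* Hypothetical record type, purely for illustration. */
    struct record { int id; double value; };

    struct record *alloc_records(size_t count)
    {
        /* One upfront, zero-initialized allocation sized from the known
           count -- no growable buffer, no realloc churn. calloc also
           checks the count * size multiplication for overflow, which a
           hand-rolled malloc(count * sizeof(struct record)) would not. */
        return calloc(count, sizeof(struct record));
    }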




Yep, I think this is the correct answer for the general case. With structures close to or larger than a page, and on 64-bit systems, you can go nuts with overallocation, because the virtual address space is much larger than the physical memory.
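One way to lean on that property (a sketch, assuming Linux and its mmap flags; the 1 GiB reservation here is an arbitrary example): reserve a generous virtual range up front and let the kernel fault in physical pages only as the buffer is actually touched.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Arbitrary example cap: reserve 1 GiB of address space. */
    #define RESERVE_SIZE ((size_t)1 << 30)

    void *reserve_big_buffer(void)
    {
        /* MAP_NORESERVE asks the kernel not to reserve swap for the whole
           range; physical pages are only allocated on first touch. */
        void *p = mmap(NULL, RESERVE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }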


It's quite possible to go overboard even on a 64-bit system.

For example, I've seen people allocating 4GB buffers for everything, just because that made it impossible to have buffer overflows as long as they indexed with 32-bit integers. But on x86-64 you can only fit 65k such buffers in the address space, and it's not that hard to exhaust that.
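Spelling out where that figure comes from (assuming the full 48-bit virtual address space were usable):

    2^48 bytes of address space / 2^32 bytes per buffer = 2^16 = 65,536 buffers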


I guess my definition of going nuts was a bit more conservative. Both x86-64 and ARM's AArch64 currently only use 48 bits for virtual addressing.

Still, even if you design for computers with 256GB (2^38 bytes) of physical memory, the virtual address space is 2^9 times larger (assuming addresses with the MSB set are reserved for the kernel's half of the address space). Contrast that with 32-bit systems, where physical memory is close to or larger than the virtual address space: high-end smartphones sold in the last few years have 2GB of RAM but only around 3GB of virtual address space.
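The 2^9 factor, written out under those assumptions (half of a 48-bit address space left for user mappings):

    2^47 bytes of user address space / 2^38 bytes of RAM = 2^9 = 512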



