> Here’s an example. I have a program that does a lot of memory allocation and reallocation.
It's likely that if memory allocation overhead is part of your bottleneck, using malloc() is actually a bad idea. Other management strategies, such as arenas or rolling buffers, will likely perform better. They're also trivial to implement ad hoc.
Of course, you might arrive at the conclusion that the complexity overhead of custom allocation strategies[1] might not be worth it, but in this case, it means speed is not really an issue.
[1]: as well as, maybe, the necessary redesign of the application to fit those allocation strategies more elegantly.
That's possible. Maybe such an OS "unfairly" stacks custom options against its malloc implementation.
But I wouldn't be so sure to claim that such a pool, implemented in assembler on an OS that leaves everything to you, wouldn't outperform the OS allocator you refer to. After all, there is fundamentally less information to manage.