The title is inaccurate. It's a memory allocator specifically designed for real-time systems. This is an important and difficult problem, but that in no way makes it an allocator "to rule them all." In particular, I'm thinking about multithreaded allocation, which I think will become increasingly important.
I think the title is fine. He's not suggesting that this new malloc is the one "to rule them all". It sounds like he hasn't even tried it yet. Instead, he's wishing that there was such a malloc:
> That’s the crazy thing about malloc implementations. They all claim to be awesome in every way. So the burning question is: is this better than what I’m using today? I don’t know, but I’d sure like to!
I like the article, but I'm not sure that there needs to be a single winner. Different programs have different needs. Right now, I'd love to find a malloc that can work within shared mmap segments, sort of as described here: http://blog.directededge.com/2009/02/27/on-building-a-stupid...
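To make it concrete, the kind of thing I'm after looks roughly like this (just a toy sketch I'm writing off the cuff, not the design from that post; the shared_arena_* names are made up). Because different processes can map the segment at different addresses, you hand out offsets rather than pointers:

    /* Toy bump allocator living inside a POSIX shared-memory mapping.
     * shared_arena_* names are made up for illustration; link with -lrt
     * on older glibc. Real code needs locking around the bump pointer. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    typedef struct {
        size_t capacity;        /* bytes available after the header */
        size_t used;            /* current bump offset into data[] */
        unsigned char data[];
    } shared_arena;

    /* Create-or-open a named shared segment and treat it as an arena. */
    static shared_arena *shared_arena_open(const char *name, size_t capacity)
    {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) return NULL;
        size_t total = sizeof(shared_arena) + capacity;
        if (ftruncate(fd, (off_t)total) != 0) { close(fd); return NULL; }
        void *p = mmap(NULL, total, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        if (p == MAP_FAILED) return NULL;
        shared_arena *a = p;
        a->capacity = capacity;     /* fresh segments are zero-filled, so used == 0 */
        return a;
    }

    /* Returns an offset into a->data, or (size_t)-1 when the arena is full. */
    static size_t shared_arena_alloc(shared_arena *a, size_t size)
    {
        size = (size + 15) & ~(size_t)15;       /* keep 16-byte alignment */
        if (a->used + size > a->capacity) return (size_t)-1;
        size_t off = a->used;
        a->used += size;
        return off;                             /* caller uses a->data + off */
    }

A real version would also need freeing, locking, and growth, which is exactly the part I haven't found a good ready-made answer for.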
nkurz is right about the intention of the title, but if TLSF is only intended for real-time systems, then why does it claim to be general-purpose, without any caveats of the sort you mention? Like every memory allocator I've encountered, it never says "if you're doing X, you're probably better off using a different allocator." The closest it comes to a caveat is:
> Although TLSF works rather well in many scenarios, it stands out in applications with hard/soft real-time application which uses explicit memory allocation with high flexibility requirements due to a high variability of the data size or adaptability to new situations.
Or paraphrased: "TLSF is great, but is especially great when..."
It was designed for real-time systems, and evaluated in the context of real-time systems. (Check out the academic papers.) "General purpose" just means it implements the malloc/free interface, and can be used in place of the standard allocator on your system.
I especially like the part near the end of the post where it says, "It could be that different implementations excel in different scenarios" as if that wasn't completely obvious for anyone who has thought for more than a few minutes about a malloc implementation.
Article author here -- apologies for making you dumber. I would love to hear any data you have about when I should choose one malloc() implementation over another; the theory that malloc implementations have trade-offs is basic enough, but I've never come across a memory allocator that says "here are situations where you shouldn't use this allocator, and should use allocator Y instead."
And since the malloc() that ships with any given libc is going to be used by 100% of applications that do not explicitly override malloc (and I have never come across an application that does), understanding which is the best all-around allocator seems like quite the worthwhile exercise.
I also don't necessarily buy the jemalloc paper's assertion that allocators cannot be benchmarked in isolation. It's true that you can't measure the effects that locality will have on the rest of the application, but you surely can measure locality of allocations.
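To make that concrete, even a crude metric tells you something. Here's a throwaway sketch (mine, not anything from the jemalloc paper): allocate a stream of same-sized blocks and look at the average address gap between consecutive ones, then run it under different allocators (e.g. swapped in via LD_PRELOAD) and compare:

    /* Crude allocation-locality probe: mean address gap between
     * consecutive 64-byte allocations. Smaller generally means the
     * allocator is packing the stream more tightly. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void)
    {
        enum { N = 100000, SZ = 64 };
        static void *ptrs[N];

        for (int i = 0; i < N; i++)
            ptrs[i] = malloc(SZ);

        uintptr_t total_gap = 0;
        for (int i = 1; i < N; i++) {
            uintptr_t a = (uintptr_t)ptrs[i - 1];
            uintptr_t b = (uintptr_t)ptrs[i];
            total_gap += (a > b) ? a - b : b - a;
        }
        printf("mean gap between consecutive %d-byte blocks: %.1f bytes\n",
               SZ, (double)total_gap / (N - 1));

        for (int i = 0; i < N; i++)
            free(ptrs[i]);
        return 0;
    }

It ignores interleaved frees and the consumer's access pattern, which is the jemalloc paper's point, but it's still a measurement you can make in isolation.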
General-purpose malloc/free cannot "rule" in all situations: the interface is too limited to carry any particular understanding of an application's resource-management constraints. It's obvious that no malloc/free package can outperform an arena scheme in the cases where an arena makes sense.
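For anyone who hasn't run into the arena pattern, the whole trick is roughly this (illustrative names, not a real library): every allocation is a pointer bump, and the entire group is released with one free(), which per-object malloc/free bookkeeping can't compete with when the lifetimes really do end together.

    /* Minimal arena sketch: bump-allocate, free everything at once. */
    #include <stdlib.h>
    #include <stddef.h>

    typedef struct {
        char *base, *cur, *end;
    } arena;

    static int arena_init(arena *a, size_t capacity)
    {
        a->base = malloc(capacity);
        if (!a->base) return -1;
        a->cur = a->base;
        a->end = a->base + capacity;
        return 0;
    }

    static void *arena_alloc(arena *a, size_t size)
    {
        size = (size + 15) & ~(size_t)15;          /* keep 16-byte alignment */
        if ((size_t)(a->end - a->cur) < size) return NULL;
        void *p = a->cur;
        a->cur += size;
        return p;
    }

    /* Releases every allocation made from the arena in one shot. */
    static void arena_destroy(arena *a)
    {
        free(a->base);
        a->base = a->cur = a->end = NULL;
    }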
Has anyone here had the opportunity to test nedmalloc in real life? What sort of performance benefits did you see? Any problems you ran into?
I'm currently designing embedded systems in C, and am looking to squeeze as much performance as possible out of constrained resources...
I'm wondering if this'll compile for AVR or not...