They don't seem to be quite zero-cost when applied to the whole program, though, because they require changes to the allocator to ensure that generations are never overwritten by user data.
If you store the generations inline with the program data for max speed(tm), you need to ensure that, e.g., after two 2 KB chunks are freed, the allocator doesn't hand that memory out as a single 4 KB chunk, because that would trample over a generation sitting in the middle.
If you do keep the generations inline and rely on a statistical approach, you have to be very careful never to generate "common" values like 0 as a generation, because then a collision becomes extremely likely.
It's a hard problem, and I'm quite curious how all the edge cases are handled.
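The core mechanism is easy to sketch at the arena level, where the allocator question goes away: each slot keeps a generation counter that is bumped on free, and a handle is only valid while its stored generation still matches. This is a hypothetical minimal sketch, not Vale's actual implementation (Vale does the check on raw heap memory, which is exactly where the allocator complications above come from):

```rust
// Minimal sketch of generational references (hypothetical, not Vale's
// actual implementation): each slot stores a generation that is bumped
// on free, so stale handles are detected instead of causing UB.

#[derive(Clone, Copy, PartialEq, Debug)]
struct Handle {
    index: usize,
    generation: u64,
}

struct Arena<T> {
    slots: Vec<(u64, Option<T>)>, // (generation, value)
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> Handle {
        // Reuse a free slot if possible; its generation was already
        // bumped when it was freed, so old handles can't alias it.
        for (i, slot) in self.slots.iter_mut().enumerate() {
            if slot.1.is_none() {
                slot.1 = Some(value);
                return Handle { index: i, generation: slot.0 };
            }
        }
        self.slots.push((0, Some(value)));
        Handle { index: self.slots.len() - 1, generation: 0 }
    }

    fn free(&mut self, h: Handle) {
        let slot = &mut self.slots[h.index];
        if slot.0 == h.generation {
            slot.1 = None;
            slot.0 += 1; // invalidate every outstanding handle to this slot
        }
    }

    fn get(&self, h: Handle) -> Option<&T> {
        let slot = &self.slots[h.index];
        if slot.0 == h.generation { slot.1.as_ref() } else { None }
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc(42);
    assert_eq!(arena.get(a), Some(&42));

    arena.free(a);
    assert_eq!(arena.get(a), None); // use-after-free is caught, not UB

    let b = arena.alloc(7); // same slot, new generation
    assert_eq!(arena.get(a), None); // stale handle still rejected
    assert_eq!(arena.get(b), Some(&7));
}
```

Inside an arena the slot layout never changes, which is exactly why the inline-generation problem above doesn't arise here; the hard part is getting the same guarantee from a general-purpose heap.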
Maybe Vale's "regions" are per-type (essentially arrays)? That way a specific memory location would only ever hold that same type (and therefore the same size in memory) until the whole region is destroyed.
iOS is getting a 'typed allocator' which seems to work similarly:
Yeah, a typed allocator would be my guess, but those aren't zero-cost either. They increase memory usage: if your program allocates an array of 100 ints, deletes it, and then allocates an array of 100 floats, that memory isn't reused unless you allocate more ints on the heap.
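That memory cost can be made concrete with a toy per-type pool (an assumed design for illustration, not Apple's actual typed allocator): freed slots go back on that type's own free list, so memory released from the int pool can never be handed to the float pool.

```rust
// Toy per-type pool (assumed design, not Apple's actual typed
// allocator): each pool only ever holds one type, so freed int
// slots cannot be reused for floats.

struct TypedPool<T> {
    storage: Vec<Option<T>>, // backing memory, only ever holds T
    free_list: Vec<usize>,   // indices of freed slots, reusable for T only
}

impl<T> TypedPool<T> {
    fn new() -> Self {
        TypedPool { storage: Vec::new(), free_list: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> usize {
        if let Some(i) = self.free_list.pop() {
            self.storage[i] = Some(value);
            i
        } else {
            self.storage.push(Some(value));
            self.storage.len() - 1
        }
    }

    fn free(&mut self, i: usize) {
        self.storage[i] = None;
        self.free_list.push(i);
    }

    fn capacity(&self) -> usize {
        self.storage.len()
    }
}

fn main() {
    let mut ints: TypedPool<i32> = TypedPool::new();
    let mut floats: TypedPool<f32> = TypedPool::new();

    // Allocate 100 ints, then free them all.
    let handles: Vec<usize> = (0..100).map(|i| ints.alloc(i)).collect();
    for h in handles {
        ints.free(h);
    }

    // Allocating 100 floats cannot reuse the int pool's memory...
    for i in 0..100 {
        floats.alloc(i as f32);
    }

    // ...so the program now holds backing slots for 200 values, not 100.
    assert_eq!(ints.capacity(), 100);
    assert_eq!(floats.capacity(), 100);
}
```

The upside is the one the thread is circling around: because a slot is only ever reinterpreted as the same type, an inline generation at a fixed offset in that slot stays an inline generation for the lifetime of the pool.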