Ok but then aren't you going to get memory fragmentation? If you allocate and then deallocate a billion 1kB objects, you can't then coalesce them to allocate larger units, because the generation numbers stored before each 1kB object can't be given back to user code.
In the basic generational references approach, that was indeed a drawback, and the reason it couldn't release memory back to the OS. We planned to use something like MESH [0] to reduce the fragmentation.
We created two newer approaches since then, which let any memory be reused for any purpose:
* Random generational references, where it's fine if generations overlap with other data (rough sketch below).
* Side-table generations, which are slower but keep the generations in a side table, away from the object's memory. It can be seen in old 0.1 versions as the "resilient-v2" mode, and I plan on resurrecting it for unrelated reasons.
The former will be the default, and the latter we'll be adding back in as an option. Hope that helps!
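For anyone curious, here's a rough sketch of the random-generation idea in plain C (not Vale's actual implementation; all names here are made up). A random 64-bit generation sits just before the object, references remember the generation they expect, and a dereference compares the two. Because the generation is random, it doesn't matter if that slot later holds unrelated data; the chance of those bytes matching a stale reference is vanishingly small.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint64_t generation;   /* random tag written at allocation time */
    /* user payload follows the header */
} Header;

typedef struct {
    Header  *header;       /* where the allocation's header lives */
    uint64_t expected_gen; /* generation captured when the ref was made */
} GenRef;

static uint64_t random_gen(void) {
    /* placeholder RNG for the sketch; a real runtime would use something stronger */
    return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
}

static GenRef gen_alloc(size_t payload_size) {
    Header *h = malloc(sizeof(Header) + payload_size);
    h->generation = random_gen();
    return (GenRef){ h, h->generation };
}

static void *gen_deref(GenRef ref) {
    /* the safety check: whatever bytes now sit where the generation was
     * almost certainly won't match a stale reference's expected value */
    if (ref.header->generation != ref.expected_gen) {
        fprintf(stderr, "use-after-free detected\n");
        abort();
    }
    return (void *)(ref.header + 1); /* payload starts right after the header */
}

static void gen_free(GenRef ref) {
    ref.header->generation = random_gen(); /* retire this generation */
    free(ref.header);                      /* memory can now be reused for anything */
}

int main(void) {
    GenRef r = gen_alloc(1024);
    gen_deref(r);     /* passes: generations match */
    gen_free(r);
    /* gen_deref(r);     would abort: the generation no longer matches */
    return 0;
}
```

The side-table variant does the same check, but looks the generation up in a separate map keyed by the object's address instead of reading it from memory adjacent to the object, which is why it costs more per dereference.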