You do need to opt in when compiling the kernel, but on Linux it doesn't take anything particularly special on the program's side to enable it. The rule is that a non-fixed mmap() will only ever yield 48-bit addresses until the program passes a hint address above the 48-bit range, after which it can yield 57-bit addresses at will. (This rule has lots of curious consequences for the location of the vDSO when 5-level paging is enabled and an ELF program uses big addresses for its segments.)
Any idea what the use case is for such large addresses? Is it RDMA or something else? Even with RDMA I find it hard to believe that companies have > 256TiB of RAM installed behind a single switch.
I recall hearing one use case where you have a whole lot of threads, and most of them will only use a little memory, but a few will need a whole lot of memory. So you assign each thread a very large segment of the address space upfront, so that they never have to coordinate with other threads at runtime. At 1 GiB of space allocated to each thread, it takes only 262k threads to use up the whole 256 TiB address space.
Good question. I don't fully understand it either, since as of today, and to my knowledge, top-of-the-line Intel Xeons support up to 4 TB per socket. That means an 8-socket system tops out at 32 TB of physical RAM ... which is not even close to the addressable virtual memory on even 48-bit systems (256 TB).