Be careful. The JVM will never unmap the underlying regions it uses to support direct ByteBuffers, so if you end up creating a lot of them (such as for reading/writing files) or even just resizing them (which creates an entirely new one), you can run out of virtual memory.
This is a myth. Or more precisely, an mmapped region that is no longer in use will be unmapped whenever the JVM feels like it (which can, of course, be never). Try running this loop:
// Files.map here is Guava's com.google.common.io.Files.map,
// which mmaps the file into a fresh MappedByteBuffer each iteration.
long size = 0;
while (size >= 0) {
    size += Files.map(veryBigFile).capacity();
}
You will see that VSIZE grows very large, but decreases whenever the JVM runs the finalizers for the mmapped regions. Adding System.gc() to the loop keeps VSIZE consistently low on my machine, proving that at least my JVM (build 1.6.0_29-b11-402-11D50b on a Mac) will unmap the underlying regions (though a JVM is allowed to ignore System.gc()).
That's not exactly what Netty does. It does allocate large direct buffers and slice off pieces, but the pieces are not pooled, because they are never returned to Netty; for the same reason it doesn't need to implement malloc in Java. All the slices reference the "parent" buffer, and once they have all been collected, the parent can be collected as well. When there is no more room in a buffer, another one is allocated. (I read that code just last week because I wanted to know what was going on, precisely because there was no way to return a sliced buffer to a pool.)
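The scheme described above can be sketched roughly like this. This is a hypothetical SlabAllocator, not Netty's actual code; the slab size and names are made up:

```java
import java.nio.ByteBuffer;

// Sketch of the slab-and-slice scheme: slices are handed out and never
// returned; each slice keeps the parent slab reachable, so the slab's
// native memory is released only after every slice is garbage-collected.
class SlabAllocator {
    private static final int SLAB_SIZE = 1 << 20; // 1 MiB "parent" buffer
    private ByteBuffer slab = ByteBuffer.allocateDirect(SLAB_SIZE);

    // Hand out a slice of the current slab; when there is no more room,
    // allocate a fresh slab and let the old one age out with its slices.
    synchronized ByteBuffer allocate(int size) {
        if (slab.remaining() < size) {
            slab = ByteBuffer.allocateDirect(Math.max(SLAB_SIZE, size));
        }
        ByteBuffer window = slab.duplicate();
        window.limit(slab.position() + size);
        ByteBuffer slice = window.slice(); // capacity == size, shares slab memory
        slab.position(slab.position() + size);
        return slice;
    }
}
```

There is deliberately no free(): dropping all references to a slice is the only way memory comes back, which is exactly why no malloc-style bookkeeping is needed.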
It is very easy (and efficient) to implement java.io input and output streams backed by a buffer (as long as the buffer doesn't have to grow).
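For example, a minimal sketch of such wrappers (hypothetical class names, error handling omitted):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;

// InputStream that reads from a ByteBuffer's remaining bytes.
class ByteBufferInputStream extends InputStream {
    private final ByteBuffer buf;
    ByteBufferInputStream(ByteBuffer buf) { this.buf = buf; }

    @Override public int read() {
        return buf.hasRemaining() ? (buf.get() & 0xFF) : -1;
    }

    @Override public int read(byte[] b, int off, int len) {
        if (!buf.hasRemaining()) return -1;
        int n = Math.min(len, buf.remaining());
        buf.get(b, off, n);
        return n;
    }
}

// OutputStream that writes into a fixed-capacity ByteBuffer;
// overflows throw BufferOverflowException since the buffer can't grow.
class ByteBufferOutputStream extends OutputStream {
    private final ByteBuffer buf;
    ByteBufferOutputStream(ByteBuffer buf) { this.buf = buf; }

    @Override public void write(int b) { buf.put((byte) b); }

    @Override public void write(byte[] b, int off, int len) {
        buf.put(b, off, len);
    }
}
```

The bulk read/write overrides matter for efficiency: they turn into single ByteBuffer bulk copies instead of one virtual call per byte.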