If memory serves me right, even on 64-bit, the standard malloc won't let you grab a single chunk larger than about 2 GB. You can still have as many chunks as you like, but you can't have one chunk larger than that. Of course, you can always use a different memory manager.
That depends on your platform, libc version, compiler, etc. Anything recent should default to passing straight through to mmap for allocations above some threshold (a small multiple of the page size or larger), and on 64-bit systems you can happily mmap huge regions.
He said big vectors. You're talking about sub-page fragmentation.
Big vectors inherently avoid sub-page fragmentation because they're allocated directly via mmap. The only thing that matters is having unreserved address space, and 64-bit gives you plenty.
What you say is true for a worst-case load of millions of vectors that are each larger than half the mmap threshold (typically around page size or a small multiple of it). So this might just be a semantic disagreement, but I don't think of millions of 3 KB vectors as "big" versus a smaller number of MB- or GB-sized vectors.