Not on Windows, where it's optional on a per-request basis. It's common on Unix because of fork [1], but AFAIU only Linux and FreeBSD overcommit by default for all anonymous memory allocations (i.e. the memory backing malloc).
[1] Notably, Solaris does not overcommit for fork.
I was not aware of any mechanism that would cause Windows to allow overcommit.
The main allocation APIs allow for a "reservation" type of allocation, which reserves some of the virtual address space in the process without actually allocating it. But those reservations need to be converted to real (committed) allocations before use, and doing that adds to the process's commit charge.
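For concreteness, this is the VirtualAlloc reserve-then-commit pattern (a minimal sketch; the sizes and protection flags here are just examples):

    #include <windows.h>

    int main(void) {
        SIZE_T size = (SIZE_T)1 << 40;  /* reserve 1 TiB of address space */
        /* MEM_RESERVE consumes virtual address space only; it does not
           add to the process's commit charge. */
        char *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
        if (!base) return 1;
        /* Before a page can be touched it must be committed -- this is
           the step that grows the commit charge, and the step that can
           fail when commit is exhausted. */
        if (!VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE)) return 1;
        base[0] = 1;  /* now safe to use */
        return 0;
    }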
I never found any API for actually making memory available to a process without increasing that process's commit charge. That would obviously be necessary for real overcommit, as to my knowledge it is supposed to be an invariant of Windows that sumOfAllProcessesCommitCharge < physicalMemory + pageFileSize.
If an API exists for this, I must have missed it.
I suppose a process could simulate overcommit by catching the Access Violation, verifying that the violation was a read/write of a reserved page allocated by some special "virtual overcommit" allocator, requesting that the page be committed, and then resuming execution. Needless to say, such user-mode page fault handling will be significantly slower than the kernel doing it.
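A rough sketch of what that could look like, using a vectored exception handler (untested; g_base/g_size and the allocator around them are hypothetical):

    #include <windows.h>

    static char  *g_base;  /* region reserved by our hypothetical */
    static SIZE_T g_size;  /* "virtual overcommit" allocator       */

    static LONG WINAPI CommitOnFault(PEXCEPTION_POINTERS info) {
        PEXCEPTION_RECORD rec = info->ExceptionRecord;
        if (rec->ExceptionCode != EXCEPTION_ACCESS_VIOLATION)
            return EXCEPTION_CONTINUE_SEARCH;
        char *addr = (char *)rec->ExceptionInformation[1]; /* faulting address */
        if (addr < g_base || addr >= g_base + g_size)
            return EXCEPTION_CONTINUE_SEARCH;  /* not one of our pages */
        /* Commit the faulting page; this is where the commit charge grows. */
        if (!VirtualAlloc(addr, 1, MEM_COMMIT, PAGE_READWRITE))
            return EXCEPTION_CONTINUE_SEARCH;  /* out of commit: real fault */
        return EXCEPTION_CONTINUE_EXECUTION;   /* retry the faulting access */
    }

    int main(void) {
        g_size = (SIZE_T)1 << 33;  /* "overcommit" 8 GiB */
        g_base = VirtualAlloc(NULL, g_size, MEM_RESERVE, PAGE_NOACCESS);
        if (!g_base) return 1;
        AddVectoredExceptionHandler(1, CommitOnFault);
        g_base[0] = 42;  /* faults once, page gets committed, execution resumes */
        return 0;
    }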
Hmm, maybe I'm misunderstanding? When you make a (single) patently large allocation, macOS will do nothing; it won't show up as used virtual memory, and the swap size won't go up. As you write to it, memory usage will go up ("Memory" in Activity Monitor), but then it will quickly start swapping out, up to the limits of your disk, at which point you'll get OOM'd (assuming that your memory doesn't compress well, that is…).
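If I understand the claim, it should be reproducible with something like this (a sketch; whether the initial malloc succeeds and how fast swap grows will depend on the machine):

    #include <stdlib.h>

    int main(void) {
        size_t size = (size_t)1 << 40;  /* ask for 1 TiB */
        char *p = malloc(size);  /* per the above, succeeds without showing
                                    up as used memory or swap */
        if (!p) return 1;
        for (size_t i = 0; i < size; i += 4096)
            p[i] = 1;  /* touching pages makes "Memory" climb, then swap
                          grows until the disk fills and the process dies */
        return 0;
    }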