There's no way to combine the NVMe drives into a larger unit for redundancy/failover though, so I'm not sure what kind of future uptake this could have.
Everyone who uses NVMe over a network transport simply does redundancy at the client layer. The networking gear is very robust, and it is easier to optimize the "data plane" path this way (mapping storage queues <-> network queues), so the actual storage system does less work, which improves cost and density. It also means clients can pick redundancy schemes that closely match their own requirements: a filesystem can take the raw block devices and implement RAID10 for, say, virtual machine storage, while a userspace application may use them directly with Reed-Solomon(14,10) erasure coding and manage the underlying block devices itself. All of this improves density and storage utilization even further.
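To make the utilization claim concrete (my numbers, not the parent's, but standard for these schemes): RAID10 mirrors every byte, while RS(14,10) writes 10 data shards plus 4 parity shards and can rebuild from any 10 of the 14. A quick back-of-the-envelope in Python:

    # Usable fraction of raw capacity under each client-side scheme.
    def efficiency(total_shards: int, data_shards: int) -> float:
        return data_shards / total_shards

    # RAID10: every byte mirrored -> 2 copies, survives 1 failure per pair.
    raid10 = efficiency(total_shards=2, data_shards=1)

    # RS(14,10): 10 data + 4 parity shards, survives any 4 device failures.
    rs_14_10 = efficiency(total_shards=14, data_shards=10)

    print(f"RAID10 usable capacity:    {raid10:.0%}")    # 50%
    print(f"RS(14,10) usable capacity: {rs_14_10:.0%}")  # 71%

So the erasure-coded client gets ~71% of raw capacity versus RAID10's 50%, while tolerating more simultaneous device losses.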
NVMe-over-network (NVMe-oF over RDMA transports like RoCEv2, or over plain TCP) is very popular for doing disaggregated storage/compute, and things like NVIDIA BlueField push the whole thing down into networking cards on the host so you don't even see the "over network" part. You have a diskless server, plug in some BlueField cards, and they expose a bunch of NVMe drives to the host as if they were plugged in physically. That makes it much easier to scale compute and storage separately (and it also effectively increases the capacity of the host machine, since it's no longer using up bandwidth and CPU on those tasks).
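For concreteness (my addition): even without a DPU, plain NVMe/TCP on Linux already gives you that "looks like a local drive" behavior via nvme-cli; BlueField just moves the connection and data path off the host CPU. A sketch, where the target address and NQN are placeholders you'd normally get from discovery:

    # Attach a remote NVMe namespace over TCP using nvme-cli (run as root).
    import subprocess

    TARGET_ADDR = "192.0.2.10"   # hypothetical NVMe/TCP target address
    TARGET_NQN = "nqn.2014-08.org.example:pool0"  # hypothetical subsystem NQN

    # List the subsystems the target exports.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420"],
        check=True)

    # Connect to one subsystem; the kernel then surfaces it as /dev/nvmeXnY,
    # which applications can't distinguish from a physically attached drive.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420",
         "-n", TARGET_NQN],
        check=True)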
Yeah. It seems like directly presenting raw disks to the network means any kind of redundancy would need to be done by whatever device/host/thing is mounting the storage.
And doing that over the network (instead of over a local PCIe bus) seems like it'll have some trade-offs. :/