
Something like an IOPS improvement of 30%+ and a latency improvement of 20%+[1], ish.

[1]: https://www.reddit.com/r/Proxmox/comments/134kqy3/iscsi_and_...




There's no way to combine the NVMe drives into a larger unit for redundancy/failover though, so I'm not sure what kind of future uptake this could have.


Everyone who uses NVMe-over-network-transport simply does redundancy at the client layer. The networking gear is very robust, and it's easier to optimize the "data plane" path this way (map storage queues <-> network queues) so the actual storage system does less work, which improves cost and density. It also means clients can have redundancy solutions that more closely match their requirements: a filesystem can consume the block devices and implement RAID10 for, say, virtual machine storage, while a userspace application may use them directly with Reed-Solomon(14,10) and manage the underlying block devices itself. This all effectively improves density and storage utilization even further.
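To make the "redundancy at the client layer" idea concrete, here's a toy Python sketch: stripe a buffer into ten data shards plus a single XOR parity shard and write each shard to a different remote namespace. A real deployment would use a proper erasure code such as the Reed-Solomon(14,10) mentioned above; the single-parity scheme and the device paths are just illustrative stand-ins.

    # Toy sketch of client-side redundancy over NVMe-oF block devices:
    # stripe a buffer into DATA_SHARDS data shards plus one XOR parity shard
    # and write each shard to a different remote namespace. Device paths are
    # hypothetical; a real setup would use Reed-Solomon, not single parity.
    import os

    DATA_SHARDS = 10
    SHARD_SIZE = 4096  # one 4 KiB block per device, purely illustrative

    # Hypothetical NVMe-oF namespaces that showed up as local block devices;
    # the last one holds the parity shard.
    DEVICES = [f"/dev/nvme{i}n1" for i in range(DATA_SHARDS + 1)]

    def split_into_shards(buf: bytes) -> list[bytes]:
        """Pad the buffer and cut it into DATA_SHARDS equal pieces."""
        stripe = DATA_SHARDS * SHARD_SIZE
        buf = buf.ljust(stripe, b"\0")[:stripe]
        return [buf[i * SHARD_SIZE:(i + 1) * SHARD_SIZE] for i in range(DATA_SHARDS)]

    def xor_parity(shards: list[bytes]) -> bytes:
        """Single parity shard: XOR of all data shards (survives one lost device)."""
        parity = bytearray(SHARD_SIZE)
        for shard in shards:
            for i, b in enumerate(shard):
                parity[i] ^= b
        return bytes(parity)

    def write_stripe(buf: bytes, stripe_index: int = 0) -> None:
        shards = split_into_shards(buf)
        shards.append(xor_parity(shards))
        for dev, shard in zip(DEVICES, shards):
            # Alignment/O_DIRECT concerns are ignored here for brevity.
            with open(dev, "r+b") as f:
                f.seek(stripe_index * SHARD_SIZE)
                f.write(shard)

    if __name__ == "__main__":
        write_stripe(b"hello, disaggregated storage")

The point is just where the redundancy lives: the storage target only exports raw namespaces, and the client decides how to spread data and parity across them.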

NVMe-over-network (fabrics with RDMA, TCP, RoCEv2) is very popular for doing disaggregated storage/compute, and things like Nvidia BlueField push the whole thing down into networking cards on the host so you don't even see the "over network" part. You have a diskless server, plug in a BlueField card, and it exposes a bunch of NVMe drives to the host as if they were plugged in physically. That makes it much easier to scale compute and storage separately (and it also effectively increases the capacity of the host machine, since it's no longer spending bandwidth and CPU on those tasks).
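Without the DPU offload, the plain NVMe/TCP path looks something like this from the host's side: discover the remote subsystem with nvme-cli, connect to it, and the remote namespaces show up as ordinary /dev/nvme*n* block devices. A minimal sketch; the target address and NQN are made-up placeholders.

    # Sketch of the "plain" NVMe/TCP attach that a DPU like BlueField hides:
    # discover and connect with nvme-cli, then the remote namespaces appear
    # as local /dev/nvme*n* devices. Address and NQN are hypothetical.
    import subprocess

    TARGET_ADDR = "192.0.2.10"   # hypothetical NVMe/TCP target
    TARGET_PORT = "4420"         # conventional NVMe-oF service port
    SUBSYS_NQN = "nqn.2024-01.example:disaggregated-pool"  # hypothetical NQN

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # List subsystems exported by the target's discovery controller.
    run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT])

    # Attach one subsystem; the kernel creates local block devices for it.
    run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
         "-n", SUBSYS_NQN])

    # Confirm the remote namespaces now look like locally attached drives.
    run(["nvme", "list"])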


Interesting. Sounds like it'll allow for higher potential scalability, but also increase cost (at the network layer) instead.

Probably a trade-off that a lot of enterprise places would be OK with.


I’m not sure what you mean. You can add the disks to a software RAID in the worst case. Are you talking about on the host?


Yeah. It seems like directly presenting raw disks to the network means any kind of redundancy would need to be done by whatever device/host/thing is mounting the storage.

And doing that over the network (instead of over a local PCIe bus) seems like it'll have some trade-offs. :/
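For what it's worth, on the mounting host this can be as simple as assembling the attached NVMe-oF namespaces into an ordinary md RAID10 array, per the software RAID mentioned above. A rough sketch, with hypothetical device names:

    # Rough sketch of "redundancy done by the mounting host": once the
    # NVMe-oF namespaces are attached (see the nvme-cli sketch upthread),
    # assemble them into a normal md RAID10 array. Device names and the
    # array path are hypothetical.
    import subprocess

    REMOTE_DEVICES = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1", "/dev/nvme4n1"]

    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=10",
         f"--raid-devices={len(REMOTE_DEVICES)}",
         *REMOTE_DEVICES],
        check=True,
    )

    # From here it's a normal block device: mkfs, mount, or hand it to a VM.
    subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)

The trade-off is exactly the one noted above: every redundant write now crosses the network instead of a local PCIe bus, so the network becomes part of the storage fault and performance domain.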



