hpn-ssh is specifically designed for high-latency, high-bandwidth file transfer and is more than just "big buffers and multi-threaded." And the question remains: how does your solution compare in simulated and real-world testing?
It's a little strange that you "conducted an extensive literature review" of congestion algorithms but aren't aware of common tools like hpn-ssh, wget2's multithreading mode, or GridFTP, which is used extensively in the particle physics and genetics research communities.
Thanks for the feedback. The file transfer ecosystem is very large, and conducting a thorough review of application-level tools was not the goal of this project, since the overwhelming majority of them focus on differences at the application layer, not the transport layer.
We are specifically focusing on rebuilding a congestion control algorithm from the ground up that can better tolerate modern network conditions, including high bandwidth, high packet loss, and high latency.
With respect to GridFTP, wget2's multi-threading, and other multi-flow approaches: the problem with extracting performance from multiple distinct traffic flows is that you become increasingly unfair to other traffic as you increase the number of flows you use. Because AIMD flows are designed to be fair amongst themselves, each of N competing flows converges to roughly 1/N of the link. For example, if you use 9 TCP (or any other AIMD) flows to send a file over some link, and a tenth connection is started, each of the 10 flows gets about a tenth of the link, so your transfer is taking up to 90% of the available bandwidth.
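To make that arithmetic concrete, here's a minimal sketch (in Python, assuming idealized AIMD fairness where N competing flows each converge to 1/N of the link; real convergence is messier):

    # Fraction of the link captured by a transfer that opens
    # `our_flows` parallel AIMD flows against `other_flows`
    # competing single-flow users, under ideal per-flow fairness.
    def aggregate_share(our_flows, other_flows):
        total = our_flows + other_flows
        return our_flows / total

    print(aggregate_share(9, 1))  # 0.9 -> 9-flow transfer takes 90%
    print(aggregate_share(1, 1))  # 0.5 -> a single flow would take 50%

Per-flow fairness is preserved, but per-user fairness is not: the multi-flow transfer crowds out the single-flow user in proportion to how many flows it opens.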