
Really interesting project. Couldn't find the motivation explained. Is this just for research? Is it usable in production running in an FPGA? Are there plans to produce hardware?



It's from a group at UCSD, so yes, this is research.

The applications for these kinds of things range from SDN (software-defined networking), where low latency is a concern, to network monitoring. One could, for example, put together a system that performs line-rate TLS decryption at 10 Gbps. You need an FPGA (a big one) for something like that.

There are commercial vendors for this kind of stuff (selling closed-source IP and hardware). It is not yet in Open Compute networking projects, but I expect that's coming soon.

You can now buy "whitebox" switches that run Open Network Linux and put your own applications on them. In the not-too-distant future, those "applications" will also extend to stuff that can run on FPGA hardware.


Nope! Netflix does 10G TLS on commodity hardware in kernel space. CPUs can do a lot.


I believe they are doing 100Gbps now: https://t.co/cbb7NA9vJf?amp=1

It's hard for me to see the use case for an FPGA NIC. The reasons outlined above don't seem compelling when commodity NICs like Mellanox's already do so much more.


Mellanox NICs (and basically all commercial NICs) do not do what we want. Software is not precise enough, and is on the wrong side of the NIC hardware queues. The whole point of Corundum is to get control of the hardware transmit scheduler on the NIC itself.
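To make that concrete, here is a rough sketch of what software-side pacing looks like in a DPDK-style transmit loop. The function paced_tx_loop and its parameters are made up for illustration; the rte_* calls are standard DPDK, and none of this is Corundum's actual API. The software only decides when to hand a packet to a hardware queue; everything after rte_eth_tx_burst() is out of its hands, which is why precise per-packet timing has to live in the NIC's own transmit scheduler.

    /* Sketch: software-paced transmit in DPDK.  paced_tx_loop and its
     * arguments are hypothetical; the rte_* calls are standard DPDK.
     * Accuracy is limited by the OS scheduler, cache misses, and by
     * whatever the NIC does with the packet after it lands in the
     * hardware queue -- the pacing decision sits on the wrong side of
     * that queue. */
    #include <stdint.h>
    #include <rte_cycles.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void paced_tx_loop(uint16_t port_id, uint16_t queue_id,
                              struct rte_mbuf **pkts, uint16_t n_pkts,
                              uint64_t interval_cycles)
    {
        uint64_t next_tx = rte_get_tsc_cycles();

        for (uint16_t i = 0; i < n_pkts; ) {
            /* Busy-wait until the intended departure time. */
            while (rte_get_tsc_cycles() < next_tx)
                ;

            /* Hand one packet to the hardware queue; the NIC may still
             * batch or delay it before it reaches the wire. */
            i += rte_eth_tx_burst(port_id, queue_id, &pkts[i], 1);

            next_tx += interval_cycles;
        }
    }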


It looks like the UCSD team is exploring data center TDMA, which no commercial NIC supports: http://cseweb.ucsd.edu/~snoeren/papers/tdma-eurosys12.pdf


The group web page is here: https://circuit-switching.sysnet.ucsd.edu/

Corundum was originally geared more towards optical circuit switching applications, but it's certainly not limited to that. Since it's open source, the transmit scheduler can be swapped out for all sorts of NIC- and protocol-related research.


As others mentioned, datacenter SDN. An FPGA-based hybrid NIC used in production at Azure (>1M hosts): https://www.microsoft.com/en-us/research/uploads/prod/2018/0...



> The reasons outlined above don't seem compelling when commodity NICs like Mellanox's already do so much more.

This could be useful for people doing testing and benchmarking of network appliances.


Yeah, I suppose that's a valid use case. Things like Ixia testers need to be FPGA-based to measure absolute latency without any uncertainty. You cannot currently get that with enough flexibility from commodity cards.
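For comparison, commodity NICs can at least report NIC-clock timestamps through Linux's SO_TIMESTAMPING; a minimal sketch is below (device support varies, and the interface usually also needs hardware timestamping enabled via the SIOCSHWTSTAMP ioctl or ethtool). That tells you when the NIC saw a packet, but it doesn't give you the controlled, deterministic departure times of an FPGA traffic generator.

    /* Sketch: request hardware RX/TX timestamps on a Linux socket.
     * Assumes the NIC and driver support hardware timestamping (check
     * with `ethtool -T <iface>`); timestamps then arrive as
     * SCM_TIMESTAMPING control messages on recvmsg() / the socket
     * error queue. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/net_tstamp.h>

    int enable_hw_timestamps(int sock)
    {
        int flags = SOF_TIMESTAMPING_RX_HARDWARE |
                    SOF_TIMESTAMPING_TX_HARDWARE |
                    SOF_TIMESTAMPING_RAW_HARDWARE;

        if (setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING,
                       &flags, sizeof(flags)) < 0) {
            perror("SO_TIMESTAMPING");
            return -1;
        }
        return 0;
    }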


I saw it was from UCSD but that in itself wasn't an answer. There are plenty of things that are usable in production that are built by research groups initially.

SDN isn't the answer either, unless these FPGAs can be used directly in production so that there's a path for network cards to no longer be built on dedicated hardware. So to clarify my question, I could see this being:

1) A pure research effort on network card hardware design. Useful to test things in a lab and publish papers.

2) Something that can be pushed into production by actually shipping an FPGA in the router, perhaps in specialized situations where the fixed hardware isn't flexible enough.

3) A step before actual hardware can be manufactured, and network cards themselves become a whitebox-style business where multiple generic vendors show up because the designs are open-source.

Any of these would be interesting.


A low-end consumer CPU can do 10-20 Gbps of AES per core; it certainly doesn't require a big FPGA.
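That's easy to sanity-check with OpenSSL's EVP API; a rough sketch is below (buffer size and iteration count are arbitrary, and `openssl speed -evp aes-128-gcm` reports roughly the same thing without writing any code). On AES-NI hardware a single core typically lands in or above that range for raw AES-GCM, leaving aside the rest of the TLS and TCP work.

    /* Rough single-core AES-128-GCM throughput check with OpenSSL's EVP
     * API.  Buffer size and iteration count are arbitrary; this measures
     * raw cipher throughput only, not full TLS record processing. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <openssl/evp.h>

    int main(void)
    {
        const size_t buf_len = 1 << 20;      /* 1 MiB per call */
        const int iterations = 4096;         /* ~4 GiB total   */
        unsigned char key[16] = {0}, iv[12] = {0};
        unsigned char *in = calloc(1, buf_len);
        unsigned char *out = malloc(buf_len + 16);
        int outl;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iterations; i++)
            EVP_EncryptUpdate(ctx, out, &outl, in, (int)buf_len);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("AES-128-GCM: %.1f Gbit/s on one core\n",
               (double)buf_len * iterations * 8 / secs / 1e9);

        EVP_CIPHER_CTX_free(ctx);
        free(in);
        free(out);
        return 0;
    }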


The original motivation is to support optical switching research for datacenter networking applications. The research group web page is here: https://circuit-switching.sysnet.ucsd.edu/ . It is also mentioned in these slides: https://arpa-e.energy.gov/sites/default/files/UCSD_Papen_ENL.... However, the design is very generic and should be interesting for applications outside of optical switching. The main point was to get control over the transmit scheduler, coupled with a very large number of hardware transmit queues. There are a number of experimental protocols and the like that could benefit from this compared to an implementation in DPDK.

It is still in development; not sure if I would trust it yet for production workloads. We will not be producing hardware; the design runs on pretty much any board that has the correct interfaces, including many FPGA dev boards and commercially available FPGA-based NICs such as the Exablaze X10 and X25.



