New AWS C5n Instances with 100 Gbps Networking (amazon.com)
44 points by mcrute on Nov 27, 2018 | 17 comments



If my math is right, fully utilizing this instance's network bandwidth makes it capable of streaming video at an AWS billing rate of over $11/second!


You’ll never get that bandwidth out of their network. It’s intended for accessing other machines in the same zone.


And S3


There's a new "Elastic Fabric Adapter" in preview for 100Gbps networking instances: https://aws.amazon.com/blogs/aws/aws-previews-and-pre-announ...

That will allow applications to use a range of supercomputing techniques like https://ofiwg.github.io/libfabric/


I see they're specifically calling out HPC, but I'll only believe they can get real HPC performance out of this when I see it. HPC networking (which is 95% InfiniBand these days) is just as much about low latency and few hops as it is about high bandwidth. You wire your machines up in exotic topologies and spend more cash on the network than on the nodes themselves. You end up with basically the opposite of the elastic philosophy: a behemoth which is bloody fast but inflexible.

If they can show decent scaling to at least 50 nodes (~1000 cores) on a properly constructed benchmark like HPGMG [1], I'll eat my words, but until then I remain skeptical.

[1] https://crd.lbl.gov/departments/computer-science/PAR/researc...
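To give a rough sense of why latency matters as much as bandwidth here, consider the standard alpha-beta communication model: transfer time = latency + message size / bandwidth. All numbers below are illustrative assumptions (~1 us MPI latency on InfiniBand, ~15 us over typical cloud TCP/IP), not measurements:

```python
def transfer_time_us(msg_bytes, latency_us, bandwidth_gbytes_per_s):
    """Alpha-beta model: transfer time = latency + size / bandwidth."""
    # bytes / (GB/s) converted to microseconds
    return latency_us + msg_bytes / (bandwidth_gbytes_per_s * 1e3)

# Assumed numbers: both links run at 12.5 GB/s (100 Gbps); only the
# per-message latency differs between the two fabrics.
for size in (1_000, 1_000_000):   # a 1 KB halo message vs a 1 MB bulk message
    ib = transfer_time_us(size, 1.0, 12.5)
    cloud = transfer_time_us(size, 15.0, 12.5)
    print(f"{size:>9} B: InfiniBand {ib:6.2f} us, cloud {cloud:6.2f} us")
```

At halo-exchange message sizes the wire latency dominates completely; the 100 Gbps pipe only starts to help once messages reach the megabyte range.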


I think it is quite clear there is an insane amount of money going into HPC-like segments. There was a recent tweet about a job requiring so much compute it nearly took down two AWS regions. I am starting to think web hosting in the traditional sense isn't really the target customer AWS is looking for anymore. DO seems to fit that niche better.


I think you're right that there's a large amount of compute power being used on simulations/computations of some sort on AWS. But I don't think they are "HPC", which is understood as requiring rapid exchange of large amounts of information between nodes. It's more like a "non-HPC compute cluster", which is also useful and which you find in many places in both academia and industry.


What type of applications would need this amount of bandwidth?


Could easily use this for web scraping. But at amazon prices? Ha!

Amazon gives some examples, though: "With up to 100 Gbps of network bandwidth, your simulations, in-memory caches, data lakes, and other communication-intensive applications will run better than ever"


Ingress is free though.


Yes but they've recently added some P2W features, which is a bit concerning. Here's hoping that they tone down the IAPs in the coming months, before they retire the old scanner.


You seem like the ideal customer.


Large parallel computation jobs such as machine learning model training that require massive amounts of data would benefit from instances like this.
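As a back-of-the-envelope sketch (assumed sizes, using the standard ring-allreduce cost model, which is not specific to any one framework): synchronizing gradients each step moves roughly 2(N-1)/N times the model size per node, so the network link directly bounds step time.

```python
def ring_allreduce_seconds(model_bytes, n_nodes, bandwidth_bytes_per_s):
    """Standard ring-allreduce cost model: each node sends and receives
    2 * (N - 1) / N * model_size bytes per synchronization."""
    traffic = 2 * (n_nodes - 1) / n_nodes * model_bytes
    return traffic / bandwidth_bytes_per_s

# Assumed example: 1 GB of gradients synchronized across 8 nodes.
bw_100gbps = 100e9 / 8          # 100 Gbps = 12.5 GB/s
bw_10gbps = 10e9 / 8            # a typical older 10 Gbps instance
print(f"100 Gbps: {ring_allreduce_seconds(1e9, 8, bw_100gbps):.3f} s/step")
print(f" 10 Gbps: {ring_allreduce_seconds(1e9, 8, bw_10gbps):.3f} s/step")
```

Under these assumptions the sync drops from ~1.4 s to ~0.14 s per step, which is the difference between the network being the bottleneck or not.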


Clustered processing, where info is being shared among your machines as the processing is happening. Anything you imagine running simultaneously on racks of machines will benefit immensely from faster interconnects between the machines on the rack.


Molecular dynamics simulations


are latency-bound, not bandwidth-bound.


I performed MD simulations all throughout my PhD, on three different supercomputers as well as on a virtual cluster I set up on Azure. I didn't say latency wasn't a bottleneck, but the parent poster asked why additional bandwidth would be needed. It's definitely necessary, but not sufficient.

I would argue, though, that for simulations with very long-range force fields, or ones that include quantum chemical effects, bandwidth is THE bottleneck. Latency matters more for short-range force fields and non-reactive potentials.
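To put rough numbers on that distinction (all sizes below are assumptions for illustration): the short-range halo exchange sends many tiny neighbor messages, while a long-range PME-style FFT all-to-all moves the whole charge grid across the network every step.

```python
# Illustrative (assumed) numbers: a 1M-atom system on 64 ranks.
atoms, ranks = 1_000_000, 64
bytes_per_atom = 32                 # assumed: positions + forces, double precision

# Short-range halo exchange: only atoms near subdomain boundaries
# (assume ~10%) are sent, split across ~26 neighbor messages.
halo = atoms / ranks * 0.10 * bytes_per_atom
per_neighbor = halo / 26            # ~2 KB per message -> latency-bound

# Long-range PME: each FFT transpose is an all-to-all over the whole
# 128^3 charge grid (8-byte values), every step -> bandwidth-bound.
grid = 128 ** 3 * 8
print(f"halo message ~{per_neighbor / 1e3:.1f} KB, FFT grid ~{grid / 1e6:.1f} MB")
```

Kilobyte-scale neighbor messages never saturate the link, so latency rules the short-range part; the repeated megabyte-scale grid transposes are where the extra bandwidth actually pays off.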



