
What's the advantage of doing this over plugging multiple gigabit adapters into a linux machine and adding them all to a bridge?

I'm guessing performance might be better with the hardware, but I don't know. Has anyone done tests to show the difference?
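For reference, the setup I mean is just a handful of iproute2 calls. A rough sketch (wrapped in Python for illustration; eth1..eth4 and br0 are made-up names, and it needs root):

    # bridge_sketch.py - rough sketch of "several NICs enslaved to one Linux bridge"
    import subprocess

    def ip(*args):
        # thin wrapper around the iproute2 "ip" command
        subprocess.run(["ip", *args], check=True)

    ports = ["eth1", "eth2", "eth3", "eth4"]   # hypothetical adapter names

    ip("link", "add", "name", "br0", "type", "bridge")
    for port in ports:
        ip("link", "set", port, "master", "br0")   # enslave the NIC to the bridge
        ip("link", "set", port, "up")
    ip("link", "set", "br0", "up")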




It'll be a combination of:

1. Throughput - say you use USB adapters: in a lot of ways USB is a shared bus, so you'll hit its maximum bandwidth quickly. This is especially true because data has to go in and then back out again, all the way to the CPU

2. Latency - because you're using software to do the switching, it adds time to process each packet and send it back out the right port. You've also got any other interface latency adding to it

3. Power usage - each adapter will have its own full network PHY and hardware, which will increase the power draw. Combined with all the extra processing above, your power usage is even higher. You also lose out on hardware offloading and other performance enhancements that generally reduce power usage because less of the system is involved in moving packets around

4. Features (potentially) - this will depend a lot on the hardware you choose; some of the cheap gigabit USB adapters I've tried didn't work properly with VLANs and other features (a quick way to sanity-check what an adapter actually supports is sketched below). But if you, say, load up a bunch of nice PCIe cards with 1 or more ports that support everything (I've never had issues with PCIe ones), then you can actually get a lot of features that are otherwise difficult or impossible on simpler hardware (though at that point you're doing routing more than switching, but that flexibility is why you'd potentially do this).
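On that last point, you can ask the kernel which offloads and VLAN features a given adapter actually reports. A rough sketch, assuming ethtool is installed and "eth1" stands in for whatever USB or PCIe adapter you're testing:

    # offload_check.py - list a few offload/VLAN capabilities a NIC reports via "ethtool -k"
    import subprocess

    def features(iface):
        out = subprocess.run(["ethtool", "-k", iface],
                             capture_output=True, text=True, check=True).stdout
        # each capability is printed as "name: on/off [fixed]"
        return {line.split(":", 1)[0].strip(): line.split(":", 1)[1].strip()
                for line in out.splitlines() if ":" in line}

    feats = features("eth1")   # hypothetical adapter name
    for key in ("rx-checksumming", "tx-checksumming",
                "rx-vlan-offload", "tx-vlan-offload",
                "generic-segmentation-offload"):
        print(f"{key}: {feats.get(key, 'not reported')}")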


We're only talking 1 gbit ethernet here, so you can have multiple of those ports on PCIe.

I have a PCIe card here with two 2.5 Gbit ports on it (don't remember exactly how much it was on Ali, but between 20 and 40 EUR) and I can saturate both with iperf3. Since the example only uses 4 ports, it should be easy to make a simple router with just two PCIe cards. But there are probably 4x 1 Gbit PCIe cards out there, too. And if you use 1 Gbit fiber, that wouldn't cost much power nor would it need much speed. If your uplink is DSL, you could use a VigorNIC 132.


This is more efficient if most of the traffic will get switched; separate adapters are more efficient if most of it needs to be routed.


As everyone else is saying, power usage should be lower this way. Probably less latency to traverse the switch than a software bridge, too. Switching should continue to function if the host OS crashes; combined with a watchdog and recovery, you could have a more available system where maybe some things don't work for a brief interval, but that's much better than a software bridge (assuming the switch chip doesn't crash or get stuck, anyway).

It depends on what your goals are though. If you want to inspect all traffic passing through the switch, having 4 interfaces is clearly better. If your host-based switch is also doing a lot of communication itself, 4 interfaces gives you 4gbps from the host rather than 1gbps. Etc.
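(To illustrate the inspection point: with a software bridge every forwarded frame crosses the kernel, so even something as simple as this sees all of it. Just a sketch, assuming scapy is installed and the bridge interface is called br0:)

    # sniff_bridge.py - print a one-line summary of frames crossing a software bridge
    from scapy.all import sniff   # requires scapy; run as root

    sniff(iface="br0",            # hypothetical bridge interface name
          prn=lambda pkt: print(pkt.summary()),
          store=False,            # don't keep packets in memory
          count=20)               # stop after 20 frames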

Ex-enterprise quad 1G cards are $15 or less on eBay. I'm partial to the Silicom quad bypass 1G PEG4BPI-SD; the bypass feature can be fun, and they're cheaper because they're weird. You can mostly configure them to be 'standard NICs' once and then plug them into anything without much fuss, but getting there can be challenging. Early ones come with PCI IDs set to Silicom as both the vendor and subvendor, which makes them harder to use; the -SD cards have an Intel vendor ID and a Silicom subvendor, so the normal driver will attach.

4x10g ports would be more to manage, and you might not have enough throughput for software bridging, depending on the host system. And quad port 10g cards are harder to find. 2x10G is reasonably priced though, if you're patient.


Your CPU will be in the data path there. Switches do their packet switching on an ASIC rather than the CPU. So it depends on how good your CPU is, and generally that's not an effective use of whatever compute power you've got.


For such a low speed and small number of ports, using a hardware switch circuit is not necessary.

Nevertheless, if the design and build effort is ignored, I assume that the total cost of the hardware might be under $100, which is less than a computer with multiple interfaces would cost.

Still, 1 Gb/s networks are rather obsolete. One could make a managed network switch that is bigger and faster using only off-the-shelf components for slightly more than $200; e.g. a 6-port 2.5 Gb/s switch can be made with an Odroid H4+, which has 2 ports, together with its add-on M.2 card with 4 extra Ethernet ports. Another variant is to use a small computer with an N100 CPU and four 2.5 Gb/s ports, which can be bought in this price range from various Chinese companies. Similar small computers with six 2.5 Gb/s ports are a little more expensive, perhaps slightly over $300.


> Still, 1 Gb/s networks are rather obsolete.

Are they? Most consumer and even office gear still maxes out at 1000M/port: your average USB-C network/multi-port laptop dongle, most USB-C/TB monitors (and, shamefully, Apple's Studio Display, which only has USB-C/TB ports and no network), VoIP phones (hell, these are usually 10/100 only, with 1000 reserved for top models), printers, virtually all entry- to mid-range NAS systems... the list is endless.

Options for more than that tend to be really niche, expensive, or bring-your-own-SFP-module.

Besides, 1000M is way more than enough for almost all consumer and office needs. The only exception is heavy video and photo editing, if those workplaces don't already use direct Thunderbolt/FC attach.


They definitely aren't, no idea what parent is talking about. They are the standard unless you're talking about a SAN or something. If I had to guess, 92% of the switches in a typical office are 1000M, 5% are 100M, and the rest are > 1000M for switch interconnects or HA server interconnects/SANs.


No they aren't.

You failed to mention the big one: the vast majority of people have 1 Gb/s network switches to... connect their machines to this thing called the Internet. Some may have heard of it.

And most people also have not more than 1 Gbit/s up/down.

Do I have switches (with a 's') with 10 Gbit/s SFP+ at home? Yup. Is 1 Gbit/s obsolete: definitely not.


1gig is starting to become a problem - we have increasing deployment of 2gig and 8gig internet to homes in Poland, for example, and figuring out how to deal with that when most computer gear still comes with 1gig is becoming an issue


$100? I paid roughly $80 for two of these, and the price per board will rapidly converge to ~$17 with increasing quantity.


The PINE A64-LTS alone is $40 + shipping + taxes, so you cannot have a $17 total cost.

I assume that you mean that the card with the switch circuit alone could reach $17, if made in a large quantity.

However, you cannot make a managed switch with that card alone.

If your current cost has been $40 for a card with the switch circuit, adding the PINE A64-LTS + whatever you have paid for shipping and taxes must make the total around $100, exactly like I have said.


I meant the cost of just the switch obviously... It can hook up to whatever Linux machine you have running and I happened to have this one in the drawer.

The point of this was not to make the fastest or cheapest or most featureful switch available; it just had to fit in a case, and none of the options have one port facing backwards, which is needed to avoid an ugly loop cable on the front of the final case.

It's also possible to make a managed switch with that card alone: there is a footprint for a NOR flash chip, and if you load the Netgear GS105E firmware onto it (which is available on the Netgear website), it will be just a GS105E without LEDs and with one port on the back.


1000M networks are THE standard, no idea where you got the idea they were obsolete.


A Raspberry Pi 400 can handle around 875 Mbit/s when bridging two interfaces. So it's not even fast enough for two ports, let alone full duplex. I doubt an N100 can handle more than three.


An Intel Celeron N5105, on one core, does ~28Gbps locally between two bridged interfaces with iperf.

A Raspberry Pi 4 has a single PCIe lane with a total of 4Gbps bandwidth for everything (all USB ports and Ethernet).

Edit: shameless plug, but I just finished writing an article on my new home network, including the router with the above spec/results: https://atodorov.me/2024/07/03/running-a-multi-gig-home-netw...
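If you want to reproduce that kind of local bridged-throughput number, one common approach is two network namespaces hung off a bridge via veth pairs, with iperf3 running between them. A rough sketch (all names are made up; needs root plus iproute2 and iperf3 installed):

    # bridge_bench.py - measure software-bridge forwarding speed entirely on one host
    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    for ns in ("bench1", "bench2"):
        sh(f"ip netns add {ns}")

    sh("ip link add name brbench type bridge")
    for i, ns in enumerate(("bench1", "bench2"), start=1):
        sh(f"ip link add veth{i} type veth peer name veth{i}-br")
        sh(f"ip link set veth{i} netns {ns}")
        sh(f"ip link set veth{i}-br master brbench up")
        sh(f"ip netns exec {ns} ip addr add 10.99.0.{i}/24 dev veth{i}")
        sh(f"ip netns exec {ns} ip link set veth{i} up")
    sh("ip link set brbench up")

    sh("ip netns exec bench1 iperf3 -s -D")           # iperf3 server in one namespace
    sh("ip netns exec bench2 iperf3 -c 10.99.0.1")    # client in the other, via the bridge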


The CPU is the bottleneck. All switching is done in software with off the shelf NICs.


Yes, my Intel Celeron N5105 had a full core pegged at 100% while doing that iperf and getting those numbers.

But there are 4 whole cores, meaning I can get far more traffic switched and routed with the CPU capacity than all the ports combined can sustain.


Unfortunately gigabit ethernet is far from obsolete.

Yes, there's 2.5 gigabit on some consumer hardware, but it's still kind of rare.

Also who is excited about a 2.5x speedup after 20 years? Nobody cares until we need 10 gigabit internet access (which will probably never happen).


The main reason for 2.5G is digital video (2.5 Gbit is enough for two 1080i ST 2110 streams), and especially the increasing number of WiFi 6E APs that do >1 Gbit but nowhere near 10G
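Back-of-the-envelope on that, assuming roughly uncompressed 1080i 4:2:2 10-bit video (about what ST 2110-20 carries, ignoring header overhead):

    # rough arithmetic: two uncompressed 1080i streams vs a 2.5 Gbit link
    bits_per_pixel = 20                        # 4:2:2 sampling at 10 bits per component
    frame_bits = 1920 * 1080 * bits_per_pixel
    stream_gbps = frame_bits * 29.97 / 1e9     # 1080i59.94 = ~29.97 full frames/s
    print(f"{stream_gbps:.2f} Gbit/s per stream")    # ~1.24
    print(f"{2 * stream_gbps:.2f} Gbit/s for two")   # ~2.49, just about fits in 2.5G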


> Nobody cares until we need 10 gigabit internet access (which will probably never happen).

For what it's worth, my ISP gives me 5Gbps down/700Mbps up for ~40 euros a month which includes a bunch of TV channels and discounts for Netflix and Disney+.

They also have an 8Gbps down / 8Gbps up plan for ~60 euros, also including a ton of extra things.


Wow, that's pretty cool. Where is this offered?

But have you invested in 5Gbps+ networking gear to actually take advantage of the offer? 10 Gbps NICs have become super affordable (~$20), by the way.


Cat7 cabling is something I'm even encountering at discount stores now


I wouldn't trust a Cat7 cable from a discount store to adhere to the spec though.


I used it more as an illustration of market penetration - just like with HDMI cables, you're either ripped off on markup, sold under-spec cabling, or both.


I wouldn't use copper for 10G anyway; use a fibre SFP, it's far more power efficient.


It depends.

In my case, all rooms are cabled with a weird electrical standard that should get me 10G Ethernet (and does 2.5G without any issue). I'm not going to drill holes to pull my own fibre all around the place when I have perfectly good Ethernet connectivity.

Also, while 10G SFP+ NICs are vastly more available than 10G Ethernet ones, switches seem to be cheaper with 2.5/5/10G Ethernet ports than full-on SFP+, unless you buy recycled enterprise gear, which would blow your power efficiency argument out of the water.


Doesn't that depend on the run length? Surely copper is more efficient for just a few meters.


My 10G copper SFPs run far hotter than my 10G fibre ones


> Wow, that's pretty cool. Where is this offered?

Free in France.

> But have you invested in 5Gbps+ networking gear to actually take advantage of the offer? 10 Gbps Nics have become super affordable (~20$) by the way.

The 8/8 plan comes with a router that has WiFi 7 (theoretical max 46Gbps, but who knows in reality, it's not really out yet), an SFP+ port, and 4x2.5Gbps Ethernet ports. The one I have only has one 2.5Gbps Ethernet port, so that's where my home network's Internet speed caps out.

If you're interested in more details, I recently finished writing an article about my home network: https://atodorov.me/2024/07/03/running-a-multi-gig-home-netw...

And yeah, 10G equipment (in terms of NICs and cables) is quite affordable, but switches still aren't really super affordable.



