
A joke answer and a serious question:

A: "Use BSD"

Q: Why is there such a strong focus on trying to get Linux network performance when (I think) everyone agrees BSD is better at networking? What does Linux offer beyond the network that BSD doesn't when it comes to applications that demand the fastest networks?

P.S. I think the markdown filter is broken; I can't make a literal asterisk with a backslash. Anyone know how HN lets you make an inline asterisk?




Tools. Compatibility. Resemblance to systems already in place (e.g., back when BSD failed to get SMP support "soon", there wasn't an option if your workload was CPU-bound). Numbers (it's popular in HPC/HFT because it's popular).

We're not consuming/using the built-in network stacks anyway; we're using the OS as a content delivery system. Give us something we can get to the cores on, and we're going to pin our applications directly to those cores, keeping the kernel relegated to scheduling tasks on whichever NUMA cores are farthest from the PCIe bus running into the CPU. The CPUs we're using will be pegged in a constant busy spin anyway, which makes the scheduler think really, really hard about running tasks there. We don't use realtime kernels, as it's better for us to pay the price of the occasional outlier spike than to raise the baseline latency.
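
(For the unfamiliar, a minimal sketch of what that pinning looks like on Linux, using sched_setaffinity; the core number is arbitrary and error handling is minimal.)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);   /* core 3 is arbitrary for this sketch */

        /* pid 0 = the calling thread; from here on the scheduler will
           only run us on core 3 (typically one reserved via isolcpus=) */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        for (;;) {
            /* hot loop: busy-poll the NIC / message queue here */
        }
    }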

Due to my own unfamiliarity, I don't know what BSD's equivalent to isolcpus is. I don't know how to taskset on a BSD. I don't know whether the InfiniBand/Ethernet controller's firmware/bypass software works there. I don't know how BSD's scheduler works (not that we usually care, but there are times when one can't avoid work needing to be scheduled: things like RPC calls to shut down an app, or ssh if you can spare the clock cycles for key verification).

Would DTrace come in handy? Most definitely. Is that enough for us to abandon what we know works? Not yet.


This is probably the best argument: Familiarity.

However, we're talking about very specific needs: super high performance networking. If you have that specific of a need, wouldn't you want something unfamiliar if it solves the problem best?


If it's truly better, and the only difference is removing Linux and installing BSD, then what is BSD doing that is better/different/messed up such that packets can flow faster on BSD than on Linux?


Talking about unfamiliarity and specific needs: FPGAs are much better suited than CPUs for processing minimum-sized frames at wirespeed. They can still forward all unhandled frames to a CPU. Yes, it's a lot of development effort compared to a CPU-only solution, but considering all the kernel-optimizing-multicore-cleverness from OP I would say we are approaching the break-even point.


Who, in 2015, agrees that BSD is better at networking?

I remember these claims being made in the late 90s, and perhaps they were true back then, but it's been 15 years, and I would be surprised if Linux hasn't caught up by virtue of its faster development pace, greater mindshare, and increased corporate/datacenter usage.

So, in all seriousness: what recent, well argued essays/papers can you refer me to so I can understand the claim that BSD networking is still better than Linux in 2015?


There was a good discussion about it on reddit about 10 months back: http://www.reddit.com/r/linux/comments/2d5wzg/linux_network_...

It was also linked to from HN: https://news.ycombinator.com/item?id=8167126

Both have some pretty good sources (and some not-so-good ones too).


Facebook made the rounds last year for a job posting that stated the goal was "for the Linux kernel network stack to rival or exceed that of FreeBSD": http://www.phoronix.com/scan.php?page=news_item&px=MTc1NjY


One thing: Netmap, which can give you speeds of ~10 Mpps. It was first developed for FreeBSD, and there was a proposal for Linux to adopt the code, but for some reason they haven't. Since then Linux has been trying to catch up, but it's not like FreeBSD/Netmap is standing still either.


No. Netmap is available for Linux too, and there are other options like DPDK available on both. But that's not the FreeBSD network stack; that's an Ethernet-level stack. The kernel IP stack apparently scales better on FreeBSD, but I haven't seen recent hard data. Netflix does stream all their content from FreeBSD, though.


WhatsApp too, IIRC.



http://info.iet.unipi.it/~luigi/netmap/

Shipped in FreeBSD by default, developed for FreeBSD first -- and that's just the bleeding edge side of things.


Some of the very same reasons you give for the proposition that network performance has caught up can be used to argue that it may have slowed down (i.e., feature creep and bloat). So the other question is: who, in 2015, disagrees, and what recent, well-argued essays/papers can you refer us to that might demonstrate anything has changed at all?


Here's an interesting kqueue vs. epoll benchmark I picked up somewhere when this topic came up: http://daemonforums.org/showthread.php?t=2124

Time for an epoll-for-kqueue swap, to make this performance debate go away for both Linux and FreeBSD once and for all. No reason for this pissing contest.


Registered I/O on Windows is about three to four decades ahead, conceptually.

(As in, the stuff that facilitates Registered I/O is based on concepts that can be traced back to VMS, released in 1977: namely the IRP, plus a kernel designed around waitable events rather than runnable processes.)


kqueue allows more sophisticated actions to occur with each call.

epoll requires more syscalls to do the same stuff.

That alone isn't responsible for the difference in speed, but at this level syscalls are a meaningful expense.
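
To make the syscall count concrete, a rough sketch (fd1/fd2 are assumed already-open sockets; error handling omitted). With kqueue, a batch of registrations and the wait for ready events can share a single kevent() call:

    #include <sys/event.h>

    void watch_kqueue(int fd1, int fd2)
    {
        int kq = kqueue();
        struct kevent chg[2], evs[8];
        EV_SET(&chg[0], fd1, EVFILT_READ, EV_ADD, 0, 0, NULL);
        EV_SET(&chg[1], fd2, EVFILT_READ, EV_ADD, 0, 0, NULL);
        /* one syscall: apply both registrations AND collect ready events */
        int n = kevent(kq, chg, 2, evs, 8, NULL);
        (void)n;
    }

With epoll, every registration is its own epoll_ctl() call, plus epoll_wait() to actually wait:

    #include <sys/epoll.h>

    void watch_epoll(int fd1, int fd2)
    {
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd1 };
        epoll_ctl(ep, EPOLL_CTL_ADD, fd1, &ev);   /* syscall 1 */
        ev.data.fd = fd2;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd2, &ev);   /* syscall 2 */
        struct epoll_event evs[8];
        int n = epoll_wait(ep, evs, 8, -1);       /* syscall 3 */
        (void)n;
    }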


There isn't SO_SPLICE on Linux; splice() needs a pipe.
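
A hedged sketch of what that workaround looks like on Linux (the helper name is mine; error paths simplified):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Move up to len bytes from one socket to another without a
       userspace copy. splice() can't go socket-to-socket directly,
       so the data detours through a pipe. */
    ssize_t sock_splice(int from, int to, size_t len)
    {
        int p[2];
        if (pipe(p) < 0)
            return -1;
        ssize_t in  = splice(from, NULL, p[1], NULL, len, SPLICE_F_MOVE);
        ssize_t out = (in > 0)
            ? splice(p[0], NULL, to, NULL, (size_t)in, SPLICE_F_MOVE)
            : in;
        close(p[0]);
        close(p[1]);
        return out;
    }

(In a real proxy you'd keep the pipe around instead of creating one per call.)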


Isn't SO_SPLICE a bit like sendfile() on Linux?
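
Though as I understand it, sendfile() on Linux is file-to-socket (the input fd has to be something mmap-able, like a regular file), while SO_SPLICE joins two sockets. Roughly (sketch; the function name is mine):

    #include <sys/sendfile.h>
    #include <sys/types.h>

    /* classic zero-copy file serving on Linux: in_fd must be a
       regular file, out_fd is typically a socket */
    ssize_t send_whole_file(int out_fd, int in_fd, size_t count)
    {
        return sendfile(out_fd, in_fd, NULL, count);
    }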


Reading things like http://blog.erratasec.com/2013/02/custom-stack-it-goes-to-11... suggests that it's probably simply the case that the OS itself is sort of a side-issue once you need performance since you're going to be bypassing the normal network stack anyway.

At that point, ease of management or package installation probably matters more to developers, and it might simply come down to things like driver support and other stability/performance issues where Linux has gotten a LOT of highly specialized attention from hardware vendors and the HPC world. Back when 10G hardware was just starting to enter the market, we bought cards which came with a Linux driver, but it took a while before that was ported to FreeBSD, and longer still before the latency was optimized as much.


Why is the BSD network stack superior, anyway? I hear it repeated a lot, and I'm just surprised Linux lags with all the attention it gets. Is it something related to kqueue vs epoll?


The easy answer probably has to do with comfort and familiarity: no one wants to be in that situation where their software package isn't ported to BSD, or the BSD port is lagging several versions behind, or they hit a BSD-specific bug and have to track it down with the developers. Again, not saying these are necessarily realistic, but it's the thing that comes to mind first.


But thanks to the ports tree and the Porter's Handbook, it's easier to solve this type of problem on FreeBSD than on Linux.

And a Linux distro is unlikely to have a newer version of something important unless a new distro release just came out and picked up the latest version of that particular piece of software to standardize on for the lifetime of the new OS.

Either way -- FreeBSD will continue (or be capable of) getting updates that track upstream, while your Linux distro will only be backporting security fixes.


> But thanks to the ports tree and the Porter's Handbook, it's easier to solve this type of problem on FreeBSD than on Linux.

> Either way -- FreeBSD will continue (or be capable of) getting updates that track upstream, while your Linux distro will only be backporting security fixes.

In most cases what actually happens is that people use the main distribution repo for the 99% of packages which are stable and when you need something newer you add external apt/yum/etc. sources for those specific projects.

In the Ubuntu world, there's a huge ecosystem supporting this style where you upload source packages and they'll build and host the binary packages for you:

https://help.launchpad.net/Packaging/PPA

This approach gives you the speed, reliability, and security benefits of binary packages with the currency of ports and, more importantly, allows you to opt in only where you specifically know you need new features.


And how can you trust those third-party repos? That's the hard part. At least FreeBSD's way guarantees your packages are built from source code that matches the checksum of what upstream has released. If it doesn't match, because of a compromise or because upstream re-rolled their tarball, it is discovered very, very quickly.


All packages are signed using GPG, and the source package definition includes hashes of all of the dependencies, so the only question is whether you trust a particular developer; you are required to add a GPG key before adding a repo. (The only thing which makes the distribution's repository special is that the distribution signing key ships as trusted in the base install.)

In many cases, the repos are maintained directly by the upstream project – see e.g. https://www.varnish-cache.org/installation/debian for what that process looks like.

In other cases, you have to decide whether you trust a particular developer. If not, you can choose to create your own version, which could be as simple as taking a source package, auditing it to whatever level you want, and signing it with your own trusted key.

Look, I ran servers using OpenBSD for years in the 90s and FreeBSD in the early 2000s. I respect the work which has gone into the ports system but the reason to use it is not security and advocacy based on limited understanding will not accomplish anything useful. If you want to praise ports, talk about how much easier it makes it to have the latest version of everything installed — and do your homework to be ready to explain how that's meaningfully better than e.g. a Debian user tracking the testing or unstable repositories.


It is meaningfully better. Try tracking the Debian testing or unstable policies when the software you're trying to update is built against a newer version of a system library. Good luck fixing that: now you have to update everything else that relies on that library. It's a rabbit hole that's not fun to go down. Even if the packages are GPG-signed, what does that prove? They could have modified the source when they built the package. At least with the ports tree you can easily verify that the checksum of the source you're building matches what upstream released.

Furthermore, third-party repos that aren't run by upstream or trusted OS developers are a nightmare. I regularly spend time trying to find trustworthy third-party repos to get newer versions of developer tools onto RHEL 5/6. Sometimes it's just random RPMs on an FTP server or rpmfind.net-style sites. Not trustworthy at all. And sometimes I can't even build the package myself, because the tool refuses to build when the rest of the OS is too old.

Long term release Linux distros make life hell.


The way you said "Debian testing or unstable policies" suggests that you aren't very familiar with Debian (they are distributions or repositories, not policies). If the package you're updating is in testing or unstable, then its dependencies will also be updated as necessary. If other packages with the same dependencies won't work with the updated dependencies, then they will be updated as well, automatically. That is not always necessary; it depends on the library in question, whether the ABI changed, etc.

It sounds like you don't know how the Debian packaging system works regarding security. If you are installing from Debian repos (as opposed to third-party repos), then all binary packages go through the ftpmasters. The packages are checksummed, and the checksums are GPG-signed. Each package's maintainer or team handles building the binary from source. Of course, you can also download the source package yourself with a simple command, and then build it yourself. But if you don't trust the Debian maintainers to verify the integrity of the source packages they build, then you shouldn't be using Debian at all. This is no different than using a BSD. The ports tree could also be compromised.

Third-party repos are always a risk. That's one of the nice things about PPAs: their maintainers can use the same security mechanisms that the regular distro repos use, but with their own GPG keys. Again, if you don't trust the maintainer, I guess you should be building everything yourself. LFS gets old though, right?

And that's another good thing about Debian: almost anything you could want is already in Debian proper, so resorting to third-party repos or building manually is rarely necessary.

For long-term use, you can use testing or unstable or both, which are effectively rolling releases. There is also the backports repo for stable. And if you need to build a package yourself, between the Debian tools and checkinstall, it's not hard.

Debian and Ubuntu are the only distros I use, and for good reason. They solved most of these problems a long, long time ago. Compared to Windows or other Linux distros, it seems more like heaven than hell.

By the way, I'm no expert on BSDs--but do they even have any cryptographic signatures in the system at all, or is it just package checksums? Checksums by themselves don't prove anything; you need a way to sign the checksums to verify they haven't been altered. Relying on unsigned checksums is akin to security theater.


Have a look at Gentoo, it's only been around for 13 years...


I used to use it a lot. It's still probably the only Linux distro I'm happy with.


I have no idea. How many UDP packets can FreeBSD pass to an application?

I don't think there is any particular problem with Linux. The core problem of "slow" performance is mostly due to limitations of the BSD sockets API (recvmmsg is a good example of a workaround), and due to the feature richness of Linux.

Also, doing the math: 2 GHz / 350k pps = ~5714 cycles per packet. Call it 6k cycles to deliver the packet from the NIC to one core, pass it between cores, and copy it over to the application. That ain't bad, really.
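
Since recvmmsg came up: batching is exactly how you amortize those per-packet cycles, since one syscall drains many datagrams. A rough sketch (sizes arbitrary, error handling omitted):

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    #define BATCH 64
    #define PKT   2048

    /* Receive up to BATCH UDP datagrams in a single syscall instead
       of paying one recvfrom() per packet. Returns the count, or -1. */
    int drain_socket(int sock)
    {
        static char bufs[BATCH][PKT];
        static struct iovec iovs[BATCH];
        static struct mmsghdr msgs[BATCH];

        memset(msgs, 0, sizeof(msgs));
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len  = PKT;
            msgs[i].msg_hdr.msg_iov    = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        return recvmmsg(sock, msgs, BATCH, MSG_DONTWAIT, NULL);
    }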


Markdown? This is HN, where we use whatever PG* decides we're allowed to use.

https://news.ycombinator.com/formatdoc

*Paul Graham


So in other words it's impossible to have two inline asterisks without a space after the first one. Oh well.


    **


You indented, which means there is no formatting at all. You cheated. :)


The Underhanded HN Contest.


There was a trick wrapping things in equals signs or something like that, but I can't recall...


"Why is there such a strong focus on trying to get Linux network performance when (I think) everyone agrees BSD is better at networking?"

Setting aside the fact that a typical fanboi comment is at the top of an HN post, are you seriously contending that because one OS does a thing well, there's no need for other OSes to do the same thing?


Maybe one factor is subtle differences in system call behavior, cascading down the whole stack?

E.g., on FreeBSD I recently ran into an ENOBUFS that I've never seen on Linux, although the man pages say it can happen.


Replace "Linux" with "Windows", and "BSD" with "Linux".

Obviously it provides nothing extra, and certainly nothing better, when it comes to a server role. Other than support contracts. And support for third-party apps. And hardware drivers/compatibility. And users who know how to operate it. And brand-name recognition. And the size of the software development community, libraries, and tools. And industry reliance on non-portable software (Docker).

Answer to your question: technical superiority is dwarfed by a more popular product.


Are you saying the Linux network stack is superior to contemporary Windows?

If so, thanks for the aneurysm. Windows is decades ahead, thanks to a fundamentally more performant kernel and driver architecture, the key parts of which have been present since NT. (And VMS, if you want to get picky.)


> And industry reliance on non-portable software (Docker).

Docker is the new shiny kid on the block, but the industry is far from reliant on it. It's also a bit misleading to call the software non-portable in a Linux vs. FreeBSD debate: it's not that the software is non-portable, it's that it uses a feature that is not available in FreeBSD. In a contest about bragging rights, that's significant.


Define "better". The very few times in my life that networking performance mattered, I found Linux (2.2+) to be hard to beat as a general purpose stack.

If you need something specific with better performance than that, you should probably look at moving more of the network stack to your application layer.


Linux supports more scheduling/QoS algorithms. Most of the recent (~10 years?) academic papers on packet queuing algorithms implement their algorithms in Linux.


Exactly! How dare people try and improve Linux when they could use something else instead!


Is there any comparison you can provide? (Purely out of personal interest.)



