Why doesn't *BSD have a greater market share? I just set up an OpenBSD firewall and loved it. Plus I have experience with ZFS on FreeBSD - also loved it.
FreeBSD developer here. A lot of Linux's success vs BSD is due to the USL lawsuit in the early 90s. I believe Linus has said he wouldn't have even made Linux in the first place if BSD hadn't been embattled in a lawsuit at the time.
As far as today — Linux has far more developers. It has driver support for a wider array of hardware. It gets more and faster vendor driver support (Intel, AMD, Nvidia, etc; even Microsoft) than the BSDs. It scales better on NUMA and on very high core count systems than any BSD. These are all legit reasons people use Linux instead of a BSD.
Then there's RHEL and Ubuntu. The BSD world doesn't have anything like either. RHEL is a huge boon to the Linux ecosystem — they take money for support contracts, and invest it in improving the GNU & Linux ecosystem for everyone. Ubuntu is a very beginner-friendly distribution that is serious enough for server use. On the FreeBSD side there's maybe TrueOS (née PC-BSD) but it doesn't attract anything like the audience that Ubuntu does. It also doesn't have Shuttleworth's funding.
FreeBSD had a pretty decent run after the lawsuit and was probably more common than Linux in Serious Server Deployments™ for a bit. Fundamentally, though, Linus out-organized and out-managed the BSDs; I don't think it was just the lawsuit.
In my opinion (mostly Debian user with a bit of FreeBSD playing), people use Debian as a server for the same reason a lot of people like MacOS over Linux - It Just Works.
1. For many years, the only way to install software was ports. Now, if you're a full-time sysadmin with time on your hands, that's great. But if you just need something up and running fast (and you don't know your software's internals, or whether you'll need perl's FLAG_ABC), it's horrible.
It just feels like Linux in the 90s (been there), where recompiling the kernel/XFree86 was a rite of passage into Linux hackerdom. Nowadays, most of the time it's just not worth it.
2. apt vs ports/pkg. This is actually the biggest thing keeping me on Debian - stable + backports.
If I'm running my server, I want things to be stable. Now, I know there's no other project the size of Debian that can backport security fixes to two-year-old software (and sometimes four-year-old software), but there's nothing like running apt update && apt upgrade and having everything update without a hitch 99% of the time.
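For anyone who hasn't used backports, a rough sketch of the workflow being praised here (the release and package names are examples, not from the thread):

```shell
# Hypothetical release/package names - the pattern is what matters.
# One extra sources line opts you in; nothing is pulled from backports
# unless you ask for it explicitly with -t.
echo 'deb http://deb.debian.org/debian bullseye-backports main' \
    > /etc/apt/sources.list.d/backports.list
apt update
apt install -t bullseye-backports some-package   # just this one from backports
apt update && apt upgrade                        # everything else stays stable
```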
Yes, FreeBSD is more elegant (why couldn't GNU/RedHat have just modified ifconfig rather than having ifconfig, ifup, ip, etc.?). Yes, FreeBSD's man pages are amazing (which is quite important, as there aren't as many FreeBSD blogs around). But if you're learning a new system (coming from Windows), Linux isn't that much harder to learn than FreeBSD.
EDIT
And RedHat?
They work like Oracle - you pay them, and they'll hold your hand - except that (unlike Oracle) they release their software under an open-source license.
If you're a non-tech Fortune 500, that's very important.
Note, by the way, that those two distros have the vast majority of GNU/Linux installs.
> why couldn't GNU/RedHat have just [modified] ifconfig rather than [having] ifconfig, ifup, ip, etc.
This comes down to one of the key differences between Linux and BSD: BSD "owns" its userland—the people who develop the kernel, or some system utility, can literally decide to change something, and then do a global search-and-replace on all usages of that something across all consuming projects. Because all those projects are "part of" BSD in a very literal sense. You can decide that ifconfig(8) should work differently, and bam, there you go, now it works differently. Now the tools that call it and parse its output work differently, too. Everything works differently; but everything still works. Document the new behavior in the man(1) pages.
Linux, meanwhile, is in essence a giant Mexican standoff: nobody can change the interface of the thing they're responsible for, without potentially breaking something someone else is doing that they're not aware of at all. So Linux devs, rather than changing old interfaces for the better, just leave the old interfaces where they are in a sort of "legacy-compat" mode, and build entirely new interfaces that work the way they "should." (And then people start to depend on the details of the new interface, and it all happens again five years later.)
>This comes down to one of the key differences between Linux and BSD: BSD "owns" its userland—the people who develop the kernel, or some system utility, can literally decide to change something, and then do a global search-and-replace on all usages of that something across all consuming projects. Because all those projects are "part of" BSD in a very literal sense. You can decide that ifconfig(8) should work differently, and bam, there you go, now it works differently. Now the tools that call it and parse its output work differently, too. Everything works differently; but everything still works. Document the new behavior in the man(1) pages.
1. Linux doesn't care about ifconfig et al. It's Debian/RedHat/Arch/Gentoo/Slackware that does. And they can do the same grep across their codebases.
They don't, for the same reason BSD doesn't just run in and change things: lots of admins have scripts, not maintained by BSD, which depend on the existing behavior.
2. I would assume that the BSD ifconfig came first, so why didn't existing distros copy their system? Is it a BSD vs. SysV thing?
The package management is definitely what drove me away from FreeBSD in the early 2000s. BSD people loved ports, but my main recollection of it is that I spent a lot of time waiting for things to compile. A lot of time, hours in some cases. I liked how apt worked a lot better, both in terms of speed and the interface.
Nowadays, FreeBSD does actually have more apt-like package management, with a fairly simple high-level interface that installs prebuilt binary packages (properly resolving dependencies among them, etc.), called pkgng. I recently tried it again, and it's nice, exactly what I was missing at the time. NetBSD's story here is good today as well, with pkgin (the high-level interface to pkgsrc) also being quite nice, nice enough that I use it on OSX despite never having even run NetBSD (so I'm not using it on OSX because of a preexisting love of it) [1]. But 15 years ago, apt was clearly better, at least for my uses.
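For the curious, a typical pkgng session looks something like this (the package name is just an example):

```shell
pkg update          # refresh the remote package catalogue
pkg install nginx   # prebuilt binary, dependencies resolved automatically
pkg upgrade         # upgrade all installed packages, apt-style
pkg info nginx      # show what was installed
```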
FreeBSD has had a package system (pkg_add, etc...) since... the mid 90s? The packages were just ports pre-compiled on freebsd.org's computer.
The new package system is much better, and more apt-like. Again though, they are ports pre-compiled on freebsd.org's computer.
Most port options can be selected through an obvious curses interface that comes up when you build the port these days. The defaults are usually reasonable. As long as Unix software is distributed with important compile-time options, I will enjoy how easily they are managed through ports.
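The ports workflow being described is roughly this (the port path is an example):

```shell
cd /usr/ports/www/nginx
make config         # the curses dialog for compile-time options
make install clean  # build with those options and tidy up afterwards
```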
Even better, you can easily run your own pkg repository with binaries compiled for multiple architectures (with build environments isolated with "jail") and providing the full range of compilation customization included in the ports tree.
>you can easily run your own pkg repository with binaries compiled for multiple architectures (with build environments isolated with "jail") and providing the full range of compilation customization included in the ports tree.
Sure, if you're running a few servers. It doesn't help if you're running one though.
It's a choice. If you can live with binary packages produced by the FreeBSD project, why not use those?
If you want something special, why not set up poudriere and update during the night? Realistically, if you are running FreeBSD in production, you want a spare server for trying new releases, so you may as well use that box to build binary packages.
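A rough sketch of that poudriere setup (the jail name, release version, and package list here are assumptions, not from the thread):

```shell
poudriere jail -c -j builder -v 11.0-RELEASE   # create a clean build jail
poudriere ports -c                             # fetch a ports tree
echo 'www/nginx' > /usr/local/etc/poudriere.d/pkglist
poudriere bulk -j builder -f /usr/local/etc/poudriere.d/pkglist  # build overnight
```

Point pkg at the resulting repository and your spare box does double duty as a package builder.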
Debian doesn't always just work, because a lot of packages are very old. I think Ubuntu is (or is supposed to be) more "just works." But I think this is a personal point of view thing.
Also I think the FreeBSD handbook is awesome, even for newcomers. Is there something like that for Debian?
My point is that Debian does not always just work. That is my argument. It very well may "just work" in your use case, but it doesn't "just work" in all cases, as your original comment seems to imply.
Also, there is a difference between new as in bleeding edge (e.g. Fedora) and newer (like Ubuntu or Debian Testing).
I don't know if *BSD "just works" either, but that is not what I am arguing here. I am making a point against your statement that Debian "just works."
And you have large amounts of BSD code in other projects - stuff like OpenSSH and many libraries.
I didn't include the usual suspects, such as non-router hardware manufacturers (ARM, Intel, HP, Dell, ...) that use it a lot, though they also use other stuff.
In other words: the market share grows, I think, if you count big companies. Smaller companies tend to stick with the most-used option, and that is simply Linux (among other things, for the legal reason already mentioned here).
Linux runs all of Android, which is the most common OS today. But it also runs the biggest clouds in terms of infrastructure: Google and Amazon are completely built on Linux. Linux also runs about 95% of the supercomputers in the world.
As containers eat the world you will see more and more Linux, IMO.
I used both since Linux 0.96; for networking, BSD was far superior, but you could tell very early which was going to win, so I switched once Linux was usable.
For me (network/server guy for a small ISP, with complete decision-making authority in this context), I'd love to run FreeBSD on all my servers and OpenBSD on all my routers (even though I (mostly) make my living from my Cisco knowledge).
Here at home, within a six foot radius of me, I've got a nice new (waaaay overbuilt) workstation, a pair of ThinkPads, some cheap Dell laptop, and a rMBP. The Dell and rMBP are rarely used but I had to decide what to run on my three "primary" machines (and the couple of neglected machines in the garage).
I'd much prefer to run FreeBSD everywhere but suspend/resume on the laptops is a deal breaker for me. I could run FreeBSD on the workstation (which is always running) but then I've got two operating systems to keep up on instead of one, so instead I just run Linux on all of them.
At work, on the servers, my line of thinking is basically the same. I pretty much have to run Linux (over BSD) in a few cases, so do I run FreeBSD where I can and Linux everywhere else or just make things easier on myself and run Linux across the board? Since I'm already running Linux on all my personal machines it makes sense to use the same at work.
> Why doesn't BSD have a greater market share? I just set up an openbsd firewall and loved it. Plus I have experience with zfs on FreeBSD-also loved it.
Lawsuits and subpar driver support in the 90s, basically.
> This is a problem faced by all operating systems - even new versions of Windows. Most of the time, users don't care about the total number of drivers, only if drivers exist for their hardware. There are some omissions in terms of driver support, but FreeBSD supports a wide range of network cards (including an increasing number of 802.11n chipsets), most sound cards, and AMD, Intel and nVidia GPUs.
> Device support is a constantly moving target because we can't tell hardware makers to just stop releasing new hardware for a few years while we catch up. New devices do take some time to support, although some manufacturers do provide drivers themselves, for example nVidia provide drivers for their GPUs, and Intel for their newest network and storage controllers. Other manufacturers provide significant help to FreeBSD driver writers, including Broadcom, JMicron, HP, Mellanox, Chelsio and Solarflare. If you find a device that isn't supported, please let the project know and also notify the manufacturer: the only thing that motivates hardware manufacturers to support any operating system is the knowledge that their customers want it.
This is much, much less of a problem now than it used to be, but I remember building a hobby box in the 90s when I wanted to run FreeBSD: I had to be very, very careful about what hardware I put in it, so I ended up installing Debian. (Anecdotal, and possibly coincidence, but Debian had all the correct drivers.)
The net result is that no one really switched back to BSD once those issues were (mostly) resolved, since too many devs were targeting Linux by that point. Driver support still lags on the BSDs (although nowhere near as badly as it used to!), so it's just easier for most people to use Linux.
The AT&T lawsuit stopped BSD development for a year or so, and many people moved to the Linux community during this time because of the uncertainty caused by the lawsuit.
I think the big difference in culture accounts for more than the lawsuit.
FreeBSD moves very carefully. New features are introduced when they are quite mature. For major changes, the old way of working is maintained for quite some time.
In the early nineties that showed in hardware support. If you wanted a real system, you got yourself a SCSI card. There were some really crap IDE controllers out there, and in the FreeBSD community nobody cared about them. So resources are one thing, but basically the FreeBSD community didn't want to spend time getting completely broken hardware sort of working.
(For a long time, partitioning was also a twisted maze. The BSD partitioning scheme was combined with the MBR in weird ways. No problem for a system dedicated to FreeBSD, tricky if you wanted to share the disk with Windows.)
The Linux community was far more interested in running on everything.
In the same way, the Linux community is much more into shiny and new. Color ls famously broke scripts because it also emitted escape sequences when you sent the output to a pipe.
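The fix that eventually became standard (what GNU ls calls --color=auto) is to emit escapes only when stdout is a terminal. A minimal sketch of that check in shell (function name is mine, not from any tool):

```shell
# Only colorize when stdout is a terminal; piped output stays plain,
# so scripts that parse it don't choke on escape sequences.
colorize_if_tty() {
    if [ -t 1 ]; then
        printf '\033[0;34m%s\033[0m\n' "$1"  # blue when interactive
    else
        printf '%s\n' "$1"                   # plain for pipes and files
    fi
}

colorize_if_tty "example.txt"
```

Inside a pipeline or command substitution, fd 1 is not a tty, so the plain branch runs and downstream scripts see clean text.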
By and large a FreeBSD system looks less cool than a Linux. So FreeBSD attracts the users who know they want stability above everything else.
>The Linux community was far more interested in running on everything.
In contrast to some opinions, I like this sometimes.
For example, would the world come to an end if FreeBSD came with neovim?
If the prompt out of the box showed pwd?
I know that some things are controversial (ahem systemd), but when learning a new system, little things matter and make your system popular.
(And, as a side rant: in contrast to some who like to use Haskell on NixOS (which I actually like!) running on an obscure chipset, popularity is good. If someone asked me which Unix to learn, I'd send him to Linux and not FreeBSD, since it's going to be much easier to find noob help online. Then this noob will go on to become a sysadmin, and he'll recommend Linux because he knows it and will be able to find others who do.)
That's a bit like asking Debian to be more like Ubuntu. And I have seen plenty of software that only works on Ubuntu, on other Linux distros you are on your own.
For an end-user friendly BSD you may want to look at TrueOS (https://www.trueos.org/). They take FreeBSD and then add more sauce to provide a better user experience.
True, hence why so many products are built on top of FreeBSD. But for organizations not shipping Linux binaries, the license doesn't change much. Also, Linux is a pretty awesome operating system, so it doesn't make much sense for lots of organizations to switch to FreeBSD even if there are some advantages to using it.
I do feel like it has a comparative advantage in all of those spaces. ZFS, jails, and pf are killer features for each of them. The only possible drawback is that it requires your IT staff to know the technologies. What a company saves on licensing easily makes up for the staff wages, though... maybe.
Good to hear! FreeBSD is my favorite OS and I wish I could use it at work, but we're a CentOS/RHEL and Windows shop. However, the security team has a handful of FreeBSD servers going, so it is used.
>But for organizations not shipping Linux binaries the license doesn't change much.
hmmm... Android, Chromebooks, smart TVs, routers, set-top boxes, NAS boxes, etc. - these all ship Linux binaries, so I don't think it's a big issue.
If you do kernel space modifications which you want to keep as a 'competitive advantage', then going with something like FreeBSD instead of Linux would make perfect sense, otherwise I don't see why the license would matter much.
It is the pain of complying with the license. You have to ensure you have everything in place just in case someone asks for the source code. It isn't hard, but it is effort that nobody wants to do yet you have to. It isn't good enough to say "we use version 1.2.3 unmodified download that from the internet", you actually have to have a copy of version 1.2.3 ready to send out (apparently in case the internet deletes all versions of source code 1.2.3).
Fewer companies contribute to FreeBSD because its market share is lower than Linux's. And I believe the first part of Beno's answer is correct - it's because the FreeBSD version of RedHat didn't happen at the right moment. RedHat was there when the window of opportunity opened for an open source OS, and it was able to boost the Linux ecosystem by paying Linux developers, which led to higher adoption, which led to more contributions from the corporate world.
GPL does not require contributions. GPL requires access to sources. Not to changes history, only sources. Contributions happen because it's more profitable for company (in long run) to participate in community and influence development than to be a passive actor.
My point is: the license has very little to do with market share in this case. It's more about being good enough and having a certain amount of luck when the right moment arrives.
I think the real reason is that in the mid to late 90s, there were several hardware vendors that focused on Linux. One of them at least went public (VA Linux).
In fear of losing hardware sales, Dell, HP, Sun (their x86 hardware unit), and IBM all made sure that their servers could also run Linux. This resulted in a lot of hardware support being added very quickly.
FreeBSD never had that "tornado" of uptake from major hardware vendors, and thus, support always lagged, if it was there at all.
A similarly amazing project, in my opinion, is iohyve. It gives you a fairly large portion of what you need to become a virtual-server provider, in one easy-to-use command.
"I've never understood why jails didn't take off. I guess maybe since linux took off and the bsds didn't, but they're just nice and elegant."
The very first VPS[1] provider, JohnCompanies, was built entirely on jail (and FreeBSD 4.x).
At the peak we had over a thousand FreeBSD jails running for customers all over the world.
In the end, fancy provisioning and fine-grained resource tuning (with products like Virtuozzo) won out. Although JC is still operating and still provides jail-based VPS.
The offsite backup infrastructure that was built for JC customers became a standalone company in 2006 and was named "rsync.net".
[1] The term "VPS" had not been coined in mid-2001 so I made up the term "server instance" which didn't stick.
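For readers who never touched jails: on modern FreeBSD (well after the 4.x era described above), a jail can be declared in a few lines of /etc/jail.conf. All names and addresses below are hypothetical:

```
www {
    path = "/usr/jails/www";               # jail's root filesystem
    host.hostname = "www.example.com";
    ip4.addr = "192.0.2.10";               # address the jail is bound to
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Then service jail start www brings it up.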
Yes - we just got pull requests into sshuttle as well as some patches to FreeBSD to make ipfw and UDP tunneling work the right way ... I am testing it now (Feb 2017) and will post to -hackers when it is done ...
It's surprising to me as well. I think it's because of the way these technologies were marketed.
FreeBSD jails, Solaris Zones, OpenVZ, and Linux LXC were marketed as fundamental building blocks for improved consolidation and/or improved separation of concerns. These technologies could solve all these problems, but fundamentally they were sold as a transparent abstraction. "To end-users, jails operate and feel just like VMs or real hardware" they said.
Docker was different, docker was fundamentally sold as a higher-level product. Docker is not "just like real hardware", docker was a new way to think about deployment, a new way to think about pre-packaged building blocks.
I think it has a lot to do with marketing and hype.
The BSD community in general is anti-hype, vs. Docker having a for-profit company (dotCloud) behind it. Even its first version's website looked nice and had a lot of pretty graphics.
It really does more than people think. Now, the following statement doesn't apply to Docker, but I've seen people with great technical understanding choose totally messy toy projects because they had a well-designed website with pretty images and lots of marketing.
I mean, to some degree this also works for Docker, in the sense that I've seen people with totally wrong expectations of what Docker does.
It's just what good marketing does: Giving people the impression that something is magic.
The BSD communities always had a strong no-hype stance. They didn't even have cool names, as they do now with bhyve, etc. They also had a strong "keep it simple" mindset, which counters the "there is magic behind it" effect. It is the sysadmin mindset of preferring boringness and no surprises.
The only other bigger open source project I know trying to be boring to some(!) degree is Go. But that only works, because you can always say "it's from Google" if you need to market it.
Of course there are others, but many of them are way less known.
The only other _somewhat_ new software I can think of that was considered cool without a big hype machine and/or a company pushing it from the beginning is Redis. Out of nowhere, I mean. Of course there is much cool stuff done by people who have already proven they can do amazing things.
But I am sure other people can come up with way more.
Having used both plain LXC and BSD jails before Docker was a thing, and using Docker now, I can say much of this is true. But there's a fundamental additional point: Docker's marketing increased its usage for application development in general, and now it has become technically useful because of the ecosystem that grew around it. Docker Hub is quite useful if you just want to deploy some app quickly - usually it's already been done for you.
That's an interesting way of looking at it. Docker also came with a way to do things. Jails you were still responsible for having it set up correctly (though you could tar them which was nice).
> That's an interesting way of looking at it. Docker also came with a way to do things. Jails you were still responsible for having it set up correctly (though you could tar them which was nice).
Separate kernel and OS is ideal for containers, IMO. What was thought to be a plus with BSD I believe turned out to be a weakness.
Hear, hear!
I think the same way. I feel like LXC is super powerful and flexible at the same time. Docker and all the terminology around it (Dockerfile, Compose, Swarm, and many more) just feels like unnecessary complexity.
A few Unixy scripts to automate LXC commands and the infrastructure should be set. The fewer wheels to grease at this level, the better.
I think the point of Docker (and maybe Kubernetes) is that its containers are intended to be stateless one-offs, so they aren't (and can't be) managed like a traditional server. Instead of patching a server, you'd build a new image incorporating said patches and launch that in place of the old one. It's not a bad tool for scalable services that can be stateless (i.e., not databases or file servers), even if I personally haven't quite gotten the hang of that workflow yet.
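A sketch of that replace-don't-patch workflow (the image and container names are made up):

```shell
# Instead of patching a running container, bake the patches into a
# new image and swap the container out for a fresh one.
docker build -t myapp:v2 .            # new image with the fixes baked in
docker stop myapp && docker rm myapp  # retire the old container
docker run -d --name myapp myapp:v2   # launch its stateless replacement
```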
It took off like crazy for VPS hosting, until superseded (although I know of many that still use it).
OpenVZ was popular enough to support the parent company Virtuozzo/Parallels/SWSoft (many name changes over the years) and the commercial Virtuozzo product is still sold: www.virtuozzo.com .
OpenVZ never being accepted into the mainline kernel is what eventually killed it, IMHO. Since cgroups and the other code underlying Docker/LXC etc. are in every kernel, it was only a matter of time until the default became accepted and then widely used.
I think that's a lesson of history and market share. It takes combining all the right circumstances for something to really take off. Still blows my mind that *bsd isn't used more often.
My only guess is that, like everything, Microsoft beats it for the reasons MS beats anything: legalese.
For many companies, software choice is all about who's behind it, i.e., they want an outside entity bound to the service by a legal contract. I think that's a huge reason Microsoft owns the corporate IT sphere, and why RedHat/Ubuntu have some standing while *BSD does not.
To me the separation of kernel and OS with Linux versus all together with BSD makes the container solution cleaner.
Take Google, which is rumored to be using the same kernel in their cloud, ChromeOS, and Android.
They conceivably could have the same kernel from IoT, wearables, phones, tablets, 2-in-1s, laptops, and TVs up to the cloud.
Google now is using the container functionality in ChromeOS to enable Android. Now if they give access I can run my cloud service on a laptop or a tablet. Instead of spending a fortune for a Swift version and a copy in Java.
But I also could develop once and deploy everywhere. Google has containers like this on ARM and x86, and in their cloud on POWER.
Now the containers are arch specific but not far from fixing that.
Google needs to allow a second SSD that is walled off from the system SSD and give us access to launching containers. We get such storage in something like the M3 with the rumored 16 GB Samsung Pros, but it is flash.
It is just not possible to do the same in BSD based on my very old experience. Has it changed?
The technologies upon which this is based have been part of FreeBSD long before Docker was even conceived. This appears to be a new management tool for existing functionality, along with a praxis for use.
The technologies that Docker was based on had also been part of Linux for a long time beforehand. Docker was simply a new management tool for handling them.
Docker for FreeBSD is over a year out of date and not production ready. I tried to use it for some things and it does work, but doesn't support any of the newer APIs in newer versions of docker compose and other orchestration tools.
I really wish the Docker team had made FreeBSD a first-class citizen, considering the native ZFS support in FreeBSD. Currently the only thing Docker runs on natively is Linux. Even with the newest macOS/Windows variants, it's still running in a hypervisor.
Docker runs natively on Windows as well as Linux, with no hypervisor. There is a Solaris port being worked on (unless it got cancelled). We would love an upstream FreeBSD port, I have talked to a few people who are interested in working on it. The ZFS side should be fine as there is already support, and the old port should be useful as a basis.
Containers use the Linux kernel. How does a Linux container run native on Windows? Are the entry points mapped? How does Windows enable shared read between containers? With Linux it is the dir path and then inodes. How does Windows pull this off? How does SElinux work?
Windows containers use the Windows kernel. Windows does not have SELinux, so of course that is not supported. They run Windows programs, not Linux programs, so there is no mapping of entry points. There are lots of docs from Microsoft, e.g. https://docs.microsoft.com/en-us/virtualization/windowsconta...
But a hypervisor is a full VM; LXC and Windows containers provide that funky process-level virtualization.
I guess the point I'm trying to make is that there will be no "native" Docker for macOS. You'll always (or for the foreseeable future) have to start a Linux VM (maybe Windows?!) to host the processes.
> BSDploy’s scope is quite ambitious, so naturally it does not attempt to do all of the work on its own. In fact, BSDPloy is just a fairly thin, slightly opinionated wrapper around existing excellent tools.
But how is the author going to become rich and famous and be invited to all those conferences doing that? They need to drop this silliness and write the whole thing from the ground up! Get cracking, we want to see AT LEAST 30 000 lines of Go, or 10 000 of OCaml!