"Selected highlights include:
* Support has been added for qcow2 images and external snapshots in vmm(4)/vmd(8).
* "join" has been added for Wi-Fi networks.
* Security enhancements include unveil(2), MAP_STACK, and RETGUARD. Meltdown/Spectre mitigations have been extended further, and SMT is disabled by default.
* rad(8) has replaced rtadvd(8).
* bgpd(8) has undergone numerous improvements, including the addition of support for BGP Origin Validation (RFC 6811).
* smtpd.conf(5) uses a new, more flexible grammar.
* For the first time, there are more than 10,000 (binary) packages (for amd64 and i386)." [1]
I haven't taken the time to write this up, but one of the handy things about OpenBSD is you can take a small 16GB USB stick, format it with one small FAT partition and copy the installer to that. Then you boot the installer on your Ubiquiti Edge Router and install OpenBSD to the unpartitioned space.
With a little work you can have your own caching DNS server, including blocking of domains used by tracking sites, and, if you want, Privoxy or a Squid proxy (a rough sketch of the DNS part follows below). It's also easy to set up your own root CA and switch over to certificate-based authentication for wireless clients, as long as the wireless base station supports RADIUS.
I haven't published a tech note on it yet because Android still complains about importing self-signed certificates even when you import the root CA.
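For the DNS-blocking part, here is a minimal sketch using unbound(8) from base; the address and domain names are only placeholders, and in practice you would populate the blocklist from a real feed:

    # /var/unbound/etc/unbound.conf -- caching resolver with a couple of blocked zones
    server:
        interface: 192.168.1.1
        access-control: 192.168.1.0/24 allow
        # answer NXDOMAIN for tracking domains (example names only)
        local-zone: "tracker.example.com." always_nxdomain
        local-zone: "ads.example.net." always_nxdomain

Then "rcctl enable unbound && rcctl start unbound" and point your clients at it.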
To be fair: you can do that with probably any unixoid OS. IMHO, it is just much easier with the BSDs because a) documentation of how to do it is much easier to access than on Linux or the other two(TM) and b) BSDs tend to follow the KISS principle, meaning that even with the superb documentation one can just read the POSIX shell scripts to save some time.
I used OpenBSD for some time as a platform for hobby servers around version 4. It is very stable, has low hardware requirements for the base system (I had WordPress running on 32 MB of RAM and a Pentium 200 MMX, to my amusement), and has a very fast and powerful firewall / packet filter. It is one of the most elegant operating systems I've ever used: simple to use, but with huge functionality. I second that the documentation is one of the best. I must try it again, maybe on the desktop this time.
I run OpenBSD as my border firewall which it handles very well.
One thing I wish that OpenBSD devs would change in their philosophy is the --help messages. Many commands simply offer a list of switches, as if that's somehow helpful. Sometimes you need the detail in a man page, but a lot of times you don't and it would save so much time and energy to have a succinct list in the --help message itself.
One thing I really dislike about modern UNIXes is their lack of decent manpages, offering stand-ins like --help instead.
I love the BSDs and especially OpenBSD for their attention to manpages. It's the main reason why I don't use Linux anymore unless I have to.
Adding detailed --help messages would take time away from maintaining manpages, and it also duplicates information. If you want to know what the switches do, read the manpage.
Indeed, as an example "ls --help" on Linux prints at least 3 pages of information on stdout. I would rather read a top-quality man page and have a succinct "-h" message.
...the same way the world has managed to survive for decades without Linux containers in general (let alone specific wrappers around that like Docker is)?
With OpenBSD specifically, you can get 90% of the way there with chroots, standard process isolation, and a bit of shell scripting to handle deployment automation.
Yeah, Docker's cool, but it's really not that hard to run multiple applications/services on the same physical machine while keeping them from clobbering each other or the OS in general (step 1 being to make sure each service/daemon is running under its own minimally-privileged user).
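To make that "step 1" concrete, the per-service setup is roughly this much work (the service name, paths and login class here are just examples):

    # create a minimally-privileged user for the service, following the _name convention
    useradd -d /var/empty -s /sbin/nologin -L daemon _myapp

    # give it a chroot directory it cannot write to
    mkdir -p /var/myapp
    chown root:wheel /var/myapp
    chmod 755 /var/myapp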
> With OpenBSD specifically, you can get 90% of the way there with chroots, standard process isolation, and a bit of shell scripting
This is a classic case of "THAT HackerNews response to Dropbox" [1].
If it's that easy, why isn't there a prepackaged wrapper with simple switches, rather than leaving developers to fend for themselves among piles of custom hacks?
The problem is not just deployment, the Docker differentiation is simplification of the development pipeline. OpenBSD should seriously look at their story in that area, because it's one of the few where they could still potentially compete (because Docker is still fundamentally a pile of hacks, and pretty insecure too).
Unless, of course, everyone is happy to remain "the little project that could" and crack jokes like the BCHS stack.
> OpenBSD should seriously look at their story in that area, because it's one of the few where they could still potentially compete (because Docker is still fundamentally a pile of hacks, and pretty insecure too).
I'm pretty sure the OpenBSD developers are completely comfortable with their story. They develop this software for themselves first. If you like it and it's useful to you, you are welcome to it. If not, look elsewhere. That's been their working philosophy all along, and if you ask me that's what makes it so great to use. Every piece of the system is carefully thought out and organized, so it doesn't suffer from nearly as much feature creep as other systems.
And as far as setting up chroots and isolating processes: it's not hard, and sometimes you don't need someone else to write a script for you when you can do it yourself in 10 steps or less.
> I’m pretty sure the Open BSD developers are completely comfortable with their story.
They were pretty comfortable with their patching story -- until enough people complained and lo, syspatch(8) appeared.
Beyond the posturing, nobody likes to run a project that nobody else uses; and sometimes even lusers are right.
> Every piece of the system is carefully thought out and organized
I am not saying they should rush out a crap docker clone, but rather a "carefully thought out and organized" docker alternative.
> you can do it yourself in 10 steps or less.
It's still 5x the steps you need with docker. As I said, it's about simple reproducibility rather than just isolation. Even if it were easy to write my own docker-compose (and indeed many people argued the same, when docker first emerged, because it actually was little more than shell scripts), having one well-defined set of tools helps tremendously with kickstarting adoption and to avoid reinventing the wheel every few weeks.
I would be surprised if user complaints were the motivation for syspatch. More likely, the author built something he found useful, and contributed it to the project.
Most of the time on the openbsd email list, when a "user" suggests a feature or asks for a change to something, the reply is something along the lines of "sounds great, where is your patch?"
Exactly this. Unless of course you became an Iridium level donor with the caveat that someone promised to build a Docker clone for you.
Edit: And in response to toyg's comment, here is a link to a video of a talk by the developer in question, who mentions in passing that he builds things for OpenBSD that help him put it into production. This is less than a minute into his talk. Please educate yourself about the project before making ridiculous demands on the devs' time and falsely assigning wrong motives to them. It's unfriendly.
Oh, I am educated enough, don't worry. Which is why I was so surprised to see it finally adopt a solution for a problem that had been pointed out for 20 years, after spending those 20 years replying to everyone that it was just the wrong thing to do in principle.
> It's unfriendly
It's also unfriendly to gaslight away blatant problems, for whatever reason, until they get fixed -- at which point they are admitted as actual problems. Then again, OpenBSD is hardly a friendly project, culturally speaking.
I've read through this subthread with objective concern.
A criticism that seems fair has been presented here in a benign, non-antagonizing manner, and I am very perplexed as to why all comments arguing in favor of that view have been anonymously downvoted wholesale without anything approaching sufficient substantive explanation.
This is not the kind of behavior that (the) HN (community) is respected for.
Let me sum up what I see here.
- Someone argues in favor of Docker, and is downvoted by enough people that their comment turned grey. I think this means it's at -5 or -10 or something. So, no explanation, no comments; just downvotes.
- The one reply that goes into a bit of detail comes from a traditionalist UNIX standpoint, and is a bit passive-aggressive. (This comment isn't grey.)
- The next reply frames the parent as "THAT HackerNews response to Dropbox", and highlights that the implied simplicity and sense of "only one obvious way to solve this problem" is in fact not a given, and that significant wheel-reinvention must be (and presumably has been) done "on the ground". Docker's simplicity is highlighted along with its insecurity. This comment is grey.
- The next reply further brushes off the stated arguments by (passive-aggressively) noting that the project seems successful enough, and maybe that's because they actually have it figured out. (This comment isn't grey.)
- I read the next reply as a gentle reminder of the importance of remaining relevant going forward - and the fact that this doesn't necessarily mean ground-up reinvention. This comment is also grey.
What is going on here?!
A nontrivial number of comments in this thread, and the other OpenBSD threads I've seen, are basically all chanting about OpenBSD's perfection.
Good customer service, good social skills and forward thinking are some of the most fundamental aspects of commercial success. Does open source think it can get away with "no shirt, no shoes" just because it's free? :(
> More likely, the author built something he found useful, and contributed it to the project.
It required a service set up by the project itself. And this after years (decades?) of explicitly rejecting the concept of automated patching (because it supposedly engendered "a false sense of security"). Come on.
There is one defined set of tools. It's called learn to use your OS. Docker is basically "I don't want to learn about init scripts, so I will write thousands of lines of Go code instead".
In my experience, "simplification" and "Docker" don't really go in the same sentence. Yeah, it's (maybe) simpler if you're just plugging things together that are already Dockerized, but if you're writing your own Dockerfiles, none of the actual complexity really goes away. If anything, you're making things even more complex by containerizing things that don't actually need to be containerized.
"why isn't there a prepackage wrapper"
Because nobody's gotten around to writing one yet, or perhaps because nobody's felt the need to do so yet. Not much stopping anyone from doing so. That's where the "bit of shell scripting" comes in. Writing an rc.d script [1] (using rc.subr to do away with the boilerplate associated with the sorts of init scripts normally strawmanned by systemd advocates) ain't any harder than writing a Dockerfile (in my opinion it's actually much nicer/simpler). Neither is creating a user under which your app will run. Hardest part will probably be around deciding what needs to go in your chroot.
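For the record, the whole rc.d script for a hypothetical daemon really is about this big (the daemon name, path and flags are made up):

    #!/bin/ksh
    #
    # /etc/rc.d/myapp -- rc.d(8) control script for a hypothetical daemon

    daemon="/usr/local/bin/myapp"
    daemon_flags="-c /etc/myapp.conf"
    daemon_user="_myapp"

    . /etc/rc.d/rc.subr

    rc_cmd $1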
Hell, if you include OpenBSD packaging [2] as part of that development pipeline, then tada, you're pretty much there. Install the package, run "rcctl enable your-app-name && rcctl start your-app-name", and you're good to go.
So the trick here would then be to extend that to install and run multiple isolated copies of that package, each instance having its own configuration and chroot. Or perhaps using a single package and writing your service/app such that it does the forking/chrooting for each isolated environment (which is what quite a few OpenBSD-focused daemons already do, from what I can tell/observe).
The overall point, though, is that comparing Docker to Dropbox is erroneous. Dropbox actually was simpler/easier than the "solution" posed by that comment. Compared to the OpenBSD way of doing things, Docker is not; if anything, it's more complexity.
I view it as a trade-off for excellent man pages. Rather than waste time figuring out the appropriate amount of information and which options to list in the short help output (especially for commands that might have a lot of and/or complex options), they have very simple rules: list the switches in the help output, and document everything, very clearly, in the man page.
Since I believe they don't accept code patches without relevant man page patches that explain them if they alter the behavior or add/modify an option, this seems like a sane way to avoid bikeshedding on what is essentially superfluous information, since the "appropriate" amount of info to include is ambiguous.
It's a good GNUism though, that I also wish the BSDs supported. No-one's saying that the BSDs should start supporting GNU style --long-options or switch to info instead of man, but friendlier help messages are always a nice thing.
I'm quite amazed that there exists hardware where they can test this! Maybe there are some embedded systems still using Motorola chips?
Anyway, OpenBSD is great. I'm running it on my router, and it also powers my 96 MB RAM dual Pentium Pro 200 MHz computer from the 90s :) That computer also has a Quantum Fireball 20 GB disk as its main storage, another thing I am amazed still runs...
> Because Simultaneous MultiThreading (SMT) uses core resources in a shared and unsafe manner, it is now disabled by default. It can be enabled with the new hw.smt sysctl(2) variable.
Is this on all architectures or just Intel's Hyper-Threading? I'd imagine that other CPUs with hardware threads (especially the 4- and 8-way SPARC T series) would be quite hobbled in terms of performance with this change.
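For reference, toggling it back on is just the usual sysctl dance (on hardware that actually has SMT):

    # re-enable SMT on the running system
    sysctl hw.smt=1

    # make it persistent across reboots
    echo 'hw.smt=1' >> /etc/sysctl.conf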
My biggest turnoff with OpenBSD is the more complicated package management if you want to have new versions and security updates beyond the versions packaged with the release. As far as I know you either have to stay on the bleeding edge with -current, build packages yourself, or trust a third party (M:Tier) to build for you, who last I checked were behind on firefox builds. I'd love to someday run it on my laptop though.
That you're configuring your system to be on these update cycles is very probably why your system is on these update cycles. This should have been abundantly clear.
For firefox at least, I don't think security patches are backported for OpenBSD, nor are newer versions. As the browser is the main way I view remote content, not being fully patched there is important to me.
As far as I know that will update to the latest packages built for that release, but those are not the latest versions of those packages. OpenBSD doesn't have the resources to rebuild every port as a package for -stable.
I run OpenBSD on my router and it's great. It was refreshing to not need Google for figuring out how to set things up, because everything is in the included manual pages, which often do a great job explaining new concepts. Want a quick intro to OSPF? man ospfd
I don't think I'll run OpenBSD as a desktop OS unless performance drastically improves, but it's staying on my router for the foreseeable future.
Interesting, I've been running Linux for 15 years, yet I can't understand how to upgrade from 6.3, which I just installed on a cloud server, to 6.4 by reading the official document.
(Specifically the part "instruct the boot loader to boot this kernel" because it says to type in the file name during the boot process, which is not exactly easy on a remote machine.)
https://www.openbsd.org/faq/upgrade64.html
I have a lot of respect for the OpenBSD devs, given how little people contribute back even if they use OpenSSH every day, but a bit more friendliness wouldn't hurt in getting people to try it out more.
Keep reading. The upgrade guide has two parts: the first part that assumes the common configuration (you have console access and are using the OpenBSD boot loader), and the second part[1] in the event that you do not. If you have console access and can boot from iso, you can also use the cd64.iso image to boot into the install kernel and follow the common upgrade procedure on the console.
There is an in-between case where you have a console, but do not control the boot loader (so cannot boot bsd.rd) and cannot boot an iso. It sounds like maybe you are in this situation? In this case, you can still follow the upgrade directions as if you do not have a console[1]. Alternately, sometimes when I am in this situation I just download the install kernel (bsd.rd), move it where the boot loader is hard-coded to look (/bsd), and then reboot. The boot loader will boot the install kernel and you can follow the usual / common upgrade procedure on the console.
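The "move bsd.rd to /bsd" trick looks roughly like this (the mirror path shown is only an example; use whatever release and architecture apply to you):

    # fetch the install kernel for the target release
    ftp https://cdn.openbsd.org/pub/OpenBSD/6.4/amd64/bsd.rd

    # keep a copy of the running kernel, then put bsd.rd where the boot loader looks
    cp /bsd /obsd
    cp bsd.rd /bsd
    reboot

    # at the console, choose (U)pgrade and follow the prompts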
There is also autoinstall[2], which can automate the upgrade procedure for you and reduces upgrades to just rebooting into bsd.rd and waiting a bit. There is a bit of effort to create the response file, etc., so this may be overkill for a single instance but is very useful for upgrading fleets of machines quickly.
Yes, I was struggling to do this from the remote shell, but I had remote access to the console to specify the RAM disk and get through the upgrade process. I'd consider myself one of the lucky ones, though, since providers like AWS don't offer that kind of real-time access to the boot process.
You can try it right now inside VirtualBox, and 20 minutes later you realize that it has all you need. After a few days, you'll notice that you spend most of your time inside the virtual machine, and then it makes sense to turn your setup upside down and work directly on the saner system.
It may have all that you need, but that sweeping generalization is far from true for everyone.
For example, things that I need that it does not have (last that I checked) include: filesystem-neutral nmount(), POSIX RT signals, the "new" 1990s dynamic PTY allocation system, a KDGETMODE ioctl on wscons, waitid(), fexecve(), and ACLs.
And then there are the things that would make life easier to not have to bodge around: const-correct ncurses API (available in ncurses since 1997), const-correct login_cap API, const-correct sysctl(), and no multiple evaluations in EV_SET().
It would be very welcome for it to gain all of these, but until then there is no "it has all that you need" realization on the cards.
How well does the inverse work - running VMs in OpenBSD to use Windows or Linux? Is it, for example, possible to run VirtualBox with good performance in OpenBSD?
I'm tempted to use OpenBSD as my OS, but I need to run things like MS Office. A VM is probably the easiest way to do that.
From what I've gathered (and experienced last time I've tried it), Windows support is nonexistent right now. That might change eventually, but OpenBSD and Linux (and I think NetBSD?) are the initial targets.
QEMU is available in packages/ports, though; while almost certainly slow, it's a start. VirtualBox on Linux requires a kernel module, so unless someone manages to port that to OpenBSD (which would translate to adding it to OpenBSD's kernel, which doesn't support loadable kernel modules anymore), that one probably ain't an option.
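If you do go the QEMU route, the invocation itself is unremarkable; just keep in mind it's pure software emulation on OpenBSD, so it will be slow (file names below are placeholders):

    # install the emulator, create a disk, and boot a Linux installer ISO
    pkg_add qemu
    qemu-img create -f qcow2 linux.qcow2 20G
    qemu-system-x86_64 -m 2048 -drive file=linux.qcow2,format=qcow2 \
        -cdrom install.iso -boot d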
> How well does the inverse work - running VMs in OpenBSD to use Windows or Linux? Is it, for example, possible to run VirtualBox with good performance in OpenBSD?
As far as I know this is not possible. You can always do it with qemu but it wouldn't be practical.
If LibreOffice is not appropriate for your needs, you may try Google Docs in the browser.
MS Office is just one of many applications that I need to use. I use loads of EDA tools that only exist for Windows or Linux.
I have been a user of LibreOffice since it was called StarOffice. It is dog slow, still has an ugly UI, crashes, etc. But for many things it has been, and is, good enough (and it keeps getting better). Unfortunately, customers want to use Word-specific files, Excel-specific files with macros and whatnot that just don't work in Libre. Google does not handle these files correctly either. It is not the fault of these alternatives, but of a world where people don't realize that they are being locked in. They use the features of the tools they have to solve their problems.
MacOS actually. In which I have Windows in a VM (and a few Linux versions), which works really well. The MacOS BSD userland is something I use extensively. But when the next laptop update cycle comes I can't see myself getting a new MBP.
So what I was hoping to do, and the reason for asking about the state of running VMs in OpenBSD, is of course to use it as the main OS, and run the other OSes I need in VMs. Basically as I do today. I would be close to home in terms of userland experience and get good (better) security.
BTW: I found this presentation by Mike Larkin from March 2018 about the state of vmm:
Apart from the fun and learning experience, I don't really see a point in running OpenBSD in a VM. In fact I have an OpenBSD 6.3 VM today. Playing around with pledge has been very interesting.
As in a heavier way of running a VM than what you get using the hardware virtualization support in the CPU, the way most modern VM systems do, like VMware, VirtualBox, Parallels, etc.
OpenBSD just isn't performant enough for me. I can't honestly say that it has what I need when I get multi-second system freezes when opening or closing tabs in Chromium (though this was under 6.0).
Battery life under OpenBSD is atrocious compared even to Linux.
It's a great server OS but it's not ready for laptop daily driver use unless your laptop just sits plugged in all day.
I use it as a daily driver for my laptop. It's quite performant. I also boot a linux distro sometimes on that machine, and don't notice much difference in battery life between the two.
They've been adding drivers like crazy for a while now, too, so there are many more hardware choices than there were a couple years ago.
Is there an OCI runtime for one of the BSDs available? I saw some commit messages in the containerd repository mentioning OpenBSD, but afaik running containers on BSD is still not officially supported?
Linux is a depressing mess after you've used OpenBSD. Such a high quality system, with stellar documentation. It's unfortunate that Linux has become so popular even though the BSDs are so much better. A bad historical accident. Damn you Linus...
"Quality" is a very subjective idea. Yes, docs are easy to keep consistent over long stretches of time if the speed of evolution is slow and a small number of like-minded assiduous gatekeepers can vet every change from a small number of contributors, but since the speed of evolution is slow, OpenBSD for example still has a giant lock protecting most of the kernel -- this despite nearly 2 decades of multicore chips shipping pretty much everywhere.
Any growing project will always suffer from consistency problems -- humans just aren't capable of scaling to the point where a single hierarchy can consistently manage a monstrously large system. Linux doesn't have a 'base system' quite like OpenBSD: coreutils, libc, bash, and 20 other similar packages all come from a large variety of differently minded maintainers across the world.
Viewed from a macro scale, OpenBSD's consistency breaks down immediately upon starting to install stuff from the ports collection - sure the base system makes a great firewall and basic HTTP server, but to use anything popular you pretty much immediately end up with ports. And the quality standards there are identical to Linux land, because it's the same code.
Also, I think FreeBSD eliminated the giant lock ~10 years ago and maintains similarly good documentation. It’s just a matter of priorities, and SMP scalability/high performance has never been on the top of the OpenBSD list.
Yes, and goals. OpenBSD is much more conservative with regard to features, and focuses on security, and that results in a more secure system overall. If you need to eke out every last percent of performance, use something else. If you want to worry less about security on a system that is fairly exposed (e.g. a firewall), then it can be an extremely good fit. For example, here's a comparison of security advisories for the main OpenBSD, FreeBSD and Linux projects (that is, not separated and exported items like OpenSSH).[1][2] While I'm sure these lists have their problems, they are interesting. Specifically, the number of exploits column...
On the other hand some of this can be explained by linux being a higher value target with more security research around it. If I find a remote exploit in the linux kernel I have a few billion targets, but if I find one in a *BSD I probably have a few million, tops.
I think it's a grey scale and always has been. Silly things like the floppy driver (which no one uses) on FreeBSD used to attach with [GIANT LOCKED] for the longest time (if not still), but it doesn't matter. You would not call FreeBSD still under the giant lock just because 1.44MB floppy drivers don't get any attention. So it's not a binary on-or-off on any OS; it's a process of unlocking one part at a time, out of hundreds or thousands of small subsystems and drivers, without causing races. OpenBSD is most certainly behind, but there is no nice line over which you can say "Ah, now it's not giant-locked anymore", since there will be some part somewhere for some arch that you don't care about that still retains the lock, and there were parts from day #1 that were completely unlocked on OpenBSD, like the getpid() syscall in all its triviality and race-freeness.
One thing Linux has going for it is a bunch of popular features for developers' desktops:
* Electron apps - VSCode, Atom, Slack, almost every universal desktop app that gets released today. Individual ports of apps to FreeBSD exist, but there is no way to automatically build Electron apps for any of the BSDs.
* Good desktop virtualization - KVM, VirtualBox (okay FreeBSD has these in theory), VMware
* First-class Docker and desktop container support (FreeBSD jails exist but there is no container ecosystem like there is on Linux)
You can still run Firefox, mail clients, vim/emacs, Unix utilities and LibreOffice on any of the BSDs just fine. They're lacking other niceties however. And that's a bad thing in my opinion - although it's mostly not the fault of any of these projects. Some people think BSD is better for lacking those options, but I can't live without them for one.
It's unfortunate: Linux and the BSDs used to have more or less the same application support. Anything Unix-y ran on anything Unix-y. There was nothing stopping you from having an OpenBSD desktop almost identical in function as, say, a Xubuntu desktop - one that looked much cleaner on the inside. But once broader commercial interest started happening for Linux, the BSDs were mostly left by the wayside in application support.
There is no point competing on desktop apps, even Linux has a very hard time there. BSD variants have always lagged behind in that area, you could have probably written the same comment 15 years ago by just moving Openoffice etc to the “Linux only” group.
The areas where OpenBSD can compete (in addition to the ones it already fights in, like routing and network-edge roles) are cloud deployments and orchestration. If it were as easy to run OpenBSD for development as it is a containerized Linux, people would pick the more secure choice. They could also make prebuilt appliances for the most sensitive components in a deployment (databases etc).
Post hoc ergo propter hoc. All the software you mention is relatively recent (with the exception of VMware arguably) and Linux was ahead of the BSDs in adoption long before these applications were created.
Now what exactly created this divide to begin with is a matter of debate but it took place mostly in the 90's and the very early 00's. By the mid 2000's it was clear that BSDs would have a really hard time ever catching up with Linux, especially for non-server applications.
Paradoxically it might be partially why BSDs are so clean and tidy: fewer features, fewer contributors and of course they're making a complete operating system instead of just a kernel or just a distribution.
FreeBSD remains my favourite OS for servers, it's rock solid and a joy to administrate. Unfortunately for the desktop I've given up almost a decade ago, driver support is just too lackluster, especially (as you mention) for proprietary software that can't be easily ported.
That didn't seem to work the same way for say.. mail server software?
Sendmail, postfix and qmail all had BSD-ish licenses, and those three covered quite a majority of all opensource mail serving at the time of BSD-vs-Linux "in the same early years"
Drivers for hardware that is better supported in Linux compared to FreeBSD - 90% of which is graphics - are usually licensed as MIT or BSD, not GPL, even though they are shipped as part of the Linux kernel.
Docker trends do favour Linux, while virtualization on FreeBSD is IMO still a hard wall for many people: jails, ezjail, bhyve, chyve, etc. require a fair amount of work to get just right, while running a container with Docker is straightforward. I don't mean to say which is better, but *BSD is definitely less attractive than mainstream Linux.
It's not fair to compare the two. I love OpenBSD for use cases where I care about correctness, stability and security, even at the cost of performance or having the latest feature set. Firewall, DNS, HTTPD and the like.
However, when it comes to running scientific applications and squeezing out the last bits of performance, or servers where people expect stuff to "just work, and if it doesn't, do apt-get blah", it's Linux that takes the cake.
FreeBSD and ZFS - the best I've worked with for Network Attached Storage. Can't believe it is just free.
Linux is terribly balkanized what with all the competing distros. There are no standards. This is one reason why Linux has not taken off on the desktop outside of tech circles.
systemd. It violates almost every *nix tenet out there, especially "a program should do one thing well". It has a few benefits, but the negatives outweigh these, namely that more and more programs outside of the base system now require systemd. This should never be. I and millions of others agree that an init system should be tweakable via text files. Not happening now. The logs are stored as binaries when they should be plain text. Debugging is more difficult. A program should do one thing well. Full stop. There is always BSD and Slackware, and I don't think Slackware will adopt systemd, as their user base doesn't want it. Slackware is the oldest currently-developed Linux distro out there and the vast majority of users want it to remain true to its roots while still advancing.
There is also GNU Shepherd[1] on GuixSD[2], a declarative init system, without systemd's million-plus lines of shitty code[3] and massive scope creep into everything that you can (but probably don't need or want to) do on a computer.
Quite. Especially now with their hideous CoC. That thing is straight out of the pits of PC hell. Many people think Linus will never again assume command now that he's stepped down for his so-called infractions. Gone now is the merit-based stuff and now it's all based on PC garbage. I miss the old days of IT, especially now.
Is OpenBSD suitable for use on a laptop? I have a Dell Latitude 7370 running KDE Neon, and it runs really well. All hardware is supported, battery life is on par with Windows, etc. Would OpenBSD work well on this laptop?
I'd like to hear about this as well. OpenBSD has been on my list of "to try" for quite a while, but the app/package thing has been a big question mark for me.
I _really_ enjoy being able to apt-get or brew install pretty much any of the applications out there and am a bit worried about how that experience would be on OpenBSD. I guess the best way to find out would be to try it eh? :)
As that model has an Intel chipset it will probably work fine. The main problem will be wireless, because of the Intel microcode blob. IIRC you will want to look at this:
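In case it helps: the firmware piece is usually just a matter of running fw_update(1) once you have any working network connection (wired, USB tethering, whatever); the interface name and network details below are only examples:

    # fetch and install missing firmware packages for detected devices
    fw_update

    # then configure the wireless interface, e.g. an iwm(4) device
    ifconfig iwm0 nwid myssid wpakey mypassphrase
    dhclient iwm0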
Driver support will always trail behind Linux, so it depends.
Personally, I'd be more interested in having OpenBSD as the standard for cloud deployments, a place currently inhabited by Ubuntu derivatives. If one could get the declarative goodness of Nix, the popularity of Docker, and the reliability / security of OpenBSD, the world would be a better place.
No reason Nix shouldn't be able to run on OpenBSD right? Nix works on macOS quite well. It would obviously take some work to get it to configure the built in OpenBSD services but it seems like it could be worth the effort.
I have three laptops: two run OpenBSD, the other one Debian stable. Not sure how well battery utilization compares (I don't need battery for more than an hour usually). Both work fine, and from a user-space perspective are very similar (esp. if you automate configuration and dotfiles). OpenBSD is about an order of magnitude less work to configure, especially if you have a non-trivial network setup (and some things, like IPv6 LLA aliases in /etc/hosts, Debian stable does not even support). If you are reliant on garbage like NetworkManager and Gnome/KDE auto-mounting, the story is probably different, but that is just a good opportunity to learn better ways of working on Unix.
I don't know about that specific laptop, but I was running OpenBSD on my work laptop (ThinkPad T470) for a while and it worked reasonably well (though I'm more tolerant of things like suspend/resume not working right).
I ended up having to switch to Slackware, though; lots of stuff I have to use for work that simply doesn't run on OpenBSD. If vmm gets to the point where I'm able to run Linux desktop apps with reasonable performance (and ideally a reasonable degree of integration with the rest of the system) I might try switching back.
Thinkpads are VERY well supported - pretty much everything works, except bluetooth and fingerprint reader. I just ordered a Thinkpad X1 Carbon 6th gen as a replacement for aging Macbook to use as a main home machine.
You can search NYC*BUG board, where users submit their dmesg(8) output, e.g. there is a submission[1] for Dell Latitude e7270.
It's lacking a lot of the hardware support that Linux has. I wanted to try it out a year or two ago and it wasn't compatible with any of the 4-5 desktops and laptops I have, but common Linux distros install on all of them with no issues. The only way I could get OpenBSD to install was as a VM.
I mean, there's no denying that the Linux ecosystem is a mess from a cohesion perspective, but I think it's a bit unfair to come to this conclusion without taking more into account. As far as server operating systems go, nothing is more battletested than Linux today, and nothing really has a more complete offering.
OpenBSD is great, of course, but my favorite thing about open source is the cross collaboration. Dumping literal decades of constant refinement to Linux and its ecosystem just because OpenBSD is much more elegant seems unwise. There are things Linux and the general Linux stacks could learn from OpenBSD (and should.)
I don't think the world would be better if OpenBSD were in the position Linux is in today (though I have a hunch the security story around FOSS stacks would be better.) I think there is a lot of good that Linux can do with its more fast-and-loose organizational structure.
If you are talking about Linux in terms of distributions, some of that mess carries over to OpenBSD as soon as you start installing packages, since a lot of software depends upon that messy infrastructure.
Of course, a base OpenBSD installation is wonderful to work with and the damage can be minimized through the careful selection of packages.
Companies extend the BSD OSs with proprietary additions, then abandon the work and it gets lost. With Linux, everyone is forced to play nice and release under the GPL, and the work gets to live as long as people value it.
Hence the Linux kernel snowballing and taking over the world, whereas the BSDs have not, despite being at least as strong technically.
The better Linux gets, the more people target it, and the virtuous cycle continues.
Additionally, as it grows bigger and bigger, the more of the proprietary competition it crushes underfoot (sorry Solaris), and the more traction FOSS gets.
I actually created an account as this is one of those things that gets repeated a lot and IMO it simply isn't true.
1. The BSDs' adoption was severely hurt in the 90s by the AT&T lawsuit; it basically stalled several years of development while the lawsuit's legal status was clarified. Linux and the GNU tooling didn't have that problem. If the lawsuit had never taken place, it is doubtful whether the Linux kernel would have got as much interest as it did at the time.
2. If you don't contribute your changes back to upstream (whatever the license), eventually you will be left behind and have to maintain your own incompatible version. It is in your interest to send patches upstream.
3. GPL code gets stolen all the time and put into proprietary software. I've worked at loads of places that have just straight out cut and pasted GPL code into their own product (usually this is done without management's approval). With large projects such as the Linux kernel, companies can't really get away with it. However, a lot of companies don't build software for the masses; most build bespoke software that is only deployed on one or two servers on a company intranet and the general public will never see it. A lot of developers will just straight up steal code (not caring about the license) from wherever. A surprising number of companies still don't even use source control, let alone bother reviewing code.
4. Companies do contribute back to BSD-licensed projects; however, this is normally financial rather than through patches.
> software that is only deployed on one or two servers on a company intranet and the general public will never see it.
I thought that, in this case, it's the company itself who is the "user" of the software and isn't obligated to do anything to/for/about upstream (since there is no stream.. they're not re-distributing it). In that sense, they're not stealing anything, just using what was, explicitly, free to use.
I don't understand what compilation has to do with anything, if the company (user) in question isn't distributing the software outside the company.
How is it any different from an individual making changes to GPL software and using it (compiled or interpreted) on a personal computer? Surely that individual isn't obligated to share anything, either.
> The real point to take away is that any modifications will never reach upstream.
The GPL doesn't mention, AFAIK, any such concept. I thought the point was freedom for users of software, not implied benefit to some "upstream" programmer.
IOW, since the GPL is about the user, it's about protecting "downstream" and, without redistribution, there's none to protect.
Maybe I didn't make it very clear. Each time I observed it the company I was contracting for was selling it to a 3rd party (where it was installed on premises) as a proprietary product.
>The GPL doesn't mention, AFAIK, any such concept. I thought the point was freedom for users of software, not implied benefit to some "upstream" programmer.
The OP specifically said that one of the benefits of the GPL is people had to contribute back because they have to make the code public. As we have discovered they don't.
> Maybe I didn't make it very clear. Each time I observed it the company I was contracting for was selling it to a 3rd party
Indeed, that wasn't at all clear. The comment to which I was responding only used the word "company" (singular and plural), without any modifiers, which I read as describing the same party.
That clears up some of my confusion, since that's an obvious violation (assuming source code wasn't available to those same 3rd parties, which you also didn't explicitly state).
> The OP specifically said that one of the benefits of the GPL is people had to contribute back because they have to make the code public.
Such an assertion (which I see in neither ancestor comments nor the article) still seems mistaken, so perhaps it's a strawman?
The GPL, IIUC, is meant to protect the user, aka downstream, not provide benefits to "upstream". If the binary itself isn't made public, then the source code need not be, either (though I suppose the user/customer in your scenario would have the freedom to choose to make it public, they have no obligation and little, if any, incentive).
> Companies extend the BSD OSs with proprietary additions, then abandon the work and it gets lost. With Linux, everyone is forced to play nice and release under the GPL, and the work gets to live as long as people value it.
Yes, you're reading me right, but I skipped over the question of software which is never released publicly - mmt is correct that the GPL does not require public release of works, instead it prevents you from releasing the binaries while withholding the source.
'Secret' purely internal use of modified GPL software is not a violation - if the modified software is never distributed publicly, there's no issue.
(The Affero GPL licence is different in that regard, and was developed as a response to the software-as-a-service trend, but we're discussing the plain old GPL.)
Imperfect enforcement is a valid point, but the terms of the GPL are effective at least some of the time. Major technology companies do not want copyright scandals, even if plenty of fly-by-night companies are willing to risk it.
> 'Secret' purely internal use of modified GPL software is not a violation - if the modified software is never distributed publicly, there's no issue.
As the parent pointed out, purely-internal isn't what he meant. Distribution can be non-public, which is distribution nonetheless. Such distribution would require availability of source, but that availability wouldn't be public, if the original distribution wasn't public.
The parent seems to be focusing on "theft" (GPL violation) by relatively-unknown companies, which didn't necessarily occur. It's plausible that it did, but, even if the violation were corrected, it is likely irrelevant to the overall discussion, since that correction doesn't require public release of source code.
You seem to be focusing only on publically-released software, which may or may not be the majority (by whatever measure).
I have no "side" in this, just trying to understand the points, which I've failed to grasp. Are you talk past each other?
The event did occur. I saw the source with my own eyes.
The point that you keep on ignoring is that the OP said "companies have to contribute back". One of my points is that they don't even do it though they legally should.
License arguments weren't the point of my response. The point is that people will abuse goodwill, and pretending that it doesn't happen is naive.
> The event did occur. I saw the source with my own eyes.
You didn't say so, and, even now, you're only implicitly saying there was a GPL violation. The details are important, in order to further understanding.
> The point that you keep on ignoring is that the OP said "companies have to contribute back".
I'm pretty sure I'm not ignoring it, because it didn't happen. That's likely the source of my confusion. You've certainly said so repeatedly, but I'm missing where anyone else in the conversation has said so (hence my thinking it's a strawman).
> One of my points is that they don't even do it though they legally should.
This does sound like you are, again, saying there are circumstances where contributing "back" is legally required, which is the assertion that prompted my own original response. I don't believe those circumstances ever exist. The only obligation is providing (contributing) source code forward. Only when "forward" is the public at large does that end up being, as a side effect, "back".
> The point is that people will abuse goodwill and pretending that it doesn't happen is naive.
I doubt anyone here is actually naive enough to believe it never happens, but there may be a belief that it's rare or exceptional. Without large-sample-size evidence, this can be short-circuited to the usual cynicism vs. "people are basically good" argument.
> This does sound like you are, again, saying there are circumstances where contributing "back" is legally required
NOPE. The context is the original poster's words. We are talking about that, and I am saying that contributing back doesn't happen magically because of the GPL.
I suggest you learn to keep the context of the argument in mind rather than keep focusing on being pedantic.
Since those words never mentioned contributing back, I was, understandably, confused.
> I am saying that contributing back doesn't happen magically because of the GPL.
I don't see where anyone was saying otherwise, ergo you're arguing against a strawman.
> keep the context of the argument in mind rather than keep focusing on being pedantic
That only works for making a (counter-)argument, not attempting to understand the argument(s) in the first place.
In the instant case, I'm now convinced that any disagreement was based on a flawed premise, or there was no disagreement at all. My understanding of how the GPL functions (and is intended to function) remains unchanged.
To preface the rest of my response: one of my points as to why the GPL isn't magic is that developers will just "steal" code if it is easier, and most companies don't bother checking whether the code is violating licenses when supplying to a third party.
> Yes, you're reading me right, but I skipped over the question of software which is never released publicly - mmt is correct that the GPL does not require public release of works, instead it prevents you from releasing the binaries while withholding the source.
The fact is, it doesn't prevent you from doing that. It only really prevents large companies that people are watching.
Violations happen all the time. They just happen on smaller GPL projects.
> 'Secret' purely internal use of modified GPL software is not a violation - if the modified software is never distributed publicly, there's no issue.
The impression I get is that GPL advocates like yourself seem to think that the unwashed developers that work on proprietary code don't understand the GPL and have to be constantly told how a software license works. You aren't an enlightened individual because you understand a software license. I understand the license and the arguments about it just fine.
There is an issue with stuff not going back upstream. If a defect fix that is generic enough to benefit everyone only happens downstream, then only downstream benefits; these things don't get contributed back and there is no improvement to upstream.
> Imperfect enforcement is a valid point, but the terms of the GPL are effective at least some of the time. Major technology companies do not want copyright scandals, even if plenty of fly-by-night companies are willing to risk it.
The fly-by-night companies, as you put it, are the majority, not the minority. If it isn't a big project, most companies won't get found out.
Again GPL doesn't magically make people contribute back, which was my original disagreement with your comment.
> Again GPL doesn't magically make people contribute back,
That's my understanding, as well, but that was never asserted, only "play nice", (re-)"release under the GPL", and "gets to live as long as people value it" as you were able to quote upthread.
This seems like contributing forward, not back, or downstream, not upstream.
The eventual effect, for publically-released software, usually ends up being an upstream contribution, but that's not automatic.
I'm not advocating any particular license, but it does seem like you're responding to a strawman that nobody in this subthread (GPL advocate or not) has argued.
If that were true, BSD-licensed projects like PostgreSQL, LLVM, Xorg, or Apache would have died off a long time ago, replaced by their GNU-licensed counterparts. Yet we're witnessing the exact opposite happening.
Licenses don't work like you think they do. In fact, they work backwards: the decision whether to release the source or not doesn't depend on the license, it's the license - and thus the choice of existing software to base your work on - that depends on the decision on whether not to release the source.
BSD makes it possible to release your changes if - and when - you see fit. GPL - doesn't. That's why companies like Sony or Juniper couldn't base their products on Linux. Sure, Sony doesn't give back - but eg Juniper does.
Using the GPL doesn't guarantee you that your project will take over the world. Using a BSD licence doesn't automatically doom your project. But in the case of the Linux kernel, its use of the GPL appears to be the reason for its success - it's not that it always had compelling technical advantages over BSD.
This is Torvalds' idea, not mine [0] (though he doesn't speak to Linux-vs-BSD directly)
> Sure, Sony doesn't give back - but eg Juniper does.
But in aggregate, Linux has taken over the world, and BSD hasn't. The 'snowball' effect is real.
Then again, it didn't help GNU Hurd, or other GPL-licensed systems. So my guess - given a number of examples - is that it's not the license that helped Linux.
Linux is no RTOS, nor does it have a pure microkernel architecture. It can't do anything where latency guarantees are needed (hard realtime) nor high assurance, nor any actual semblance of security.
Sure, Linux doesn't have a total monopoly in every domain. Sony used FreeBSD as the basis of the OSs of the PlayStation 3 and the PlayStation 4, for instance, and the work they did will never be contributed back.
I don't see any proprietary Unix seriously competing with Linux any time soon though.
The OSes that I have listed all have BSD-style licenses; nothing proprietary about them as such.
As for competition with proprietary UNIX, probably not against AIX or HP-UX, as they survive on maintenance contracts for big customers like banks and telecommunication companies, but Linux is still no match for high-integrity computing OSes, many of which are microkernels with a POSIX userspace available as a possible API.
> The OSes that I have listed all have BSD-style licenses; nothing proprietary about them as such.
Yes, I know - but the software which ends up getting deployed/used/sold will be proprietary forks, right? That's the whole point: the licence is friendly to proprietary forks.
> Linux is still no match for high integrity computing OSes
> And thus, you should be skeptical of any claims of Linux providing security.
When did I say Linux has perfect security?
I presume the point you're trying to make is that different operating systems, with different architectures, can beat Linux in various regards. Of course this is correct. But for a general-purpose multi-platform Unix-like OS, Linux is king, and will be for the foreseeable future.
"Selected highlights include:
* Support has been added for qcow2 images and external snapshots in vmm(4)/vmd(8).
* "join" has been added for Wi-Fi networks.
* Security enhancements include unveil(2), MAP_STACK, and RETGUARD. Meltdown/Spectre mitigations have been extended further, and SMT is disabled by default.
* rad(8) has replaced rtadvd(8).
* bgpd(8) has undergone numerous improvements, including the addition of support for BGP Origin Validation (RFC 6811). smtpd.conf(5) uses a new, more flexible grammar.
* For the first time, there are more than 10,000 (binary) packages (for amd64 and i386)." [1]
[1] https://www.undeadly.org/cgi?action=article;sid=201810181400...