For me, someone who daily drove FreeBSD in the past and recently switched back to it, FreeBSD serves as a refuge from systemd. It's also the only BSD that is a fairly drop-in replacement for Linux in terms of software compatibility, and the only game in BSD town in terms of support for modern hardware (though it still significantly lags Linux in this regard, so YMMV).
As to why I use it over the various systemd-free Linux distros? Well, there are a couple of things. First, a lot of those distros, like Artix Linux say, actually have smaller communities than FreeBSD (I'm guesstimating based on the activity level in their IRC channels). The Linux community might be much, much larger than the FreeBSD community, but it's also extremely fragmented.
Trigger warning: hyperbole incoming. Don't bother correcting me; it's a polemic, not a scientific paper.
Secondly, for someone like me, who's been using various Unix-like OSes for two decades, FreeBSD is just a nice, batteries-included, well-integrated system. Things like jails, DTrace, ZFS, bhyve, pf, etc. all being in the base install means they're just better integrated with the kernel, and with each other. Most of those things exist for Linux, or have equivalents, but they're not all part of the same project. Obviously DTrace and ZFS originated in Solaris, but they've been made first-class citizens. There's a harmony to FreeBSD that Linux distros lack. Documentation is also very good, all accessible via man pages (no GNU info...). And, as I mentioned briefly before, it doesn't have a lot of the cruft that's been added to Linux distros over the years (though some of it is available in ports if you want it).

In FreeBSD, my experience is actually useful. Things I remember how to do from 5 years ago, 10 years ago, 15 years ago still work. If I'm on some modern, plug-and-play Linux distro, I have no idea what's going on under the hood any more. All I know is it's not what was going on 5 years ago, which isn't what was going on 10 years ago, which isn't what was going on 15 years ago. The amount of pointless churn going on in the Linux space is ridiculous.

When I started using Linux, what I loved about it was that it was transparent. I could change anything. The system was easy to understand. Yes, it was janky, but it was understandable jank, whereas Windows was janky in an opaque way. Twenty years later, Linux is still janky, but nothing is understandable, at least not to my greybeard brain. Systemd takes over a new daemon every distro upgrade. DNS resolution now involves 4 different daemons with 15 different configuration files, there are two display protocols, both broken in different ways, /etc is full of long files written in strange, alien languages, and every file has its own bespoke syntax. There seem to be 54 different ways to make any change to your system, and all of them are somehow unsatisfactory in a unique way. I just can't anymore. Enough already.
The main issue with Linux is that it's just the kernel, and everything else is developed in its own corner without taking the rest into account. Also, I tend to think Linux folk in general seem to want to reinvent the wheel every 6 months, whereas FreeBSD, and BSD in general, tends to improve on previous work instead.
Yes, I know, but maybe my initial message wasn't clear enough.
But for me, the fact that Linux is just the kernel doesn't make the previous criticisms invalid. The first one, concerning the development of the different components in a sort of echo chamber where no one seems to communicate with each other, follows directly from the Linux kernel's philosophy: the maintainers have expressed multiple times that they don't care what happens outside of the kernel, in contrast with FreeBSD developers, for example.
The second point is aimed more at distributions, I admit.
To make my long-winded point more concrete, the core difference is really just that there are "so many" Linux developers.
Linus has a pretty firm hand on the tiller of Linux evolution. I counter "don't care what happens outside of the kernel" with his many, many public "never, ever break userland" rants. And many kernel devs and maintainers are employees of companies like Intel, Red Hat, Google, IBM, and AMD that absolutely care about coordinating kernel dev with the bigger picture.
Something like 250 devs contribute to FreeBSD each year. For just the Linux kernel, the number is closer to 5000. There are just way more people working on way more stuff. It is not a surprise to see a more significant halo of chaos around Linux. Coordinating the Linux kernel is herding cats and, even when everybody eventually lines up, there are going to be periods where it seems like everybody is talking past each other.
And while the Linux kernel does have a "release early, release often" mantra, it also touts "trust but verify" and has a strong meritocracy and hierarchy. So I am not sure "no one seems to communicate with each other" is fair. Not just anybody can drop whatever they want into Linux. We also need to remember that shipping the Linux kernel is not the same as shipping a Linux distro (operating system). Actual Linux distros bring kernel versions in according to the philosophy of the distro. Many are very stable and conservative. Others are a whole lot less so (but that is the user's choice).
Isn't this more telling, though? That with vastly fewer developers they've built a system comparable to Linux? This is what happens when you have direction.
"The mains issues with Linux is it’s just the kernel, and anything is developed in their corner without taking account of the rest."
I hear this a lot when people talk about FreeBSD but I am not sure about it.
A LOT of the core Linux ecosystem comes from Red Hat developers for example. If I look at RHEL as an operating system, they have a definite vision for the OS, they take a long-term view, and they invest in development to get it there. My guess is that Red Hat alone employs more devs than work on FreeBSD.
Red Hat contributes heavily to the kernel, the core C library (glibc), the userland (GNU utils), the system supervisor (systemd), the compiler (GCC), the desktop environment (GNOME), the GUI framework (Wayland now, Mesa, etc), the sound system (pipewire), the hypervisor system (KVM, libvirt), and the container system (podman and Flatpak). Red Hat heavily influences the direction of all this stuff with a common vision and they work to implement it as a cohesive expression in their distro. This is a broader swath of what makes the operating system than FreeBSD considers its scope and it is all built to work together.
If you use RHEL, you know it is very stable (static). When Red Hat makes changes, they tell you about them years in advance.
I honestly do not think you can say that FreeBSD is more cohesively developed or better documented than RHEL. FreeBSD arguably has less control over key aspects of the OS than Red Hat does.
I am not advocating for Red Hat here by the way. I am not even a RHEL user. I use Chimera Linux which rejects quite a lot of the Red Hat vision including SystemD and pretty much the whole GNU system (userland, glibc, gcc).
My point is that Red Hat is truly a maker of their own destiny and their distro reflects their vision. They want to move to SystemD. They introduced DRM and KMS instead of the traditional Xorg driver model. They want to move to Wayland. They have heavily embraced the OCI container model. It is all part of their vision and design.
Pragmatically, FreeBSD has to create tools like Linuxulator. FreeBSD is adding support for OCI containers. FreeBSD is adding Wayland support and, as popular desktop environments abandon X11, may have to move to Wayland as the preferred display server. Even the FreeBSD utils have added many options over the years to be compatible with the userland that Red Hat developed. Was 'ls --color=auto' a FreeBSD design? In other words, the Red Hat agenda drives the evolution of FreeBSD (but not much the other way around).
So sure, FreeBSD is more stable and cohesive than the universe of Linux distros. But even BSD has fragmentation. GhostBSD is close to FreeBSD but not quite and would be more different if they had more devs. DragonFly BSD certainly has its own agenda (and again, is held back more by bandwidth than solidarity). The free-for-all in the Linux world is an expression of its size and collective innovation. But how much of this you want as a user is up to you. As many have said, you don't use "Linux", you use a Linux distro.
Again, my main distro is Chimera Linux. The whole point of the name is that it pulls together things never designed to work together (including the FreeBSD userland on Linux). And yet, the Chimera Linux dev team has a very strong vision of what they want their OS to look like and they work very hard to build that into a cohesive implementation. This includes keeping the system and the code small and understandable. It is a goal that you can sanely build the entire system from the ground up. That is why Chimera uses a BSD userland and does not use SystemD. But while they want to keep things simple, they also want "modern" features.
They choose components that fit their vision. Where changes are required, they make them. Where they deem good options not to exist, they invent them (eg. Turnstile, cports). As a user, I get that "solid, cohesive, well-designed, intentional, and heavily curated" experience that FreeBSD users talk about. More to the comment above, Chimera reeks of "looking to preserve tradition while striving to make things better". Of course, it is also still a niche distro with a tiny community (at this point). As somebody said above, FreeBSD may be a better choice for this and other reasons. But Chimera Linux is still Linux and that has its advantages. The box I am typing on uses bcachefs and Distrobox. For me, it is perfect.
Anyway, I apologize for the length. When you talk about FreeBSD vs "Linux", you really have to choose a specific Linux distro for the comparison to be meaningful. Depending on which one you pick, the statements made by @MrArthegnor may or may not hold. At least, that is my view.
I both agree and disagree with comments about Linux chaos and churn. That is true of the overall ecosystem, of course. But any given Linux distro can be thought of as its own operating system.
You can choose a Linux distro that reflects your own preferences in terms of pace of innovation. Sure Arch has 100 package updates a day and 30 ways to do everything. However, RHEL (or its compatibles) is not that way. You can go 10 years without changing your config files. Precisely because there are so many distros with so many different curated experiences, you can find a Linux distro that matches your own preferences.
And yet all Linux distros give you the hardware support and things like the OCI ecosystem that only the Linux kernel can provide.
Given the above, I wonder sometimes why you would choose FreeBSD over a Linux distro. But your statement that FreeBSD has more users than many Linux distros is a good one. It is also true that, while distros like Arch or Debian have more software in their repos than FreeBSD, the FreeBSD ports collection has a much larger selection than most distro repos. So, overall, FreeBSD does achieve a nice balance. That makes sense to me.
Exactly. I'm in Norway, and here, to watch the English Premier League legally, the price this latest season was over $70 per month. Keeping in mind that most people will maybe watch one game (their team's game) a week, the prices are just getting absurd. There's no way to buy single games, subscribe to a single team's games, etc. To watch the 38 games your team played last season, the only available option was to buy access to all 760 games. The company holding these rights is struggling financially, layoffs and all, because their subscription numbers have plummeted.
They've already crossed the threshold where this is no longer profitable. The next licensing deal will likely be so expensive that no Norwegian or Scandinavian company could possibly turn a profit from it.
Of course, the CEO of the company has been in the media talking about how IPTV funds criminal networks and such nonsense*, calling for bans, yadda yadda. They're not listening to the market at all. Just using illegal streaming as a scapegoat. And I've decided, as long as this is how it's gonna be, they're not seeing a single dime of my money.
* I find the concept absurd. No matter where we spend our money, some of it ends up with criminals and various other despicable people, who will use it for evil. No one has the ability to prevent this. There's no reasonable expectation in current societal and economic structures for the consumer to somehow keep track of all their money once it leaves their wallet. This is no more the case for IPTV than it is when I buy a burger from some hole in the wall, which unbeknownst to me is a money laundering front. Or when I buy some chocolate and most of the money ends up with some white rich guy and not the children in Africa who harvested the cocoa. The whole argument is so intellectually dishonest and morally pathetic it pisses me off. And I don't even pay for IPTV.
That's all well and good for this particular example. But in general, the verification can often be so much work it nullifies the advantage of the LLM in the first place.
Something I've been using Perplexity for recently is summarizing the research literature on some fairly specific topic (e.g. the state of research on the use of polypharmacy in the treatment of adult ADHD). Ideally it should look up a bunch of papers, look at them and provide a summary of the current consensus on the topic. At first, I thought it did this quite well. But I eventually noticed that in some cases it would miss key papers and therefore provide inaccurate conclusions. The only way for me to tell whether the output is legit is to do exactly what the LLM was supposed to do; search for a bunch of papers, read them and conclude on what the aggregate is telling me. And it's almost never obvious from the output whether the LLM did this properly or not.
The only way in which this is useful, then, is to find a random, non-exhaustive set of papers for me to look at (since the LLM also can't be trusted to accurately summarize them). Well, I can already do that with a simple search in one of the many databases for this purpose, such as PubMed, arXiv, etc. Any capability beyond that is merely an illusion. It's close, but no cigar. And in this case close doesn't really help reduce the amount of work.
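To illustrate what I mean by "a simple search": you don't need an LLM in the loop at all for that part. Here's a minimal sketch against the public arXiv API (the query string and result count are just placeholders, and I'm reusing the ADHD example from above purely for illustration; PubMed's E-utilities would be the more natural database for that particular topic):

```python
# Minimal sketch: query the public arXiv API directly instead of asking an LLM.
# The search term and result count are placeholders for illustration.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv feed

def search_arxiv(query: str, max_results: int = 10):
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    # Each <entry> in the Atom feed is one paper: title, abstract, link.
    return [
        {
            "title": " ".join(entry.findtext(f"{ATOM}title").split()),
            "link": entry.findtext(f"{ATOM}id"),
        }
        for entry in feed.findall(f"{ATOM}entry")
    ]

for paper in search_arxiv("adult ADHD polypharmacy", max_results=5):
    print(paper["title"], "-", paper["link"])
```

That gets you the same "non-exhaustive list of papers to read yourself", minus the illusion of a synthesis.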
This is why a lot of the things people want to use LLMs for require a "definiteness" that's completely at odds with the architecture. The fact that LLMs are good at pretending to do it well only serves to distract us from addressing the fundamental architectural issues that need to be solved. I don't think any amount of training of a transformer architecture is gonna do it. We're several years into trying that and the problem hasn't gone away.
> The only way for me to tell whether the output is legit is to do exactly what the LLM was supposed to do; search for a bunch of papers, read them and conclude on what the aggregate is telling me. And it's almost never obvious from the output whether the LLM did this properly or not.
You're describing a fundamental and inescapable problem that applies to literally all delegated work.
Sure, if you wanna be reductive, absolutist and cynical about it. What you're conveniently leaving out though is that there are varying degrees of trust you can place in the result depending on who did it. And in many cases with people, the odds they screwed it up are so low they're not worth considering. I'm arguing LLMs are fundamentally and architecturally incapable of reaching that level of trust, which was probably obvious to anyone interpreting my comment in good faith.
I think what you're leaving out is that what you're applying to people also applies to LLMs. There are many people you can trust to do certain things but can't trust to do others. Learning those ropes requires working with those people repeatedly, across a variety of domains. And you can save yourself some time by generalizing people into groups, and picking the highest-level group you can in any situation, e.g. "I can typically trust MIT grads on X", "I can typically trust most Americans on Y", "I can typically trust all humans on Z."
The same is true of LLMs, but you just haven't had a lifetime of repeatedly working with LLMs to be able to internalize what you can and can't trust them with.
Personally, I've learned more than enough about LLMs and their limitations that I wouldn't try to use them to do something like make an exhaustive list of papers on a subject, or a list of all toothpastes without a specific ingredient, etc. At least not in their raw state.
The first thought that comes to mind is that a custom LLM-based research agent equipped with tools for both web search and web crawl would be good for this, or (at minimum) one of the generic Deep Research agents that's been built. Of course the average person isn't going to think this way, but I've built multiple deep research agents myself, and have a much higher understanding of the LLMs' strengths and limitations than the average person.
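For what it's worth, the rough shape of such an agent is something like the sketch below. This is hand-wavy, not a real implementation: `llm`, `web_search`, and `fetch_page` are hypothetical stand-ins for whatever model API and search/crawl tools you'd actually wire in.

```python
# Rough shape of a "deep research" agent loop. All three callables are
# hypothetical stand-ins, not a real API:
#   llm(prompt) -> str
#   web_search(query) -> list of result URLs
#   fetch_page(url) -> page text

def research(topic: str, llm, web_search, fetch_page, max_sources: int = 20) -> str:
    # 1. Have the model propose several search queries, not just one,
    #    to reduce the odds of missing key papers.
    queries = llm(f"List 5 distinct literature-search queries for: {topic}").splitlines()

    # 2. Collect candidate sources from every query.
    urls = []
    for q in queries:
        urls.extend(web_search(q))
    urls = list(dict.fromkeys(urls))[:max_sources]  # dedupe, cap

    # 3. Summarize each source individually, keeping the citation attached
    #    so claims in the final answer can be traced back and verified.
    notes = []
    for url in urls:
        text = fetch_page(url)
        notes.append(f"[{url}]\n" + llm(f"Summarize the findings in:\n{text[:8000]}"))

    # 4. Only then ask for an aggregate answer, grounded in the per-source notes.
    return llm(
        "Using ONLY the sourced notes below, summarize the current consensus on "
        f"'{topic}'. Cite the URL for every claim.\n\n" + "\n\n".join(notes)
    )
```

The point of structuring it this way is that every claim in the final summary can be traced to a specific fetched source, which makes the verification step spot-checking rather than redoing the whole search.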
So I disagree with your opening statement: "That's all well and good for this particular example. But in general, the verification can often be so much work it nullifies the advantage of the LLM in the first place."
I don't think this is a "general problem" of LLMs, at least not for anyone who has a solid understanding of what they're good at. Rather, it's a problem that comes down to understanding the tools well, which is no different than understanding the people we work with well.
P.S. If you want to make a bunch of snide assumptions and insults about my character and me not operating in good faith, be my guest. But in return I ask you to consider whether or not doing so adds anything productive to an otherwise interesting conversation.
Yup, and worse since the LLM gives such a confident sounding answer, most people will just skim over the ‘hmm, but maybe it’s just lying’ verification check and move forward oblivious to the BS.
People did this before LLMs anyway. Humans are selfish, apathetic creatures and unless something pertains to someone's subject of interest the human response is "huh, neat. I didn't know dogs could cook pancakes like that" then scroll to the next tiktok.
This is also how people vote, apathetically and tribally. It's no wonder the world has so many fucking problems, we're all monkeys in suits.
Sure, but there are degrees in the real world. Do people sometimes spew bullshit (hallucinate) at you? Absolutely. But LLMs, that's all they do. They make bullshit and spew it. That's their default state. They're occasionally useful despite this behavior, but it doesn't mean that they're not still bullshitting you.
Again, I didn't write this, but in general, to take a chess engine and apply it to another game, the main things you'd have to change are the board representation, and you'd have to retrain the neural net (likely redesign it as well). The tree search should work assuming the game you're going to is also a perfect-information, minimax game, though it could also work for other games. There's a good chance there's prior work on applying bitboards (the board representation) to whichever game that is. The Chess Programming Wiki is an invaluable resource for information about how engines like this work. Godspeed.
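To show why the search part carries over, here's a toy sketch (not how any particular engine is actually structured): a negamax search with alpha-beta pruning that only touches the game through a hypothetical interface of legal_moves/make/unmake/evaluate/is_terminal. Swap that interface, and the board representation behind it, and the same search plays any two-player, perfect-information, zero-sum game.

```python
# Toy sketch: game-agnostic negamax with alpha-beta pruning.
# The `game` object is a hypothetical interface; only it (and the eval)
# is game-specific, the search itself is not.
INF = float("inf")

def negamax(game, depth, alpha=-INF, beta=INF):
    """Best score for the side to move, searched to `depth` plies."""
    if depth == 0 or game.is_terminal():
        return game.evaluate()          # game-specific: material count, NN eval, ...
    best = -INF
    for move in game.legal_moves():     # game-specific move generation
        game.make(move)
        score = -negamax(game, depth - 1, -beta, -alpha)  # flip perspective
        game.unmake(move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:               # alpha-beta cutoff: refutation found
            break
    return best
```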
The first non-trivial chess programs were 'playing' in the late 40s (with pen-and-paper CPUs). Some of them included features you'll still see today.
Claude Shannon (https://www.chessprogramming.org/Claude_Shannon) proposed two types of chess programs: brute force and selective. Alpha-beta is an optimization for the brute-force type, but many early chess programs were selective, with heavyweight eval or with delayed eval.
Champernowne (Turing's partner) mentions this about Turochamp: "We were particularly keen on the idea that whereas certain moves would be scorned as pointless and pursued no further others would be followed quite a long way down certain paths."
Not the author, but probably very poorly. This seems more like a proof of concept: it's written in Python and has a very basic tree search that is very light on heuristics. And likely the NN is undertrained too, but I can't tell from the repo. In comparison, Stockfish is absurdly optimised in every aspect, from its data structures to its algorithms. Considering how long it took the LeelaZero team to get their implementation to be competitive with the latest Stockfish, I'd be shocked if this thing stood a chance.
Of course, beating Stockfish is almost certainly not the goal for this project; it looks more like a project to get familiar with MLX.
Please tell me this is sarcasm. I mean, I know people love to extrapolate current LLM capabilities into arbitrary future capabilities via magical thinking, but "infinite context" really takes the cake.
IANAB, but from what I do understand, it depends what you mean by different genes. Information-wise, DNA is a string of base-4 digits (nucleotides) read in groups of 3; these groups are called codons. Each codon corresponds to a specific amino acid.* A protein is made up of a bunch of different amino acids chained together. The gene determines which amino acids are chained together and in what order. This long chain of amino acids tends to fold up into a complex 3-dimensional structure, and this 3-dimensional structure determines the protein's function.
Now, there are a couple ways a gene could be different without altering the protein's function. It turns out multiple codons can code for the same amino acid. So if you switch out one codon for another which codes for the same amino acid, obviously you get a chemically identical sequence and therefore the exact same protein. The other way is you switch an amino acid, but this doesn't meaningfully affect the folded 3D structure of the finished protein, at least not in a way that alters its function. Both these types of mutations are quite common; because they don't affect function, they're not "weeded out" by evolution and tend to accumulate over evolutionary time.
* except for a few that are known as start and stop codons. They delineate the start and end of a gene.
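If it helps, here's a toy illustration of that encoding and of a "silent" (synonymous) substitution, using just a tiny slice of the standard codon table (DNA alphabet, coding strand):

```python
# Toy illustration: a tiny subset of the standard codon table.
CODON_TABLE = {
    "ATG": "Met",                  # also the usual start codon
    "GAA": "Glu", "GAG": "Glu",    # two different codons, same amino acid
    "AAA": "Lys", "AAG": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence 3 bases (one codon) at a time into amino acids."""
    protein = []
    for i in range(0, len(dna), 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

# A silent mutation: swapping GAA -> GAG changes the DNA but not the protein.
print(translate("ATGGAAAAATAA"))  # ['Met', 'Glu', 'Lys']
print(translate("ATGGAGAAATAA"))  # ['Met', 'Glu', 'Lys']  -- same protein
```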
I think this is something in some assembly formats too? I remember seeing it once and wondering if maybe that's where the idea of ending lines in C with semicolons came from, since, at least in the examples I saw in school, a large number of lines had trailing comments describing what the operation was doing.
IDA uses ; for comments in its disassembler view, but it looks like C-style // single-line comments and /* comment blocks */ are also accepted by certain tools: https://en.wikibooks.org/wiki/X86_Assembly/Comments