
Red Hat was the primary reason it took me a lot longer to adopt Linux than it should have. Headbanging experiences with dependency hell and things not working as expected left me extremely discouraged. It wasn't until I dabbled a little with Solaris 7 and finally found Slackware that I realized that Linux could "just work". IMO Red Hat's success was primarily based on the critical mass of support behind it, not because it was the best distribution.



I agree, but they've certainly come a long way. When I started with Red Hat 6.2, the dependency hell of installing RPMs from random websites and using the (almost never functional) up2date left me with an awful taste in my mouth as I jumped ship to Debian and the "just works" experience of apt-get.

After switching to Fedora because we're using CentOS at work, I've come to like it. DNF and yum are fine replacements for APT. A PPA analog (copr) is just a 'dnf copr enable user/project_name'. 'dnf history' shows every transaction on my system and makes it easy to undo installations.
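
For anyone curious, the workflow is roughly this (the repo name and transaction ID here are made up for illustration):

  $ sudo dnf copr enable user/project_name   # enable a Copr repo, PPA-style
  $ sudo dnf install some-package
  $ dnf history list                         # numbered list of past transactions
  $ sudo dnf history undo 42                 # roll back transaction 42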

The only things I don't like on an out-of-the-box Fedora installation are the stupid, touchscreen sized title bars in GNOME 3 and SELinux - which is fortunately easily disabled.


>The only things I don't like on an out-of-the-box Fedora installation are the stupid, touchscreen sized title bars in GNOME 3 and SELinux - which is fortunately easily disabled.

What does one do to fix the title bars?


  mkdir -p ~/.config/gtk-3.0  # make sure the directory exists first
  cat << EOF > ~/.config/gtk-3.0/gtk.css

  /* Make window title bars more compact.
   *
   * From: https://unix.stackexchange.com/questions/276951
   */

  headerbar entry,
  headerbar spinbutton,
  headerbar button,
  headerbar separator {
      margin-top: 2px; /* same as headerbar side padding for nicer proportions */
      margin-bottom: 2px;
  }

  EOF


Debian and apt-get are the final Linux distribution / package manager I will ever use. Fifteen years of compiling dependency hell, never again.


I think the point is, there is no more dependency hell.


Exactly. Using both in production, zero dependency hell. yum and dnf are great.


You're missing out. Nix is a fantastic package manager with features that apt (and similar) simply do not have. NixOS is not perfect yet, but it's very usable, and quickly improving.


Can't say this enough. You write a config script for your OS and run it.

Need to set up device #25?

Upload and run. You're guaranteed to get the same system as device #24,23...

You can keep it in source control, etc.

If it (or guix - no systemd FTW) takes off, I can't imagine using another distro again.
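
For context, a NixOS machine is described by one declarative file; a minimal sketch (hostname and package list are made up, and a real config would also import hardware settings) looks something like:

  # /etc/nixos/configuration.nix -- illustrative sketch, not a complete config
  { config, pkgs, ... }:
  {
    networking.hostName = "device-25";
    services.openssh.enable = true;
    environment.systemPackages = with pkgs; [ git nginx ];
  }

After editing, 'nixos-rebuild switch' applies it; keep the file in git and every device built from it converges to the same state.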

Unfortunately, though, it doesn't have enough maintainers yet (a year or so ago nginx was some 4 or 5 versions behind, and they definitely don't do backports/LTS).


The biggest problem Nix[OS] has right now is documentation. The wiki is deprecated, but it's still the only place several key parts of the system are explained. nix-shell is fantastic, but hard to learn simply because there is no specific place to learn it. The Nix language really needs some clear documentation so a user can get from "I don't have X" to "I have a working default.nix for X, and X installed in my environment."
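
To illustrate what that end state looks like, here's a minimal default.nix sketch for a hypothetical C program "hello-x" (all names here are made up):

  # default.nix -- minimal sketch for a hypothetical package "hello-x"
  { pkgs ? import <nixpkgs> {} }:
  pkgs.stdenv.mkDerivation {
    name = "hello-x";
    src = ./.;                      # build from the current directory
    buildInputs = [ pkgs.zlib ];    # declare dependencies here
    installPhase = ''
      mkdir -p $out/bin
      cp hello-x $out/bin/
    '';
  }

'nix-build' then produces ./result, and 'nix-shell' drops you into an environment with the buildInputs available -- but as I said, discovering this from the official docs is the hard part.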

GuixSD seems nice, but seems to have no method for installing non-free software, which is, unfortunately, necessary for many setups.

One thing I would like to have is a Nix/Guix wrapper for Debian packages (and possibly other packages/distros), so I could take advantage of Debian's robust ecosystem, and NixOS/GuixSD's totally functional environment.


My problem with the documentation isn't so much that the Nix language is hard, but that a Nix package's configuration is (obviously) not the same as upstream's.


Stop disabling SELinux, though. It works fine out of the box and even though I never needed to on my workstation, it's easy to fix issues.


Before yum and dnf, dependency hell was a serious issue indeed. After that it's still a problem if you use multilib (sometimes updates get pushed for x86_64 but the i686 packages are missing), but I wouldn't really call it dependency hell anymore.


Seriously. Weekend long marathons of dependency chasing (in the snow, uphill both ways) to finally get config/make/make install to run were not only a thing but disturbingly commonplace. These kids don't know how good they've got it. God I feel old.


At least you had a package manager with a reasonable number of packages. My first distro as a main OS was Puppy Linux, because it was a small download (remember dialup?). Actually, at the time I didn't even have internet in my apartment, so installing software went something like this:

1) Identify a really interesting piece of software, like OpenFOAM, with (unbeknownst to me) lots of dependencies

2) Take a half hour walk to the local library with my shiny 4gb flash drive (89 dollars, a Christmas present to myself).

3) Download the .tar.gz of the software.

4) Walk home.

5) Unpack the .tar.gz and run ./configure. Watch it fail.

6) Walk back to the library to download the missing dependency. Walk home.

7) Goto 5.

I ended up very irritated that configure always fails on the first missing dependency, instead of comprehensively listing the missing requirements in one go. Things did get a bit easier when I learned to scour the documentation for any libraries referenced, but of course reading the docs still often required a trip back home to unpack the tarball...


It feels so good to not have to reach for that book with the guy on a horse to get networking going every time I installed a new distro, or sometimes upgraded one. Especially because I usually didn't know what I was doing to fix the issue; I didn't even know what I was looking for.

Thanks to everyone who did read that book and others cover to cover (or wrote them), so that my linux install just works.

https://en.wikipedia.org/wiki/Linux_Network_Administrator's_...


That's no horse, that's a nag!


You could recreate the experience for younger colleagues using a slackware install and deliberately not using any of the more advanced tools (slackpkg+ &c) and hiding the existence of slackbuild scripts.


I think all package managers (not just Red Hat's) have come a long way since then.

Take a look at the SAT solver that came out of openSUSE's zypper work and, via libsolv, powers Red Hat's DNF today - https://en.opensuse.org/openSUSE:Libzypp_satsolver

The package management of today is a very different beast altogether. And we are on the cusp of the next generation - snap and flatpak.


> we are on the cusp of the next generation - nix and guix.

Fixed that for you.


Have you considered the possibility that your poor experience with Red Hat was due to the fact you were new to Linux?

Encountering dependency hell is usually a sign that you don't know what you're doing.

Red Hat "just works" and has always enjoyed the reputation of being the most bulletproof distro if you could afford it.


> has always enjoyed the reputation of being the most bulletproof distro if you could afford it.

I've always heard that said about Debian more than about Red Hat (though Red Hat certainly is pretty stable).

Red Hat has a lot more success in businesses though because you can get contractual support; which may not only be useful if you don't want to get the skills in-house but also because your own customer may contractually require it.


Debian also has an impressive reputation for quality. In many cases it has achieved, through its vast user and developer network, what Red Hat could only achieve through paid staff and commercial resources.

But this is not true in all areas, especially in some aspects that matter to companies such as training/certification and having good up-to-date documentation.

And as a volunteer-driven project, I don't think Debian can ever be as responsive to end-user problems or requirements as a commercial product can be.

But it definitely gives Red Hat a good run for its money. For example, the Debian LAMP stack has long been and still is the gold standard.


Debian is truly a solid alternative. Ubuntu, less so. They're shipping an impressive amount of new features, even in LTS releases, but their QA is nowhere near as good as Debian's or Red Hat's.


Back when Solaris 7 was a thing, dependency hell was very much a thing.


UNIX vendors didn't normally ship releases or updates that introduced dependency issues. Part of their job was doing testing and quality assurance, that's what you were paying good money for. Most such issues were introduced locally.

A good sysadmin would know how to avoid dependency hell. For example this might include taking basic precautions such as testing changes in a chroot or development/testing environment, before rolling them out.

I'm not saying the problem didn't exist, only that it wasn't an actual problem if you knew what you were doing.


The problem with dependency hell was more that RedHat back then didn't have such a thing as a "rolling release" or "testing" in Debian speak. So while "RedHat 6" worked fine, trying to upgrade (say) OpenOffice would require that you download the rpm from OO's website, which would require that you upgrade Gnome, which would require that you upgrade X, which would require that you upgrade glibc, and down the rathole. Considering that most people then were still on dialup, it was horribly annoying.

Also, RedHat didn't ship with (relatively) much software (install everything was an option). Now you've got to configure; make; make install. Now you've got two parallel installations of libraries and software.

Oh. And the RPM database would die periodically (rpm anything would hang), requiring a reinstallation.

Since moving to Debian over a decade ago, I think I had to configure; make; make install something only once (it was an old and unmaintained Java library on SF.net). Almost everything (open source) is in the Repos.


I maintain several RHEL and CentOS clusters.

I never install anything from source. Packages from yum repositories only.

Upgrade OpenOffice with a downloaded RPM? No. The point of RedHat is stability for enterprises. If you want the latest version of everything, RedHat is the wrong distribution.


This was RedHat Linux, not RHEL. This was _way_ before the RHEL/Fedora split.


Aha -- I saw RedHat 6 and read "RHEL 6"


As someone who used Redhat since 5.0 - the RPM database never died for me. But then, I've never used 'rpm --force', which many did and I suspect was the major factor in RPM database death.


I think it was more of a problem on the BerkeleyDB side than with the data itself. BDB gave the impression of being quite a fragile thing back then, but it was (and AFAIK still is) the only store of data about installed packages.


Except this really has nothing to do with rolling releases, or the amount of software the vendor ships in the official repositories. Switching to Debian won't solve the problem either.

There is no distro today that provides packages for all the software you will ever want, or the specific version that you need. At some point, you will resort to installing software from outside the officially-supported sources, whether from experimental or user-maintained package repositories, or from a third party in binary or source form.

Until recently, this was an operation that wasn't guaranteed to be easy or straightforward or risk-free. In the worst case, it could even screw up your system in ways that are time-consuming to diagnose and fix.

In the example you gave, I would conclude that OpenOffice didn't package their RPM well, because it ended up driving RH6 users down the rabbit hole. At the very least, they could have unpacked all the files under /opt and provided static binaries, or included all the libs in the archive. Many packages still do that today, such as Vagrant, which installs under /opt/vagrant and includes its own Ruby interpreter there.
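
A self-contained RPM of that style needs only a few lines of spec file; a sketch (package name and paths are hypothetical):

  # myapp.spec (fragment) -- bundle everything under /opt, depend on nothing
  Name:        myapp
  Version:     1.0
  Release:     1
  Summary:     Self-contained application installed under /opt
  License:     MIT
  AutoReqProv: no            # don't auto-generate library dependencies

  %install
  mkdir -p %{buildroot}/opt/myapp
  cp -r bin lib %{buildroot}/opt/myapp/

  %files
  /opt/myapp

With automatic dependency generation disabled and all libraries shipped in the payload, the package can't drag the user down a dependency chain.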

Nowadays there are efforts underway to make installing custom software safe and easy, projects like: flatpak, OSTree, appimage, and snap. Hopefully we can reach a point where you can install whatever version of whatever software you want without breaking anything.


We're talking about different levels.

Generally, nowadays you keep a system relatively up to date using apt-get update and upgrade, pacman or yum. Back then all you had was RPM (the equivalent of dpkg).

The attitude was "you want new versions? Go to upstream, download the .RPM and install".

Now, OO (don't remember if it existed back then) depends on certain versions of (say) GTK.

What do you do?

Go to gtk.org and download the rpm.

Rinse and repeat


If that's what you call dependency hell then you've had it easy!


What timeframe are you talking about? I remember it didn't "just work" for me when I first started experimenting with it (V4.2, IIRC). Yes, I was new, but it wasn't the easy-install experience that it now is, for sure.


I started at around the same time -- Red Hat Linux 4 era. Yum didn't exist yet. I was also quite new to Linux, but I'm reasonably sure there really wasn't something that would do dependency resolution for you. You either had to find an RPM that had the right version, or you had to find a tarball somewhere and run configure and make yourself.


Installing software on an OS shouldn't be something that requires ANY experience. What exactly is the point of an OS without the software running on it? If it's not immediately obvious how to do any of it, whether to a total noob or a veteran, then it's a garbage OS in my opinion.


That's missing the point of Linux entirely. Your average end user isn't using Linux as their day-to-day desktop environment as it requires some skill to maintain. They are using Windows/OSX because they don't have to focus on the platform and are able to carry on with their workflow applications, needing minimal back end maintenance. On the flip side, many sysadmins will not run back end operations on a Windows box as you simply don't have the same level of control, configurability, and customization that you can achieve with a properly set up Linux solution. And calling any OS requiring knowledge to operate and maintain garbage is downright ignorant, as this entire thread and article proves otherwise.


Except all I've been hearing about for the last 20+ years, is that THIS YEAR is going to be the year everyone uses Linux on the desktop. That's not ever going to happen with how archaic it is to do basic tasks.


> Red Hat's success was primarily based on the critical mass of support behind it, not because it was the best distribution.

May be true, but they've given so much distribution-independent code back that I'm really glad they made it. It even seems that they contribute more to desktop Linux (GNOME, PulseAudio, systemd, kernel devs, etc.) than Canonical does (Canonical seems to focus mostly on Ubuntu-specific solutions), which is pretty neat for a server vendor.


IMO the distro that actually made me realize that after some good configuring Linux can just work is Gentoo. After Portage no package manager was ever good enough for me.


I had the exact same experience -- Gentoo gave me control, understanding, confidence. Gentoo/Funtoo have also come a long way since the early days.


Dependency hell is why I became a Mandrake and later a Mandriva user. URPMI was groundbreaking.

I didn't give up Mandriva until near the end, when I jumped ship to Ubuntu.


mageia is still around


My biggest gripe with Ubuntu has been duplicated in Mageia, I'm not likely to switch back.


No surprises there. While just about every other package format out there is some variant of a tar-ball with added meta-data, RPM is a custom binary format.

I know someone did a blog post or article on the RPM format internals, but damned if I can find it with a quick search. I just get a whole bunch of Fedora and Red Hat links...


GDIt, why didn't I try a HN search in the first place...

https://blog.bethselamin.de/posts/argh-pm.html

seems to be what I was thinking about earlier.



