Hacker News

It's OT but how many people actually use RHEL? Or CentOS (more likely).

I'm not a fan of systemd, but have really loathed CentOS/RHEL compared to Debian/* for years.



> It's OT but how many people actually use RHEL? Or CentOS (more likely).

I'll ask the opposite question: how many people in enterprise don't use RHEL/CentOS?

> I'm not a fan of systemd, but have really loathed CentOS/RHEL compared to Debian/* for years.

Those two are orthogonal. A lot of people loathe Windows and Oracle yet they are raking in billions every year in licensing.

Then there is government as well; try selling them a production system running on Ubuntu or Arch. It can be done, but it won't be easy. There, RHEL is king as well.


> It's OT but how many people actually use RHEL? Or CentOS (more likely).

A majority of telcos, banks, militaries, stock exchanges, medical organizations, and large commercial enterprises.

Why?

Because when a system starts having issues at 4am while handling X transactions per second and the shit is about to hit the fan, they want top-class support on hand, not to be waiting on someone to reply on an IRC channel or mailing list.

I personally am an Arch user on my home machine and work laptop, yet servers that run commercial workloads and have SLAs tied to them will always run RHEL for the reasons above.


I used to run CentOS in Dev/Test and then Red Hat in Prod. However, they have a habit of removing packages that compete with their commercial offerings, which forces you either to compile from source or to install from other repositories - for example, they removed Apache ActiveMQ because they offer JBoss AMQ. This starts to become annoying because all of a sudden you have multiple update processes to keep everything up to date, whereas previously you had one. After being bitten by this at least twice, I shifted everything to Ubuntu. It's a shame - both CentOS & Red Hat are slow to move, but very stable. I was happy paying them for support too ... but not when they kept making things more difficult.


As a general rule, if the end customer was finance/banking, their Linux flavor was Red Hat (someone to yell at when it broke). For other tech companies / web types it was Ubuntu. This was my experience across more than four startups where these were our end customers. Your mileage may vary.


Pretty much my exact experience. Banking and large corporations: Red Hat, because they buy support contracts to get guaranteed security and stability updates, and because they want some assurance that whatever (open/closed) corporate software they use will run on their Linux systems. Debian, and especially Ubuntu in recent years, for smaller "dev" companies, because the devs like to be close to the open-source ecosystem.

I've never met an individual (not corporate server) or end-user (desktop) running RHEL though.

Back to topic, I've been following Btrfs... At one point it seemed destined to become the default for most Linux systems, but I'm not sure where it is headed now.


I'm running RHEL with free developer license on my home server for learning and some home tasks.


Statistical fluke. ;)


CentOS is extremely popular among startups. It lets you directly use RPMs, which many software publishers distribute on their own (not the case for debs, often), and gives you an upgrade path to buy RHEL when you want someone to yell at and not throw away all your automation.

I’m not trying to start a flame war here (honest), but among most “usual web” ops folks I know, the opposite of what you’re suggesting seems to hold and Debian/Ubuntu are looked at as an odd choice. It’s probably hire #1 going with what they know, more than anything, and it could very well be selection bias regarding the people I know. I’d love to see stats.

I’ve been working with CentOS or OEL in my own roles for several years now, and I’ve never chosen it. (Not saying I wouldn’t, just that it’s been there when I get there.)


There are just as many publishers that supply debs and not rpms. It really depends on what you're after.

But if you look at usage, North America is generally more Red Hat-family and Europe is generally more Debian-family, even in the SuSE patch that is Germany...


Oracle Linux is the only direct upgrade path for CentOS (afaik).

A full reinstall is required to move from CentOS to RHEL. Oracle, however, has a procedure to directly convert CentOS, RHEL, and Scientific Linux systems into supported Oracle Linux systems.

Ksplice is the most common reason this conversion is mandated: if downtime can no longer be tolerated for upgrades to openssl/glibc/vmlinuz, it is the only available path.
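For context, Oracle publishes a conversion script for this; a rough sketch from memory (the exact URL and script name may have changed, so treat this as illustrative, not canonical):

```shell
# Fetch and run Oracle's CentOS-to-Oracle-Linux switch script (URL from memory)
curl -O https://linux.oracle.com/switch/centos2ol.sh
sudo sh centos2ol.sh   # repoints yum at Oracle's public repos and swaps branded packages
```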


I'm also under the impression the government tends to use RHEL when using FOSS.

I could be wrong, but I picked that up years ago somewhere.


In my experience Debian seems to be more common in Europe. Here there is a mix of RHEL, Debian, Ubuntu and some CentOS. With banks and finance preferring RHEL.


The E stands for "Enterprise", so there's a hint about the target audience. You can get support for a 10-year-old RHEL version after you've entangled it with your stuck-in-time IT infra :)


It seems like "Enterprise" is a synonym for "technical debt".


More accurately, it's a synonym for "allows my line-critical COTS to run predictably".

If you're not in a tech business, and are therefore running other people's software, then you want to be on the same platform they were on during development, which would have been years ago. If it's Linux, then it's something like RHEL.


We run one CentOS 6 server at work (which uses Upstart, not systemd). The initial reason was that it needed to run on Hyper-V, and the Debian VM I had set up first kept losing its network connection every couple of minutes - especially annoying since I wanted to run Nagios on that system. ;-)

So I did a bit of digging and found out that CentOS ran on Hyper-V just fine. I installed it and had no real reason to complain since. The selection of packages is rather spartan compared to Debian, but there is a third-party repository called EPEL that makes up for that.

Some things are different from Debian, of course, but nothing big, and the system has given me no problems whatsoever on the performance and reliability fronts.
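For anyone curious, enabling EPEL on CentOS is essentially a one-liner; a hedged sketch (the nagios package is just an example of something EPEL carries that the base repos don't):

```shell
# Enable the EPEL repository (the epel-release package lives in CentOS "extras")
sudo yum install -y epel-release

# Packages missing from the spartan base repos then become available, e.g.:
sudo yum install -y nagios
```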


EPEL doesn't get security updates.

Security support for a wide range of packages is also a reason to prefer Debian over Ubuntu, since most of the Debian-inherited packages ("universe") are excluded in Ubuntu.


EPEL gets security updates just like Fedora does; packages are usually maintained in the same git repositories. However, there is no guarantee that you will receive security updates, as there is no support contract. But you have a similar problem in Debian if the maintainer isn't keeping up.


Debian doesn't have this problem: the security team is collectively responsible for making security updates and frequently creates updated packages without maintainer involvement.

I'm not familiar with the Fedora process, but they seem to have a security team and a system of security advisories, which EPEL does not appear to have. Doesn't sound like the same at all.

Sure, most of the time packages in EPEL (and in Ubuntu universe section) will eventually get security updates, but there is no promise or organization of timely security updates.
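One way to see the difference in practice: RHEL/CentOS base repos ship errata metadata that yum can query, while EPEL's has historically been sparse. A sketch (on RHEL/CentOS 6 the feature comes from a plugin):

```shell
# On RHEL/CentOS 6, security errata queries need yum-plugin-security
sudo yum install -y yum-plugin-security

# List pending updates that carry security advisories; repos without
# errata metadata (EPEL, historically) will show little or nothing here
yum updateinfo list security
```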


EPEL is part of the Fedora project and shares the packaging infrastructure.


> the security team is collectively responsible for making security updates and frequently creates updated packages without maintainer involvement.

How can they do this if packaging (and especially backporting) a fix requires deeper knowledge of the package, which probably only the maintainer has?


Not even the maintainers really understand what they are doing:

https://github.com/g0tmi1k/debian-ssh

> There was an #ifndef PURIFY there for a reason. It's because the OpenSSL authors knew that line would cause trouble in a memory debugger like Purify or Valgrind.

A Debian maintainer broke the OpenSSL RNG to make Valgrind happy. This made any key generated on a Debian or Ubuntu system from 2006 to 2008 easily breakable.

Downstream should never touch packages beyond backporting fixes made by upstream.

Here's another example of an upstream-vs-downstream conflict in Debian:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=477454

Or the PHP developers being fed up with both Red Hat and Debian messing with their runtime on a whim:

https://derickrethans.nl/distributions-please-dont-cripple-p...

This is why I strongly support the push for new packaging systems targeted at developers: Snap, Flatpak. The downside of having multiple copies of the same libraries pales in comparison to giving power back to upstream. Distro maintainers routinely modify codebases they don't understand. Give us a standard installation process that can install packages made by the developers themselves, upstream - just like every other operating system does.

And Debian, unlike RHEL/CentOS, packages far more than it can reasonably maintain. The vast majority of packages in Debian stable are insecure; the security team simply cannot handle the large amount of software outside of the truly core stuff (kernel, web servers):

https://statuscode.ch/2016/02/distribution-packages-consider...

If you aren't supposed to use the packaged WordPress, phpMyAdmin, or Node, why is Debian distributing those packages at all? By shipping them in its repos, Debian encourages the naive first-time Linux user to install them through its facilities.


> It's OT but how many people actually use RHEL?

Hopefully a lot, because, like it or not, Red Hat puts a ton of work into the Linux kernel and other userland Linux tools. We need them to stick around.


As do others, and they were one of the main systemd backers. That still doesn't answer the question.

If I'd left out my preference, I probably would have had fewer downvotes.


I was working at a place a few years ago where I had to move a whole bunch of applications, running on Ubuntu Hardy circa 2008, to new servers (well, VMs), as the old servers were being decommissioned. A lot of the applications ran on Ruby 1.8, which newer versions of Ubuntu don't provide packages for, and which isn't easy to compile yourself due to dependencies on old versions of OpenSSL. We upgraded some applications to newer versions of Ruby, but some weren't worth the effort or risk (before Ruby 2, even a patch-level change was likely to have breaking changes).

Although other systems were running Ubuntu, we decided to go with CentOS 6 for these systems - mainly because it was preferred by the current era of sysadmins. It also had packages for Ruby 1.8 or at least the correct dependencies (I can't remember exactly). And even better, although it was released in 2011, it is still supported until 2020.

If I was going to do the same today I'd just use Docker, but it was fairly new back then.
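The Docker approach today might look something like this minimal sketch, assuming a plain Ruby 1.8 app with an app.rb entry point (hypothetical names) and the now-EOL centos:6 base image:

```shell
# Hypothetical: freeze the legacy runtime in an image instead of a host OS
cat > Dockerfile <<'EOF'
FROM centos:6
RUN yum install -y ruby          # CentOS 6 ships Ruby 1.8.7
COPY . /app
WORKDIR /app
CMD ["ruby", "app.rb"]
EOF

docker build -t legacy-ruby-app .
docker run --rm legacy-ruby-app
```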


Red Hat is worth billions of dollars. Someone must be using Red Hat.


Tulips were sold for ridiculous prices, yet nobody used them.


That would be the P/E ratio. Luckily Red Hat is a public company so you can know both the P and the E. :)

Revenues in FY2017 were almost $3 billion.


are you suggesting that RHEL has been a bubble?


No. I only showed that the logic in the post above was not sound. Make of that what you want.


In enterprise installations, people only install software that they can get support for. So the biggest server farms probably all run something like RHEL or SLES.

Of course, for private usage that's a little over the top. But RH also offers solutions there: an upstream OS with all the cool stuff, Fedora, and a downstream OS that is just as stable as RHEL but free, CentOS (not sure if I got the name right - there are so many CabcOS out there nowadays).


Amazon Linux is CentOS 6.something with new kernels and security fixes for the associated packages.


Which is fine if you are only running on one cloud provider and don't have any local infrastructure. But I want my one Linux flavour to run anywhere ...



