CrowdStrike broke Debian and Rocky Linux months ago (neowin.net)
294 points by taubek 58 days ago | 147 comments



What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it written and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".

Just for example, I'm planning to make one of my commercial projects open source, and I am going to have to do a lot of fixing up before I'm willing to show the source code in public. It's not terrible code, and it works perfectly well, but it's not the sort of code I'd be willing to show to the world in general. Better documentation, TODO and FIXME fixing, checking comments still reflect the code, etc. etc.

But for all my sense of shame for this (perfectly good and working) software, I've seen the insides of several closed-source commercial code bases and seen far, far worse. I would imagine most "enterprise" software is written to a similar standard.


> it is still more robust than software created by multi-billion dollar corporations

OSS software has few to no profit incentives. It is written to do something, not to sell something. It also has little time pressure. If a release slips, there is no impact to quarterly numbers. Commercial software is not an engineering effort, it is a marketing exercise.


To take the technology part out of it:

When Komatsu decided to go after Caterpillar's market, they set quality as their first strategic intent. They then made sure that later strategic steps were beholden to that earlier one.

The XP/Agile manifesto emphasized 'working software', which in theory was to have a similar intent.

But the problem with manifestos is that people package them and sell them.

Agile manifesto signatories like Jeff Sutherland selling books with titles promising twice the code in half the time don't help.

OSS has a built-in incentive to maintain quality, at least for smaller projects.

Companies could too, but unfortunately the management practices that made people quite successful become habits that are hard to change, even when they want to.

Hopefully these big public incidents start to make the choice to care about quality an easier sell.

The point is that quality is still an important thing for profit-oriented companies, but it is easy to drop it and only notice after it is too late.

Showing that it aligns with long term goals is possible, but getting people to do so is harder.


But it is interesting that, with no time pressure, OSS does not lack many features of commercial products. Often they are even ahead.


I disagree. OSS is only ahead if there is no money in it, like new programming languages. Whenever it is profitable, OSS projects just cannot compete with professionals working full time.

OSS is usually reinventing the wheel free from commercial pressures (Linux, GNU, Apache). Or they are previous commercial products (Firefox, LibreOffice, Kubernetes, Bazel).


Can you explain why apple, google and microsoft all use a fork of a browser made by KDE?


They use a fork of a browser engine and there is no money in building a browser engine.

The money is in building a browser around the engine because there you can inject tracking and try to make your product unique.


>Whenever it is profitable, OSS projects just cannot compete with professionals working full time.

Windows is profitable, but Linux is competing well on servers.


> OSS projects just cannot compete with professionals working full time.

Many OSS projects have professionals working full time on them.


Not when it comes to video/image/vector editing. DaVinci Resolve, Adobe and Affinity are still miles ahead of FOSS creativity tools like The GIMP.


Try Krita, Darktable, Scribus and Blender.

You're comparing household name with household name. Commercial software has a marketing budget, but free software spreads more by word of mouth (or association with a big and professional organisation like GNU), so that's an apples-to-oranges comparison. GIMP isn't very good, as free software image editors go: Script-Fu, plugins, or UI familiarity are basically the only reasons to choose it these days.


Is there any feature in Krita, Darktable, Scribus, or Blender, which Adobe products do not have? It certainly is the case the other way round.


I'm curious as to which ones they do not have compared to Adobe products.

The only one I can think of is proper material layer painting in Blender, you can get there with addons but haven't found one that's as good. Genuinely the only thing that I miss, and I do this full time.


Darktable has some features that RawTherapee doesn't, and vice versa. I imagine that some of that stuff isn't in the Adobe software. (I've heard that recent versions of Lightroom have removed local file management support, which both these programs still have – though don't quote me on that.)

Krita has a lot that Photoshop doesn't: https://docs.krita.org/en/user_manual/introduction_from_othe... .


You obviously haven't compared GUIs: the most hodgepodge mix of "we have that feature!" in a sea of confusion and disrespect for interface standards.


Linus would disagree, and there's a reason why the kernel keeps a 1000 ft wall up at all times. He would outright reject the majority of userland code, with good reason. It's a miracle that anything outside of the kernel works, and it shows very often. People seem to forget how often distros shit the bed on every update and how frequently people have to rebuild the entire fucking thing to get it to work again. Updating is an incredibly stressful procedure for production anything. I haven't even gotten to audio and video - that's a separate Fuck You ft. Wayland mix. So no, Windows is the gold standard in many ways, precisely because it's mercilessly fucked from every which way and manages to run every day across a bajillion machines doing critical things. I don't care about who is being financially compensated and who isn't - the depth of decision making shows itself in the musings of Raymond Chen and others, and that level of thorough thinking is very rare, even in the OSS world.


Tbf, internally windows may be in a similar situation, it’s just not in the open. So there could be some visibility bias.

IMO the difference is, it’s usually pretty easy to excise offending code from the Linux ecosystem, but not on windows.

Don’t like Wayland? Stick with X.

Don’t like systemd? Don’t use it.

Dont like cortana or recall? Tough, it’s gonna be on your machine.


Sure, but can you realistically run an up-to-date server today without systemd? Especially for those organisations that run stuff like CrowdStrike.


Devuan? Void? MX? Guix?


Do sysadmins actually run any of them for large systems like stores, hospitals, airports etc?


The point is that they can if they have the want/need to.

Sysadmins at the places you listed use windows because that’s where the software support is and Active Directory exists.


Alpine Linux. I'm sure they do run that one.


I left Alpine off of my list because the only place I've ever seen it used is inside of containers, which usually don't run their distro's init system at all.


It's a matter of a single click to disable cortana, recall or anything you don't like in Windows, with tools like w10privacy.


Sure, but that’s a whole other tool.

Cortana and recall are just examples. Microsoft (or OEMs) can put anything they want in the OS and make it difficult to remove.

It’s harder to do that kind of stuff for the Linux foundation and the kernel team.


Is it? Where Linux = Red Hat or Ubuntu in the real world, Ubuntu managed to force snaps and advertising for Ubuntu Pro down everybody's throats, and the Linux Foundation was utterly helpless against that.


Sure, but that’s Ubuntu.

If one was fed up enough with Ubuntu, they could switch to Debian or mint and all their programs would still run and their workflows likely will not change too much.

But for windows you’d have to switch to osx or Linux, neither of which is going to easily support your software (unless it happens to run great under wine, but again that’s a different tool)


I feel like the argument of "don't like X, use Y" is often missing the point when people are expressing their pain with OSS. I find X painful because of reason A, B, C so I take the advice to switch to Y, be happy for a half day before I try to do anything complex and find pain points A', B' and C'. It's often a carousel of pain, and a question of choosing your poison over things that should just work, just working.

Just as an example, I spent a couple hours yesterday fighting USB audio to have linear scaling on my fresh Debian stable install, and I'm not getting that time back ever. Haven't had that sort of issue in more opinionated platforms like Windows/MacOS in living memory.


Linux is a more complicated and more powerful (at least more obviously powerful) tool than windows or macOS. Daily Linux use isn’t for everyone. It can be a hobby in and of itself.

The knowledge floor for productivity is much higher because most Linux projects aren’t concerned with mass appeal or ease of use (whether or not they should be is another discussion)

Debian, for example, is all about extreme stability. They tend to rely on older packages that they know are very very stable, but which may not contain newer features or hardware support.

The strength is extreme customization and control. Something that’s harder to get with windows and even harder to get with macOS.


Repeat after me: Debian STABLE does not mean Debian WORKING, it just means that major versions of most of the software will not change for this release. There are many things in Debian STABLE that are not working and that will continue to not work until next major release (or longer). I think STABLE in Debian name is the biggest mislabeling of all time.


The things I build because I'm paid to, in ways I disagree with because MBAs have the final say, are terrible compared to my hobby and passion projects. I don't imagine I'm entirely alone. I hope there's always communities out there building because they want useful things and tools for doing useful things that they control.


I think deadlines are probably also a big factor. Many OSS developers build their projects in their free time for themselves and others. So it could be a passion project where someone takes more pride in their work. I'm a big advocate for that actually.

Much commercial software feels like it is duct-taped together to meet some manager's deadline, so you feel like 'whatever' and are happy to be done with it.


Also, pressure from product owners and business can affect delivery negatively.


I do not think "amateurs" is a good description of the people writing the code - most will be highly technical people with lots of experience. And "loosely coordinated" can be applied to many "corporations" as well.

I think it matters that people coding in open source do it because they care (similar to your idea but on the positive side). If you want to make something nice/working/smart, you have a better chance of succeeding if you care than if you are just being paid to do it (or afraid that you will be embarrassed).


If you get paid you're a professional, otherwise an amateur, at least in the original meaning of the words.


In this case, would a programmer with a day job who also hacks on Linux in their free time be a professional at work and an amateur on anything they do independently? Or really any sort of engineer, contractor, person who makes stuff, etc.?


Just my interpretation: that programmer is a professional. They are paid to do programming. They are still a professional even when they are working on their hobby project, because it is not a function of whether they are paid for that particular code, but of whether they are paid for any coding at all.

If that programmer went and coached their friend's basketball team for free they would be an amateur coach, but they are still a professional programmer even while coaching.


I agree - and in that case I'd bet a lot of Linux and open source etc is written by professionals to some degree that's likely significant.


Yes, they wouldn't be considered doing the Linux stuff in a professional capacity.


If we're doing originalism, an amateur is someone who does not engage in any wage-earning labor.


> it is still more robust than software created by multi-billion dollar corporations

Well, in the industry usually fewer eyes are looking at the code than in open source. Nobody strives to make it perfect overall; people are focused on what is important. Numbers, performance, stability - priorities depend on the project. There are small tasks approved by managers, and developers aren't interested in doing more. A bigger company works the same, it just has more projects.


> What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it written and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

This is such a gross mischaracterization.

Linux enjoys stability and robustness because multi-billion dollar corporations like RedHat and Canonical throw tons of money and resources at achieving this end and turning the loose collection of scripts plus a kernel into a usable OS.

If they didn't, Linux would have single-digit adoption by hobbyists and these companies would still be running Solaris and HP/UX.


> Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".

I've committed sins in production code that I would never dream of committing in one of my published open source projects. The allure of "no one will ever see this" is pretty strong.


>What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it written and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

I mostly agree, but I think that there's a delayed effect for OSS projects falling apart. Most of these projects are literally just 1 or 2 people coding in their spare time with maybe a few casual contributors. The lack of contributors working on extremely important software makes them vulnerable to bad actors (e.g. the XZ backdoor) or to the maintainers going AWOL. The upside is that it's easy for anybody to just go in and fix the issue once it's found, but the problem needs to happen first before anybody does that.


It's a hunch, but I feel like open source has more churn and chaos and is multi-party. And that milieu resembles nature, with its change, evolution and dynamics.

Corporations are almost always bound into deep deep path dependence. There's ongoing feature development upon their existing monolithic applications. New ideas are typically driven by the org & product owners, down to the workers. Rarely can engineers build the mandate to do big things, imo.

Closed source's closedness is a horrible disadvantage. Being situated not alone and by yourself but as part of a broader environment means new ideas and injections can happen, which works to reduce the risk of cruft, maladaptation, organizational mismanagement and malpractice. Participating in a broader world and ecosystem engenders a dynamism and resists being beset by technical and organizational stasis.


This is one of Larry Wall's three virtues of great programmers:

Hubris: the quality that makes you write (and maintain) programs that other people won't want to say bad things about.


Resiliency of OSS varies a lot between projects and even between parts of certain projects.

For example Linux ACPI support was pretty flaky until Linus pushed for no breakages in that area.


I'm trying to agree with you here, not shame you, but I do think there's something to the idea that you just shouldn't write code that you wouldn't want to be public. In the long run, it's a principle that will encourage growth in beneficial directions.

Also proprietary code is harder to write because you can't just solve the problem, you have to solve the problem in a way that makes business sense--which often means solving it in a way that does not make sense.


When prototyping something to see if a concept works, or building something for your own private use, you really shouldn't waste time trying to make the code perfect for public consumption. If later you find you want to open source something you wrote, there will inevitably be some clean-up involved, but thinking of writing the cleanest and most readable code on a blue sky project just hampers your ability to create something new and test it quickly.


Leave it broken, that's fine, just don't leave it misleading. And leave hints for how a passer by might improve it in the future.

Probably you'll be that passer by in the future, and you'll thank yourself. Or you won't, and someone will thank you.


but that's the reality of writing software.

the problem is that no matter how sincere the promise of "I'll clean it up and release the code" is, it rings very hollow because few people realistically actually ever get there.

if a developer is so afraid of judgement that they can't release the code to something they want to release, we have a cultural problem (which we do), but the way forwards from that is to normalize that sharing code that is more functional than it is pretty is better than future promises of code.

as the saying goes, one public repo up on GitHub is worth two in the private gitlab instance


I don't know; of the software crises I can remember off the top of my head, 2 (Heartbleed and Log4Shell) are from FOSS.


But way less damage.

How many computers crashed?


How many ransomware attacks have you been a part of remediating?


I admit none, but isn't that just because ransomware frequently comes into a network through user error which usually is using a Windows machine?


> Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarassing".

There's something to it. Anecdote of one: at one time management threatened^Wannounced that they planned to open the code base. I for one was not comfortable with that. In a commercial setting, I code with time to release in mind. No frills, no optimizations, no additional checks unless explicitly requested. I just wrote too much code which was never released (customer/sales team changed its mind). And time to market was typically of utmost importance. If the product turns out to be viable, one can fix the code later (which late in my career I spent most time on).


> much of it code and lashed together by amateurs for free

Per FOSS surveys, most of it is written by professionals who get paid for it.

Then, most of the rest is written by professionals who do something on the side. The amateur for free thing is mostly a myth.


Yep. Some of the garbage I've seen out there is shocking. It scares me.

Then I try to get fractional scaling working on Wayland with my NVidia card and want to gouge my eyes out with frustration that after a decade I still can't do what I can do on a closed source thing that came free with my computer. Actually, make that 25 years now. The enterprise crap, while horrible, actually mostly does work reasonably well. Sometimes I feel dirty about this.

Quality is therefore relative to the consumer. The attention is on what the engineers care about with Linux, not the users, I find. Where there's an impedance mismatch there are a lot of unhappy users.


> after a decade I still can't do what I can do on a closed source thing that came free with my computer

I don't know the specifics, but there's a good chance that your issue is ultimately because Nvidia wants to keep stuff closed, and Linux is not their main market - at least for actual graphics, I guess these days it's a big market for GPU computing. So it's the interface between closed and open source that's giving you grief.


I don't think this is an nVidia issue. Wayland/nVidia woes are primarily about flickering, empty windows and other rendering problems. I may be wrong, but I believe HiDPI support is a mostly hardware-independent issue.


If it isn't a hardware independent issue, they really fucked up Wayland.


Here's a decent summary that tries to explain why things are bad: https://news.ycombinator.com/item?id=40909859


Oh wow. They really did fuck up Wayland.


It doesn't work properly on Intel or AMD either. It just sucks worse on NVidia.


In a window manager or KDE (X11) you can use nvidia-settings: click Advanced in the monitor settings and select ViewPortIn and ViewPortOut. If you set ViewPortOut to the actual resolution and ViewPortIn to some multiple of the actual resolution, you get fractional scaling, and if you make the chosen factor a function of the relative DPI of your respective monitors you can make things perceptibly the same size across monitors. That's right: fractional scaling AND mixed DPI!

You can achieve the same thing with xrandr --scale, and it's easier to automate at login.
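
Roughly, the xrandr version looks like this, if I'm remembering the option semantics right (DP-1 is just an assumed output name; check the output of xrandr for yours):

    # Render an area 1.25x the panel and let X downscale it to the physical resolution:
    xrandr --output DP-1 --mode 3840x2160 --scale 1.25x1.25
    # Combined with a 2x toolkit/DE scale this gives an effective 2 / 1.25 = 1.6x.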

You can also achieve pretty good fractional scaling in Cinnamon (X11) directly via its configuration. You enable fractional scaling on the right tab and suddenly global scale is replaced with a per-monitor scale. Super user friendly.

Also, your copy of Windows was just as free as your computer was. You paid someone to configure Windows acceptably for you, and Microsoft and the various OEMs who make Windows hardware split your money, giving you something usable in return.

You then decided that you wanted Linux on it, and now you are the OEM, which means you get to puzzle out integration and configuration issues, including choosing a DE that supports the features you desire and configuring it to do what you want it to do.


I have been using only 4k monitors in Linux for at least a decade and I have never had any problem with fractional scaling.

I continue to be puzzled whenever I hear about this supposed problem. AFAIK, this is something specific to GNOME, which has a setting for enabling "fractional scaling", whatever that means. I do not use GNOME, so I have never been prevented from using any fractional scaling factor I liked for my 4K monitors (which have most frequently been connected to NVIDIA cards), already since a decade ago (i.e. by setting whatever value I desired for the monitor DPI).

All GUI systems had fractional scaling already before 1990, including the X Window System and MS Windows, because all had a dots-per-inch setting for the connected monitors. Already around 1990, and probably much earlier, the recommendation for writing GUI applications was to use, for fonts and any other graphic elements, only dimensions in typographic points or other display-independent units.

For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems. There have always been some incompetent programmers who have used dimensions in pixels, making their graphic interfaces unscalable, but that has been their fault and not that of the X Window System or of any other window system that was abused by them.

It would have been better if no window system had allowed the use of any kind of dimensions given in pixels in any API function. Forty years ago there was the excuse that scaling the graphic elements could sometimes be too slow, so the use of dimensions in pixels could improve performance, but this excuse had already become obsolete a quarter of a century ago.
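
For what it's worth, this is roughly what the DPI route looks like on X11 today; 163 is just an example value for a 27-inch 4K panel, and whether a given toolkit honours it is a separate question:

    xrandr --dpi 163                       # report ~163 DPI to the X session
    echo "Xft.dpi: 163" >> ~/.Xresources   # persistent hint for Xft-based toolkits
    xrdb -merge ~/.Xresources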


> For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems.

Parent comment was talking about Wayland, and Wayland does not even have a concept of display DPI (IIRC, XWayland simply hardcodes it to 96).

You're correct - in theory. In practice, though, it's a complete mess and we're already waaay past the point of no return, unless, of course, somehow an entirely new stack emerges and gains traction.

> There have always been some incompetent programmers who have used dimensions in pixels

I don't have any hard numbers to back this, but I have a very strong impression that most coders use pixels for some or all the dimensions, and a lot of them mix units in a weird way. I mean... say, check this very website's CSS and see how it has an unhealthy mix of `pt`s and `px`es.


don't romanticize the situation too much, open source software is almost entirely written by professional software developers, mostly at their day jobs


For larger projects there's still usually the benefit of having developers/maintainers from multiple institutions with different goals.

Linus is not gonna merge some hacky crap just because somebody's boss says that it must be merged for the next TPS report.


At least for this one you can argue that they can't test every Linux and Unix flavour. But Windows...


The tradition of writing tests in OSS projects plays a huge role here.


but...it's all in the open, and it might have more bugs, but the bugs get fixed faster


Relevant comment from yesterday's Crowdstrike mega-thread:

"Crowdstrike did this to our production linux fleet back on April 19th, and I've been dying to rant about it." [1]

Continues with a multi-para rant.

[1] https://news.ycombinator.com/item?id=41005936


> ...weeks later had a root cause analysis that didn't cover our scenario (Debian stable running version n-1, I think, which is a supported configuration) in their test matrix. In our own post mortem there was no real ability to prevent the same thing from happening again -- "we push software to your machines any time we want, whether or not it's urgent, without testing it"


Not just a relevant comment; this HN thread seems to be a primary source for the article.


I used to work in this space, and I always had the nagging question of "is any of this stuff actually useful?"

It seems a hard question to answer, but are there any third-party studies of the effectiveness of CrowdStrike et al., or are we all making our lives worse for some security theater?


It’s like trying to study the effectiveness of antivirus. But you already said it. As long as it produces consumable metrics a c-level can ingest, then it’s worth it. Because really, how does it make sense to add something so invasive? Anyways in the 90s, antivirus makers also wrote viruses. They’d go on to flood networks with their creations, but magically block infection for their subscribers.


Have you seen it actually stop anything? (I'm sure the company that made the tool used it too, right?)

If I make a WWW-wide question of "has anybody seen it?", somebody will appear. But the number of people that got a security flaw caused by those tools is huge, and the number of people that got stability and availability problems because of them is basically the number of people that use them.


I worked on something different, but we integrated with Crowdstrike and such.

Maybe someone could do a study of like breaches in Fortune 500 companies that use an EDR vs. those that don't, but they probably all do at this point.


I would imagine any study like that would also be just packed with confounding factors.


Product quality is in freefall: from aircraft to software. Lack of QA is the norm nowadays as everyone just cares about the extra penny.


I've said this before, but we have min-maxed our economy to optimize for profit. We may be entering the reaping phase of that now.

It's the same reason we can't make enough artillery shells for Ukraine, or onshore chipmaking, or build ships.


> I've said this before, but we have min-maxed our economy to optimize for profit. We may be entering the reaping phase of that now.

Expanding the scope beyond the economy, one could certainly make the claim that the Age of Consequences is upon us, and that William Gibson's "Jackpot" isn't far off either. We're increasingly and collectively impacted by the fallout from decades of bad decisions.


Age of consequences, indeed.

It really does feel like we (humanity) are on the precipice of something. We're smack in the middle of an era that entire books will be written about. I really don't like thinking about the decades to come and what kind of world our grandchildren will have.


Who would imagine that optimising for profit and profit alone would result in such a fragile ecosystem?

Sorry for the sarcasm.


Boeing enters the chat


It's also the proliferation of not-really-technical people calling the shots and even developing technical programs.

15 years ago, most people in the industry actually wanted to be there, with perhaps a few exceptions.

Now it's just like any other job, with the majority of developers not giving a shit about what comes out, which turns out to be shit.

I haven't run into a senior or lower developer who actually bothered to test their junk in at least a few years.


The economic consequences for doing things wrong are less than the profit made.

Until that changes, nothing else will.


I'm not even sure it's more profitable to do things wrong; it's just easier and more advantageous for individual managers.


Why is it advantageous though? Surely the behaviors of managers are incentivised by something.


Because the benefits are reaped well before the full cost becomes evident. By the time everything catches fire, the person responsible has retired into their Mediterranean villas and will never come back to fix what they caused.


CYA incentives:

If I don't install one of these system- (and global-IT-) killing EDR systems, and I have a breach, _I_ am responsible.

If my company requires it, and I install it and the entire network falls over, responsibility is passed to the EDR vendor. Every time the EDR platform in my org kills an app, much of the reaction internally is "Oh well, at least we are protected. Let's open a support ticket."

"Security" software has been troublesome since the first AV platform was released. But the personal risk for management to not deploy it is high.


Yeah this allows outsourcing both risk and responsibility. The institutional risk that you take in exchange is acceptable because it lowers personal risk


Generally, the individual managers who make these decisions are as short-sighted as the companies they belong to. Along with that, company interests and individual manager interests don't always align.

I've been plenty of places where an individual manager comes up with some grand plan, implements it, gets praise, and leverages it into a new job. Meanwhile, that plan never makes it past MVP and is massive tech debt that will weigh down the company, but they don't care.


They are higher, actually, especially in this case and otherwise by definition. The problem is that most of the cost ends up externalized.


Indeed. A company should come close to bankruptcy after an incident like this. But CrowdStrike pays fractions.


Penalty 1: stock buybacks forbidden

Penalty 2: separate the company into two, separating financialization procedures from manufacturing ones.

Penalty 3: greenlight a union by default.


> Penalty 3: greenlight a union by default.

This should be a fundamental feature in any functioning society that expects (or wishes) to remain functional.


Penalty 2 is the same as closing the company down


No, why? If there's a company that provides actual value, why should splitting off the financialization part kill the part that provides real value? It's maybe the same as closing down the financialization part of the company, but if so, what's the loss to society?


The financialization (profit-making) is the only reason anyone with any power bothers to keep any of it running.


One could say that most companies outside the U.S. are non-financial.


This is not about product quality.

This is marketoid head-shitting influencers selling garbage to people who have no idea what the fuck they are doing other than compliance box-ticking to cover their paranoid, vendor-induced psychosis. They have no idea about threat modelling, no idea about how an operating system even works. Using the CIA triad, they traded off AVAILABILITY entirely, and CONFIDENTIALITY partially by sending every fart a system makes to some cloud company, for some false sense of INTEGRITY that the vendor does not even guarantee. In fact I've found fuck-all evidence that the product actually does anything at all in a correctly layered security architecture.

This is the diametric opposite of a security proposition. A house of cards built on lies and incompetence. Literally this product is the definition of malware. It egresses data you have no control over. It has built in remote command and control that can take you out. And it runs completely arbitrary code in ring 0 (clearly written by a blind monkey)

I called this ENTIRE thing out in March 2023 as a critical business risk and the above people steamrolled it. Literally, I have the objections recorded and I'm going to use them at this point to steamroll them the fuck back.

It doesn't matter if it's made of shit if people buy it. It happens to be made of shit though.

I am done with this now. I am so fucking done.


Yeah it’s sad. Some companies are better than others, and it’s personally my favorite part of software engineering, but a lot of large companies cut it at the first opportunity.


There was also this report of CrowdStrike injecting a buggy DLL into Windows applications, which could cause the app to crash through no fault of its own:

https://x.com/molecularmusing/status/1808756095860543916


That's completely different. The injected DLL is an optional, off-by-default feature that admins are specifically warned to check for compatibility against their fleet before rolling out. It has four levels of increasingly unstable hooks it can apply, and there are repeated warnings about using it in the UI and documentation. E.g.:

> Extended User Mode Data (XUMD) allows the sensor to monitor information in running processes by loading a library that can hook various user-mode APIs.

Some endpoint telemetry can be gathered only through user-mode hooking. XUMD provides a flexible way to provide information about which APIs a process is leveraging. This information feeds a variety of prevention mechanisms that are available to the sensor based on the accumulated behavior observed.

Unlike Additional User Mode Data (AUMD), the cloud can dynamically modify XUMD visibility without a sensor update.

Supported prevention policy settings for XUMD:

Disabled: The extended visibility, detection, and prevention capabilities of XUMD are disabled. The hooking library is not loaded into processes.

Cautious: XUMD is enabled with high-confidence hooks that are accessible to detection and prevention logic. Performance and compatibility impact at this setting is expected to be negligible, but we recommend testing this setting in a staging environment before deploying it to production.

Moderate: XUMD is enabled with high- and medium-confidence hooks that are accessible to detection and prevention logic. This setting can result in performance or application-compatibility impact but provides expanded visibility. Performance impact at this setting is expected to be negligible, but we recommend testing this setting in a staging environment before deploying it to production.

Aggressive: XUMD is enabled with high-, medium-, and low-confidence hooks that are accessible to detection and prevention logic. This setting can result in significant performance or application-compatibility problems. This setting is not recommended for production environments without significant prior testing in a staging environment.

Extra Aggressive: XUMD is enabled with high-, medium-, low-, and experimental-confidence hooks that are accessible to detection and prevention logic. This setting can result in significant performance problems or application compatibility problems. This setting is not recommended for any production environment but might be appropriate for penetration and stress testing in specific limited deployments.

Because XUMD is loaded in user processes that were not developed with it, negative interactions with other software might occur. This is most common when other security products are installed. In certain software environments, conflicting software might crash, fail to start, or suffer degraded performance. In these scenarios, move a test system into a policy where XUMD is disabled, reboot the host, and then retry the software. If the issue is resolved, open a Support case and request assistance in resolving the conflict. Support can assist in diagnosing and resolving these issues between XUMD and specific software.

To determine which processes have loaded the XUMD DLL, run the following command at the command line:

tasklist /m csxumd*


This is all a consequence of firms being able to contract out of consequential liability.

Perhaps we should render such clauses unenforceable, as we do with contracting out of consequential loss of life.

Or at least limit them.


That's what that European cyber security directive tried to do, right? The one that everyone was mad about until they carved out an exception for FOSS?


> The update proved incompatible with the latest stable version of Debian, despite the specific Linux configuration being supposedly supported.

> The analysis revealed that the Debian Linux configuration was not included in their test matrix.

This is suspiciously close to actual fraud. They declare they support configuration X, but they do not actually do any testing on configuration X. That's like telling me my car will have seatbelts, but at no point in manufacturing is it ensured that the seatbelts are actually installed and work. I think a car maker that did something like that would be prosecuted. Why isn't CrowdStrike? I mean, it's one thing if they don't support some version of Linux - ok, too many of them, I can get it. But if you advertise support for it without even bothering to test on it - that's at best willful negligence and possibly outright fraud.


"No one noticed", which is a cute way to say that CrowdStrike suppressed the media from noticing. The day of the bug, the HN post had comments about how people had tried reporting the issue months ago.

Even the article is written as people noticing. So who didn't notice? Or were the issues not popular enough to avoid being ignored?


> CrowdStrike suppressed the media from noticing

You think they sent checks to the NY Times, WaPo, and major networks or something? Media doesn't care if some servers crash unless it is noticeable in the world at large (like the airline groundings).


I remember something about it, but at the time I thought that very few people would install such software on Linux (and, indeed, very few companies do).

The blast radius was minimal, probably smaller than a bad Nvidia driver.


My company installs Falcon on servers!

I know because our apt mirror takes much longer to sync now. That's because the CrowdStrike agent is using all the CPU to scan every .deb package.

And they're ar+tar packed, so unless there's a special algorithm to scan them, nothing will ever be found in them anyway.
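
You can see the layout for yourself: a .deb is just an ar archive wrapping a couple of tarballs (the package name below is only an example, and the compression suffix varies between gz, xz and zst):

    ar t hello_2.10-3_amd64.deb
    # debian-binary
    # control.tar.xz   (metadata and maintainer scripts)
    # data.tar.xz      (the actual files)
    dpkg-deb -x hello_2.10-3_amd64.deb ./extracted   # or let dpkg-deb unpack the payload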


Is anyone here using CrowdStrike? What does it do? I see it referred to as an 'anti-virus'. I have it installed on my work laptops and I see it as a keylogger and activity monitor. "I've got nothing to hide", but it still bothers me when some corporate super users spy on me.


It gets you a sign-off from the security, compliance, and legal teams.


This right here


Instead of calling out AV/EDR solutions as malware and spyware, I have a better solution: stop using your workstations for private stuff. They belong to the company, and they are a liability since you use them to access the company environment and could cause damage if actual malware found its way onto them. Use a separate device for private stuff if you want privacy.


Yes, you should be using a separate computer for personal stuff, but you should still be calling out that spyware for what it is too.


Anecdote from SWIM: Employer corp supposedly has CS deployed on all endpoints. Been getting away with just running it in a VM with restricted resources and not hearing anything about it. Did notice the VM failing around that time.

Also heard from an ex-coworker in another large corp where IT just gave up on enforcing compliance for Linux endpoints. I wouldn't be surprised if some IT admins effectively adopt a "don't ask, don't tell" policy here: if you can figure out the alternative stack by yourself without causing noise or lying about it, you're on your own. It'd certainly make sense if the motivation for enforcement is largely checkbox compliance.

I wonder just how widespread this kind of admittedly malicious compliance is and how much it contributed to the April incident not being bigger news...


What is CrowdStrike's unique selling point? Genuine question, because I'd never heard of them before this.


“Dear CTO, do you want to be seen as the person who didn’t take every measure to stop hackers from stealing your data? Then buy our stuff”

Something like that.


Close, but instead of an email, it’s probably a conversation during a game of golf at an exclusive private club.


It's an auditor, from insurance or some financial service, with a list of 2 or 3 companies, 1 of them being famously invaded since the turn of the century (once, continually).


“It’s a nice IT infrastructure you have at XCorp. Would be a shame if something happened to it”…

Or

“You really don’t want to be the last one in the finger pointing chain. Come on, sign here and you can point to us. Our lawyers will deal with the mess”.


> “It’s a nice IT infrastructure you have at XCorp. Would be a shame if something happened to it”…

That happens to be the Cloudflare sales pitch too.


At face value they sell it as a security product. In reality it's a tool for employers to spy on employees, control what they do on the device, etc.

The unique selling point in a sense is plausible deniability about the true purpose of the software.


Really weird to see HN fail to explain a simple software question.

Crowdstrike Falcon specifically is their AV offering, the selling point is the ease of deploying and managing their agents alongside the rest of their security platform.

Many compliance frameworks either require or are interpreted as requiring AV, so regardless of utility this type of tool is necessary for many orgs. Hopefully this whole debacle will shine a light on that assumption, but that’s a separate conversation.

Deploying AV agents in bulk is a huge pain in the ass and most companies that make it easier aren’t going to be cheap. I imagine C-suites are more likely to approve expensive RFPs if they’ve heard of this company that sponsors a major F1 team.


Are you a salesman for crowdstrike?


If that’s the conclusion you’ve drawn from me questioning their entire business model and arguing that their selling point is advertising on race cars, I don’t know how to help you.


Is it possible that these events had less impact because the damage was less / more easily fixed due to the nature of the OS?

Or perhaps because the admins of Linux systems are typically more knowledgeable about how to run their platforms, and not just install them?

Or is it due to sheer numbers of enterprise software running on Windows?


Not just Debian and Rocky, but RHEL too. https://access.redhat.com/solutions/7068083

I ran into this doing CentOS 7 to Alma 9 upgrades. The bug was in RHEL, Alma, Rocky and any other distro derived from RHEL. I had a VM go into an endless reboot cycle and the only way to get back in was to boot to an emergency rescue console and disable falcon-sensor.

The problem was something to do with eBPF, and one of the workarounds was to tell the falcon sensor to use kernel mode and not auto or user (BPF) mode.

We don't allow automatic updates on hosts, however, so thankfully this was contained, but it certainly raises the question of just what testing CrowdStrike is doing.
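
For reference, the workaround looked roughly like this on our hosts; I'm quoting the falconctl flags from memory, so verify them against the sensor docs for your version:

    sudo /opt/CrowdStrike/falconctl -g --backend          # show the current backend (auto/bpf/kernel)
    sudo /opt/CrowdStrike/falconctl -s --backend=kernel   # pin to kernel mode instead of eBPF
    sudo systemctl restart falcon-sensor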


Huh, this story sounds familiar. I read a HN comment the other day telling this same story. They didn't just turn a random HN comment into a news article, did they?

Yup. They did. At least they cited it I suppose.


Need a “crowdstrike.sucks” or maybe a general “it.sucks/{company}” to gather all of these company misgivings.

Avoid this company at all costs. Move to competitors which offer the same “audit passing” requirements


Funnily enough crowdstrike.sucks is already registered through CSC, the same fancy corporate registrar which handles crowdstrike.com.

The people behind .sucks have a great racket going, every big brand has to pay $300/year for their name dot sucks just so nobody else can get it.


They also own “clownstrike.com”. The fact they proactively registered it says a lot.


Proactive marketing culture > proactive testing culture


Probably the relationship is more causative: a decided lack of a proactive testing culture feeds back into the org through sales interactions, but only makes it as far as the discretionary credit card of someone in the marketing group.


They've been called that since way before they made July 19 the International Bluescreen Day.


Wow, $300/year is wild. In this influencer age, negative attention is the cash grab.

<researching how to setup TLD, maybe set it up as a DAO>


> CrowdStrike should prioritize rigorous testing across all supported

They should not.

Testing costs money, and they aren't selling their product to a company that needs or wants it on a competitive market. Their business model is based on shoving the product down the throats of enterprises due to compliance, and therefore they have zero incentive to invest any money into quality.


I think someone noticed. And was thinking: people won't be happy to fix this, and I am not allowed to fix it either. Well, it might be just like the 3000 other rare issues that would one day break the world's IT. Who cares...


In the end, because of regulatory pressure (the only pressure that matters in a commercial environment), there will be three supported OSes: a Windows one, a Mac one, and probably one, and only one, Linux distribution, or flavor thereof. Everything else will be toast in a commercial environment. For Linux there might be this AWS one and that Google one, but they'll be close. And, in order to satisfy regulatory requirements, they'll be very, very close. Commercial organizations have bosses and, more ominously, regulators. We, and they, need a checkbox checked. So let's not fool ourselves with thoughts of freedom and liberty. There's a real world out there.

CrowdStrike screwed up, but there's more chance that a thousand Linuxes go to one than that one CrowdStrike goes to zero.


> ( ... ) experienced significant disruptions as a result of CrowdStrike updates, raising serious concerns about the company's software update and testing procedures

To me the issue isn't CrowdStrike's testing procedures. To me the issue is why does Debian depend on CrowdStrike? Does anyone understand this?


Debian doesn't depend on CrowdStrike. CS provides Linux clients for organizations that want to deploy it with their Linux workstations/servers.

From the article:

> In April, a CrowdStrike update caused all Debian Linux servers in a civic tech lab to crash simultaneously and refuse to boot. The update proved incompatible with the latest stable version of Debian, despite the specific Linux configuration being supposedly supported. The lab's IT team discovered that removing CrowdStrike allowed the machines to boot and reported the incident.

Debian is as much at fault for CrowdStrike's incompetence and negligence as Microsoft is. That is to say, not at all.


Right, thanks for that. My fault for skipping every other word.

This had really soured my day for a bit there, but you brought it back.

Debian is one of those things that I consider a source of stability in my software life, together with git, wikipedia, and openstreetmap. Believing that it depends on some dodgy company really put me in a bad mood.


Indeed. Debian is the BSD of the Linux world, CentOS/RHEL being the Solaris.

As usual, this was a self-inflicted pain by companies wishing to check yet another box to externalise liability.


Debian doesn't depend on CrowdStrike. That's about people who installed it on such systems.



