"Any sufficiently bad software update is indistinguishable from a cyberattack" (twitter.com/leighhoneywell)
180 points by peter_d_sherman 3 months ago | 120 comments



The real thing I've learned from this is that most malware doesn't get ring 0 access, but these "antivirus" solutions can shut down critical infrastructure everywhere. I've never seen malware that infected millions of devices in seconds and made them unusable for days, maybe weeks to come.


Giving someone else [edit: remote] root access to your computer is a bad idea. For Windows users it is bad enough with Microsoft. Adding another one is just adding to the risk.

Centralizing power is asking for abuse.


>Centralizing power is asking for abuse.

Not particularly. Whatever you do, it can be (will be?) abused. Centralizing power means that abuse of that centralized power will happen, as in the case above. In a distributed power system, an attack could still successfully target most entities, but they won't respond efficiently, precisely because of the lack of centralization. Like computer viruses in the old days.


That's every kernel driver. Microsoft is slowly moving drivers to user space (most USB drivers, audio drivers, printer drivers, ...) but it's a long walk.

And you are still left with the GPU drivers.


How much of the GPU driver is left in the kernel on Windows? Updating a GPU driver, or having the desktop crash on Windows and recover reasonably, seems like a common enough experience.


> And you are still left with the GPU drivers.

It'll be interesting to watch how nVidia responds to this move. Their drivers are black magic at the moment.


Yeah, I meant remote. Sorry for the lack of clarity.


So this is an argument for only using first-party (Microsoft) security software?


I'd rather[1] give "Microsoft and only Microsoft" the ability to remotely update my system than "Microsoft and whatever fly-by-night 3rd-party 'security' company manages to sell its malware to my boss."

A big problem here is that Microsoft has normalized the idea of rando third-party software having access to ring 0 and things running at the kernel level. This is one of the reasons many Linux people argue against opaque third-party blobs running in the kernel. This should not be as routine as it is in the Windows world. Apple wisely ditched kernel extensions.

1: EDIT: Ideally, I'd rather Microsoft not have access to remotely update my system, either, but that ship seems to have sailed long ago.


Yeah, I guess so. I'd argue for using some Linux distro, though, to remove the remote root code execution CVE.


You can buy RCE malware for Linux too - including CrowdStrike. If you want a third party to manage your systems, then it doesn't matter which third party you choose.

Personally I'd go for diversity over any specific solution. That's a rare thing in the "enterprise" world. I suspect companies with strong shadow IT that provides business value fare far better than those top-down, enterprise-led ones.


'Being ignorant is not so much a shame, as being unwilling to learn.'

Back in the day, you could install Windows XP and be infested in minutes https://en.wikipedia.org/wiki/Blaster_%28computer_worm%29


Sasser, too.


Not millions, but look up Shamoon. WannaCry was at 300k+ machines; it wasn't millions only because one guy registered the kill-switch domain within a couple of hours.

Ring 0 is bad, but malware does get SYSTEM all the time, and it can inhibit booting at that point just the same.


Blaster infected millions, as did Sasser.


I saw it a couple of days ago…


Most people do not (deliberately) give malware root access to their PC


Most enterprises do


Depends what you consider malware, apparently.


> I've never seen malware that infected millions of devices in seconds and made them unusable for days and maybe weeks to come.

Yes, well, that's because we have built much more resilient systems, of which anti-virus is a substantial part. You could start with the Morris worm, but a more recent example would be the Stuxnet cleanup efforts (largely done with AV).

AV is (thankfully) becoming less and less relevant, but it's still a valid layer of defense for many platforms.


Yeah, this logic is absolutely deranged and 100% stems from a hate of forced AV on work machines or something.

Pretty much every common modern-day security defense is reactive, in that you can point to a time when we didn't have it and pretty easily see the consequences of that.


I would also like to add:

“Any security product which has a rootkit, remote command and control and egresses data is indistinguishable from malware”.


I wonder if the root cause of this is the notion that one can tack "security" onto a system or inject it into a system, instead of it being a holistic perspective, with appropriate use of sub-components and rules.


The proximate cause is companies handing over total control of their systems to opaque security racketeering quacks. And the root cause of why a company would do that gets right to the heart of the reason why "security to check the boxes" is the phrase that's been going around in the past few days.

Any security that isn't done layer by layer in depth must be "tacked on" and try to know everything about a system at once and adapt in situ. Which is of course impossible on any given machine, you say. "But what if we leverage the power of the crowd?" said someone.


Many compliance frameworks require tools like crowdstrike. If you don't have endpoint detection, no SOC2. No SOC2, and you'll be excluded as a vendor from many places


In particular, CrowdStrike uses rootkit technology to prevent local admins from being able to uninstall it.


Yep. It hides itself from lsmod and the sysadmins on Linux as well.


Except for owner consent, which in the case of corporate machines is unambiguously and irrefutably the corporation, as much as everyone here seems to despise that reality.

All these “blah blah blah is indistinguishable from malware” things aren’t profound, smart, or even witty. They’re spouted by the peanut gallery that has the luxury of not being responsible for deciding whether or not to use one of these systems.

Needing to explain to techies that ‘informed consent matters’ speaks to an utterly saddening stereotype.


Techies are the only ones who can give informed consent, and we're constantly overruled by the risk department, because the glue eaters over there think that a sleek presentation means the saleswoman on the other end knows what she's talking about.


> Except for owner consent

There isn't any. None that is meaningful. Sure, you can trick someone into 'signing' something, out of desperation and confusion. But the average person has no capacity. It's not that people are stupid; they are simply not informed, nor are they ever presented with anything that even looks like an actual contract. This is the colossal elephant in the room of digital tech.


Crowdstrike isn't installed by the average person. It's selected and installed by an organization's IT and/or infosec teams. Just like every other piece of enterprise security software.

Those teams 100% have the capacity to make an informed decision.


> 100%

Not sure about that. Groups of professionals don't appear better at navigating this space than individuals. I'm sure you've sat in such agonising meetings too. Common experience: They're hellholes of group-think, risk aversion, inertia, legacy constraints, resistance to change, pressure to reach fast decisions, duress or undue influence from salesmen and 'partners'.

Have you ever seen a company of any size actually sit down and open-mindedly weigh up a real and serious, evidence-based, long-term security plan built around risk analysis and a full network and service overview, with all real software options on the table and all stakeholders present? Companies made up of well-educated people with impressive job titles are as vulnerable to pitfalls and shortcuts as anyone else. They just operate, and fall victim to scams, on an organisational scale. Crowdstrike and other protection rackets are a way to make a problem go away, not to face its complexity head on.


For sure. After something that looked like a data breach (but turned out to be a hilariously funny glitch caused by a Chrome update that suddenly started translating one part of an app into Romanian) I was in on a lengthy pitch meeting for a similar endpoint security package from a company larger and more recognizable than CrowdStrike. After which I told the CEO of the company I worked for hell no there is no way we are putting this on all our machines and giving these idiots root access. They have no clue what they're talking about. Most of these machines don't even face users and they're talking about checking for suspicious links in emails employees open.


No they don’t. Most barely understand what they are proposing or the risks associated with the mechanisms being introduced.


... unless it is approved by MS. :)


The US government has given its stamp of approval, and a big push to install such solutions.

    FedRAMP-authorized: CrowdStrike's cloud-delivered solution meets the strictest federal standards.
    DoD IL5-authorized: CrowdStrike's solution is approved for use on the Department of Defense’s (DoD) Impact Level 5 (IL5) systems.
    JAB High-ready: CrowdStrike's solution is validated, tested, and certified for use in hybrid, multi-cloud environments, meeting the Joint Authorization Board (JAB) High requirements.
    OMB Memo M-22-09: The Office of Management and Budget (OMB) mandates a Zero Trust security approach for Federal Civilian Executive Branch (FCEB) and DoD systems.
    OMB Memo M-21-31: The OMB directs investigative and remediation capability improvements.


Well now we know those agencies are either morons or in on something we don’t know about.


I'm going with option 3, taking money from lobbyists without giving a f** about the consequences to national security.


From a national-security perspective this sort of backdoor is a great idea, as long as you think you're the only one with the keys. You get to inject your own stuff into any system with the approval of some secret court... What is better than that?


What are they in on? The fact that bad actors are constantly targeting government infrastructure and that this kind of antimalware is a key part of the tools for defending against it?


If you run a properly designed operating system, your anti-malware will not need ring-0 access. See macOS, which has now deprecated kexts altogether and will only load them if you explicitly turn off system integrity settings.


I don't think this follows. Those vendors are third parties and reach for whatever they can get. Yes, if Microsoft didn't allow kernel extensions then CrowdStrike would run as SYSTEM in userspace, but that doesn't tell us whether they need it or not; it only tells us that they want it.

Based on other comments it can run as a kernel module or as eBPF programs on Linux. So I guess to them it's a less-invasive/more-power tradeoff, and they'll take the more powerful option whenever it's available.


macOS only doesn't need driver loading because Apple knows exactly which hardware it runs on and links those drivers into the kernel. This is not applicable to Windows systems.


Windows has DTrace and eBPF available. They chose not to use it.


I'm wondering if a bad DTrace or eBPF expression/filter could cause a blue screen. I'll bet it could be done.


Got my answer from the (currently top) answer here:

https://news.ycombinator.com/item?id=41030352

eBPF can cause Linux kernel panic


eBPF:

> The program does not crash or otherwise harm the system.

https://ebpf.io/what-is-ebpf/#verification


That’s a pretty big claim. If they have software that can guarantee a program will never crash it would be revolutionary, and could probably solve the halting problem.


There is a perspective that the architecture of much anti-malware in general and this anti-malware in particular actually introduces new back doors where there weren't any before.

So while anti-malware might have some merits, on balance much of it would be a detriment to security, from this perspective.

People with this perspective are feeling spectacularly validated today!

As often, it's wise to have a nuanced view of course.


I simply don't understand one thing: why didn't such a company have/use a staged rollout?

It's normal that from time to time someone will fuck up, but if they had rolled this update out to only 1%, seen that none of the machines that updated were coming back, and bailed out, this would have been literally 100x less disruptive than what happened.
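Conceptually the fix is tiny. A toy sketch of "push to 1%, wait for health ACKs, bail if the updated machines don't come back" (all names are made up, nothing to do with CrowdStrike's actual pipeline):

    // Toy staged-rollout loop: widen the rollout only while the already-updated
    // cohort keeps reporting healthy. Hypothetical types, for illustration only.
    struct Host {
        healthy_after_update: bool,
    }

    fn fraction_healthy(cohort: &[Host]) -> f64 {
        let ok = cohort.iter().filter(|h| h.healthy_after_update).count();
        ok as f64 / cohort.len() as f64
    }

    /// Returns how many hosts were exposed before the rollout finished or halted.
    fn staged_rollout(fleet: &[Host], stages: &[f64], min_healthy: f64) -> usize {
        let mut exposed = 0;
        for &stage in stages {
            exposed = ((fleet.len() as f64 * stage).ceil() as usize).min(fleet.len());
            // "Push" to this slice, then wait for health ACKs (simulated here).
            if fraction_healthy(&fleet[..exposed]) < min_healthy {
                return exposed; // bail out: only this slice ever got the update
            }
        }
        exposed
    }

    fn main() {
        // A 1000-host fleet where the update bricks every machine it touches.
        let fleet: Vec<Host> = (0..1000)
            .map(|_| Host { healthy_after_update: false })
            .collect();
        let exposed = staged_rollout(&fleet, &[0.01, 0.10, 0.50, 1.00], 0.95);
        println!("halted after exposing {exposed} hosts"); // 10, not 1000
    }

Even a single 1% stage with a short soak time turns "every machine everywhere" into "a few thousand machines".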


I’ve heard quite a few, unfortunately hearsay, reports that CrowdStrike’s engineering practices are not good. Stories I’ve heard:

• Developers building binaries on their own local systems (identified via metadata) that are then pushed out globally. Inference: 1 xz-insider = global ring0.

• Customer success teams sending 50 page reports on why they can’t do staged rollout (“respond to 0days”), but reality is they’re unwilling to invest in the eng infra to do so.

• When they broke Linux endpoints a few months ago, “post mortem” was ignored and nothing remedied.


> “post mortem” was ignored and nothing remedied.

Losing billions of dollars in stock market cap, recovery costs, and possible law-suits tends to focus the mind on these things.


> Losing billions of dollars in stock market cap

Still up 20% ytd from an already high valuation, so they have a long way to fall.

If they don't fall all the way to zero (spoiler alert: they won't) they got off easy.


I've been wondering the same, then wondering: is it possible that they did do a staged rollout, but the cadence was insufficient? The only way for them to receive feedback would be out of band, so they might not have budgeted for the reaction time they should expect.


They'd also have feedback in the sense of "an update was pushed out and none of the updated machines have given us an ACK of 'health: OK' after rebooting".


Given the fact that the software itself allows basically unfettered remote access to the "endpoint" machines it's installed on, one would think they'd get more than just an ACK when one boots up. But yeah, that would be a reasonably low bar to check on a few dozen devices before bricking everything in the world.


It was a data/content update, not code, so more than likely it wouldn't crash immediately. How long would the test VM/container wait before it gives the green light on uptime? The buggy code would have to process it, after some time/trigger and other factors, for the BSOD to show.

They may not have done any testing at all, because they were adding a named pipe to be monitored, kind of like adding a domain or IP to be monitored.


My impression is they don't test content updates the same way they do code updates, or at least not at the same scale or with the same test cases.


You will not notice a good cyberattack; it is completely stealthy. Control over a machine has value, so an attacker may even patch security holes (to prevent other attacks) and remove other malware!

There is a story from the early 2000s, when some guy from Eastern Europe did network administration in exchange for fast connectivity and warez hosting. The "customers" had no idea, and he kept their servers running for a few years longer.


Do you have more info on the story?


Now that this is being discussed and Nvidia is finally transitioning to an open-source kernel implementation, can we get MS and mainstream Linux distributions to shun kernel extensions like Apple did? Software like CrowdStrike, anti-cheat, and the rest operates like a rootkit and basically owns your machine. Go user space or go home.

Allow it but make it hard. Put lots of warnings. Require enrolling custom keys.


Linux already considers kernels with proprietary modules tainted. And with secure boot you can lock it down so that root can't load kernel modules. It's up to the distro and user to decide whether they want to allow it or not.


The issue here is that there is no reason for an organization that mandates this sort of solution not to mandate it in whatever distro they use or allow on their machines.

In the end, unless the system is entirely locked down by the vendor without any compromise, that will happen... And most of the time these customers will choose something that is not entirely locked down, so they can have their security thingy...


> nvidia is finally phasing to an open source kernel implementation

Like, for real - as in, Debian would be happy to include it in their main repo? Or lip service? (Assuming you meant their graphics drivers)


https://developer.nvidia.com/blog/nvidia-transitions-fully-t...

Not upstreamed yet so I’m guessing it will take a while to reach Debian.


Wow. Exciting! Fingers crossed


My recollection is that when they first started doing this a couple of years ago, they did it by pushing as much as possible from the driver to the device firmware, which is still not open source. I assume that the open source driver is in Debians repos by now.


That doesn’t fix the issue being discussed at all though. All you’re doing is making it harder for the corporate IT departments that are going to install this anyway.


I don't think so, because doing so would mean those IT departments lose OS support, which is more critical. This will force those "benign rootkit" companies to implement their software properly to remain on the market. For reference, CrowdStrike on current macOS is a network extension.


You mean shun closed-source kernel extensions? Because that open source Nvidia driver IS a kernel extension. So is my NIC driver.


Shun out-of-tree kernel modules. The point is now that Nvidia is going open source, it'll be possible to get its driver into the mainline tree.


Yes, please shun away kernel extensions for all real-world production settings. You'll probably always need to run kernel-level anti-cheat malware on your gaming rig, but a production machine is not a toy and should not be subject to the same standards.


That's bullshit too.

I can build a webcam with an rpi that will play CS better than any human and only communicate with the gaming PC through a mouse and keyboard.

Anti-cheat systems need to look for movements that are superhuman; no rootkits required.


Not to mention the fact that if one of those companies gets infiltrated, the attacker will have kernel-level access to millions of computers. There is also the fact that some of this anti-cheat software is bundled with popular games produced in China.


> Anti-cheat systems need to look for movements that are super human, no rootkits required.

That won't be effective for wallhack/ESP-style cheats.


Those kinds of cheats should be fixed by the server not sending information to the player's client that the player isn't supposed to know.
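For anyone wondering what that looks like in practice, it's essentially a visibility filter applied on the server before the snapshot is serialized. A toy sketch (hypothetical types and a fake distance check, not any particular engine):

    // Toy server-side snapshot filtering: only entities the viewer can currently
    // see are serialized into that client's update, so a wallhacked client has
    // nothing extra to reveal.
    #[derive(Clone)]
    struct Entity {
        id: u32,
        x: f64,
        y: f64,
    }

    // Stand-in for a real line-of-sight / occlusion test.
    fn visible_to(viewer: &Entity, other: &Entity) -> bool {
        let (dx, dy) = (viewer.x - other.x, viewer.y - other.y);
        (dx * dx + dy * dy).sqrt() < 50.0
    }

    fn snapshot_for(viewer: &Entity, world: &[Entity]) -> Vec<Entity> {
        world
            .iter()
            .filter(|e| e.id != viewer.id && visible_to(viewer, e))
            .cloned()
            .collect()
    }

    fn main() {
        let player = Entity { id: 1, x: 0.0, y: 0.0 };
        let world = vec![
            player.clone(),
            Entity { id: 2, x: 10.0, y: 0.0 },  // in range: gets sent
            Entity { id: 3, x: 500.0, y: 0.0 }, // never leaves the server
        ];
        println!("{} entities sent", snapshot_for(&player, &world).len()); // 1
    }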


That would be extremely limiting to do because of the constraints imposed by networked gaming, which requires that some data be available locally to the game to allow for immediate feedback.

Just imagine if a web page had to send a network request to control a toggle or to show the contents of an accordion; that's what the game would turn into if this were to be done.


Despite popular opinion, it's the gamers who are requesting anti-cheat. Game companies couldn't care less if there are cheaters. If Windows shuts that down and cheating becomes easier, Windows will have a problem with gamers.


Any anti-malware software will become malware itself. You become the enemy you fight.


It's easier than that: do not trust non-open-source software, no matter how many "seals of approval" it has. Now let's spread the word so that in 10 years companies start to think the same way.


Yep, because open-source code never has issues.

So many idealists looking to make this a “closed-source bad!!!” thing and in the process muddying the waters enough to take attention away from remedies that might actually work.

All while they sit there getting paid $500k/yr to write closed-source software at FAANG or a startup, which to them is Technically Okay because they work on some sort of SaaS product, thereby alleviating them of the economic realities of Everything Being Open-Source.


Who said anything about open source not having issues? I talked about trust. Open source can be trusted simply because the code is scrutinised by many if the software is that important. CS cannot be trusted by anyone, because you simply don’t know how they develop their software. Yes, I do work for a private company because otherwise I cannot pay the bills. Companies on the other hand do have the privilege to choose what kind of software they can use (unless the regulation says otherwise, which is in itself something to fix too, but I do lack knowledge in that field to suggest anything)


You have a weird definition of "trust." I keep asking this, but how would open sourcing the software prevent the global rollout? "Open source" doesn't mean "no auto-updates." Strictly, it doesn't even mean that you are legally allowed to modify the source code to make it not update automatically.


>Strictly, it doesn't even mean that you are legally allowed to modify the source code to make it not update automatically.

Most definitions of open-source (e.g. OSI) include rights to modification. Without modification and distribution rights, programs are normally referred to as source-available instead. This is a common complaint on this very forum when companies misleadingly market their SA code as OS.


Right, but being able to "trust" because it's "open source" makes me think trust comes from the ability of read the source code, not modify it.


That requires trusting that the available source is actually what's running on the machine, which is not much better than trusting that a closed-source program is correct. Open-source software is more trustworthy not only because it's inspectable, but also because you can decide to run precisely the code you see, so there's also accountability for the code that is run.


Well, we live in a world where we allow this kind of thing to happen. Imagine if we lived in a world where you still had to manually push and pop the stack (because functions are deemed too academic and not optimized exactly the way I want them). That is what it feels like watching critical infrastructure written in C or these older languages.

Having to manually push and pop the stack is not something to be proud of; only do that out of necessity. There's a very reasonable option between C or C++ and Python (or whatever is deemed slow). Using C doesn't prevent you from designing a shitty application that uses shitty algorithms.

"Ooohhhh, I get it, this pointer is no longer valid because the array has been reallocated, because that function gained this side effect after that commit," he said as he gave a small smirk at the hilarity of the bug. I wonder if he would find it as amusing if his coworker regularly rolled up his sleeves and randomly wrote assembly directly in the source code because compilers aren't smart enough to perform a particular parameter-passing optimization that he really loves to do.


The US government agrees with you that unnecessary use of very unsafe languages is a serious cybersecurity problem. [0][1]

A slight aside: Rust is typically held up as the obvious safer alternative to C/C++, as if Ada never existed.

[0] https://stackoverflow.blog/2024/03/04/in-rust-we-trust-white...

[1] https://news.ycombinator.com/item?id=33560227


I recall Ada being strongly pushed, but at the time you needed a top of the line workstation ($60K - 100K in early 1990s dollars) and the per seat license for the compiler and other tooling was another $100K. And that was aside from the much longer/harder path to getting to a solution. We had two teams implement a system - one in C and one in Ada. The teams didn't know they were in competition with one another, but the much less experienced C team completed their solution so much faster than the Ada team, that they just dropped the Ada effort as unrealistic for use outside of specialized aerospace.


I agree that the late arrival of a serious Free and Open Source Ada compiler did tremendous damage to Ada's adoption. I still think it reflects poorly on the software development community that Ada has been almost completely ignored despite that obstacle having been resolved decades ago.

> The teams didn't know they were in competition with one another, but the much less experienced C team completed their solution so much faster than the Ada team, that they just dropped the Ada effort

I realise you've given an abbreviated account, but it implies that:

1. No regard was paid to the relative quality of the 2 solutions

2. They ignored that initial development effort tends to be outweighed by ongoing development and maintenance

3. Many programmers are already familiar with C but not with Ada


I love Rust, but Rust is not particularly good either because "panics" are accepted in the Rust community.


Panics are safe though (they're a controlled crash). The safety we're discussing is not related to program stability.


A blue screen of death as caused in this case is also a controlled crash, in fact. The processor has fired an interrupt indicating invalid memory access and piece of windows code does some emergency logic, namely, dump memory so you can maybe diagnose it later, and reboot.

The reboot part happens because the system is assumed to be in a bad state and allowing it to continue would possibly corrupt data, or in the worst possible case execute exploit code.

This panic handler runs in the same privilege as the faulty driver and can itself be prevented from running correctly. Notably file system drivers are required to function correctly to write the memory dump. If they, or filter drivers attached to them, also fault, well, fun times.

You can have faults in an interrupt handler too, for example trying to access paged memory in a page fault handler. That'll trigger a double fault handler and if you fault in that, the processor will perform a reset and not bother even notifying software. Luckily the double fault handlers and other such cases are usually solely the preserve of OS vendors.

I have no particular point except to illuminate what's going on (x86 terminology is used here): actually recognizing an invalid state and aborting is exactly what's happening here, and it is what Rust memory safety does. In spite of the disruption, that's better than silently corrupting data.


Forgive my ignorance, but to your last point about Rust aborting an invalid state… Isn’t Rust considered more memory safe because it catches many mistakes at compile time and not runtime?


> Panics are safe though (they're a controlled crash).

Here's Linus's commentary on that:

https://lkml.org/lkml/2021/4/14/1099

> I think that if some Rust allocation can cause a panic, this is simply _fundamentally_ not acceptable.

> Allocation failures in a driver or non-core code - and that is by definition all of any new Rust code - can never EVER validly cause panics.

Panics are not acceptable in countless contexts. Plenty of things need to be written to keep working through entire categories of errors. The casual attitude of Rust developers towards error handling is one of the many reasons people have trouble taking it seriously. Reliability and robustness are generally more important than language memory safety in almost all contexts.
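For what it's worth, stable Rust does expose fallible allocation (e.g. Vec::try_reserve), so "allocation failure must not panic" is expressible; it just isn't the default. A minimal sketch:

    use std::collections::TryReserveError;

    // Reserve space up front and surface allocation failure as a Result,
    // instead of relying on the default path (an infallible allocation that
    // aborts/panics the program on OOM).
    fn buffer_with_capacity(n: usize) -> Result<Vec<u8>, TryReserveError> {
        let mut buf = Vec::new();
        buf.try_reserve(n)?; // Err on allocation failure rather than a panic
        Ok(buf)
    }

    fn main() {
        match buffer_with_capacity(4096) {
            Ok(buf) => println!("reserved {} bytes", buf.capacity()),
            Err(e) => eprintln!("allocation failed, degrading gracefully: {e}"),
        }
    }

Whether a given codebase actually uses it, as opposed to unwrap-and-panic, is the cultural problem being pointed at.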


There are indeed many cases where errors need to be recovered from and the subject of one angle in secure rust code training was quite literally "don't just panic, don't blindly unwrap or leave errors unhandled because that'll kill your thread/process on failure, you should still code for failure cases". If you do, you are coding denial of service bugs.

But, in the incident in question, the code is fundamentally not correct. A spatial memory safety violation, or in plain English "trying to call functions or use data that isn't at addresses your code or data lives at," is fundamentally an error. There's a missing part of the state machine to detect it and stop before just exploding. In userspace this is a segfault and your process dies. In the kernel, you get a bugcheck and the whole system reboots.

There are scary alternatives. The first, in kernel, is that you suppress all invalid writes and allow the errant code to keep writing, until it hits some other data. The system stays up, but you have out of control data writes so who knows what that's doing.

The second is that the execution flow of the process can be hijacked, i.e. Sergey Bratus' weird machines, or in plainer language, owning kernels in critical infrastructure. This is usually undesirable.


Panics in a user-space application are likely safe and the correct thing to do.

Panics in a real-time system or a kernel are quite possibly not.


In a hospital system nobody cares whether it was the kernel or the application that caused people to die.


This incident, the blue screen of death, is exactly the same as a panic.


The problem here was that the kernel process got a fault, so a panic wouldn't have made a difference.


Panics are only safe if you have an OS to catch you. They are definitely not safe in the CS context.


Some of the most reliable software I use is written in C. Software I have used for decades and which has never crashed on me. I can't say this about much other software.

Clearly, you can write shitty software, and this is also far too easy in C, but there are also many tools and techniques one can use to write reliable software in C. Rust is also a tool that can be used to write more reliable software, but it is not clear to me why this should suddenly make the big difference. People who want to write bad software can also sprinkle "unsafe" everywhere, and I would guess this will happen a lot more once Rust is adopted more. And in the CrowdStrike case, Rust's default behavior of panicking might not have necessarily prevented the problem.

Having said this, regarding security against malicious hacking, memory safety is indeed one important component and Rust clearly has an edge there. In terms of general software bugs, switching to languages other than C/C++ is certainly not a magic bullet, and I often doubt that it would even be an improvement in most scenarios.


> Some of the most reliable software I use is written in C. Software I use for decades and which never crashed for me. I can't say this about much other software.

After several years of effort. I have written several apps in JavaScript, Rust, C++, and Python. Of these, Rust is the only one where I can safely NEVER look back and still be satisfied. Everything else just left me wondering if I had missed an enum or something stupid like that.

> People who want to write bad software can also sprinkle "unsafe" everywhere, and I would guess that this one happens a lot more once Rust is adopted more.

People actually _don't_ like to use unsafe (some people do, surprise, surprise; they are usually C folks). Most people that I see usually want to get out of unsafe as quickly as possible. Why not? Believe it or not, people actually don't want to deal with tagged enums, or parse JSON themselves, or care about how to properly set up SIMD. They just use correct APIs so they can sleep at night instead of staring at stack traces.

Rust is not perfect, true, but it is way, way better than C. How much software has been written in another language, and then the authors decided to rewrite it in C? When this does happen, it is usually for something that is very well defined already. I haven't heard of anything significant recently. I think there's still a place for C, as it provides a very stable and reliable API.

> switching to other languages than C/C++ is certainly not a magic bullet

It's definitely a magic bullet that kills a LOT of problems though? No one is saying it fixes everything. If we keep using C or C++ for the next 1000 years, we will have the same sets of problems. Things will NEVER improve.


This is a lot of "in my opinion it's better or worse." I disagree.

And yes, it happens that stuff is written in C. But it is not discussed on HN every time, unlike the many "XY written in Rust" articles, because the latter apparently is still newsworthy.


It took corporate America this long to realize this. I once worked at a facility that handled classified information, and the shit they addled my PC with... it was clear that the idea was not security, i.e., preventing attacks. It was having an audit trail so they knew whom to call on the carpet/arrest if/when attacks eventually occurred. It relied heavily on backdoors into the system so that the CPU state, RAM, network, disk, etc. could be snapshotted and analyzed at any time -- and of course, once you have a backdoor, so does an attacker. Granted, my computer was not cleared to handle actual classified info, but what was on it was still sensitive from a national-security standpoint.

When the Clownstrike incident occurred, the phrase "fucking for virginity" popped into my head to describe what software like that does in the name of "security".


Isn't that just a reformulation of Hanlon's Razor? [1]

[1] https://en.wikipedia.org/wiki/Hanlon%27s_razor


More a reformulation of Clarke's third law:

https://en.wikipedia.org/wiki/Clarke%27s_three_laws


These statements always feel a bit circular to me. Sufficiently bad for what? Sufficiently bad to be indistinguishable from a cyberattack. So, "a software update bad enough to look like a cyberattack looks like a cyberattack". Well, yes.


Software updates and cyberattacks are, at first glance, categorically different things. The idea that a sufficiently bad update is indistinguishable from a cyber attack is a statement that they're not, in fact, categorically different.

E.g. supply chain attacks have become a hot topic in the last few years. This event suggests that your threat model for supply chain attacks should include catastrophic vendor cockups.


From a risk management perspective they’re different things, and are managed differently. This tweet is just a dumb rehash of an old platitude that’s trying to bandwagon some social media engagement. Nothing interesting to see here, unless you’re itching to dump on CS some more.


It's just a play on any sufficiently advanced technology is indistinguishable from magic. I don't think you need to think about it too much.


One of my favorite personal anti-virus anecdotes from an enterprise environment is when its file watcher got into a loop with the development environment and IDE file watchers, and the machine was consuming maybe 60-70% of RAM and CPU in an idle state.


Clearly the problem is that the system access many security packages need makes them nearly indistinguishable from malware and an obvious risk factor in the supply chain. Of course, having the cause of, and supposed solution for, safety and operational capability be the same piece of software leaves open quite a few questions.


"Any sufficiently bad software update is indistinguishable from a cyberattack"

Who says that?

Probably a wise lawmaker.

Whose law is this?

Leigh's Law?

Sounds right to me :)


Sometimes we forget that cybersecurity is not just about protecting against hackers. There are a bunch of threats from the inside – and they don't have to be malicious. Failing hardware, bad software updates, and human errors are also threats that need to be considered.


Intent matters, but that's only one half of a problem. Improperly implied permission is an equal issue.

There are bad software updates that are not malicious, just inept, or an unfortunate accident that cause havoc. I think the Crowdstrike event was such a mishap.

And there are plenty of software updates that are plain malicious but hiding behind the "legitimacy" of an update. I'm thinking here of Amazon deleting the 1984 book, or printer "updates" that lock out third-party ink, etc. These are really violations of computer misuse acts and ought to be prosecuted - because they are indeed vandalism indistinguishable from an attack.

Maybe they're worse than a cyberattack, because they are harms that abuse a privilege.

Because companies have not been prosecuted but have been allowed to get away with this sort of crap for decades, we're in a sticky situation now.

There's a whole spectrum of intent between sincere security updates that go wrong and spiteful for-profit sabotage. People need educating that if you allow anyone remote access to your computing property, no matter what their credentials and bona-fides, they are in a position to massively abuse that trust. Just because someone sold you some hardware or software doesn't mean they continue to have your best interests at heart or any rights to interfere with your property.

All software and hardware should by law come with the ability to lock out the original vendor, supplier and to reliably stop egress and your device from "phoning home with telemetry".

The customer is buying a product not a relationship.


No, it is not. Beyond overall poor software choices, there is no notion of persistence in a bad software update.


Any sufficiently bad data leak is indistinguishable from a cyber attack.


Yeah they are, as in this case.


Everyone knows the only thing that can stop a bad guy with a rootkit is a good guy with a rootkit.



