
Absolutely this - most ransomware attacks are pretty unsophisticated. You don't need privilege escalation, or an exploit. You can carry out the attack using just basic user permissions. You are exploiting a basic "problem" of most modern OSs (that apps run "as" the user executing them) - the user/group permission model ceases to work in 2021 with non-expert users. Portal-based access to individual files via secure OS-provided portals (i.e. like on Android/iOS/Flatpak) helps prevent apps from needing access to every file on the filesystem, but until those are widely adopted, it will be increasingly difficult for "normal" organisations to prevent ransomware attacks.

You can prevent ransomware fairly simply by following best practice, and taking some steps that most companies will feel are excessive (but effective), such as whitelisting binaries, preventing the running of any binary not on that whitelist, and keeping that whitelist up to date in near real time. Nobody wants to spend the time doing this, so they leave it a "free-for-all".
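The allowlisting step described above can be sketched as a hash-based gate. This is a minimal illustration, not a real enforcement mechanism (real deployments use OS facilities such as AppLocker/WDAC policies); the hard-coded digest set is an assumption for the example:

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests of approved binaries.
# In practice this would come from a centrally managed, signed policy,
# not a hard-coded set.
ALLOWED_SHA256 = {
    # SHA-256 of an empty file, used here purely for illustration
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    """Return True only if the binary's hash is on the allowlist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large binaries don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in ALLOWED_SHA256
```

The "default deny" posture is the important part: anything whose hash isn't known is refused, which is exactly the step most organisations skip because of the maintenance burden.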

Exploiting user-level access is just the natural escalation now that getting good exploits is more costly and difficult. Now attackers will "make do" with what they have. IT can win the battle, but with inconvenience, friction, and increased costs in IT.

There are important businesses that are "critical infrastructure" still using Windows 7 on their corporate day-to-day let-me-check-my-emails-and-browse-the-web laptops, without extended support. Organisational inertia and a lack of recognition that they need to pay for the technology that enables their business lead them to this position.




> most ransomware attacks are pretty unsophisticated

As weird as it sounds, this is both correct and incorrect at the same time.

It is correct, because ransomware is not particularly sophisticated by today's standards. A couple of decades of R&D have made the building blocks robust and uninteresting.

It is also correct in the sense that the attacks used to breach systems are unsophisticated. A vulnerability is published for an internet-facing system, and within just a couple of days the underground toolkits are already (ab)using it.

It is incorrect in the sense that the crews who breached the systems are not the crews who deploy ransomware. Computer crime has evolved to a fully functioning economy, with high specialisation among its participants. Crew A reverse-engineers patches, updates their vulnerability exploitation engines and goes on to breach systems. (In a race against time, because there are other crews doing the same.) They then sell access to crews B, C and D.

Crew B are after financial information and will exfiltrate anything that can be sold to morally ambivalent hedge funds. They may also grab R&D material, because corporate espionage is a thing. Crew C will grab all the personally identifiable data, and have intimate knowledge of how best to monetise it for various types of fraud.

Crew D will deploy the ransomware, because they have all the sophistication you need to run their extortion operations at scale. These days this includes the ability to handle massive volumes of off-site backups, because why not. "Pay up or we leak it" is a perfectly valid extension to their business model.

The gangs I referred to as "Crew A" are known in the industry as Access Brokers. There are of course other operators too who work in a more asynchronous fashion, such as money launderers.

The economy powering the criminal enterprise markets is certainly sophisticated. And while most of the technology in use doesn't merit that word, the internal operations these gangs run certainly do.


A really good point - we should distinguish between the sophistication of the attack and that of the attacker. These are clearly highly organised and sophisticated attackers, many working in shifts etc.

If it gets in through an access broker, you're definitely looking at a sophisticated outfit of attackers.

I guess I'm approaching this as the defender - if the malicious code isn't exploiting anything that needs patching (other than the decades-outdated assumptions of a threat model where any binary can act inseparably "as" the user), then the actual ransomware is harder for most organisations to prevent, because all the friendly hand-holding advice they receive from police and governments doesn't save them (patching desktop systems won't stop the file-encryptor payload from running on the first host once a user runs the bogus docx.exe and clicks through the warnings out of alert fatigue).

It would be interesting if companies were more willing to (or required to) share details of ingress vectors, to understand the extent to which they're being breached through really advanced attacks involving reversing of recent patches, versus someone popping a Pulse Secure VPN that's been warned about for years. Or on-prem Exchange that they've continued to ignore all the warnings about because nothing is on fire. Or just a user clicking a link to a shared file mistakenly emailed to them, called CONFIDENTIAL - PAY SCALE 2022, which phishes their SSO credentials for 365...


I would like to see rate-limiting built into OSes.

Eg. an application is only allowed to touch 100 files per second or 1000 files per hour.

When it reaches those limits, it gets paused and a popup asks the user if this application really should be doing X.

Then at least ransomware can't run through stuff too quickly.
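The mechanism described above is essentially a sliding-window rate limiter keyed per application. A minimal sketch, with limits taken from the comment; the pause/prompt behaviour is an assumption, not any real OS API:

```python
import time
from collections import deque
from typing import Optional

class FileOpRateLimiter:
    """Sliding-window limiter: allow at most `limit` file operations
    per `window` seconds for a given application."""

    def __init__(self, limit: int = 100, window: float = 1.0):
        self.limit = limit
        self.window = window
        self.events: deque = deque()  # timestamps of recent operations

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen outside the sliding window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) >= self.limit:
            # This is where the OS would pause the app and prompt the user.
            return False
        self.events.append(now)
        return True
```

The OS would hold one of these per process, and a denial would suspend the process rather than silently fail the operation - so a file encryptor burns through its budget in well under a second and then stalls at a user prompt.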


Indeed - I think Windows Defender dabbled in offering this as a feature (Controlled Folder Access). I at least recall seeing programs prevented from creating files in the Desktop or Documents folders.

A rate limit, with group-policy controllable "automatic response" would perhaps help - you need the GPO integration though so that an IT admin can say "never allow file system rate limit to be exceeded".

If you enforce a rate limit locally, and on the network, and move to copy-on-write filesystems, it would be a whole lot harder to cause straightforward harm (at least while migrating to a newer, safer OS architecture paradigm, where code doesn't run as the user).

In the post-Covid world, I think MS and others have a whole host of these kinds of issues to think about - Windows in an AD environment is still (as far as I know) not something really geared for working off-prem. It still relies heavily on LDAP and CIFS etc. A re-write to get a desktop OS ready for the "web first" world (where everything is sent to the AD domain over TCP/443 using HTTPS, with client certificates rather than passwords, stored locally in hardware-backed secure storage, and trusted CAs used by the DC) would be a big first step towards this. Yes, I know you could use DirectAccess or whatever MS has butchered into the system, but in a world moving to zero trust, MS needs to move to zero trust.

Rate limits would be a great starting point, as would some proper platform-level protections around preserving shadow copies, using copy-on-write, and locally preserving versioned user files as a priority. As soon as a ransomware attack touches the network, IT should be able to handle it, as their backup regime should take effect. At that point, if you don't have backups sufficiently separate from user-writable files (or you never validate them, and thus don't realise you're backing up transparently encrypted ransomware'd files for months), you're on your own!


My business involves working a lot with such situations, and frankly speaking, none of the above would help in the least bit.

Cost cutting is probably the biggest threat to most businesses. The mythos of hyper-converged infrastructure, with datastores and backup repositories hosted on the same physical device, is some sort of infection that just cannot be wrenched from people's heads.

IT professionals (not managers, not hapless non-techies, but actual people with a cornucopia of certs and accolades on their LinkedIn) are in denial about how to design infrastructure that can respond to ransomware. At this stage and for the foreseeable future, ransomware is an inevitability; it's not "if" you get attacked, but "when". I've lost count of the conversations where a group of people from the IT department theorycrafted a perfect defense, only to get attacked because one of them clicked on a random Excel document from a spoofed email.

When clients ask me "what do we need to do to protect against ransomware" and I explain what airgapping means (tape, removable drive arrays), we're either ignored, or they say they accept it but then just don't have the discipline to follow the required practices.

Modern IT prefers cargo-cult security: IT professionals love their checklists from some organization, even though most of the checkboxes are useless against ransomware. But the professional can deflect responsibility because "hey, I checked all the boxes."

Until technical professionals as a whole start to take security seriously and exhibit the discipline it requires, ransomware is going to remain prevalent. No amount of rate limiting from vendors will help, because users will simply not run such versions, will disable the limits, or will work around them in any of dozens of ways, because the limits would be inconvenient (never mind that the limiting tooling itself will probably just be exploited).

We need discipline first, not tooling to try to correct for lack of discipline.


Behavioral heuristics are best learned in situ; you need to know how the software is used, and with which data, to correctly profile normal behavior. Some users and workloads hate sandboxes, though, and a familiar "Run as Administrator"-style escape hatch, demanded by those users, will no doubt destroy its utility. Ultimately, someone must correctly articulate what the system is supposed to do, and this requires knowledge.


Okay, but these particular heuristics aren’t rocket science. Is a process rewriting 25% of my hard disk, and/or 10% of one of my backup drives? Time to send an alert to the user, and to an IT admin if this isn’t a personal device. There are very few legitimate use cases for that.
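The threshold heuristic above is simple enough to sketch. The 25% figure comes from the comment; the per-process rewrite counters are an assumption for illustration - a real implementation would hook the filesystem (e.g. a filter driver) rather than poll counts:

```python
# Fraction of watched files one process may rewrite before we alert.
ALERT_THRESHOLD = 0.25

def processes_to_alert(rewrites_by_pid: dict, total_files: int) -> list:
    """Given a map of PID -> count of files that process has rewritten,
    return the PIDs whose rewrite fraction crosses the threshold."""
    if total_files == 0:
        return []
    return [
        pid
        for pid, rewritten in rewrites_by_pid.items()
        if rewritten / total_files >= ALERT_THRESHOLD
    ]
```

A backup tool or indexer reads widely but rewrites little, so a rewrite-fraction threshold catches the encrypt-everything pattern while leaving most legitimate workloads alone.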


Had to troubleshoot Windows software from a MAJOR shipping provider that popped up a “you must do this thing” prompt on a fully up-to-date Win10 system today.

“The thing” would not work as an unprivileged user account and would only work as a right click run as administrator situation :-)




