I cannot emphasize enough the importance of backups. Take backups, verify your ability to restore from them, and keep them segregated from the rest of your infrastructure. It doesn't matter how inelegant and hacky your backup solution is, so long as you can restore from it. Any backup you can restore from is better than no backup.
You might get a call from one of your application engineers shortly before bed on a Friday night that the web front-ends are acting weird, and they can't get in to troubleshoot, and then 10 minutes later come to discover that the latest strain of Ryuk has laid waste to 2/3s of the servers and workstations across the company. And then all of a sudden, those VM snapshots you'd been copying off to another file share with a shell script have become your salvation. Yeah, containing Ryuk and the rest of incident response mode are going to suck, but at least now you don't have to write an apology to your customers that the data they entrusted to you has been irrevocably lost.
In case you're wondering, no, that did not literally happen to me. But it is a mild fictionalization of someone I know.
Keep backups, and test your restores regularly, people.
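If you want to take it one step further, even the restore test can be automated. A minimal sketch, assuming a nightly pg_dump -Fc dump and a throwaway Postgres instance to restore into (every path, name and threshold here is hypothetical; adapt to your own stack):

    # restore_test.py - nightly restore drill (sketch, not production code)
    import subprocess, sys

    BACKUP = "/backups/latest/app_db.dump"   # assumed output of pg_dump -Fc
    SCRATCH_DB = "restore_test"              # throwaway database on an isolated host

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # recreate the scratch database and restore the dump into it
    run(["dropdb", "--if-exists", SCRATCH_DB])
    run(["createdb", SCRATCH_DB])
    run(["pg_restore", "--no-owner", "-d", SCRATCH_DB, BACKUP])

    # sanity check: the restored data should contain a plausible number of rows
    out = subprocess.run(
        ["psql", "-tA", "-d", SCRATCH_DB, "-c", "SELECT count(*) FROM users;"],
        check=True, capture_output=True, text=True,
    )
    rows = int(out.stdout.strip())
    if rows < 1000:   # arbitrary threshold, tune to your data
        sys.exit("restore test FAILED: users table suspiciously small")
    print(f"restore test OK: {rows} users restored")

Wire that into cron and alert on a non-zero exit, and "test your restores regularly" stops depending on anyone remembering to do it.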
One of the first questions I ask is whether they have at least one fully independent, full/incremental off-site backup that can't be corrupted from the main infrastructure, and whether they have ever checked that it actually works and is restorable.
I'm continuously surprised how often the answer turns out to be no after a bit of digging, even in larger companies with otherwise well-run IT.
No, the automatic 7-day RDS snapshots or turning on S3 versioning is not a sufficient backup. Neither is mirroring to an S3 Glacier bucket in the same org, or rsyncing to a backup server in the same datacenter.
Backups are annoying and unglamorous. Nobody wants to do them, or do the tedious work of validating them or setting up something like an automated restore test.
It gets better (as in worse): someone can easily cut backup expenses, become a hero by "balancing the budget with no disruptions to operations", collect a fat bonus, and then, after a year or so, leave.
Their successors won't get any bonuses by increasing the budget for something that has no ROI.
This is a somewhat cynical take, but there are kernels of truth in there. Often, organizations not focused on quality will allow such things to happen. More than one startup has also been put out of business by not having a backup. But it’s often negligence / ignorance and not budget. When you put all the cards on the table for an organization and give them the information they need, they make much better security choices.
I think backups come up some time after MVP and launch for most organizations. That means you don't know when the need will actually arrive: someone with technical skills has to say, hey, at some point we need to put in the effort and money to get good backups and the ability to restore all of this data, and someone in charge of how the business grows has to say yes, we'd better do it.
Unless you refuse to open for any sort of business without a full backup system set up, you are running that risk. And with thousands of companies taking the same risk so as not to be out-competed by faster, riskier companies, someone will eventually suffer the bad outcome of their risk-taking.
> Their successors won't get any bonuses by increasing the budget for something that has no ROI.
Possibly, but not if the business has a culture of recognising downside risk. Single events that are not extremely unlikely and that could cripple the business are things that every company should be looking out for.
Three years ago, after doing YC Startup School, I built https://www.borgbase.com to offer the simple, but secure backup service I wanted myself. Today it’s a viable business and my customers are all great and value backups as essential part of their own business. Wouldn’t want to be in any other “more glamorous” corner of the industry. Also kudos to anyone - partner or competitor - working in this “unglamorous” space. You’re all doing great work.
Do you do offsite offline backups too, with verification? What if your infra gets really hacked and they wipe out all of your customers' backup data everywhere? Just because borg clients have append-only modes, it doesn't stop them from deleting the raw files on your drives.
If a storage server gets p0wned, the raw data could be deleted. That's true for every cloud provider. What's important, they still can't read the backup, since it's encrypted on the client. Storage servers are also isolated from each other and in different DCs, cities and regions.
Additional offline backups aren't really feasible past a certain data volume and daily change/velocity. I'd still encourage everyone to have them for their own essential data in addition to a cloud backup. E.g. by burning it to Blu-ray or tape (3-2-1 rule). You can find my own, more philosophical discussion of the topic here: https://docs.borgbase.com/strategy/. There I also distinguish between operational backups and archives. BorgBase is focused on the former. Offline backups are more suited for the latter.
> If a storage server gets p0wned, the raw data could be deleted. That's true for every cloud provider.
It's not hard to create a cloud backup service where delete requires separate credentials which are not used in day-to-day operations (and so can be kept secure). And without these credentials a backup is kept N days and cannot be deleted or overwritten. Don't know if anyone does this, though.
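For what it's worth, S3 Object Lock gets you most of the way to that model. A rough sketch with boto3, assuming a dedicated backup-only account (bucket name and retention period are made up):

    import boto3

    s3 = boto3.client("s3")  # credentials for a dedicated backup account

    # Object Lock has to be enabled when the bucket is created
    s3.create_bucket(Bucket="example-backup-vault", ObjectLockEnabledForBucket=True)

    # COMPLIANCE mode: nothing written here can be deleted or overwritten for 30 days,
    # not even by the credentials that wrote it
    s3.put_object_lock_configuration(
        Bucket="example-backup-vault",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
        },
    )

The day-to-day pipeline then only needs permission to put objects; the ability to prune expired backups lives with a separately guarded role that production never touches.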
> I'm continuously surprised how often the answer turns out to be no after a bit of digging, even in larger companies with otherwise well-run IT.
I am not even remotely surprised, because most companies are not IT oriented companies, and most of those either have managers that can't be made to understand the importance of DR, or (sadly) IT people who can't.
I myself am currently in a multi-year political battle to justify a mere $600/mo to get our DR env (built with old equipment that is on the verge of irrevocable failure) moved from a closet at a branch location to an actual co-lo that won't lose power and internet 3 times a week.
At least we finally got that backup generator approved for the corporate office after the third time a goose committed suicide on our power lines. Ugh...
> No, the automatic 7-day RDS snapshots or turning on S3 versioning is not a sufficient backup. Neither is mirroring to an S3 Glacier bucket in the same org, or rsyncing to a backup server in the same datacenter.
If you lose control of an aws root account it can take weeks to get it back. That’s probably enough time for the hackers to clean out the backups.
Billing issues can lead to aws wiping out an account.
For $work the backups are in AWS but using a different payment method, account owner etc to prevent cross contamination. Honestly, they should be outside aws entirely, but separate accounts is a good start.
> If you lose control of an aws root account it can take weeks to get it back.
> Billing issues can lead to aws wiping out an account.
These are small-scale related issues though. When you're at the level where it's hard to have a reasonable backup outside of AWS, you can also resolve the root account issues fairly quickly by calling your TAM directly.
This applies to average people too. I wonder who among us can say they meet your (reasonable) standard.
Like you said, backups are annoying and unglamorous. Yet, the data on my laptop is the only thing I could not replace. It's more important to me than my passport or my birth certificate. Its preservation is certainly worth a bit of thought.
> Like you said, backups are annoying and unglamorous.
It’s called having a network attached storage (NAS) device. I have a Synology NAS, which I back up to continuously, at 5-minute intervals.
Warning: Microsoft image and file backups sometimes do not work.
I recommend Acronis True Image instead, which comes with antivirus. It pretty much always works; never falters, never fails. Get the version that allows you to back up to the cloud with blockchain features. You will be happy you did.
Yes, of course. This is one of the reasons why I have 3 copies of everything I use (local/computer, cloud, NAS). This is particularly why I have a cloud backup, with blockchain data authentication.
Can you elaborate some more on your cloud backup solution? More specifically, I'm curious about the privacy/security aspect. It's the main reason why I've been somewhat hesitant to adopt cloud backups for my own computers.
I encrypt all of my information before I back it up. It is a risk that I am willing to take. Governments can always decrypt my information. Bruce Schneier, a world-renowned security expert who wrote "Click Here to Kill Everybody", states that he does not recommend storing information in foreign servers, in countries that you are not a citizen of. I suggest taking that advice.
Acronis is Swiss-based and has to comply with the GDPR, due to its direct ties with the European Union. I am a dual US|EU (Croatian) citizen, with legal rights to work/live/retire in Switzerland, so I do not have to worry about storing my information "abroad".
The EU has strict regulations, and they are only going to get stricter. There have even been talks about the EU being allowed to legally break encryption recently. If you are a third-country national (not an EU/EEA/Swiss citizen), then I do not recommend Acronis cloud backup or any other EU/EEA/Swiss service for storing your personal data. America may have its problems, and privacy may be a joke, but at least you have rights and sovereignty there.
I use rsync and hard links to create incremental backups, and Google Drive as a convenience backup. First to a collection point, then to remote locations.
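For anyone curious, the usual hard-link trick is rsync's --link-dest: each run produces what looks like a full snapshot, but unchanged files are hard links into the previous one, so only changed files take space. A minimal sketch (paths are placeholders, not my actual setup):

    import datetime, os, subprocess

    SRC = "/home/me/"                      # trailing slash: copy contents, not the directory
    DEST = "/mnt/backup/snapshots"         # e.g. a NAS or external disk

    new = os.path.join(DEST, datetime.date.today().isoformat())
    latest = os.path.join(DEST, "latest")  # symlink to the previous snapshot

    cmd = ["rsync", "-a", "--delete"]
    if os.path.exists(latest):
        cmd.append("--link-dest=" + latest)  # hard-link unchanged files from the last run
    cmd += [SRC, new]
    subprocess.run(cmd, check=True)

    # repoint 'latest' at the snapshot we just made
    if os.path.lexists(latest):
        os.remove(latest)
    os.symlink(new, latest)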
...really? What are you keeping on your laptop that is so important to you? I can't think of anything on my computer that I couldn't lose and pretty much just shrug off.
I have been heads down on my own side hustle as a one man show. Do you have any suggestions on how to properly approach backups for smaller groups like myself working with limited funds?
For context, some of the tech on my plate includes a few DO ubuntu droplets (hosting docker containerized services), Postgres DB (prod DO managed w/ auto-snaps, pretty much same scenario you mentioned), S3 storage for user-created assets (DO spaces w/ no backup strategy yet; no, not launched, yet), GitHub for source code, and physical MacBooks (iCloud + manual backups to a single physical external SSD).
Realistically speaking, I’m bootstrapped. More importantly, I’m interested in learning the right ways of doing backups.
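One common bootstrapped pattern, offered as a sketch rather than a prescription: dump the managed Postgres yourself on a schedule and push it to object storage at a second provider, under credentials that your DO account never sees. Something like (endpoint, bucket and env vars are all placeholders):

    import datetime, os, subprocess
    import boto3

    # 1. dump the managed database (assumes pg_dump is installed and DATABASE_URL
    #    is a full connection URI for the DO managed Postgres)
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/app-{stamp}.dump"
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", dump_path, os.environ["DATABASE_URL"]],
        check=True,
    )

    # 2. upload to an S3-compatible bucket at a *different* provider, using
    #    write-only credentials scoped to just this bucket
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.second-provider.example",  # placeholder endpoint
        aws_access_key_id=os.environ["BACKUP_KEY"],
        aws_secret_access_key=os.environ["BACKUP_SECRET"],
    )
    s3.upload_file(dump_path, "example-offsite-backups", f"pg/{stamp}.dump")
    os.remove(dump_path)

The same idea covers the Spaces bucket (periodically sync it to the second provider) and the droplets (back up configs and compose files, which should already be in git). Then do an occasional restore drill against a scratch droplet so you know the dumps are actually usable.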
The horror stories I had shared with my boss are as relevant today as when they happened.
His is from the early 90s: the RAID card on the backup server was bad, so it was randomly flipping bits in the files being stored... nothing was ever tested until they needed one of those backups.
Mine was about a tape setup that followed the correct 3-2-1 practice... one of the CLI tools had an update, and its license-agreement prompt sat waiting for user input, so that step would just drop out while the rest of the script continued... no one had ever tested a backup, so the tapes being shuffled around were empty.
Ransomware gangs often destroy your backup infrastructure. So it's important to create pull-only backups or backups that cannot be deleted / overwritten.
I thought this was sufficiently implied with "keep them segregated from the rest of your infrastructure," but yes, you are correct. It is important to have a set of backups that can't be destroyed by the attackers. You might get lucky that your hand-rolled solution is so hacky that a ransomware gang overlooks it. But better to not rely on that luck.
In the past, this was achieved by having a set of tapes offsite. Today, one might configure Veeam to lie when issued a delete command, and instead send the data off to an Amazon glacier instance that requires different credentials to read, write, and delete.
"Ransomware gangs often destroy your backup infrastructure. So it's important to create pull-only backups or backups that cannot be deleted / overwritten."
Every rsync.net customer has ZFS snapshots available in their account that are immutable. They are read-only.
So, even if Mallory trashes your primary site and then gains access to your rsync.net credentials, the daily/weekly/monthly snapshots cannot be destroyed.
Immutable backups are often overlooked. At borgbase.com, we call this “append-only” mode and the large majority of repositories use it. With S3 (or similar) you would add some policies to disable deletions. So it’s usually doable, but needs to be considered when setting up the backup process.
Does the append-only mode you have in borgbase fix the issues of the append-only mode in Borg itself? https://borgbackup.readthedocs.io/en/stable/usage/notes.html... Because the way it works in Borg isn't really workable in practice, it only seems to be good to check a box.
We use the same public version of Borg, so those limitations apply. It’s still quite workable. Just prune from a different machine or once a year when the repo gets too large.
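For anyone rolling this themselves over plain SSH rather than a hosted service, the server-side enforcement is typically a single forced command in the backup account's authorized_keys (paths and key are illustrative):

    command="borg serve --append-only --restrict-to-path /srv/backups/client1",restrict ssh-ed25519 AAAA... client1

A client connecting with that key physically can't delete or prune anything; pruning happens later from a separate, better-guarded machine whose key omits --append-only.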
That’s all well and good for weather data, but if anything in your backup is in any way related to user behaviors or transactions, immutability is a crime.
Immutability in this context doesn't mean "kept forever", it just means the permission to hard-delete is separate from soft-delete.
GDPR requires that good data-stewards keep backups:
the controller and the processor shall implement... the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident
It's not "cannot be overwritten ever". It's more "you can overwrite it using credentials stored in a safe location and never from the production environment".
Someone will always have permissions to remove or change the entries, just not easily and not in an automated daily process way.
The neatest way I heard is to store PII in an encrypted form and then delete encryption keys when a user requests deletion. That requires the keys to be in a different backup pipeline though.
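A toy illustration of that crypto-shredding idea, using the Python cryptography package; the key store and its separate backup pipeline are hand-waved here:

    from cryptography.fernet import Fernet

    user_keys = {}  # in reality: a small key store, backed up separately from the data

    def store_pii(user_id: str, pii: bytes) -> bytes:
        key = user_keys.setdefault(user_id, Fernet.generate_key())
        return Fernet(key).encrypt(pii)   # ciphertext can sit in ordinary, immutable backups

    def forget_user(user_id: str) -> None:
        # dropping the key makes every backed-up ciphertext for this user unreadable,
        # without ever touching the backups themselves
        user_keys.pop(user_id, None)

    blob = store_pii("user-42", b"Jane Doe, jane@example.com")
    forget_user("user-42")   # 'blob' is now effectively erased, wherever copies of it live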
This is not true. The prevailing opinion is that backups do not need to be modified and that it’s enough to delete at restore time (and to of course not process the data again).
Modern ransomware gangs focus more on data exfiltration rather than actually locking down data, and it lets them remain undetected for longer too. That said, yes, correct, having good and reliable backups is vital.
In combination with techniques to minimise easy/unrestricted exfiltration of data. Such as egress filtering, software access policies/controls, minimum privilege, network segregation/isolation, good user access and network access controls, decent patching policy, good and tested backups and incident response plan…yada yada. Make sure if they do get in, the damage is limited.
Backups help recover your data. But gangs have evolved. They also use exfiltration of your data to embarrass you into paying up, which backups do not mitigate against: the public shaming.
Backups are nice. Until threat actor deletes them or encrypts them. Backups are also nice until you realize they exfiltrated data too. They are one mitigation but definitely not a sure fire insurance against a data breach and legal implications thereof.
Backups should be on a different segregated network (ideally off-site) with strict access controls and encryption. The threat actor should not just be able to find them lying about (possible even directly attached) with no authentication or authorisation required to access them. If they are able to access them somehow then they shouldn’t be readable. What you describe is people with a poor backup policy, which yeah I guess describes most folk. But backups aren’t the issue here, poor backups are.
And they do not mitigate against the threat of the data being published online, which seems to be the new(ish) threat these days with ransomware.
The difficulty with ransomware attacks and the like, is that it's less a technical problem and more a people problem.
IT departments will never have enough money/time/staff to keep systems up to date with the latest OS (look at the number of people still running critical systems on Windows XP).
Users will always open attachments from people they don't know, click links, or even pick up random USB sticks.
The perpetrators know this. They don't need to be more sophisticated than the InfoSec people at a given organisation, they just need to trick one user in that organisation in to letting them on to the network.
>The difficulty with ransomware attacks and the like, is that it's less a technical problem and more a people problem.
The cause is definitely technical, it is a huge gaping hole in the design of modern operating systems that you could sail the Ever Given through sideways without incident.
Your operating system does not confer to the user the ability to delegate only X resources to the opening of a file, email, etc. They (the users) have no ability to limit side effects. Blaming them for your bad system isn't ever going to help fix things.
The missing system of limiting side effects is known as Capability Based Security. We all have a practical example of it in our wallet or purse. We can remove a unit of currency, hand it to someone else for a purchase, and that is the maximum we can lose, unless something extraordinary happens.
We all have outlets, which limit the amount of power they will supply, and some even check to make sure it isn't supplied through us, or into a system that has arcing issues. We never have to worry that turning on a lamp will take down the power grid.
Imagine if there were no circuit breakers or fuses, would blaming people for not being careful enough help make the system safer? No, of course not. Neither does blaming the user for your defective Operating System.
> We all have outlets, which limit the amount of power they will supply, and some even check to make sure it isn't supplied through us, or into a system that has arcing issues.
Not that you don't have a point, but outlet safety measures only address accidental failures. A malicious device is perfectly capable of storing power in a battery or capacitor to exceed instantaneous power limits, or running at 12 amps (out of 15A fuses) 24/7 to pull much more power than you expect over time, or electrocuting you taser-style even if it doesn't have a high-current ground path. So while the safety features are useful, they're not particularly relevant to security.
The wallet example is pretty good, though, as is your actual point.
Absolutely this - most ransomware attacks are pretty unsophisticated. You don't need privilege escalation, or an exploit. You can carry out the attack using just basic user permissions. You are exploiting a basic "problem" of most modern OSs (that apps run "as" the user executing them) - the user/group permission model ceases to work in 2021 with non-expert users. Portal-based access to individual files via secure OS-provided portals (i.e. like on Android/iOS/flatpak) help to prevent apps needing access to every file on the filesystem, but until those are widely adopted, it will be increasingly difficult for "normal" organisations to prevent ransomware attacks.
You can prevent ransomware fairly simply by following best practice, and taking some steps that most companies will feel are excessive (but effective), such as whitelisting binaries, preventing running of any binaries not on that whitelist, and keeping that whitelist up to date on a regular real-time basis. Nobody wants to spend the time doing this, so they leave it a "free-for-all".
Exploiting user-level access is just the natural escalation now that getting good exploits is more costly and difficult. Now attackers will "make do" with what they have. IT can win the battle, but with inconvenience, friction, and increased costs in IT.
There are important businesses that are "critical infrastructure" still using Windows 7 on their corporate day-to-day let-me-check-my-emails-and-browse-the-web laptops, without extended support. Organisational inertia and a lack of recognition that they need to pay for the technology that enables their business leads them to this position.
> most ransomware attacks are pretty unsophisticated
As weird as it sounds, this is both correct and incorrect at the same time.
It is correct, because ransomware is not particularly sophisticated by today's standards. A couple of decades of R&D has made the building blocks robust and uninteresting.
It is also correct in the sense that the attacks used to breach systems are unsophisticated. A vulnerability is published for an internet-facing system, and in just a couple of days the underground toolkits are already (ab)using it.
It is incorrect in the sense that the crews who breached the systems are not the crews who deploy ransomware. Computer crime has evolved to a fully functioning economy, with high specialisation among its participants. Crew A reverse-engineers patches, updates their vulnerability exploitation engines and goes on to breach systems. (In a race against time, because there are other crews doing the same.) They then sell access to crews B, C and D.
Crew B are after financial information and will exfiltrate anything that can be sold to morally ambivalent hedge funds. They may also grab R&D material, because corporate espionage is a thing. Crew C will grab all the personally identifiable data and have intimate knowledge how to best monetise it for various types of fraud.
Crew D will deploy the ransomware, because they have all the sophistication you need to run their extortion operations at scale. These days this includes the ability to handle massive volumes of off-site backups, because why not. "Pay up or we leak it" is a perfectly valid extension to their business model.
The gangs I referred to as "Crew A" are known in the industry as Access Brokers. There are of course other operators too who work in a more asynchronous fashion, such as money launderers.
The economy powering the criminal enterprise markets is certainly sophisticated. And while most of the technology in use doesn't qualify for using that word, the internal operations these gangs run certainly do.
A really good point - we should distinguish the sophistication of the attack and the attacker. These are clearly highly organised and sophisticated attackers, many working in shifts etc.
If it gets in through an access broker, you're definitely looking at a sophisticated outfit of attackers.
I guess I'm approaching this as the defender - if the malicious code isn't exploiting anything that needs patching (other than decades-outdated assumptions of a threat model where any binary has the ability to act inseparably "as" the user), the actual ransomware is harder to prevent for most organisations, as all the friendly hand-holding type advice they receive from police and governments doesn't save them (patching desktop systems won't prevent the file encryptor payload from running on the first host, after a user runs the bogus docx.exe file and ignores warnings through alert fatigue).
It would be interesting if companies were more willing to (or required to) share details of ingress vectors, to understand the extent to which they're being breached through really advanced attacks involving reversing of recent patches, versus someone popping a pulsesecure VPN that's been warned about for years. Or on-prem Exchange that they've continued to ignore all the warnings about as nothing is on fire. Or just a user clicking a link to a shared file mistakenly emailed to them, called CONFIDENTIAL - PAY SCALE 2022, which phishes their SSO credentials for 365...
Indeed - I think Windows Defender dabbled in offering this as a feature. I at least recall seeing programs prevented from creating files in the Desktop or Documents folders.
A rate limit, with group-policy controllable "automatic response" would perhaps help - you need the GPO integration though so that an IT admin can say "never allow file system rate limit to be exceeded".
If you enforce a rate limit locally, and on the network, and move to copy-on-write filesystems, it would be a whole lot harder to cause straightforward harm (at least while migrating to a newer, safer OS architecture paradigm, where code doesn't run as the user).
In the post-Covid world, I think MS and others have a whole host of these kinds of issues to think about - Windows in an AD environment is still (as far as I know) not something really geared for working off-prem. It still relies heavily on LDAP and CIFS etc. A re-write to get a desktop OS ready for the "web first" world (where everything is sent to the AD domain TCP/443, using HTTPS, with client certificates rather than passwords, stored locally via hardware-backed secure storage, and trusted CAs used by the DC) would be a big first step towards this. Yes, I know you could use Direct Access or whatever MS has butchered into the system, but in a world moving to zero trust, MS needs to move to zero trust.
Rate limits would be a great starting point, as would some proper platform-level protections around preserving shadow copies, using copy-on-write, and locally preserving versioned user files as a priority. As soon as a ransomware attack touches the network, IT should be able to handle it, as their backup regime should take effect. At that point, if you don't have backups sufficiently separate from user-writable files (or you never validate them, and thus don't realise you're backing up transparently encrypted ransomware'd files for months), you're on your own!
My business involves working a lot with such situations, and frankly speaking, none of the above would help in the least bit.
Cost cutting is probably the biggest threat to most businesses. The mythos of the hyper-converged infrastructure, with the datastores and repositories for backups being hosted on the same physical device, is some sort of infection that just cannot be wrenched from people's heads.
IT Professionals (not managers, not hapless non-techies, actual persons with a cornucopia of certs and accolades on their LinkedIn) are in denial as to how to design a proper infrastructure to respond to ransomware. At this stage and for the foreseeable future, ransomware is an inevitability; not "if" you get attacked, "when". But I've had countless conversations where a group of people from the IT department theorycrafted a perfect defense, only to get attacked because one of them clicked on a random Excel document from a spoofed email.
When clients ask me "what do we need to do to protect against ransomware" and I explain what airgapping means (tape, removable drive arrays), we're either ignored, or they say they accept and the clients just don't have the discipline to follow the required practices.
Modern IT prefers cargo-cult security, and IT professionals love their checklists from some organization, regardless of the fact that most of the checkboxes are useless to protect against ransomware. But the professional can eschew responsibility because "hey, I checked all the boxes."
Until technical professionals as a whole start to take security seriously and exhibit the discipline that is required for such security right now, ransomware is going to continue to be prevalent. No amount of rate limiting from vendors will help, because users will simply not use such versions, will disable such limits, will work around such limits, or use any of dozens of other workarounds, because such limits would be inconvenient (never mind that such limiting tooling will probably just be exploited itself).
We need discipline first, not tooling to try to correct for lack of discipline.
Behavioral heuristics are best learned in-situ; you need to know how the software is used with which data to correctly profile normal behavior. Some users and workloads hate sandboxes, though, and a 'Run as Administrator'-esque familiar escape thus demanded by users will no doubt destroy its utility. Ultimately, someone must correctly articulate what the system is supposed to do, and this requires knowledge.
Okay, but these particular heuristics aren’t rocket science. Is a process rewriting 25% of my hard disk, and/or 10% of one of my backup drives? Time to send an alert to the user, and an IT admin if this isn’t a personal device. There are very few legitimate use cases for that.
Had to troubleshoot Windows software from a MAJOR shipping provider that popped up a “you must do this thing” on a fully up to date Win10 system today.
“The thing” would not work as an unprivileged user account and would only work as a right click run as administrator situation :-)
Implicit in this comment is the assumption that current technology is pretty much the best we can do?
> IT departments will never have enough money/time/staff to keep systems up to date with the latest OS (look at the number of people still running critical systems on Windows XP).
Why is it that even slightly old systems are so buggy that they are trivially hackable for a moderately well funded group?
Modern security is based primarily on security through obscurity. As long as you stay up to date, all of the bugs you have are sufficiently obscure that knowledge about them is probably too expensive for the type of hacker that would target you.
> Users will always open attachments from people they don't know, click links, or even pick up random USB sticks.
Why is any of that a problem? A user should not be able to threaten an organization's IT system even if they were outright hostile (unless they were put in a specific position of trust within IT; but even then the amount of damage they should be able to do from their personal work computer should be limited).
>Why is it that even slightly old systems are so buggy that they are trivially hackable for a moderately well funded group?
Because there's not enough money in making things bug-free from the start. It is possible (see seL4 and They Write the Right Stuff), but the incentives aren't there.
Some kind of liability or minimum standard (similar to building code) would help, but I'm not sure just how it would be best implemented.
That money would have to come from somewhere, though, and that's the pockets of consumers. Do they, in general, care enough? Is the security of software worth enough to them to spend the extra money? You don't just get what you pay for; you get what you're willing to pay for. And does the consumer have the expertise to evaluate the costliness of the threat or the security of the software? For that matter, I doubt the majority of developers have that expertise.
You're not wrong about why it doesn't exist, but I'm not convinced the market conditions exist to rectify that.
> Why is it that even slightly old systems are so buggy that they are trivially hackable for a moderately well funded group?
because software is tremendously complex with a large surface area to attack. And many OS features were designed when wide-scale hacking was not a problem.
Then that means the software is hopelessly inadequate for the current environment where wide-scale hacking is a constant problem. To echo what they said, why do we accept and deploy systems that catastrophically fail in circumstances that we know are going to occur? Why is it acceptable to take systems that were not previously connected and actively make a decision to connect them to internet if they are completely unfit for that environment? And not just that, they are so unfit that they not only fail in the new environment, but they enable total organizational collapse in a way reminiscent of the exhaust port on the Death Star.
why? because companies engineering the software cut corners to save costs, or their engineering talent isn't competent or talented enough to produce products that don't have major vulns.
You'd better believe software running an aircraft carrier has been hardened 12 ways til Sunday. Prosumer operating systems - not so much.
100% agree. It's a trick that criminal con-men have been using forever in the physical world. There's no reason to kick a door down (draw attention to yourself) when you can convince someone inside to open it.
"My puppy just got hit by a car! Can I come in and use your phone to call for help?"
Why do twitter scams work so well? Because the margins are high enough from the few ppl who still fall for the scams. Awareness only does so much. You spread malware to millions of ppl, just a few conversions makes it worthwhile.
I agree with what you are saying, but calling it a people problem makes it harder to solve. If your organization is large enough, then your users will always click on phishing links and download sketchy malware toolbars.
You should also expect, to a lesser extent, that your internet-facing infrastructure will have vulnerabilities that will be exploited before you are aware of them.
These are facts of life and need to be expected. Not saying that security training is wasted money, but it is in no way a solution to, for example, phishing. Accept that you will have compromised clients and internet-facing servers, and start making a strategy with that scenario in mind.
There is no technical reason for allowing any random user to delete their data, or at least not requiring some specific capability that most processes don't have.
In fact, there were systems built this way in the 70's.
> IT departments will never have enough money/time/staff to keep systems up to date with the latest OS (look at the number of people still running critical systems on Windows XP).
It's not like Microsoft has an explicit EOL date announced for every OS release...
Is it true that hacking any random staff member / computer at the company can lead to a ransomware attack on the machine holding the company's crucial data?
It is probably more true in "Corporate America" where MS Windows Active Directory is in use and all the computers are domain joined and have read/write access to file servers.
It is, but solving that problem would entail re-training staff, reducing "productivity", and, moreover, spending money... Many companies have cut their IT provision below what is needed to simply stand still.
IT is a cost to their business, not a revenue source. They don't consider the counter-factual of "well, what if we didn't use IT and computers and the internet" when valuing what IT is bringing to their business. If they did, they'd perhaps be willing to spend more.
MBAs don't like spending money on something that doesn't yield them more sales though...
MBAs don't like wasting money. If bad IT costs them money or sales they care. If they can reduce the costs without losing that money they will. However they don't know how to solve this optimization problem and are learning the hard way when they get it wrong.
I think part of the issue is also that the negative impact of getting IT wrong is delayed, and often lands after your middle managers have moved on to other organisations, thus don't see the impact of cutting costs repeatedly.
Since there's no visible problem (nothing catches fire) the day, week or month after cutting spend on IT, it's an unnecessary expense in the eyes of beancounters.
> Users will always open attachments from people they don't know, click links, or even pick up random USB sticks.
One bank I interned at sent people an email about the weather or something to that effect, and each link had a unique identifier. Shaming each individual user is the best way for them to learn.
> Shaming each individual user is the best way for them to learn.
It's the best way for them to stop trusting the security team and never come in with any issue, even if it could be used as an early signal preventing bigger attack. Many people's jobs rely on them receiving emails from unknown sources and receiving files from them. Shaming them for "you should've known this specific link is bad" is counterproductive. That's even before we get to whether they would actually put in any credentials.
Phishing tests have value. Running them to shame people into compliance is a waste of time.
Surprised at how much focus there is on backups as the solution. You'll never fully recover from those backups. Backups won't help you avoid fines, lawsuits, lost customers, and lost time.
I run an open data set on data breaches. The vast majority of ransomware incidents start with a phishing email, to beachhead, to finding domain admin, to game over.
The root problem is domain admin population size. Reduce it to zero with privileged access management to avoid ransomware.
I didn't post about backups to imply that they're the solution to ransomware. But having seen what ransomware can do, I know that backups can be the difference between loss of productivity, and the end of the business.
If you are keeping to best practice, including the things you recommend, then you should hopefully never need backups. But I see backups akin to seat belts, motorcycle helmets, fire extinguishers, et al. They are things you should hopefully never need if you aren't doing anything stupid or dangerous, but if the situation ever goes sideways, they can be the difference between surviving or not.
A second root problem is the insanity of public SMTP on today's Internet: allowing anyone, claiming any identity, to send you any content without limits.
I started the "mnm" open source project to enable a new email network, on a new protocol.
This problem is partially solved by DMARC/SPF/DKIM. There are a few issues with DMARC, but the main one is that adoption by senders is well below 100%, so you just cannot block mail without DMARC.
But the main question I have is: does a typical mail user actually care about the sender domain? I suspect not at all. And I see two main reasons for this. First, the notion of a domain is de-emphasized everywhere - browsers turned the address bar into a search bar and make the real URL hard to notice, and MUAs (e.g. Outlook) don't show the full email address for senders in an address book. Second, legitimate senders often behave in exactly the same way as phishers - they use an unrelated/unknown domain and give no way to verify that the domain is legitimate. For example, the Charter/Sirius ISP sends mail from the domain customeremailnotifications.com [1] and I found no way to see for sure that this domain is owned or used by Charter. My memory is fuzzy, but PayPal (or eBay) AFAIR used a phishy domain too, something like managemypaypal.com. A largish ZA utility sends mail from eskomstatements.co.za while its main domain is eskom.co.za, and of course there is no good way to verify that both domains are owned/used by the same company. The list goes on. All this conditions users to trust mail coming from a random-domain-registered-by-phishers, because legitimate senders do the same.
>There are a few issues with DMARC, but the main one is that adoption by senders is well below 100%, so you just cannot block mail without DMARC.
I wonder if this could be fixed by email clients marking emails that fail DKIM as spam, or attaching a large warning. Most users use email clients, and they really don't do a great job of notifying users of potential spoofing issues (with Gmail, you have to find "view original" to see that DKIM fails). I'm sure that spam filters would notice after a few hundred/thousand emails, but a successful spear-phishing attempt may not require that many emails. If customers complain about legitimate emails being marked as fraudulent, I'm sure adoption rates will increase.
However, if an org needs email, why can't they just configure their filters to move everything from outside the org into a special folder, and the email server could further filter any link and put it through a warning page before redirecting to the link?
We already have DKIM and SPF to verify the sender's domain. Just setting these up can work.
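Checking whether a sending domain even publishes those records is a cheap lookup, if you want to experiment. A sketch with dnspython (the domain is just an example):

    import dns.resolver  # pip install dnspython

    def txt_records(name):
        try:
            return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
    print("SPF:  ", spf or "none published")
    print("DMARC:", dmarc or "none published")

Verifying DKIM signatures on actual messages is more involved (the receiving server does that and records the result in the Authentication-Results header), but the record lookup alone shows how patchy sender adoption still is.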
There's this point in Civilization (1) where you get Knights and your attack is 4 and mostly you're going against Phalanxes who are defense 2, and even with various advantages you still have a massive edge. That's where I feel we are with a lot of computer systems. It's far easier to attack than defend, and with Bitcoin it's easier than ever to transfer wealth anonymously. In the case of the knights eventually the defence got the edge again with the advent of firearms, but that could be a long time coming.
I think it is likely that there will be a real world kidnapping where the kidnappers demand a Bitcoin ransom.
Once this happens, Bitcoin will get rapidly regulated out of existence by governments.
Imagine if it follows the usual stereotypical news coverage. An attractive, photogenic American woman goes to a foreign country and gets kidnapped. Later the kidnappers send ransom demands with a Bitcoin address.
This would be wall-to-wall 24/7 news coverage on all the major channels.
After that, the public would likely support almost any type of regulation on crypto-currency.
Kidnappers can also demand dollars for ransom. Are governments going to regulate dollars out of existence? Or are you saying they'll just use it as an excuse to harass bitcoin?
Dollars are a little different in that transferring large amounts anonymously is hard.
Collecting the ransom is probably the point of highest vulnerability and that is something law enforcement agencies like the FBI have used to catch kidnappers.
However, with cryptocurrency, that vulnerability is mitigated a lot, and that completely changes the dynamics.
There is a reason the ransomware attackers aren't demanding suitcases of cash at pre-arranged meeting sites.
Just because lives aren't being directly targeted, how isn't this a form of terrorism? If your and your employees' livelihoods depend on your IT systems, and somebody intentionally destroys them, that is terrifying!
Unless there are some serious teeth to any response to this, it will keep happening. The FBI and the UK CCC can put out as many recommendations as they like about backups and updates, but criminals will just keep finding targets, or upping the damage.
It is time to consider these attacks as terrorism, and respond to state sponsored terror attacks accordingly.
Hell no. Cryptocurrency is absolutely necessary, you just think it isn't because your government or bank never froze your money for arbitrary reasons and demanded you prove your innocence in order to give it back. It's just like the "got nothing to hide" argument in privacy.
It literally doesn't matter how much energy it consumes or how much crime it enables; if it allows us to escape government stupidity it's more than worth it. Actually, crime is proof that cryptocurrencies work and actually provide the necessary privacy we all need. The harder it is for authorities to trace, the better.
Crypto is like an unregulated bank with no depositors' insurance. Those were forbidden (in most places) for a reason! What if a crypto software network goes all ... Geocities ... on owners? Or pick any other discontinued web or software service out of the wreckage of tech disruption. There isn't a M&A strategy to rescue these things....
From the perspective of cryptocurrencies in general, those limits don't mean much when you can create new cryptocurrencies at a whim. The supply is equally limitless.
Exchanges are good at blacklisting BTC, so this means it will be hard for hackers to cash out. Just converting BTC into XMR is not a trivial process, as it needs to go through an exchange. Trustless cross-chain transactions are still in their infancy.
Is this really true? All it takes is for someone to set up a new exchange without a blacklist, and in the first few days all those blacklisted coins will be converted into other currencies and the blacklisted coins will end up in the wallets of other innocent users.
Send via an accountless coin swap. The trust problem is trivial to work around with a script to slice the loot into small lots (send 1/X, if goes through, send another 1/X, if not, move to another swap bot). Will take a little while, but you can read some Lambo reviews while you wait.
> It looks like they got the info from the Texas exchange.
I don't think the TX in that article means an exchange based in Texas. I think TX is abbreviating the word "transaction". Every instance of TX is followed directly by a bitcoin transaction id.
Yes, assuming sufficiently (not very) expressive transaction signing on said chains. Assume Alice wishes to exchange 1 ABC for Dan's 1 DEF. Alice publishes a transaction for 1 ABC with a UTXO that must be signed by both Alice and Dan. She then provides a zero-knowledge proof that her private key for that signature hashes to XXX. Dan then publishes a transaction for 1 DEF with a UTXO that requires a signature from Alice[0] and a hash preimage of XXX. Alice then publishes a transaction moving that 1 DEF to her own address, which (since second-preimage attacks are hard) contains the private key Dan needs to claim the 1 ABC UTXO.
So this requires that chain ABC supports 2-of-2 (or k-of-n) signature requirements, and that chain DEF supports signature-and-hash-preimage requirements.
This simple protocol is vulnerable to losing money entirely if one party abandons it partway through, but that can be mitigated by exchanging e.g. 0.1 ABC/DEF at a time, or by a more sophisticated exchange protocol.
Edit: 0: using a different key than the XXX preimage.
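Stripped of the chain-specific details, the atomicity in schemes like this comes from a hash-preimage lock: claiming one side forces you to publish the secret that unlocks the other side. A toy version of just that locking logic (no signatures, no chains):

    import hashlib, os

    # Alice picks a secret and publishes only its hash as the lock
    secret = os.urandom(32)
    lock = hashlib.sha256(secret).hexdigest()

    def can_spend(claimed_preimage):
        # both chains' outputs check this same condition (plus signatures, omitted here)
        return hashlib.sha256(claimed_preimage).hexdigest() == lock

    # Dan locks his side behind the same hash. When Alice claims Dan's coins she must
    # reveal `secret` on-chain, which is exactly what Dan needs to claim Alice's coins.
    assert can_spend(secret)
    assert not can_spend(os.urandom(32))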
Do you mean in the specific case of ring signatures? Or at all?
There are already threshold-signature-based schemes for doing this with ECDSA (though they're very gas heavy at the moment). But none has emerged for ring signatures yet beyond the paper stage.
I don't get why everybody cares so much about the ransomware/cryptominer part, but not the data being exfiltrated and sold/used for criminal activity part..
My cynical suspicion is because exfiltration/sale doesn't prevent the business from continuing to operate, pretending nothing is wrong, and gaslighting their users via reassuring language if anyone finds out or is approached.
When the data is encrypted and locked up, the company itself cries foul (not out of care for people's data) as it can't keep doing whatever mundane things it was doing day-to-day.
Most data just isn't that useful. Sure there are exceptions, but most isn't very useful.
If you offered my company our competitor's source code for free, we wouldn't take it - we have some ethics. I think most of you are in the same boat - even if you don't have strong company ethics, a quick check would discover that you already know how to do everything they are doing, so time spent looking at their code is time you aren't adding those features to yours. (The one exception would be their file formats, which are valuable.)
Even if the data is valuable, can they use it? I know the database admins in my company are registered with the SEC as not able to trade some things because we have insider information from our customers. Even if someone got that data though, they would have to figure out the database, what it means, AND be lucky enough to do that when there is something non-public that can be traded. Most of the time the expected supply is the same as actual supply, so the fact that we have insider information on the actual supply isn't actually useful. (The above is a different department from mine, so I don't know the details very well.)
Thus the example of a police department is an outlier as the data is sensitive for a long time.
This attack is most effective against victims without backups who are desperate to get back their data. If you have backups, you basically shrug it off, restore your data, (re)train your users and use forensics to mitigate any obvious weaknesses. In most cases, it takes less effort for the attackers to move on to the next victim instead of trying to extract value from any data they may have exfiltrated (probably none in a lot of cases).
As an aside, does anyone know (with citations) the history of why reputable news publications like the BBC or Reuters never cite their sources? It's always seemed odd that even quacks and conspiracy sites (mis)use sources whilst well-respected publishers don't.
The solution to ransomware is to daily mirror every system to an append only backup and then just flash everything back if you get hit. You lose a few days...
The threat to that is silent encryption that goes on for weeks before the alert/ransom is demanded. Your mirrors are now full of encrypted trash, or you need to go back a month or more.
This could be managed with a backup that maintains 'fingerprint' hashes of all the files, tracks the changes and alerts if there are too many, or alternatively, the user/admin litters the system with a set of canary files of the same type that should never change, and the backup system halts and alerts if any of them do.
I'd like to see a utility to just check a set of canary files for changes. Anyone know of one?
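Not aware of an off-the-shelf one, but the core of such a utility is only a few lines. A rough sketch (canary paths, baseline location and hash choice are all arbitrary):

    import hashlib, json, pathlib, sys

    CANARIES = ["/shares/finance/DO_NOT_TOUCH.xlsx", "/home/alice/canary.docx"]  # examples
    BASELINE = pathlib.Path("/var/lib/canary/baseline.json")

    def fingerprint(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    current = {p: fingerprint(p) for p in CANARIES}

    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current))  # first run: record the baseline
        sys.exit(0)

    baseline = json.loads(BASELINE.read_text())
    changed = [p for p in CANARIES if baseline.get(p) != current[p]]
    if changed:
        # in real life: halt backup jobs and page someone, don't just print
        print("CANARY FILES CHANGED:", *changed, sep="\n  ")
        sys.exit(1)
    print("canaries unchanged")

Run it from cron right before the backup job kicks off, and have a non-zero exit abort the backup rotation and raise an alert.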
Your data has still been leaked. A few days ago there was a story about a ransomware gang threatening to expose police informants if they didn't get paid.
But I think a lot of businesses really have a problem with the mechanics of just getting their business running again, like the one in the article. This seems fairly straightforward to defend against.
If you mirror systems then you will find yourself in a halting problem world of restoring older and older backups only to discover that they, just like all the younger ones you tried so far turn out to be exact reproductions of a state already breached. If you store only content, rebuilding the environment will be quite a feat.
That crypto has to become real money at some point. If the ransom is in something like bitcoin (very common) then identification is just a waiting game for that conversion
There's an argument to be made here for using hosted cloud providers instead of rolling your own servers, keeping your data off premise and relying on their security practices. The problem with managing your own servers and services is that you're constantly playing the security game, and responsible for keeping your systems secure and up to date. People hate on providers like Google, but the reality is they have _thousands_ of top tier engineers and security experts managing and monitoring their cloud services. Using their managed database and VM services are far more likely to provide better security than your in house IT team. When was the last time you heard of a Google Cloud customer falling victim to a ransomware attack? (I'm using Google as an example, can say the same for Microsoft, Amazon etc)
Can anyone elaborate on what a company wide restore operation looks like in a big company? I worry about this because IT support teams tend to test restores per system. But what if it's thousands of systems going down at the same time? If this happens I even wonder how long it would take to restore productivity if the ransom is being paid.
Also, what percentage of systems would typically be infected?
Criminalize crypto pyramid/Ponzi schemes, because they're good for nothing but things that make humanity worse: gambling, ransomware, wastefulness, greed, etc.
This is going to be the rationale given for the heavy-handed cryptocurrency regulation they're going to bring down on all the exchanges that US persons can access.
Pretty soon all you'll be able to legally access as a USian is "Bitcoin!(tm)"[1] (like what PayPal is doing), not the actual uncut blockchain bitcoin that you can send and receive at will.
Personally, I don't see the problem. 1. Bitcoin drives up GPU costs. 2. Bitcoin makes it ridiculously easy to commit certain forms of crime. 3. And Bitcoin's energy footprint hurts the planet.
Putting an entire society under ubiquitous surveillance to catch a tiny minority of criminals isn't a good bargain. Fact is, though, that's not even why they do it.
Eventually you get to the point where you see that the information and ultimate large scale control permitted by the collection of that information is itself the end goal, and that it has nothing to do with detecting or preventing crime.
Bitcoin has been around for less than twenty years. Since then it has seen massive deflationary periods (when the relative value rises, like up until a month ago), and massive inflationary periods (when the value drops). In 2018 bitcoin had an "inflation" rate of roughly 500% (meaning that at the beginning of 2018 you could buy a basket of goods with an equivalent value of $13.5k USD; at the end of the year you had to spend 500% more bitcoin to get the exact same basket of goods, while you only had to spend 2-3% more USD to get those same goods).
The fact that there aren't bitcoin loans is proof that it is not a viable store of value.
There is volatility for sure. But the same is true for any asset: stocks, bonds, real estate, even gold. Albeit not as much as a currency only around for 20 years, as you said.
As for loans, lots of exchanges let you trade with leverage, effectively trading on borrowed money.
More like increases inequality [1][2]. If you're rich, you have access to rock-bottom interest rates which you can then invest. If you're an average citizen, you watch all asset values (real estate to start with) soar while your income stays relatively stable.
Already happening with 'unhosted' wallets being blocked or heavily scrutinized. My personal experience is as follows:
Sent over 20 transactions from US exchange -> US exchange and no problems.
Sent a single transaction from my unhosted software wallet -> US exchange, and got my account locked. Questioned on everything including my employer's information, had to re-do advanced KYC, source of funds, etc. (The unhosted wallet was funded from an exchange, which makes it stupid, and I didn't use any coinjoins or anything).
They are fairly easy to pick out simply based on the way money flows in and out of the addresses. Also, they become known over time as customers share them then they are eventually labeled by companies (exchanges included) performing blockchain analytics. Many of the public blockchain explorers show labels for popular addresses. I've seen labels on big exchanges, Satoshi's coins, seized silk road coins, coins from hacks, etc.
To be fair, it was a regulated stable-coin issuer based in NY where I was redeeming stable-coins for USD, so the strictest of all AML was to be expected. I was just surprised the tech is already in place for a travel rule for crypto. If I had to guess, it would've been a company like Chainalysis aggregating the exchange data to enable this.
Will increase the utility of decentralized exchanges like Uniswap and DeFi in general. The more CEX gets regulated the less people will want / need to use them.
Yep, that's what people don't understand, you can ban centralized entities as much as you want, but you can't stop people from running arbitrary code on their devices which means it's impossible to shutdown a properly decentralized network.
> "Not only that but, really distressingly, the funds that come in from paid ransoms fund other forms of organised crime, like human trafficking and child exploitation."
Not really. Those things either fund themselves or their divisions get shut down within the criminal organization. Paid ransoms mostly fund luxury sports cars and vacation homes.
Source: I'm a Russian mob boss and ransomware is highly segmented from the rest of my rackets.
Ransomware wouldn't be a problem if the software industry took quality assurance seriously (or was regulated to do so), like every other engineering industry. There's little difference to me between an insecure program that allows hackers to hold your data for ransom, and a defective home appliance that occasionally starts electric fires.
Well every home appliance could easily start a fire if random malicious actors got to fuck with it while it was plugged in. You'll note that other engineering disciplines would also fall apart if hostile actors were constantly throwing explosives at the things they make 24/7.
Yeah but software engineers know that hostile actors come with the territory any time they expose a networked device or service. It's no different than corrosion or any number of other inevitabilities that engineers have to deal with.
When's the last time a civil engineer designed a bridge without accounting for corrosion or the fact that people will be driving over it?
How is wear and tear equivalent to hostile humans purposefully trying to fuck it up? Even military installations needs armed guards to stop people from just cutting through the fence. Wear and tear is more equivalent to keeping your site from going down to high traffic. Show me a road that's still safe when three guys with guns are standing in the middle of it shooting at passing drivers.
How about a skyscraper in downtown New York that can withstand a nuclear blast? [1] Or a bunch of nuclear blast shelters built all over the world? Or every fighter jet or other heavy duty piece of military equipment literally built to withstand guys shooting at them? Engineers design stuff to withstand adversaries all the time, when it's required. Designing with adversaries in mind is always required for connected systems in our field.
The computer equivalent to "three guys with guns are standing in the middle of [a road] shooting at passing drivers" would be three gunmen gaining physical access to a datacenter - game over. We don't try to protect against that attack vector any more than civil engineers protect against terrorists when designing some intersection, except maybe we encrypt some data at rest and they put up some bollards and CCTV.
You're getting hung up on the agency aspect when the most important thing is the attack by attrition. It doesn't matter whether it is a force of nature like corrosion or all the bad actors in human civilization, the point is that it is a known quantity that will eventually degrade and break every nontrivial system.
We don't know which future zero day exploit will break our systems any more than civil engineers know which wave or car will cause the ultimate collapse, but we know that it is inevitable. That's why we have defense in depth. It is the nature of the beast.
> How about a skyscraper in downtown New York that can withstand a nuclear blast? [1] Or a bunch of nuclear blast shelters built all over the world? Or every fighter jet or other heavy duty piece of military equipment literally built to withstand guys shooting at them?
Well, if they exist, why don't we use them for everyone on a daily basis, since it'd be safer? Because they're hella expensive and resources are limited.
If smaller shops or businesses try to implement the highest security for their systems, their development and operation costs can easily multiply, and the UX can suffer because of the security.
If I was regularly getting shot at while driving my car I'd take action. Bullet-proof glass would be a must - sure it is expensive, but I'd pay that price. In fact, if this was a regular thing the average car would be built more like a tank - armor, run-flat tires (or metal tracks), probably a scuba-type air supply so I can't be gassed out...
But since it isn't a regular thing I don't bother with that. My car wouldn't survive long in a real battle and I'm okay with that as I don't expect to have to drive my car in a real battle.
The warning is out to everything network connected: it is time to invest in the equivalent of armor and bullet proof glass.
Things that are exposed to an adversarial environment are usually engineered with that in mind. Locks are (usually) designed to be hard to pick, for instance.
I think locks are about as weak as software security, relatively speaking. The difference is that if an organized gang of criminals physically broke down the doors to a corporation and stole truckloads of computers, the law enforcement response would be significant. (And we mostly wouldn't be sitting around blaming the corporation for not hiring armed guards.)
In the case of ransomware though, this really is pointing firmly at the operating system. It's not (generally) insecure programs that lead to ransomware succeeding - ransomware works so effectively (and is a force multiplier for malicious actors) specifically because it runs with normal user privileges, and isn't needing to "exploit" anything.
It runs as a user, and just makes do with the access that user has to files.
Before we hold application software to account (and we really do need to), we need to start with the fundamentals - operating systems need to move beyond a "software runs as the current user" model. Otherwise I don't see how we can fix this with assurance/regulation - the root issue seems to be inherent design flaws in modern GUI/desktop operating systems. The tools are there to protect yourself (binary whitelisting, applocker, santa etc.), but they are seen as more inconvenient to use than doing nothing... Hence most companies do nothing, as that's cheaper.
I mean I disagree completely. A lot of ransomware occurs completely incidentally to the programs.
Not to mention a lot of actors getting hit with ransomware aren’t software companies. They’re governments, schools, and hospitals. Never mind taking QA seriously, these institutions don’t even take IT seriously.
I don't think you can just "kill cryptocurrencies." China tried that and most of the mining happens there now.
Plus not all cryptocurrencies waste power the way bitcoin does. And before crypto was big there was malware that asked for money via mailed checks and bank transfers (and there are plenty of scams that just call people and ask for money with no software at all.)
In addition banks make an incredible profit laundering money for drug/human trafficking. I'm sure they could be convinced to put that to use doing other things if crypto wasn't there.
Perhaps, but don't take comfort in that: protect yourself with good backups so you can keep the reputation of it not being worth it to attack linux/BSD users as they won't pay up. So long as there is no money in attacking they won't find our bugs.