Accidentally Stopping a Global Cyber Attack (malwaretech.com)
1981 points by pradeepchhetri on May 13, 2017 | hide | past | favorite | 338 comments



Lessons learnt by ransomware developers - rather than using a single pretty arbitrary test, always rely on a more robust statistical model to detect whether your code is running inside a sandbox.

Lessons learnt by NSA - never over estimate the skill level of your network admins.

Lessons learnt by Microsoft - never under estimate the loyalty of your Chinese Windows XP users, both XP and Win10 have 18% of the Chinese market [1].

Lessons learnt by the Chinese central government - NSA is a partner not a threat, they build tools which can make the coming annual China-US cyber security talk smooth.

[1] http://gs.statcounter.com/os-version-market-share/windows/de...
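The "single pretty arbitrary test" was a one-shot DNS lookup of a hardcoded domain: if the lookup succeeded, the malware assumed it was inside a sandbox (which often answers every DNS query) and halted. A minimal sketch of that logic, in Python for illustration only; the domain below is a placeholder, not the real kill-switch domain:

```python
import socket

# Placeholder name, not the actual hardcoded kill-switch domain.
KILL_SWITCH_DOMAIN = "some-long-hardcoded-name.test"

def looks_like_sandbox(resolve=socket.gethostbyname):
    """Single-test heuristic: analysis sandboxes often resolve every
    DNS query, so a successful lookup of an unregistered domain is
    taken as a sign of being analyzed."""
    try:
        resolve(KILL_SWITCH_DOMAIN)
        return True   # domain resolved -> assume sandbox, halt
    except socket.gaierror:
        return False  # NXDOMAIN -> assume a real network, proceed
```

Registering the domain flipped this check globally at once, which is exactly why a single test is fragile compared with a model built on several independent sandbox signals.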


> Lessons learnt by ransomware developers - rather than using a single pretty arbitrary test, always rely on a more robust statistical model to detect whether your code is running inside a sandbox.

I like to imagine that one of the developers on that team filed a tech debt item to do exactly this, was never able to get their manager to prioritize it, and is now pulling out their hair saying, "I told you so!"


> I like to imagine that one of the developers on that team filed a tech debt item to do exactly this, was never able to get their manager to prioritize it, and is now pulling out their hair saying, "I told you so!"

Malware authors have budgets and schedules too. It's a business, probably more profitable than 90% of the startups in SV


That's not exactly high praise. A tuft of grass is more profitable than 90% of the startups in SV.


Especially if that tuft of grass is on a piece of real estate in the Bay Area.


I wonder if Levi's jeans were more profitable than the average goldbug


Haha... That's a quite negative point of view. I mean even though most startups don't survive and won't make the involved parties more wealthy, there are people who actually use that stuff. From a user perspective it's profitable ;)


> It's a business

No, it's not, and it's pretty damn rude to make that claim in the presence of legitimate businesspeople.


Oh please. How many "legitimate businesspeople" believe it's their moral duty to spend millions on tax-evasion lawyers and schemes? They think nothing of destroying thousands of lives and millions of man-years' worth of hard-gotten savings from the defenseless with legal but immoral schemes. It is perfectly legitimate businesspeople who refer to patients as "units" and literally let them die if no money can be made off them. I could type all day and not get through 10% of the legitimate accusations I can make against "legitimate businesspeople".

I'll take an honest crook any day.


Haha, yep. At least they're honest about screwing you.


They didn't say "reputable business". Even the mob is a business, meaning "something whose purpose is making money" (over-simplified, but I'm sure you'll get the point).


The definition of a business is an entity which provides goods/services to consumers.

The typical moral distinction b/w a business and other entities which make[1] money is a business (presumably) does it within the constraint of their counterparties enjoying the liberty of choice. This becomes a grey area when government enters the picture and removes liberties--which is why there is debate about the legitimate role of government's monopoly on legitimate violence/aggression here.

1 - note, a further distinction could be made between entities which create value, and those which transfer it.


> The definition of a business is an entity which provides goods/services to consumers.

That's not true. A business is a vehicle for making money, that's it. Most businesses do this by providing goods or services, but certainly not all - for example, financial traders that only manage their own funds, like the Renaissance Medallion fund.


Financial traders are still buying and selling goods and services with a counterparty.

Even here, we still can observe that in most cases (except those where government interferes, or perhaps with organized crime) the counterparty is also enjoying a choice in whether or not they want to do the deal. I would argue that in cases where a counterparty has no choice, such a scheme should not be viewed as a legitimate business, as it historically would not be.


that's basically the root argument of libertarianism - that forced transactions are unethical and thus taxation is theft.

I'd still argue that a business does not have to provide a good or service to be considered a business though. The Medallion Fund that I mentioned solely exists to make money for its owners - it does not provide any goods or services.


I like to think that they know anything they implement will eventually get blocked, so they have a big collection of unexploited evasion tricks and just introduce them one by one.


I like to think wankers who stop hospitals go to jail and never work with a PC again. It would be nice if those BTC were hard to cash out too.


Not that it exonerates them whatsoever, but these kinds of attacks (including Wana Cryptor) usually aren't tailored for hospitals or any particular institution. They just harvest as many email addresses as they can (from leaks and purchased lists from spammers, etc.) and try to get as many infections as possible.

Hospitals just happened to be disproportionately affected by this attack because a lot of them have ineffective IT departments/management and never applied the MS17-010 patch.

Of course, these people are still felons and are likely responsible for millions of lost family photos, work and school documents, etc. They just aren't going out of their way to target hospitals.


I understand what you're trying to say, but think about what happens when the military strikes a hospital and calls it "collateral damage." While you're correct that these people probably did not intend to damage hospitals, they could have reasonably foreseen that an indiscriminate attack on computer networks around the globe would have deleterious effects on essential infrastructure.

This means that they knowingly or with reckless negligence unleashed such an attack on the world. If they had been more "scrupulous" criminals, they would have more narrowly tailored their attack on targets they believed deserved to be extorted or where such extortion would not interfere with life critical systems.

I'm not a lawyer, but if they were a nation state, I believe they would have violated the Geneva Convention's prohibition on attacking hospitals.

That said, I think this attack gives more weight to NSA critics that contend that their exploit research should be focused more on defense rather than offensive capabilities. Their carelessness combined with another group wanting to embarrass them is what allowed this indiscriminate attack to be inflicted on civilian infrastructure.


> think about what happens when the military strikes a hospital and calls it "collateral damage."

Old news. The more recent and much more insidious variant is calling the hospitals simply "valid targets".

Or in case of an unexpectedly intense media backlash, "a mistake".[0]

0: https://en.wikipedia.org/wiki/Kunduz_hospital_airstrike


When it comes to military/state objectives that the public poorly understands, the risk scenario is quite different.

Which is why we're currently in a situation where zero-days that the NSA knew would be leaked, with at least a month of warning, were left unpatched. The costs aren't significant enough to motivate them to respond to their failures.

People like to blame the capitalistic incentives for not upgrading from Windows XP but to me the failure to respond to this obvious outcome of the leaking of NSA malware is far more insidious. These sys-admins managing old systems were not prepared for state-financed malware to be released to C-level cyber criminals as a 'threat-actor'.

The poor state of corporate information security has been exposed in the last few days, but even that sorry state is nothing compared to the failed responsibility of the US government to value its citizens over internal objectives. Which is an increasingly common narrative, and unsurprisingly a result of the unencumbered growth of the security state and, by proxy, the executive branch to whom they ultimately report.


I understand your sentiment, but what exactly is the military to do when the enemy specifically uses hospitals to house command bunkers?


Do many people think that bombing the patients is the acceptable answer?


The hospital really needs to move the patients out immediately if the local military starts military operations from within the hospital (which is basically a war crime).

Then, if the commanders force patients to stay as human shields, by threat of violence, that's a further war crime. The responsibility for casualties here lies more with those using patients like this than with anyone else.


This seems like a long-winded way of saying "yes we bomb patients". Do you actually believe this?


There's probably a reason why we don't start killing hostages in a hostage situation.


Typically because the cost of not killing the hostage takers is basically the risk of dead hostages. There's nothing else at stake. You can safely assume that the next plane full of hostages that has terrorists at the controls will be shot down.

I'm not really advocating yes or no to bombing hospitals or schools to kill terrorist leaders hiding within - but your assertion is false. We will kill the hostages. All actual breaches involve a risk of % losses and that's baked into the decision to go in. Just a person somewhere trying to make a decision about the best outcome, for the "greater good".


Obviously using human shields like this is criminal. Do you see the people who bomb hospitals as bearing any responsibility?


I believe many in the military would simply say that Total War doctrine, present since perhaps the US Civil War and definitely by World War I, would hold that bombing patients to get at command-and-control is acceptable during times of war.

Now before everyone buries me, total war is a rather rare military state, and was probably present only a select few times in the 20th century.


About one million Germans were dead or wounded as a result of Allied bombing. Almost certainly entire hospitals were blown up too, in Dresden if nothing else.

A small price to pay for not having Nazis stomp around my backyard (almost literally; there are remnants of a Nazi bunker not half a mile from where I live).


Area bombing was largely ineffective, and German production increased during the heaviest periods. In the view of many, it was a crime committed by a vindictive group of criminals. Area bombing has some well-documented effects on the Nazi war effort (hindering troop advances, tying up a lot of war-production resources, etc.) but there is little evidence of it speeding up the end.

Edit: good book on the subject https://www.google.co.nz/amp/s/amp.theguardian.com/books/201...


True of this attack, but ransomware attacks targeted specifically against hospitals have been booming over the past year or two. Aside from poor IT, hospitals often need immediate access to that data to treat patients, which makes them much more likely to pay. Which also means attackers generally ask for more than 300 dollars - that's the real proof that the NHS was just collateral damage.


Indeed. If you chuck a Molotov into a crowded theater, you don't get to claim that you didn't mean to hit any children.


> never applied the MS17-010 patch.

Until today, there was nothing to apply if your computers were running XP or 2003. Guess which Windows versions are the most popular in UK hospitals? So I think your sentence should read: "Hospitals just happened to be disproportionately affected by this attack because they were forced to trust Microsoft would never put corporate profit before social responsibility."


XP and 2003 have been end-of-life for years. They both were released 14+ years ago. So you can just change what I said to:

"because a lot of them have ineffective IT departments/mangement and never applied the MS17-010 patch or are running ancient operating systems."

edit: And in fact, Microsoft did release a special XP hotfix for this vulnerability yesterday: https://blogs.technet.microsoft.com/msrc/2017/05/12/customer...


> because a lot of them are running ancient operating systems that are the only ones that can interoperate with legacy hardware

FTFY


What news reports said anything about legacy hardware? The BBC and Reuters articles claimed the NHS suffered infection of their patient records servers and their reception computers.


Apparently the impacted XP and 2003 machines were accessing the same disk servers as the patient record systems. Thus an infected CAT scanner controller (or whatever) was able to destroy the patient records.

That doesn't tell a story of missing money or maintenance contracts. It tells of poor or even irresponsible and incompetent deployment procedures.

You shouldn't allow your CAT scanner to write over your patient records at a server. You shouldn't even have them in the same network segment.


And on legacy software. My NHS Trust seems to have escaped unscathed, but it has software that won't run on modern systems which is why XP is still seen in most departments.


What software is that? There is a 32-bit version of Windows 10, which can still run 16-bit Windows/DOS programs, and IE11 still supports ActiveX, Silverlight, Java applets and even (in IE10 compatibility mode) VBScript.

So AFAICT 32-bit W10 can run most anything 32-bit XP can (likewise the 64-bit versions, though neither can run 16-bit programs), and IE11 can run most anything IE8 can (with minor configuration).

Is it software that relies on undocumented APIs? (I can't imagine why hospital software would require exotic methods of poking at the kernel or hardware).


A lot of times it's the hardware interface that's the issue. Old stuff uses serial and parallel ports, motherboard slots, or even abuses PS/2 for other purposes.

Good luck finding a Windows 10-compatible PC that has ISA slots, for example. A lot of old custom hardware hooked right into the ISA bus.


There is definitely software made for one version of Windows that won't run on another, regardless of bit count. Not a lot of it, but it's there.

In my experience, industrial software is often pretty poorly designed, so it wouldn't surprise me if it's more common in a hospital environment.


because .. drivers?


For what? Surely buying new printers is less expensive in the long or even short run than continuing to use an EOL-ed operating system.


We're not talking about printers.

We're talking about medical equipment, such as CAT scanners, dialysis machines, radiation therapy devices, chemical analyzers and the like. Stuff where the computer interface could be an afterthought, added to a machine that was designed years ago with a physical knobs-and-dials type of user interface, and implemented and certified for a particular PC hardware generation. Then this interface PC becomes obsolete in 15 years even if the equipment itself would work for a hundred.


Is there any reason why medical equipment couldn't at least be airgapped, or on a network without an outside connection? Still seems irresponsible.


Imaging tech here. Remote logins from vendor service staff are very helpful when stuff breaks, as they can order parts or suggest fixes without coming in. They also track things like helium levels and water temperatures. Problems in these areas can be very, very expensive. Losing an hour can easily mean a loss of thousands in revenue, let alone a few weeks of scanner time and tens (or maybe even low hundreds) of thousands in helium and parts.

Other reasons for network connectivity include retrieving and sending image sequences and data files (basically the actual scans) which is done all day everyday.

The more alarming part is the retrieving of raw data, which is the unreconstructed scan. This involves attaching a memory stick that is supposedly clean and uploading to that. Generally this stick is stuck into any old researcher PC and the files are offloaded. Vendors don't particularly like this, but getting 10-20 gig files off the scanner via the command line is pretty clunky at the best of times.


Such devices absolutely should be isolated in separate networks (DMZs), and connections to outside world should be removed except for the bare minimum.

That the NHS has not done this is their actual failing and negligence. It doesn't take that much money to move such devices to a quarantined network.


I mean, they are being systematically underfunded by one of the UK parties such that it will fail, so they can then point at it saying "I told you so" and get to adopt a US-like system, so they too can get in on that sweet, sweet cashflow :/


I assume drivers for scanners... but yes, if you underfund a healthcare system (remember, half the cost of the US system for better outcomes) and constantly demand "efficiency savings" (and cancel the long-term Microsoft support contract), managers will cut IT before frontline services.


Places that cut the IT budget first are also places that raise the IT budget last.


XP has been unsupported for over three years and 2003 for nearly two years. Still using them at this point is gross negligence on the part of the hospitals.


>Still using them at this point is gross negligence

I'd guess that most hospitals don't do in-house development for the software they use. They paid someone else for it, probably at "enterprise" rates; it's hard to blame them for not having the budget or desire to replace working systems with new shiny (complete with new bugs) every X years.


Sigh, we need to fix the software economy. Imagine if the software being used by hospitals and other public institutions was open source as a rule. Then maybe it could actually be reused and collaborated on instead of rotting away with the need of replacing it all when it's just not usable any more.


If only some guy with a long beard had told us for the last 30 years what was going to happen! :)


This thread seems to be a series of "well, they had to make this error because previously they had made this other error"... presumably this can go on ad nauseam, but isn't the eventual resolution going to be "spend money to install current hardware and software"? They could have done that at any point in the past. Complicated etiologies for broken systems miss the forest for the trees.


>Complicated etiologies for broken systems...

...are how the state-of-the-art is advanced in other industries? Imagine if the FAA's response to an air disaster was, "Never mind root causes, you just should've bought a newer plane".


If they were flying airplanes from the '50s no longer supported by their manufacturers, I'd say that'd be a pretty good answer.



Back in the late '90s the government of the time split the NHS into Trusts and outsourced the IT to the likes of ICL (not sure who does it now). With that, the last time any major overhaul was done on the hardware and software was Y2K, and as with most outsourced IT contracts it focused on support from a reactive basis, not a proactive one.

With that the GSN (Government Secure Network) is still a good ring-fence (that's outsourced as well) but once something gets inside, boom.

Now with the Trusts - they do have a local IT bod and, in the cases I dealt with, somebody who knew how a PC works and was enthusiastic, which is nice but also dangerous; I had to deal with a few issues that were, as I call them, "enthusiastically driven". As such you have all these Trusts operating at some level as independents, with a variety of results.

One case was an `IT manager` at a Trust who was posting on alt.ph.uk (a UK hacking usenet group) and offering up inside information about how they operated. That did not go anywhere, as the alt.ph.uk lot are a moral, ethical lot and health services are taboo, so he was rightly shot down, and equally the chap was soon in talks with the security services.

But with so many legacy systems, and an event-driven support mentality (again, Y2K being an exception), such events can and will happen. Sadly many Trusts lack provision to handle such issues and, as with many IT areas, are event-driven instead of proactive. Indeed ITIL, the golden management love-in solution for support management, is event-driven, and many an implementation ticks all the ITIL boxes of compliance yet still lacks proactive support. This, alas, mostly gets compared to firefighters pouring water on buildings so they won't catch fire, and is sadly pretty darn systemic in many an organization.

With that, the best anybody in IT can do is to flag up an issue in a documented way to cover their ass when the outlined event does transpire, to prevent unfair scapegoating. A sad situation to which many if not all IT support staff in all capacities can attest.

Ironically, DOS-based legacy systems with no networking and exotic ISA cards in some equally over-priced hardware still work, and the need to replace them does become moot; alas, that example gets projected onto other systems that are networked. But the whole health industry has many legacy setups that are expensive to replace, more so if they work, and the motivation to limit potential damage from future events, above and beyond backups, becomes a management issue that lacks a voice for budgets.


No argument about the hospitals.

But making BTC hard to cash out is a hard problem. Although particular addresses can be blacklisted, mixing services are now mainstream. Some return fresh BTC from miners. Even so, it's problematic to mix humongous quantities. For example, the Sheep Marketplace owner/thief overwhelmed Bitcoin Fog with 10^5 BTC. The trail went dead after that, but he got busted while cashing out. His girlfriend was receiving huge international wire transfers, and could not explain where the money was coming from.


It probably originated in Russia or one of the other cybercrime-heavy ex-Soviet states (Ukraine, Belarus, etc), so outside the jurisdiction of UK authorities. Although this time it appears to have done most of its damage in Russia, so the perpetrators might not benefit from the usual blind eye.


I've seen ransomware that explicitly tried to avoid hospitals, schools, government, etc., so there's that. I always assumed it was out of self interest though.


Yes, screwing some random punter over is quite different than triggering an attack that meets the criteria for a CNI attack.

All this means that instead of PC Plod being unable to extradite the perps from eastern Europe, you get the serious players involved.


Nah, let's not jeopardize the fungibility of bitcoins please. Besides, with anon-coins like zcash, what you're proposing would not be possible.


There is nothing stopping them from relaunching the attack with a modified version. The initial wave will use spam, then the worm-like part of the ransomware will penetrate internal networks.


Sure, but this generated a lot of press, so it made the vuln more widely known; Microsoft released a patch, and systems are likely less vulnerable to the same attack.

Similar attacks using other vulns or tooling are inevitable, but those are probably much less impactful, and the registration probably mitigated a lot of damage.



You make it sound like some professional outfit. Is it really like that? I would've thought that it's a bunch of teens.


I doubt it. This is organised crime motivated by money (which is usually something adults do); the very fact that the program tries to detect whether it's being sandboxed indicates a certain level of professionalism.


You think a bunch of teens orchestrated a global attack on this scale? Surely this is satire and you dropped the /s right?


>...orchestrated a global attack on this scale?

Was it "orchestrated", or did the worm just spread randomly and opportunistically?


My bet is a bunch of teens. Not really orchestrated as much as exploited a vulnerability amplified by p2p connection, which led to worldwide scale.

By the look and UX of the virus (yes there's a UX there too), they do seem to have a better grasp than most script kiddies, who usually can barely extend whatever script they've got.


Hard to say. It could be an organized crime gang, terrorists, a state actor, someone not making enough money legitimately in a 3rd-world country, or a bored middle-aged techie or teenager wanting to "get away" with something. There may or may not be levels of management, contractor(s) or multiple participants. (Shady "businesses" most definitely have subcontractors. Heck, I know of someone who got their degrees paid for by a shady illegal gambling outfit.)


Why wouldn't they do it initially? It would take like 5 minutes to make it use a random string instead of a hardcoded one.
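A hypothetical sketch of the parent's point, in Python for illustration: swapping the hardcoded string for a name generated per run really is only a few lines (the function name, length, and `.test` TLD here are illustrative), and it means no single domain registration can act as a kill switch.

```python
import random
import string

def per_run_domain(length=16, tld=".test"):
    """Generate a fresh pseudo-random domain name for this run,
    instead of relying on a single hardcoded one."""
    name = "".join(random.choices(string.ascii_lowercase, k=length))
    return name + tld
```

Real domain-generation schemes typically seed the generator deterministically (e.g. from the date) so operators can predict the names, but the basic substitution is as trivial as the comment above suggests.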


Will he/she be called into an HR meeting to make a Performance Improvement Plan? ;)


Probably get a pair of concrete overshoes


Potato, potahto.


I'm that developer but instead of working on malware, I just want to make sure that iPhones can actually use the website.


Malware is more interesting.

The site itself doesn't seem to have enough ads, or well-placed enough ads, to be "income as a goal". So I'm guessing it's a "proof I can do stuff" or "trophy room" blog, which doesn't care (HR and recruiting will happily use worse websites to judge candidate value, and trophy rooms get put in a room no one else wants, or a cabin so far from everyone it doesn't have electricity).


I'm kind of confused as to what the role of agencies like the NSA, GCHQ, etc. is in situations like this. Are they supposed to put an end to the attack? If so, how is it that a single researcher beat them to it (presumably with a budget many orders of magnitude less)?

Or maybe this story isn't really accurate and there was no accident...

EDIT:

And if it isn't the role of those agencies to defend the public health IT infrastructure, which agencies are responsible, if any?


The NSA knew about this vulnerability and decided to use it offensively, rather than notifying Microsoft about it.

Then, due to lax controls, the exploit got leaked and used by the ransomware developers.

Their culpability goes back a lot further than not noticing a kill switch.


If previously aware, shouldn't they have been even more prepared to stop the attack then?


The attackers used the NSA's exploit as a means to distribute their payload. It was the payload that was inadvertently disabled.


I honestly don't know the details, I am just wondering what government agencies have the responsibility of defending against attacks like this.

Even in this case though, you would think the NSA, etc have to do less analysis of the payload since they got to inspect and play with it for much longer than anyone else. Therefore they could waste less time on that and more quickly focus on the rest of the issue.


Historically, I think the answer is just that the NSA doesn't even try.

There are some three-letter agencies that do work on fighting malware, often by partnering with relevant companies like Microsoft (who was a major anti-malware player here too). I know the FBI does so publicly, and some government groups invite large companies to low-secrecy briefings on security.

But I've never heard a mention of the NSA 'fighting' malware that isn't obviously governmental. Even if they knew about the exploit, used the exploit instead of disclosing it, and are well-placed to fight it, I think that's just filed under 'not my department'.


NOBUS ("nobody but us") is basically a doctrine of assuming that no one else will find and use these exploits, so they can be maintained as strictly offensive.


Do you want 3 letter agencies having more ways to get access to private networks?

Right now, looking at how the election scandals went, they show up at prosecution time and use access that they are given willingly.


Sorry, I am unclear on how this is related. Are you saying it is not their responsibility to stop attacks in progress?


The Danish Center for Cyber Security issued a threat assessment, but not too many were hit in Denmark because Friday was a public holiday.


Lessons learned in school: overestimate and underestimate are both just one word, no spaces needed.


I don't get why you are being downvoted (also one word). I thought your comment was quite funny


I guess some people interpreted the joke as confrontational and admittedly it does not contribute to the conversation. Nonetheless, I just could not resist.


Thing with HN is you get a lot of English-as-a-second-language posters... who didn't actually learn those words in school.


Huh, I would suspect that in modern times English is widely taught at schools all over the world, unless I am horribly mistaken. The quality of teaching is quite another issue and it's usually lacking - to which I can testify from my own experiences with post-Soviet secondary education system. Regardless, I am not starting a grammar crusade here, more like making a small remark.


I don't think the malware developers wanted this kind of heat directed towards them.

If anything they will learn to automatically disable any nodes that are clearly operating out of a public office building.


I wonder if something like that might be the case - that this malware escaped early or spread much farther and faster than the authors intended. That would explain the sloppy attempt at an analysis countermeasure that made it trivially easy to shut down the whole thing.


There is a pretty long history of worms proving way more aggressive (either in damage or spread) than intended. I wouldn't be shocked if a slow-and-profitable infection went wrong and set off a crisis response.


MS should release one last security patch for XP, even for pirate copies: a firewall that kills every protocol except HTTP/HTTPS and email. Close all the ports.


I'm pretty sure remotely disabling computers is covered under the computer misuse act in the UK and every similar law in all countries that have them.


No DNS, huh? NTP? Outgoing SSH? HTTP(S) to high-number ports? Outgoing RDP?

...see the problem?


Oh, you mean like GWX was for Win7+? "Hello, we tried to upgrade your computer to WinX, your computer is now an expensive brick." Yeah, that's a great idea.


Tunneling still works


Pirates don't install security patches.


Neither does the NHS


There was no patch for XP until today, so how could they?


XP has been out of support for over three years. Continuing to use it is gross negligence. It was only a matter of time until something like this happened.


2 years for NHS, but your point stands :) http://www.v3.co.uk/v3-uk/news/2406304/windows-xp-government...


Yes. Gross negligence from gov for under funding a core government service.


Quite intentionally, too. One of their parties is looking jealously at the cashflow that a US-style system might provide them. At the cost of everyone else, sure, but what's that to the allure of profit?


That's not really true; Windows Update has been working fine for pirated copies of Windows for ages, since creating spam-bots hurts everyone and doesn't really sell more real copies of Windows.


> Re your Statscounter link http://gs.statcounter.com/os-version-market-share/windows/de...

Win 7 has been rising again for months

Win10 and WinXP are shrinking


It is normal to see such data move +/-1% in a few months' time. It is far more shocking to see that 18% of Windows users in China are still running XP. Must make it very easy for the NSA to do their job.


The elephant in the room is that XP was the last version of Windows that was relatively trivial to pirate. Activation procedures got stricter from 2008 onwards.


What makes you think that? Win 7 is simple to activate using Daz Loader etc. My theory is that Win XP is the last version of Windows that will run comfortably on true toaster (<1 GB RAM) computers.


See, I would have said that with Windows 2000. It came out only two years earlier and was pretty similar. I definitely am still holding on to my install cds.


Then you really don't want to see South America... a few years ago every PC I saw at a business ran XP.


When I visited a firm we had contracted out with in India, they also had machines running XP right next to machines running Windows 10 for our project.


"Lessons learnt by ransomware developers..."

If you are suggesting that developers, regardless whether they develop mobile apps or ransomware, will start relying less on DNS, I respectfully disagree.

Someone else in this thread commented how reliance on DNS makes systems "fragile". With that I strongly agree.

The same old assumptions will continue to be made, such as the one that DNS, specifically ICANN DNS, is always going to be used.

How to break unwanted software? Do not follow the assumptions.

For example, to break a very large quantity of shellcode, change the name or location of the shell to something other than "/bin/sh".[1]

Will shellcoders switch to a "robust statistical model" instead of hard coding "/bin/sh"?

Someone once said that programmers are lazy. Was he joking?

1. Yes, I know it may also break wanted third party software. When I first edited init.c, renamed and moved sh I was seeking to learn about dependencies. I expected things to break. That was the point: an experiment. I wanted to see what would break and what would not.


If you change the name or location of the shell to something other than "/bin/sh", plenty of legitimate software would break too.

Even though the POSIX standard says:

> Applications should note that the standard PATH to the shell cannot be assumed to be either /bin/sh or /usr/bin/sh, and should be determined by interrogation of the PATH returned by `getconf PATH`, ensuring that the returned pathname is an absolute pathname and not a shell built-in.

> For example, to determine the location of the standard sh utility:

    command -v sh


I don't understand your Chinese government point. The post never said anything about China. And the only thing about the NSA was that it was an NSA-developed exploit. That doesn't make the NSA look friendly.


If you look at cybersecpolitics, a blog written by a former NSA employee, he specifically says the "we hacked into a server box" story for how these tools got leaked is implausible.


Lesson more people should learn: dump Windows.


> Lessons learnt by the Chinese central government - NSA is a partner not a threat, they build tools which can make the coming annual China-US cyber security talk smooth.

Wow, +1 Insightful!


I guess this came off (unintentionally) as sarcastic. I had always assumed the risk of the NSA leak was from non-state actors (as this case) and it hasn't occurred to me that governments will find it just as handy -- in retrospect of course they will.


Sadly, the malware author(s) have updated their code and are now spreading a variant without the "kill-switch" domain check: https://motherboard.vice.com/en_us/article/round-two-wannacr...

However, MalwareTech's sinkhole intervention has bought enough time for patches to be pushed out, so at this point it is absolutely imperative that everyone apply these patches as soon as possible.


Kudos to MalwareTech, but they could have delayed this publication by a month or so.


The hackers would have easily figured out the kill-switch site was up when trying to debug in their own environment.


It's highly doubtful the malware authors hadn't already figured out what went wrong by the time this was published.


> the employee came back with the news that the registration of the domain had triggered the ransomware meaning we’d encrypted everyone’s files...

Even though this fortunately turned out to be false, what if it had been true? Would the security researcher be held in any way accountable for activating the ransomware? If I were the author, I might be a bit more careful in the future before changing factors in the global environment[1] that have the potential to adversely affect the malware's behavior, but of course I'm not a security researcher, so I really don't know.

[1] I suppose a domain could probably be made to appear unregistered after being registered - depending on the actual check performed - but there are other binary signals (e.g., the existence of a certain address or value in the bitcoin blockchain) that might not be so easy to reverse.


I would think not. For something bad to happen from registering the domain, there would have to be some kind of weird booby-trap in the malware. What's the motivation for a malware author to do that? If they can do something worse, the incentive is to just do it, rather than wait for a security researcher to do something first that they may or may not ever do. It's not impossible, but it's a little ridiculous and wildly unprecedented in the field of malware analysis.

When there's a global infection spreading wildly and crippling essential organizations, you want everyone to act fast, not spend weeks making sure everything is perfect. If you see the malware connecting out to an unregistered domain, you just register it now. Whoever is first gets it, and the attacker could realize their mistake at any time. Even without knowing what this malware does with the connection, odds are 99.9% that the situation is better with the domain controlled by a security researcher than by a malware author. Punishing researchers if something done in good faith turned out badly would incentivize them to overanalyze everything and delay taking any potential beneficial action until it's too late.


Under that same assumption of maximum damage, what was the motivation for the malware author to put in a kill switch?


The theory is that the malware wouldn't execute in sandboxes for malware analysis, since the analysis environment will accept all connections from the sandbox.

So, if connection = successful, then we're being analyzed and don't execute.

If connection = unsuccessful, then we're on a real workstation, execute!

Then the scheme fell apart when someone registered that domain, so all connections = successful and malware will not execute (but machine still infected).
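The branching described above can be sketched roughly like this (a hedged illustration, not WannaCry's actual code; the `probe` callable stands in for the HTTP GET the malware performs):

```python
import urllib.request

def should_encrypt(killswitch_url, probe=urllib.request.urlopen):
    """Sketch of the kill-switch heuristic: a successful connection to
    the (supposedly unregistered) domain means 'sandbox', so do nothing;
    a failed connection means 'real victim', so proceed to ransom."""
    try:
        probe(killswitch_url)
    except Exception:
        return True   # connection failed: assume real machine, detonate
    return False      # connection succeeded: assume sandbox, stay quiet
```

Once the domain was registered and sinkholed, the probe succeeded everywhere, so this check returned False on real victims too.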


Probably to prevent it from fucking their own computer network up. Just change your local resolver to be authoritative for the domain(s) in question, and bob's your uncle.

But next they'll likely use more domains, and more expensive ones, so that random security researchers can't just expense the registration on the corporate credit card. I know .ng costs 50k, but .np might be pretty comical to deploy if you're not really worried about a global off switch.


If that's the motivation, just use a non-existent TLD, or, even better, .local

If the motivation is to have a killswitch, you don't want something expensive, because the attackers would then have to pay for it if they want to activate it for whatever reason.


.ng domains cost a few hundred bucks per year, the times when they cost >10k are long gone


The motivation for the domain check is to see if you are in a sandbox environment, and if you are, don't do anything. If you read the article, he talks about this.


The end of the article suggests it's an attempt to detect sandboxed environments that fake all domains.


I don't think that's a thing. And certainly not a common enough thing to use as a sandbox evasion technique. The sandbox is either connected to the internet or to a proxy that replays past connections.


Seems like a reasonable thing to do on a sandbox specifically set up for the purpose of analyzing malware. It's not helpful to just block all connections, because you want to see what it's trying to connect to and what it's trying to send there. You don't know what it's going to try to connect to, so you can't redirect only specific things. Letting it connect to the actual internet is obviously just crazy. Redirecting everything to an internal honeypot sounds pretty reasonable.


Maybe I'm wrong, but I would guess that sandbox evasion techniques aren't intended to stop one-off reverse engineering, but rather to get around the bulk programmatic analysis that Google or FireEye does. Those require an internet connection or a replaying proxy because a lot of modern malware comes as a minimal package that downloads its payload from the web.


I would hope sandbox environments aren't connected to the internet, spreading malware to others while you're doing your research.

A responsible researcher would have an environment fully isolated from both the corp net and the internet, then slowly begin to allow connections out as they confirm that's not how it's spreading...


Many sandbox environments resolve whatever they're queried for with the same IP address. This is addressed in the article.


How many ISPs spoof failed DNS queries so they can feed you ads? That alone would make this useless as an evasion technique.


It means the malware would not execute for users on those ISPs because they false-positive for being a sandbox; that doesn't make it useless unless every ISP is doing that.


OTOH it means that laptops won't activate until they are on a work network, for an SMB worm that's probably not a bad strategy.


Clearly not that many considering how well it did manage to spread.


Somebody else hijacking the c&c servers?


Security researchers absolutely should have liability for poking malware in the wild on their own initiative.

If you're a bomb enthusiast or researcher, you'd absolutely be liable if you tried to defuse a bomb without being requested to by the police. This is no different because of the potential for massive collateral damage. You want to see what happens when the domain is registered? Resolve the DNS on your own network.

It's only when acting under government direction that you should be immunized from liability.


>>It's only when acting under government direction that you should be immunized from liability.

Why? I 10000% disagree that the government should be immunized from liability in the first place. This entire mess is a direct result of the NSA not being held accountable and hoarding vulnerabilities.

Your reliance on government == good, everyone else == bad is alarming to me.

That said, I do not believe either the government or a researcher should be held liable under the circumstances proposed in the hypothetical we are discussing.

I do believe the NSA should be required by law and policy to alert any and all software vendors to vulnerabilities they discover.


Whew, you almost got me. :-)

Got all riled up and then saw the username.

Enjoy!


I was being serious.

I am curious why you disagree with me, though.


Folks disagree with you for the same reason they pass good Samaritan laws (https://en.wikipedia.org/wiki/Good_Samaritan_law): intent.

A bit more econo-mathematically stated: we expect more good than bad to come of it if we indemnify researchers from whatever liability they may have had by accidentally triggering the mechanism. Perhaps because there are more smart people outside the government than inside it, just as there are more smart people outside any corporation than inside it, or outside anywhere really, because human ingenuity is widely distributed.


My exact point was that this was closer to defusing a bomb you happen across than a good Samaritan situation, because of the collateral-damage potential, which we normally assess the tradeoffs of through collective means such as government, and because it's a non-emergency situation.

Good Samaritan laws only apply to emergency care rendered to people in need of it (and only if they don't refuse). You wouldn't even be covered if you grabbed someone's broken arm and tried to set it without permission. (That would actually be assault, for which you'd be liable.) You definitely wouldn't be covered if you unilaterally released a protein into the atmosphere because you suspected it would stop the flu. You'd probably get charged with using a WMD if it backfired and people got sick.

I think people who are arguing that it's a good Samaritan situation are simply being selfish, because they don't want to have to consider how their actions might impact others or act with restraint and professionalism.

There are plenty of ways to proceed with getting help from security consultants even if there is liability -- eg, confining their actions to a single network and being indemnified by the owner.

Globally poking a widespread infection without a care in what the infected prefer is emphatically not what Good Samaritan laws are meant to protect and should carry global liability.

Ed: To address the question under me, since I'm "posting too fast" --

My problem is that many of these FBI programs exist in a legal limbo -- the researchers are working with the government, but I'm not sure they have the kinds of immunization agreements that government contractors usually get (eg, that you have to sue the government not the contractor since the actions are taken under government authority because you're working for the government) nor that they have to observe the restrictions placed on government actions. Too much of cyber security exists in these (intentionally) gray areas.

I dislike this Wild West state of affairs and want the matter of liability and restrictions/accountability to be directly addressed, even if it's just making de jure the de facto situation. I think cybersecurity, as currently practiced, is probably ripe for some nasty lawsuits if a researcher screws up a situation like this.

Does anyone believe that if the registering the DNS address had bricked the NHS systems, the NCSC would've taken the fall?


Setting the good Samaritanness aside: in the NHS situation, is it really plausible that there was more 'bomb' to go off to justify restraint? And given that the researcher had explicit support from the FBI and NCSC, do you think there is some level of professionalism not met?


> Does anyone believe that if the registering the DNS address had bricked the NHS systems, the NCSC would've taken the fall?

Since you seem to take the possibility seriously, what benefit would the authors of ransomware derive from that? Some sort of game theoretic red-wire to slow down forensics?


Anti-tampering mechanisms on C2 systems, using the data/computer as a hostage. You're not trying to slow down analysis, you're creating a consequence for tampering with DNS C2 records.

I think malware authors derive a game theoretic advantage by having tampering with DNS C2 systems result in data loss, because a non-trivial portion of people will prefer to pay and retrieve their data. Some of the frustration from that will be pointed at the people who actually tripped the switch.

Further, because of the current legal status, if a security researcher issues the command to the DNS C2 system that deletes the data (by messing with the DNS records), not the malware authors, they're quite possibly liable for the data loss, going to face hacking charges, etc. (Hacking charges because they knowingly issued commands to malware that gave them unauthorized access to computer systems.)

I don't believe that security researchers should be the ones making that call -- I think the only sane way to make it is through collective mechanisms like government.


>I don't believe that security researchers should be the ones making that call

It is said that there is no problem in computer science which cannot be solved by one more level of indirection.

http://wiki.c2.com/?OneMoreLevelOfIndirection

Just make someone else responsible for it. Problem solved.


The same government that can't be bothered to update their systems?


Yes.

Read my other comments for a more nuanced view discussing how it would play out in the real world, with changes relegated to particular networks and trade groups making deals for systems under their control.

But the only groups that should be able to authorize decisions about other people's things (free of liability or possible prosecution) are groups under collective control, ie governments.


Really?

Imagine a person in a locked building with a ticking bomb; authorities are nowhere in sight. People, including loved ones, are vulnerable.

Should that person just wait and do nothing for fear of a law that you, sir, SomeStupidPoint, created, which holds that person accountable if something goes wrong?

It's her life, and your law doesn't mean a thing; it's arguably downright unethical and tormenting. She has every right to try to defuse or shield the bomb.

Should the IT departments of Fortune 500 companies not try to respond and save their assets, or should they just wait for the authorities?

You think a ticking time bomb is a crime scene, I think of it as a self-defense situation that hasn't played out completely yet. She has full right to self-defense, successful or not.

Look, ma, words on paper, that must stop bad things from happening right? Ma? Maaaa?


For the first part I just thought that you might not understand a few things about how the internet works, but when you then wrote

> It's only when acting under government direction that you should be immunized from liability.

I also thought that you meant it as a satire.

Anticipating your next question, can I ask what kind of internet police he should have asked?


If you think that you're causing a net positive, then you should be able to monetize some of that value to cover the damages or otherwise negotiate to be indemnified by stakeholders.

I don't think globally releasing changes meant to tweak malware is a good idea, because of jurisdictional issues and liability. Down thread, I suggest confining changes to a network and indemnification from network owners (who may in turn be indemnified by network subscribers).

In practice, this would look like trapping malware DNS queries to security researcher controlled servers at the Comcast network DNS level, rather than registering a global name for it, with the researchers being indemnified by Comcast (who likely is indemnified as part of your subscription agreement).

This has well tread liability law behind it and moves us out of the situation where every random group feels free to potentially cause harm to hundreds of thousands or millions of computers across the globe because they're "good guys" and shoot from the hip.

I expect that network operators would quickly establish industry groups and a certification process for getting researchers to help protect their networks, and we would quickly be back to mostly the same situation, sans questionable legality. (They would likely be liable for occasional collateral damage and pay that out of industry membership dues to the group.)


So,

A good Samaritan must know all workings of the malware, including trip wires, actual triggers and threat scales, and must know of any threats that might get released (not yet sandboxed) as a consequence of his investigations.

A good Samaritan must seek and get all necessary approvals using a procedure and checklist. If such a procedure, checklist and inventory of authorities to seek approval from do not exist, the good Samaritan should immediately embark upon the task of defining one, fully realizing that the mere creation of such a listing would require a constitutional amendment.

A good Samaritan should be able to look away from the current mayhem (patient support systems, ambulances, public infrastructure collapsing) while the above things are settled first.

On an existing command chain, sure, up to a point. The US doesn't have any control if someone at Kaspersky had done it, or some kid in Asia trying to understand how it works.

What you are proposing is going to make people on the good side disengage at the thought of prosecution. You would be shooting your soldiers for not successfully defending you. Worse, you would shoot the enemy of your enemy for triggering the common threat.

The control that you desire goes against nature. Unless you put every single human being in matrix, you won't have that control.


> Would the security researcher be held in any way accountable for activating the ransomware?

I think that would be the equivalent of an arsonist also leaving a water activated chemical at the scene of the fire, and then blaming the firemen for using water to put out the fire when it made the situation worse.


The ransomware was already active for hours by that point. The answer the employee provided did not make any sense. Not now, not the first time I read the blog.


That's what bugs me about the blog post but it may only be an issue with how it's written or my understanding.

From the Talos Intelligence blog:

>The above subroutine attempts an HTTP GET to this domain, and if it fails, continues to carry out the infection. However if it succeeds, the subroutine exits.

It's not clear if the subroutine being shown is the main entry point in which case return 0 exits (which is good for us), or if it's part of a larger framework that would be doing stuff later on (which is potentially bad for everyone because it could decide to do other things if it finds that domain sinkholed?)

The blog author checked on whether or not the domain name changes, but didn't specify any details about anything going on higher in the stack:

>All this code is doing is attempting to connect to the domain we registered and if the connection is not successful it ransoms the system, if it is successful the malware exits (this was not clear to me at first from the screenshot as I lacked the context of what the parent function may be doing with the results).

So my question is how much knowledge did they have of the rest of the code when registering the domain? Would the analysis environment have provided more information if the malware had continued to run after realizing the domain was sinkholed?


It's an interesting thought experiment. The closest analogy I can think of is pulling the wrong wire on a bomb.


Let's be honest: if he didn't, someone else, malicious or otherwise, would have. This is the internet.

Better in the hands of someone like this.


Ironically, lying DNS resolvers redirecting nonexistent domains to ads were also helpful in order to mitigate the attack.


DNS being one of the most fragile aspects of the net, a lot of issues can be solved by a local DNS resolver that does a lot of caching and blocking.


I recommend pi-hole if you have a Raspberry Pi laying around: https://github.com/pi-hole/pi-hole


Sure, and something like dnscrypt-proxy can do that, but in what is being discussed here, blocking the domain would do the opposite of what you are trying to achieve.

The ransomware quits early if the domain resolves to an IP and a webserver is listening on that IP.
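For example, with a dnsmasq-style local resolver (the domain below is a placeholder, not the real kill-switch domain), the two options behave very differently:

```
# "Defuse": resolve the domain to a host where a local webserver answers,
# so the malware's HTTP probe succeeds and it exits early.
address=/killswitch-placeholder.example/192.168.1.10

# "Backfire": blackholing the domain makes the probe fail, which is
# exactly the signal this ransomware treats as "real victim, encrypt".
# address=/killswitch-placeholder.example/0.0.0.0
```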


Remember that time when VeriSign did this (wildcarding) on a global basis for .com and .net in 2003, briefly causing outrage?

"As of a little while ago (it is around 7:45 PM US Eastern on Mon 15 Sep 2003 as I write this), VeriSign added a wildcard A record to the .COM and .NET TLD DNS zones. The IP address returned is 64.94.110.11, which reverses to sitefinder.verisign.com. What that means in plain English is that most mis-typed domain names that would formerly have resulted in a helpful error message now results in a VeriSign advertising opportunity. For example, if my domain name was 'somecompany.com,' and somebody typed 'soemcompany.com' by mistake, they would get VeriSign's advertising."

https://m.slashdot.org/story/38665


Even today, T-Mobile still does that – any domains that can not be resolved are redirected to their ads.


Verizon does it too, although they just show search results and don't show ads. It's possible to opt out, but it hasn't bothered me enough to do that yet.


But VeriSign is not an ISP -- they were the ones running the authoritative .com and .net name servers and doing this to literally everyone.


Adding this domain to various adblocker list might also have interesting effects.


What really amazes me about this attack is that the main attack vector seems to be exploiting a SMB vulnerability. Reasonable enough of a way to spread within an organization, but it's amazing that so many organizations seem to have this port and service open to the world for this worm to exploit.

I'm not the most diligent follower of security news, but I'm pretty sure that SMB network sharing is riddled with security vulnerabilities, latency issues, etc, and is generally wildly unsuitable for being left wide open to the entire internet. How could any institution with a competent IT department not have had this service firewalled off from the net for years?


The attack was usually introduced via a phishing attack and then spread through SMB vulnerabilities. It generally wasn't that SMB endpoints were open to the internet.


The irony of this is SMB servers are often set up with insecure configs exactly because SMB is so rarely used over public networks. Too many people assume public networks will be their Maginot Line against SMB attacks.


Certainly possible, but if it was spread with an ordinary phishing attack, I gotta wonder why all of these organizations got hit so hard in such a short timeframe. We've had lots and lots of conventionally spread ransomware attacks go out, but I don't know of any that have had this kind of effect in this timeframe.


Could it be organisations sharing data by SMB infecting each other?


It only spread through SMB shares where available. Other vectors include downloaded executables (email, web, etc).


After the recent so-called 'cyber attacks' of WannaCry, I was careful to update every Windows machine I have and install things like EMET and MalwareBytes on them. I switched to Linux years ago because I'd heard nothing but bad news concerning Windows, but one thing struck me about the WannaCry infections: I heard the attackers used an exploit pulled from the recent ShadowBrokers leak, something related to 'SMB'. A few questions (explain it to me like I'm five, please):

1.) What is SMB? And is it easy to remove from systems by simply uninstalling it (like I have done[0])?

2.) Does WannaCry just land on a machine through a simple point-and-click exploit? Do they just enter a vulnerable IP address and they can plant the exploit on the machine and run it?

3.) I am aware that it also gets onto machines by people randomly clicking on shady e-mail attachments, but I am very curious about how it simply lands on computers with very little or no user stupidity at all?

[0] I uninstalled SMB by going to > Add or remove programs > Remove windows features


First of all, SMB is a network protocol for sharing files. It's sometimes known by the name samba, which is an implementation of the protocol. If you have a remote drive mounted for sharing documents with your coworkers there's a good chance you're using SMB.

This exploit worked in two stages. First, there was a massive email campaign. Then, when employees clicked on the attachment, the malware would worm its way onto other computers on the local network using an exploit in the SMB file-sharing stack (which originally came from leaked NSA tooling). Then it would encrypt the user's files and demand the ransom.


Any network service being shipped right now by a serious vendor ought to be secure enough for the open Internet; frankly most large organizations' internal networks aren't any safer than the Internet. Why should SMB be any less safe than SSH or HTTP?


The whole attack is a wonderful show of incompetence within the IT departments of the compromised companies.


> After about 5 minutes the employee came back with the news that the registration of the domain had triggered the ransomware meaning we’d encrypted everyone’s files (don’t worry, this was later proven to not be the case), but it still caused quite a bit of panic. I contacted Kafeine about this and he linked me to the following freshly posted tweet made by ProofPoint researcher Darien Huss, who stated the opposite (that our registration of the domain had actually stopped the ransomware and prevent the spread).

That's quite a high-abstraction-level programming thing to do, using a domain name's registration state as a boolean. Is that a regular thing?


I believe the malware was designed to do that as part of a way to test and see if it's in a sandboxed environment if someone was trying to analyze it. If I understood this correctly, checking the domain was a way to do that (although I might be completely wrong).


Can somebody explain how this would work? AFAIK it does not even check for obvious things such as VMware processes running in the background.


It was explained quite clearly in the article. Sandboxed environments will generally have a catch-all that answers any DNS request with the address of a sinkhole server. To prevent analysis, the malware does a lookup of a known unregistered domain, and if it gets back an IP address (which should not happen outside a sandbox for an unregistered domain) it quits, because it assumes it's sandboxed and being analyzed.


I still can't understand how the malware authors could be so smart (or, if not smart, at least competent enough to build ransomware from scratch, make it wormable with ETERNALBLUE, and launch a massive and effective spam campaign) and yet so stupid.

They could've achieved the same sandbox detection effect by just registering the domain and pointing it at 1.1.1.1 or whatever. The non-sandboxed connections would still fail, and no one else could take the domain.


I don't think the creator would be too keen to create anything unnecessary that could be linked back to them through a paper trail.


I find it interesting that they didn't randomize a couple of long strings and try to resolve those instead, like the article mentions has been done in the past.


> They could've achieved the same sandbox detection effect by just registering the domain

That would leave a paper trail, potentially revealing who's behind the malware.


You wouldn't do that with a .com domain, rather with a .local domain or such.


I would guess that a simple DNS/HTTP works just as well. In contrast checking for vmware processes or similar would probably result in a code signature that can be picked up much more easily by malware scanners.


Sometimes sandboxes resolve all DNS, so if you point to a nonsensical domain and it resolves, then you may be in a sandbox.


More likely used during testing by the authors themselves?


I'm not really sure what you're trying to say, but reading the rest of the article will answer your question.


[flagged]


All I understood was that you seemed unsure about the purpose of the domain check, so I pointed you in the right direction. Sorry if that came off as condescending.


Well, since you didn't mean it then I am the one who is sorry. I have obviously misjudged you, overreacted and misunderstood your phrasing.

I apologize and will try to do better next time.


Did not register as condescension to me


It's a fairly regular thing. If you are a Google Chrome user, then your WWW browser has been doing much the same check, for fairly similar reasons (detecting whether it is running inside an environment where DNS service is being generally altered to redirect clients to a special server), for the past half decade.

* https://mikewest.org/2012/02/chrome-connects-to-three-random...
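A hedged sketch of that style of probe (illustrative only, not Chrome's actual implementation; the resolver is injectable, so the specific probe names are just examples):

```python
import random
import socket
import string

def dns_is_intercepted(resolve=socket.gethostbyname, tries=3):
    """Look up a few random, almost-certainly-unregistered hostnames.
    An honest resolver fails on all of them; if any resolves, some
    middlebox is answering for nonexistent names (NXDOMAIN redirection)."""
    for _ in range(tries):
        name = "".join(random.choices(string.ascii_lowercase, k=10))
        try:
            resolve(name + ".example")  # illustrative probe name
            return True   # a made-up name resolved: DNS is being altered
        except OSError:   # socket.gaierror subclasses OSError
            pass
    return False
```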


I think that it is more likely that there was meant to be something else on the other end which did something but x0rz got there first so it broke.


I think a distributed global database is a more appropriate analogy.


This may well give birth to a new paradigm: Domain-Driven Programming.

Each condition is satisfied by a different domain lookup.


Wow, now, let us imagine using aggregation of millions of states of human minds as a Boolean constant used in millions of scenarios for about four years straight.


I think it's great that this was used to stop the malware, but pre-emptively registering the domain without understanding what it did seems dangerous.

The malware could have just as easily used the registration of that domain as a flag to start deleting data, no?


I am not sure that being able to trigger the deletion of all data in one sweep would be of any interest to the attacker. Firstly, if they simply chose to stop ransoming decryption keys, the encrypted data would effectively be deleted anyway, and secondly, deleting the data would foreclose on any prospect of further gains from the attack.


"Due to interference, we are no longer able to process unlock requests. Goodbye."

It's probably easier, as you point out, to have the virus delete its keys and wipe itself out. (And has the added benefit of taking some forensic info with it.)

But in a marketing sense, blaming people interfering with your network for the lost data may make you safer, as many victims are likely to prefer you extorting them to the good guys causing data loss by stopping you.

Being a criminal is all about customer service.


Some people just want to watch the world burn


In this case they would have activated the domain themselves later and caused even more damage.


This may indeed be exactly what the authors of the next ransomware will do.

Two domains, one defuses the ransomware, the other detonates it.


> Two domains, one defuses the ransomware, the other detonates it.

And one of the domains will be called redwire[randomchars].com, and the other bluewire[randomchars].com. Which one do you sinkhole, the red wire or the blue wire?


You just test both in a disposable environment, and then you know which one to sinkhole publicly.

The researcher in this case registered the domain right away because he had experience that that creates a positive result. Once that sort of thing starts creating bad results, then researchers will start testing more carefully before grabbing domains.


This would be Very Bad if your ISP is one of those that intercepts NXDOMAIN responses and instead returns an A record to some other "helpful" thing... or some DNS provider that returns a "this site has been blocked by your administrators" page...


The article says this already is done in other malware.


Why? What would be the goal? If you're in ransomware business, why would you ever want to delete data before the scheduled time? You want to get paid instead.


The goal would be to prevent security researchers from preemptively registering all domains the malware connects to.

Although it was only a thought, with what `cesarb` mentioned in mind.


So you'd slow the researcher by a few minutes of extra disassembly time if they needed to be careful - what would the malware authors gain here? A few more potential payments in that timeframe? Same time could be invested in improving the sandbox detection instead of creating fun decoys that will be identified anyway. It was still only version 1, we'll see how v2 evolves.


> Same time could be invested in improving the sandbox detection

It isn't an either-or proposition, and the psychology of the conflict is important. If you force your opponent consider every possible move to be potentially dangerous, you slow them down by more than just the cost of the game with a domain name. And that's valuable.

Googling for "OODA Loop" might be helpful in thinking about this.


Wow! Thanks for mentioning OODA (https://en.wikipedia.org/wiki/OODA_loop), never heard of that before. That's a really intriguing concept... so many cogsci, ML, netsec, and game theory connections. While the wikipedia page is rather sparse, it's already added a few things to my reading pile.


One thing I was curious about as I read this: are there not extenuating circumstances under which a registrar can seize a domain? If this unregistered domain had already been registered, would the fact that it is already controlled mean there is no way to transfer it to a new registrant?


Yeah, I guess in his experience that scenario is less likely than it being a C2 endpoint, so he made the decision based on that. It turned out to be the correct one, and might have prevented this from becoming a much more destructive Blaster-style worm.


Ransomware authors could come up with better flags than registering a domain, since anyone can do that. Why not create an IPNS address in IPFS and check whether it contains something? Nobody but them would be able to put content at that address, and it can be checked through multiple gateways.
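The IPFS idea described above might look like this (the gateway URLs and helper names are illustrative; a real HTTP fetcher is injected so the logic can be tested offline):

```python
# Hypothetical public gateways; an IPNS name can only be published to
# by the holder of its private key.
GATEWAYS = [
    "https://ipfs.io/ipns/{}",
    "https://dweb.link/ipns/{}",
]

def flag_is_set(ipns_name, fetch):
    """True if any gateway serves non-empty content for the name.
    `fetch(url)` returns the body as bytes, or raises on failure."""
    for template in GATEWAYS:
        try:
            if fetch(template.format(ipns_name)):
                return True
        except Exception:
            continue
    return False
```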


The guy explains that the ransomware does this check to defend itself against being observed in a VM.


Honestly, how stupid were the malware authors to use standard DNS for a domain that could take down their whole operation, when they use Tor for the actual key and address communication and everything else... it's like they half understood what they were doing.

Well, I guess maybe they didn't want things to get too out of hand and now if they want they can be back up soon with that fixed.


> it's like they half understood what they were doing.

And that's exactly what is so wrong about the NSA and others not being good stewards of their own bloody malware. A lot of these criminals would not be able to get their act together at this level without being partially funded by the three letter agencies. Think of it as an advanced form of script kiddies, they can use the tools and wrap them but they could not come up with those tools of their own accord.


It's all explained in the article. It is a sandboxing detection method: some environments resolve all DNS requests to a host that captures all traffic. It's still stupid, but there is a reason for this behaviour.


> how stupid were the malware authors to ...

They were clever enough to execute this attack, collect over £160k according to the last estimate I've seen (likely way more now), and achieve that in one day. You seem to underestimate them including assumptions that this was simply missed. There are many potential scenarios where this is beneficial to the authors.


As with all SW development, there are always a lot of things that they "could have" or "should have" but every day you wait with malware which has a know patch out means less spread for you. So they probably made the decision to just get it out as quickly as possible even if it was not perfect.


One of the comments (under the article) is very apt: better wording would have been to say "serendipitous" instead of "accidental".

This guy is sort of a hero, IMO. Given that this is affecting healthcare systems, he might very well have actually saved a bunch of lives! I hope he slept well, totally deserved it :)


"Please turn JavaScript on and reload the page."

Uh, no. Here's an archived copy:

https://archive.fo/BhLZn


Does someone know what this domain actually is?

EDIT: After looking explicitly for it I found www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com.


Is that a Welsh company?


You can tell it's _not_ Cymraeg because of the "j"s.

Despite Jones being the stereotypical name for people in Wales, the old Welsh language, Cymraeg, doesn't have a J, nor Z, K, V, or X (IIRC). Jones is an English loanword, brought over apparently with the Norman conquest (though derived from Hebrew).

The modern language of Wales is of course British English with a ~100% use rate; Cymraeg still has an approx 8% (but falling) of the population who say when surveyed that they can speak it fluently, however.

Yeah, I'm terrible at parties.


...says adolph. The Germans, however, use much shorter terms like "Rechtsschutzversicherungsgesellschaften". :-)


Rechtsschutzversicherungsgesellschaften German Noun

- insurance company which provides legal protection


My guess was going to be left-handed bricklayers that wear blue hats. Close.


Off-topic, but I think that traditionally, the longest official word in German was "Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz", the name for a law about testing and labeling of beef, now repealed. This word (63 letters) was of course not in everyday use by most people, but it was actually in the law.


"Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch" - town in Wales.

Edit: https://en.wikipedia.org/wiki/Llanfairpwllgwyngyll


Will visiting that URL cause my IP to "hit" on https://intel.malwaretech.com/WannaCrypt.html now?

The map is populating much faster now, maybe they integrated it with the URL?


Say that 5 times fast, I dare you.


The registered domain gwea.com: http://www.thedailybeast.com/articles/2017/05/12/stolen-nsa-...

Edit: I'm not so sure now. The whois record seems to suggest recent activity:

   Domain Name: GWEA.COM
   Registrar: 22NET, INC.
   Sponsoring Registrar IANA ID: 1555
   Whois Server: whois.22.cn
   Referral URL: http://www.22.cn
   Name Server: PK3.22.CN
   Name Server: PK4.22.CN
   Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
   Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
   Updated Date: 18-mar-2017
   Creation Date: 17-mar-1999
   Expiration Date: 17-mar-2018


ends with "gwea.com". Big difference.


At the risk of losing more karma... isn't the domain that was registered to stop this gwea.com itself?

The hacker, though, didn’t register the gwea.com domain name. On Friday morning, a 22-year-old UK security researcher known online as MalwareTech noticed the address in WannaCry’s code and found that it was still available. “I saw it wasn’t registered and thought, ‘I think I’ll have that,’” he says. He purchased it at NameCheap.com for $10.69, and [...] [1]

If it is, it seems to contradict the whois record.

[1]: http://www.thedailybeast.com/articles/2017/05/12/stolen-nsa-...


The domain is not gwea.com, it ends in gwea.com. Two paragraphs above your quote in the article:

> a dot-com address consisting of a long string of gobbledygook letters and numbers ending in “gwea.com”

jstoja mentions it above: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com


My bad, I had misunderstood it completely. Thanks.


Sir my mobile phone is hage


> "One thing that is very important to note is our sinkholing only stops this sample and there is nothing stopping them removing the domain check and trying again, so it’s incredibly important that any unpatched systems are patched as quickly as possible."

(A very important point at the bottom of the article)


Pretty interesting. If I'm reading it correctly, the existence of the domain is checked, and if it is there, the program aborts, in order to stop sandbox analysis.

I was wondering why they didn't just do a simple variant:

1) Instead of relying on DNS, which anyone can register, why not make a user account on some well-known forum site, like HN or Reddit.

2) Open the site, look for the user's page, and check their message titles by hashing them against some hash embedded in your code.

3) Detonate if you don't see the code, or the user account doesn't exist.

This would have the useful characteristic that you could start/stop the attack using just an internet browser, anywhere. And the code word that you are after would be crypto hashed, so the defenders would have to find your keyword somehow from the hash. Heck, you could confound everyone by turning the thing on or off according to location, time of day, and so on.

For extra points make it a blockchain thing. They're already using that for payment, right?
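Steps 2–3 above could be sketched as follows. All names here are hypothetical; in a real deployment only the digest, never the plaintext title, would ship in the binary (it appears here so the sketch runs):

```python
import hashlib

# Hypothetical: only this digest ships in the binary; the plaintext title
# is known to the operator alone.
EXPECTED_DIGEST = hashlib.sha256(b"all systems nominal").hexdigest()

def flag_present(titles, expected_hex=EXPECTED_DIGEST):
    """True if any fetched message title hashes to the embedded digest."""
    return any(hashlib.sha256(t.encode()).hexdigest() == expected_hex
               for t in titles)

def should_detonate(titles):
    # Step 3 from the comment: detonate if the code word is absent.
    return not flag_present(titles)
```

Defenders would see only the digest, so recovering the keyword means inverting SHA-256 or watching the account itself.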


My understanding is that the idea is more to check whether a non-registered domain behaves as if it were registered. Some sandboxing methods apparently lead to this for some reason.


or just check against a handful of www.{{UUID}}.com domains instead of hard coding anything


Reddit has been used to point to C&C servers in the past[0], so I wouldn't be surprised if some malware does send commands directly via Reddit.

[0] https://www.intego.com/mac-security-blog/iworm-botnet-uses-r...


Sometimes you've got a deadline and you just have to ship it.


or instead of relying on the domain name existing, look at the contents of the web page


I'm curious, does anyone know what tool he uses to disassemble the program into C? It looks neat.


That's Hex-Rays in IDA. Pretty nice but damn expensive.


The author says they are doing this for a living. Who are they working for?


He's a great follow on Twitter. He works for some cybersecurity company in LA, I think, but lives in England. Got offered the job after he created a Mirai botnet tracker, IIRC.


Agreed. I've been following him for a few years now, very interesting account.


Handle?



I think he's been working for the same people since before Mirai.


Do you know what company it is?


Yeah, I'd love to do that kind of stuff for a living. Not sure if I'd be good enough though. I'm fluent in x86 assembly and do quite a lot of debugging on that level, but reverse engineering remains a tough problem.


Probably freelance consulting for companies who really want to keep their stuff protected from this sort of thing.


That's where the real money is in malware. This malware group may have made $30k on the attack, but the malware-protection industry will earn orders of magnitude more business.


Great write-up. It's funny: a mistake/exploit enabled the malware; a mistake/bug allowed it to be mitigated... by the researcher's mistaken assumption that registering the domain would simply provide him with sample data.


So will companies start holding bitcoin as insurance on these kinds of attacks?


Yes:

Several of London’s largest banks are looking to stockpile bitcoins in order to pay off cyber criminals who threaten to bring down their critical IT systems.

https://www.theguardian.com/technology/2016/oct/22/city-bank...


Keep in mind this article makes... little sense the way it was written. The DDoS attacks at the time were done in a way that could be replicated by anybody: multiple control networks gave jobs to IoT devices without real authentication. That means anybody known to pay off attackers would immediately get attacked by another group using the same sources. Maybe there's some stockpile of bitcoin "just in case", but it would require a very special situation, not the common DDoS they talk about.


I was just pointing out that according to the article some London banks ARE buying bitcoin so that the ransom payment option is on the table in case of an emergency, and in fact they notified senior police officers about this activity (to get their blessing? to avoid the bitcoin buying looking suspicious if they stumbled upon it?)


I understand. I'm just pointing out why the article smells like bs and is technically invalid in many ways. Including the fact that police will not help you with a DDoS and is largely irrelevant in the discussion (apart from post mortem / following up after the attack). Also banks are playing with cryptocurrencies for quite a while now. London banks have ideas of private blockchains as well. Nobody would think it's suspicious that they buy some.

There may be some truth in there, but this is a popular tech post. I'd look for more details than the Guardian provides.


> There may be some truth in there, but this is a popular tech post. I'd look for more details than the Guardian provides.

Can you provide some references for the positions you have stated in this discussion?


Mirai (not named that yet in the original article) source released -> anyone can take control. DDoS as a service was sold. Paying off one attacker doesn't stop others. https://www.forbes.com/sites/thomasbrewster/2016/10/23/massi...

Police will not help you with ongoing DDoS - I don't think you need a source for that one.

Canary Wharf playing with bitcoin / blockchain in 2015: https://www.ft.com/content/eb1f8256-7b4b-11e5-a1fe-567b37f80...

Ransomware can happen again, getting data back not guaranteed (FBI recommendation): https://www.fbi.gov/news/stories/incidents-of-ransomware-on-...


I wonder if there are regulatory requirements for them to register a policy for the handling of bank robberies, that extends to extortion and ransom? While it may not be relevant in this case, such a regulation might have been written with hostage situations in mind, when the police would be involved.


One of the security people quoted speaks of the concern over new models of attack that may be on the horizon. I agree that it does not make any sense to make being ready to pay a ransom your primary preparedness, but it seems reasonable as a last-ditch response if you have a generally sound security infrastructure. For one thing, the latter would reduce the chances of the sort of follow-on attack that you posit, and that risk may well be secondary to the certainty of being crippled by an actual attack-in-progress.

Anyway, regardless of the merits of the strategy, it appears to be an established fact in some cases.


No. Paying doesn't guarantee you recover your data or that you won't get hit again. If you're getting to the stage where you're dealing with contracts and money, implementing a backup system is likely a simpler solution.

Also btc price is a bit unpredictable. Would you really risk buying it ahead of time, not knowing if you'll ever need it? What if the price goes down and you need to buy extra anyway? Apart from very specific situations it just isn't a great investment.


I'm sort of bearish on most things blockchain, but this seems like one of a very small number of things that zero-trust auto-executing contracts could be useful for. Someone could probably work out a mathematical proof of "if you transfer this much Ethereum, the private key will be revealed." Heh.


That's a lot of work for little reward, I think. Most of the time the idea is that the hackers have every incentive to decrypt or else word will spread that it doesn't work even if you pay so people will stop paying.


Brown hat hackers have an incentive to create non-functional ransomware, to spread FUD on the legitimate ransomware. If news spreads that some ransomware does not actually decrypt the files after payment, then potential victims would have a stronger incentive to secure their systems, and a weaker incentive to stockpile ransom funds (wt actual f!)

Then, the onus would once again be on the legitimate ransomware developers to prove their ability to do business, and some sort of Ethereum based automated contract might be an ideal tool for that.


If the context is widespread ransomware infection, provable decryption is unlikely to make a difference. Attackers are unlikely to make money from people who know what Ethereum, or a smart contract is. Even taking those who know the meaning, very few could actually verify the application.

The attacks work on those who didn't have protection and lost valuable data. Just sticking a "decryption guaranteed by Ethereum contract" will likely have the same effect on them.


You've just made me consider that this could be an attempt to inflate BTC.

Create demand. The wallet wouldn't ever even have to be tapped. $300 is small fry, but an overall inflation of a strongly-leveraged BTC investment could allow a disconnected cashout.


It seems a bit scary that security researchers are relying on bugs in malware to get their job done.


Yet malware creators rely on bugs to spread their work, so why not. Fighting fire with fire...


Right. But the problem is that malware creators can choose from many more attack vectors than security researchers typically can.


There are many things in the current state of security to be worried about, yet complacency remains high (which is one of the things...)


How is that scary?


Imagine how bad it would have been if someone actually competent had chosen to weaponize one of the NSA exploits? This seems to have script kiddie written all over it.


To mitigate, I am running Debian as the host and jailing Windows 10 in a virtual machine, and have uninstalled SMB 1.0 on the machine by going into Programs and Features > Add or Remove Windows Components. I have also blocked port 445 (SMB) with ufw (on Debian):

    sudo ufw deny out to any port 445
As well as this, I am not deferring updates in any way and am dutifully patching. I've always hardened Windows in this way and I've never had issues with malware, and if I did, the impact would be minimal because I've compartmentalized my files in such a way that even the worst malware would only encrypt some of my files, not all of them.

I store all my critical files in an offline environment (sandbox), so the only files that could get encrypted are replaceable (non-important) and disposable. For example, I wouldn't cry if my CV got encrypted, because a copy of it exists in about 50 locations, both offline and online.

Unfortunately I need Windows because my colleagues like to send Windows-only .DOCX files which work best in MS Word, and I don't have a Google account, so I can't open them in Docs. This is a conscious decision to permaban Google from my life, but Windows is staying.


> This is a conscious decision to permaban Google from my life

MS collects a lot of metrics from your devices, not just Google.


Why not just use Apache OpenOffice?


Or Libreoffice, which has had many enhancements (including to compatibility with MS Word files) since it forked from OpenOffice.org, while Apache OpenOffice stagnated.


Thanks for the tip; I will try this. As I said, .DOCX files work best in their native Windows Office environment, as I've had problems with them in open-source solutions (formatting issues, whitespace injection ruining the layout, etc.).


Installing the crosextra fonts has improved the viewing of some .pptx files for me: https://wiki.debian.org/SubstitutingCalibriAndCambriaFonts


Happy outcome, but it could so easily have gone the other way. Surely it would have been more responsible to locally fake the registration of the domain first (apparently as easy as modifying /etc/hosts in this case), given he had no idea how the payload would respond? o_O

Not sure I'd be singing his praises if his rash decision had triggered the deletion of the encrypted files.


Wonder how much longer it would have taken to understand the impact if he had just modified the hosts file instead of registering the domain?
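For the real binary, the hosts-file trick is just one line mapping the kill-switch domain to a local IP. In a Python analysis harness, the same effect can be faked in-process by pinning the resolver (a sketch; the sandbox IP is arbitrary):

```python
import socket

# The kill-switch domain named elsewhere in the thread.
KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"
FAKE_HOSTS = {KILL_SWITCH: "10.0.0.1"}   # plays the role of an /etc/hosts line

_real_gethostbyname = socket.gethostbyname

def patched_gethostbyname(host):
    """Resolve pinned names locally; fall through to real DNS otherwise."""
    return FAKE_HOSTS.get(host) or _real_gethostbyname(host)

socket.gethostbyname = patched_gethostbyname
```

After the patch, any lookup of the kill-switch domain inside the harness succeeds without touching public DNS, and without tipping off the wider world that the domain "exists".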


It sounds like the mere existence of a specific DNS A record was the kill switch for the ransomware. That seems like a pretty bad kill switch, surely the attacker should have required some sort of password to deactivate the ransomware.
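One way such a "password" could work, sketched under my own assumptions (none of this is in WannaCry): ship only a hash digest in the binary, so deactivation requires publishing the preimage, which only the operator knows. Merely registering the domain would then not be enough. The preimage appears in this snippet only to make it runnable; in practice it would never ship.

```python
import hashlib
import hmac

# The binary ships only this digest; deactivation requires the preimage
# (published e.g. in a DNS TXT record), known only to the operator.
KNOWN_DIGEST = hashlib.sha256(b"secret-disarm-token").hexdigest()

def kill_switch_engaged(published_token):
    """Accept deactivation only for the correct preimage."""
    token_hash = hashlib.sha256(published_token).hexdigest()
    return hmac.compare_digest(token_hash, KNOWN_DIGEST)
```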


Does this mean that I can safely connect my outdated Windows 7 back to the internet?


You are fine if you are behind a router on a local network at home and don't have SMB port exposed to the public internet. And assuming all other clients on your network aren't infected.


No! All it takes is a single byte to be patched and the "killswitch" can be disabled. There are almost certainly other variants already in circulation.

Use an offline security update.


Yep, tried doing that. But unfortunately I could get it to install. Keep getting "This update is not applicable to your computer" even though I'm doing everything right.


*couldn't


Disable SMBv1 before you connect it!


Are there any side effects of disabling SMB on a local machine that you won't be using to connect to network drives?


> In certain sandbox environments traffic is intercepted by replying to all URL lookups with an IP address belonging to the sandbox rather than the real IP address the URL points to, a side effect of this is if an unregistered domain is queried it will respond as if it were registered (which should never happen).

Is this something VMs do? Does anyone have more info on this?


Now is the time to write a virus that uses the same exploit and automatically patches vulnerable machines before a new version of the ransomware is released.


You may want to read CFAA rules (https://en.m.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act) first. Breaking into someone's computer to fix it is still breaking into someone's computer and very illegal.


Not in all jurisdictions, per https://en.wikipedia.org/wiki/Negotiorum_gestio

> Negotiorum gestio (Latin for "management of business") is a form of spontaneous voluntary agency in which an intervenor or intermeddler, the gestor, acts on behalf and for the benefit of a principal (dominus negotii), but without the latter's prior consent. The gestor is only entitled to reimbursement for expenses and not to remuneration, the underlying principle being that negotiorum gestio is intended as an act of generosity and friendship and not to allow the gestor to profit from his intermeddling. This form of intervention is classified as a quasi-contract and found in civil-law jurisdictions and in mixed systems (e.g. Scots, South African, and Philippine laws).

> For example, while you are traveling abroad, a typhoon hits your home town and the roofing of your house is in danger. To avoid the catastrophic situation, your neighbour does something urgently necessary. You are the 'principal' and your neighbour here is the 'gestor', the act of which saved your house is the negotiorum gestio.


I'd be happy to be proven wrong, but I don't think it applies here. Specifically, nothing in this summary indicates that you can break other laws to fulfill this one. IANAL, etc.


IANAL either.

The summary gives the example of securing your neighbors roof when a tornado is about to hit. Possible laws to break to do this, are "breaking and entering" or "trespassing".

Note that a lot of these laws state that care must be taken not to break laws unnecessarily. Bricking IOT devices that can be used for DDOS-attacks may be a step too far.

And strictly, in the case of patching a server under negotiorum gestio, you have not broken any laws: It is not unlawful computer intrusion when you have implicit permission of the owner of a device (the same goes for entering your neighbors house when they are on vacation, and have accidentally left a pot of milk to boil on the stove).

But I guess such far-reaching Good Samaritan laws are very foreign to the US, since there, off-duty doctors are sued for performing a painful Heimlich maneuver.


The Government can hire developers to do it themselves.


I think someone did that for some IoT devices.


Why do operating systems allow users to run any executable?

For programmers it's important to be able to - but when you're not coding, running any executable is not required.

It should be that all programs live in /usr/bin and the other system directories. Only root can write there. Users shouldn't be able to run any program located anywhere else.

And this would be no problem. Am I wrong?


Most of the machines out there are single-user, managed by that user and not by a professional sysadmin. As such, the user has the needed access (root) to install any programs they want, and all modern OSes already only allow admins to install programs.

We already put up multiple warning messages when a user decides to execute a suspicious binary, and yet everyone still clicks through every prompt without a second thought.

What's your suggestion that (a) allows any user to have the machine installed and configured as they want, and (b) does not allow random programs to execute?


>In certain sandbox environments traffic is intercepted by replying to all URL lookups with an IP address belonging to the sandbox rather than the real IP address the URL points to, a side effect of this is if an unregistered domain is queried it will respond as if it were registered (which should never happen).

Can someone please explain this? I have no idea what was said there.


In a sandbox environment (e.g., in a lab trying to deconstruct malware) they'll have a private DNS infrastructure set up to resolve all domains to some local IP address. That way they can intercept and reverse engineer the command & control traffic. The author of this malware tried to slow down security analysts by resolving a "fake" (unregistered) domain. Probably just pounded on the keyboard and added a .com. The idea is that the domain should not resolve; if it does, that's an indication of being in a sandbox/lab environment, so the malware doesn't trigger. Again, this is an attempt to slow down analysis. Of course this was a poor tactic, because registering the domain and setting up DNS stopped the malware from triggering globally.
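A catch-all resolver of that kind only needs to echo the question back with a fixed A record. A minimal sketch of the response construction (assumes a plain single-question query with no EDNS records; field layout per the DNS wire format):

```python
import socket
import struct

SANDBOX_IP = "10.0.0.1"   # every lookup gets pointed at the sandbox

def make_response(query):
    """Build a DNS answer resolving *any* single-question query to
    SANDBOX_IP. Assumes no additional/EDNS records in the query."""
    tid = query[:2]                                  # transaction ID
    flags = struct.pack(">H", 0x8180)                # QR=1, RD=1, RA=1
    counts = struct.pack(">HHHH", 1, 1, 0, 0)        # 1 question, 1 answer
    question = query[12:]                            # echo question section
    answer = (
        struct.pack(">HHHLH", 0xC00C, 1, 1, 60, 4)   # name ptr, A, IN, TTL, len
        + socket.inet_aton(SANDBOX_IP)
    )
    return tid + flags + counts + question + answer
```

Serving this from a small UDP listener on port 53 inside the lab network makes every domain, registered or not, appear to exist.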


A private DNS server connected to the VM malware host.


Off-topic: what is the site doing "checking your browser" for five seconds before showing the content?

I know that this sort of data can be valuable - what browser I use, which plugins are there - but I just assumed everyone was doing this in negligible time frames. What more is there to check in a browser?



This is not accidental. Not even close!

This story, if true, details a person who profiled this malware and correctly logged the network requests it was making and then correctly identified a fundamental vulnerability in the software. This is not an accident at all - it is rather a profile in supreme competence. We should recognize it as such.


Although the domain name registration was intentional, activating the kill switch wasn't.

The author registered the domain name without knowing what would happen (the virus might as well have wiped the entire disk) and was surprised to see that he had activated a kill switch. That's the accidental part.


Not surprising to see 14-year-old unpatched software connected to the internet being hacked like that. At the least, the ones in charge of budgeting these upgrades should pay a price for failing to do so; the users are obviously innocent victims.


Can someone write a patch worm that spreads and fixes the bug by exploiting the bug itself?


So, from a practical point of view, how do you disentangle infrastructures from this sort of attack?


Install security updates and do security hardening by whitelisting only what you need.


How reasonable is it to say that NSA are at least in part responsible for deaths resulting from the NHS crisis, since the ransomware is using their exploit?


Somewhat reasonable.

The NSA can be held accountable for not disclosing the vulnerability responsibly, but this exploit may have been found anyway by the creators of this ransomware. There is no one person/group/institution to blame here; multiple vectors failed. Are there any reports that connect patient deaths directly to this ransomware attack?


Can a Windows laptop that doesn’t have the Windows Update patch get infected just by being connected to the Internet via a home Wi-Fi network?


You would be fine, as all commercial home routers firewall off access to the SMB service that is being exploited.


On Monday, are some governments going to have to make a statement on what they advise owners of infected computers to do?


All I see here is "Please turn on Javascript and reload the page".


Was the domain ever named?

Found it www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com


Well, it didn't really stop it. It slowed it for a little bit, and then it was modified and spread again.

I also wonder if ransomware developers will now leave red herrings in the code, where registering the wrong domain triggers something more destructive.

It's like knowing which wire to cut when you're defusing a bomb!


Interesting timing for this cyber attack given the recent news. Are Robert Mercer and Cambridge Analytica also being investigated?



