Antivirus Companies Shouldn’t Have Hidden What They Knew About Regin (technologyreview.com)
123 points by aburan28 on Dec 8, 2014 | hide | past | favorite | 65 comments


It's easy to say it should be publicised, but I'm not sure how someone can do that if the infected organisation specifically asks the antivirus company not to publicly discuss the malware, as appears to have been the case with F-Secure. (https://mobile.twitter.com/mikko/status/536959936476221440)

Sure, they could just go ahead and disclose it against the customer's express wishes, but how many would hire their services in the future if there's no expectation of customer confidentiality?

Edit: There is likely an NDA in place even before externals are allowed in, so breaching it might invite additional problems beyond mere image issues.


It would be possible to discuss the find publicly without disclosing the source. I'm pretty sure that's how all national security journalism functions today.


Even that may be against the contract. I've worked with companies where their NDA said their security troubles can't be disclosed even if the information was completely sanitized. Even saying "One company in the US was impacted by Malware X" would be inviting the lawyers.

But I don't know this particular contract, so that may not be in play.


To add something: I think that it makes a lot of sense for an infected organisation to ask for confidentiality about something like this; any public disclosure would be an immediate signal for the attackers to release a new version of the malware. Then again, depending on what the malware does, the attackers may realize immediately that it has been protected against.

EDIT: also, it's not like antivirus companies publicise everything they do. From what I've seen (admittedly not much, so maybe I'm wrong) most releases are simply summarised as "added protection against 100 new threats".


Are you suggesting that it shouldn't be added to the signature list? Forget the news - that will be the tip off more than anything for the virus makers.

When they 'release a new version', that behaviour can be spotted by the AV companies, and that new version can then be added to the signature list as well.
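At its crudest, adding a variant to the signature list amounts to searching files for one more known byte pattern. Real engines are far more sophisticated than this; the "signatures" below are invented purely for illustration:

```python
# Toy illustration of signature-list updates. Real AV engines use much
# more than raw byte patterns; these signatures are made up.
SIGNATURES = {
    "Malware.X.v1": b"\xde\xad\xbe\xef",
    "Malware.X.v2": b"\xfe\xed\xfa\xce",  # added once the new variant is spotted
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

sample = b"\x00" * 16 + b"\xfe\xed\xfa\xce" + b"\x00" * 16
print(scan_bytes(sample))  # the v2 sample is caught once its signature is added
```

The cat-and-mouse dynamic the parent describes is exactly this loop: attackers ship a variant that no pattern matches, defenders extract a new pattern from the captured sample, and the table grows.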


But that doesn't seem to have been the case with Kaspersky. So what's their excuse? Other than being pro-surveillance that is:

https://twitter.com/csoghoian/status/505376361268400128


That is a good point; some companies had very different motivations for not discussing it. E.g. Ronald Prins from Fox-IT had this to say to Mashable:

For Prins, the reason is completely different.

"We didn't want to interfere with NSA/GCHQ operations," he told Mashable, explaining that everyone seemed to be waiting for someone else to disclose details of Regin first, not wanting to impede legitimate operations related to "global security."

http://mashable.com/2014/11/25/regin-spy-malware-nsa-gchq/


Cleaning up a modern malware infection is quite a big task. You can't immediately wipe it off a single computer because you will be playing whack-a-mole forever.

You have to carefully monitor and learn everything you can about the malicious actor, and discover all the infections. Then you produce a plan to remove it and prevent further infection, and execute it all at a single instant.

However, it doesn't end there, you then have to monitor very carefully to see if it comes back. If this can all be done in secret it is much easier, especially if the malicious actor doesn't know you know they are there.

If you immediately reported everything you knew it would greatly assist the malicious actor - keeping it secret is part of trying to stay ahead in the game. Even after the first incident keeping it secret helps with future incidents.


> Cleaning up a modern malware infection is quite a big task.

And the only way to clean up a compromised computer is a full reinstall. You can't possibly know what has happened on the compromised computer during the compromise. This is what the desktop support jockeys and most companies get wrong - presumably because of the cost associated with a full reinstall, but that doesn't make the advice any less valid. If it costs too much, companies should instead focus on preventing machines from getting compromised in the first place.

Think about it: if I ask you to hand over your laptop for, say, an hour, during which I have completely free rein over it, can you tell me everything I've done during that hour and all the backdoors installed, if any?

And to nitpick: these days not even a full reinstall might do it, since there are BIOS viruses and even hard drive firmware can be compromised, etc.


Unless you reinstall all your systems simultaneously and close whatever vector was used in the attack at the same time, reinstalling might not help.

As long as the attacker has at least one route into the organization at any one time, the possibility of reinfection exists. So I guess it's wise to take a coordinated approach.


Isn't it possible to clean up a compromised computer if hash of everything installed and an external audit log of installations is maintained? You could scan the drive of the machine externally and know what bits have been compromised.


How do you gather the hashes? The machine itself (obviously) couldn't do it, since the malware may have changed the system to always report correct hashes (and/or send the original binaries).

So, it depends on what you mean by 'externally' and how sophisticated you expect the malware to be.


For pre-installed files, you have hashes that are publicly known. Afterward, you continue to record the hash with each modification to a file, and report it over the network or to an external device. If you can identify the time of infection, you know which hashes are good.

After you get the infected machine, you pull out the drive and scan it externally, looking for bad hashes and files that shouldn't be there.
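A minimal sketch of that external audit in Python, assuming you already hold a trusted manifest mapping relative paths to SHA-256 digests recorded before the infection (the function and parameter names are made up; the drive is mounted read-only on a known-clean machine):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(mount_point: Path, manifest: dict[str, str]) -> tuple[list, list]:
    """Compare files on an externally mounted drive against a trusted
    manifest. Returns (modified_or_unknown, missing) relative paths."""
    suspect, seen = [], set()
    for path in mount_point.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(mount_point))
        seen.add(rel)
        if manifest.get(rel) != sha256_of(path):
            suspect.append(rel)  # modified, or not in the manifest at all
    missing = [p for p in manifest if p not in seen]
    return suspect, missing
```

This only works to the extent the grandparent's caveat holds: the manifest must have been collected and stored somewhere the malware couldn't touch, and firmware-level compromise is outside what a filesystem audit can see.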


"Think about it, if I ask you to hand over your laptop for say, an hour, during which I have completely free reign over it, can you tell me everything I've done during that hour and all the backdoors installed, if any?"

Yes, I can. Read up on modern digital forensics. Everything you do on a machine leaves a trace and there are ways to recover those traces and put together exactly what you did. That is exactly what Incident Response/Digital Forensics Firms do. An IR firm would never tell you to just reimage a machine when you're dealing with an advanced attacker. They'd want to go through, use the tools they have to identify exactly what happened on the machine and what other machines were compromised before they even started talking remediation. Wiping the one machine that you got an alert on would do absolutely nothing to solve the problem.


"Read up on modern digital forensics. Everything you do on a machine leaves a trace and there are ways to recover those traces and put together exactly what you did."

As a followup, both you and GP should read up on digital forensics from someplace OTHER than their marketing material...


Granted I'm not an expert, but I'm not confident that you could tell everything, at least not in the default configuration of desktop OSes without special auditing in place. Some stuff, sure.

> An IR firm would never tell you to just reimage a machine when you're dealing with an advanced attacker. They'd want to go through, use the tools they have to identify exactly what happened on the machine

Yes, I was talking more in the run-of-the-mill case sense, not regarding thorough forensic investigations. If you're actually going to investigate the incident deeper, you should at least get memory dumps, process dumps and an image of the machine and such. In my experience though, at most companies the SOP is just monitoring standard antivirus stuff and then when an infection comes up, it either gets automatically cleaned or someone goes over and fixes it with a manual scan or whatever. Which is completely inadequate.


I'll admit to a fairly casual acquaintance with digital forensics, but I disagree, if for no other reason than you didn't really address all the possibilities of the scenario given. I'd be happy to hear if I am in turn missing something. Forensic analysts are good at tracing activities done from within a system, and/or by entities that don't know a lot about forensics or don't have system privileges to cover their tracks.

I read noinsight's scenario as involving physical access by a skilled attacker. If one shut down the laptop, pulled out the hard drive, mounted it on another system, modified a single sector of a document that doesn't exist anywhere else (and the laptop owner isn't perceptive enough to notice the change), preserved file metadata, and placed the drive back in the laptop, what would digital forensics be able to tell you? Only the details of the system being shut down, I'd guess. Maybe if they paid tens of thousands of dollars to a specialized lab, they could find faint magnetic traces of the former contents of the changed sector, but I'm not sure that's in the scope of what you meant. If all that was done was read the drive, there'd be even less chance of determining what was read.

As mentioned, the attacker might compromise the BIOS or firmware, and while that would be detectable, I think only the highest-end IR firms would look for it, let alone have the resources to identify subtle changes.

Even working within the system, I'd say many attackers can remove traces such that many investigators won't find them, by doing things like deleting created logs, restoring file metadata to its original state, and writing over the erased evidence multiple times. (This perhaps assumes root access and a consumer-grade OS in default configuration.) It might lead to a suspicious state where the system has been running for hours with no artifacts that would routinely be left, but the investigator might not be able to determine much of what was done. It might be as simple as using a browser the investigators don't check: http://www.cbsnews.com/news/casey-anthony-detectives-overloo...


Non-ironic question: do many people on HN use anti-virus of some form or another? Generally i'd probably trust people's judgement here as far as that goes. I naively use Linux with no externally accessible services running and no active anti-virus -- am i an idiot, should i clean-install everything ASAP? Or do yous reckon this is reasonable?

edit: clarification


Myself, no. It's been my experience that AV programs only warn after it's too late and you've already been owned by some nasty bit of code. Gee, thanks for nothing, time to flatten and reinstall. There's also the problem that literally every one I've found on a Windows platform appears to be designed for the lowest common denominator, easily frightened end user - scary warnings and dire predictions about what might happen if I don't renew my subscription (another problem, you have to subscribe to all of these on a recurring basis), Fisher-Price level UI, and so on.

To borrow a phrase from the /g/ community, I run Common Sense 2014 platinum edition. I.e. I don't download stuff from sketchy websites in general, I don't click email attachments, I don't use the Java web plugin, I do use things like Adblock and Noscript, etc.

If I do end up with a possibly suspicious file it gets sent off to a multi-scanner environment like VirusTotal or Jotti where I can get about 30 different opinions simultaneously.
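Multi-scanner services can also be queried by file hash, so a known file needn't even be uploaded. A sketch assuming VirusTotal's public v3 REST API (the endpoint and header reflect my understanding of their docs and may have changed; the API key is a placeholder):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash the sample so it can be looked up without uploading it."""
    return hashlib.sha256(data).hexdigest()

digest = sha256_hex(b"suspicious sample bytes")
url = f"https://www.virustotal.com/api/v3/files/{digest}"
headers = {"x-apikey": "YOUR_API_KEY"}  # placeholder, not a real key

# To actually fetch the aggregated engine verdicts (network call, not run here):
# import requests; report = requests.get(url, headers=headers).json()
print(url)
```

The hash lookup only returns a report if someone has already submitted that exact file; a never-before-seen sample still has to be uploaded to get the "30 different opinions".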

The last few infections I've gotten were due to doing something boneheaded. Running something from a torrent without checking it first, or turning my browser security off temporarily and forgetting to reenable it.


I'd rather get informed after getting owned than be unaware of an infection, at least you then know to bin your system and start fresh.

However, I'd also run EMET to make it a bit harder for an exploit getting past NoScript to operate correctly.


The average exploit you're likely to encounter after taking those precautions (no open ports to the internet, no live code running on untrusted webpages, etc.) easily sidesteps antivirus protection anyway.

I know, security is all about layers, but the usability and performance tradeoff gained for this paper tiger protection is not worth it, in my mind.


I do, but a very basic kind; i.e. Microsoft Security Essentials. That doesn't change my distrust of binaries though, and if I must run things I don't trust, I frequently do it in a VM. But the gist of it is that if I can alleviate some of the risk that comes from running Windows, with no discernible cost (i.e. no noticeable system slowdown), I will do so (obviously in addition to "common sense" security measures). It'd be foolish not to.


Microsoft Security Essentials/Windows Defender is one of the few antivirus products that fail AV-TEST certification.

http://www.av-test.org/en/antivirus/home-windows/windows-7/

http://www.av-test.org/en/antivirus/home-windows/windows-8/

How old is your PC that modern antivirus software noticeably slows it down?


Yeah, if i had to use Windows i would consider it essential to use some form of active anti-virus, but then again, i don't think that would alleviate my (unfounded?) general unease and feeling of still not being that "safe". But i'm trying to get a reality check here :)


Windows 8 and Windows 10 come with AV built-in. It's generally "good enough" for home users. It will flag suspicious downloads before they download and before they run, and then scan them as they're running. With those two OSes, I never use an AV. I'd prefer user education over a stricter AV.

On the other hand, my work laptop runs Red Hat 6, and whether Linux, Mac, or Windows, we're required to run Symantec by corporate policy. Then again I work for a security company, so...


Nope, I use OSX and, maybe naively, assume that gives me a greater level of protection than if I were running Windows. I'm also pretty careful when it comes to the most obvious attack vectors. In my experience, anti-virus software really slows a machine down, almost to the point that it might be better off with the virus instead - of course, that is highly dependent on the nature of the virus! I'm deeply cynical about virus companies and what involvement they might have in virus creation in the first place, although I understand how conspiracy-theory-crazy this might sound.


> In my experience, anti-virus software really slows a machine down, almost to the point that it might be better off with the virus instead

That's ridiculous. It has been a hell of a long time since antivirus applications affected the performance of a machine that way. The performance impact on any remotely modern machine is negligible.

And under no circumstance is it 'better off with the virus instead'.


I know that at least Intego's product for Mac OS as late as a couple of months ago (got it as part of a bundle, installed it, was horrified, uninstalled it a day later), as well as CheckPoint's endpoint security, live up to the stereotype of antivirus programs being bloated system hogs.

Really, look at what they do. They pre-emptively scan every executable program you run, at the least. Unless you're on an SSD, and probably not even then, this is a blocking operation that is impossible not to notice.


But Mac OS X scans all code you run to verify code signatures, doesn't it? If so, the performance impact of scanning isn't that high.

I think the main problem with antivirus on modern hardware is that commercial entities selling antivirus have to add bells and whistles. Few people would be willing to pay $x a year for a program they aren't even aware of running. So, that $x program makes sure you see it frequently by adding progress displays, toolbars, task bar items, etc. They also make sure they have stuff to report, even if that includes meaningless stuff such as registry keys on Windows. Detecting that meaningless stuff takes time, too.


Taking a hash of the binary and verifying it against a few-kilobyte signature at a known location is on a different level performance-wise than scanning literally the entire binary for various known signatures and applying heuristic analysis. (Windows does the same thing, FWIW.)

I think you're spot on, but from the scope of a user who knows 90% of that stuff is BS, it's just another bullet point in the list of why I don't run AV software.


Hey man, just because you haven't experienced antivirus slowdowns does not mean they do not occur. Perhaps the slowdowns these people are experiencing are due to a system configuration that you have not used.

Earlier this year my Windows 7 machine with a Xeon W3565 (3.20 GHz) and 6GB RAM was slowing down noticeably every time Symantec Endpoint Protection 12 downloaded new definitions--something it did twice a day. I would consider this a reasonably powerful machine, and the slowdown had an effect on my ability to work.


"almost to the point that it might be better off with the virus instead - of course, that is highly dependent on the nature of the virus!"

Sense the tone. My tongue is clearly planted firmly in my cheek, and if you're too literal to recognise that, at least err on the side of not downvoting.

For the record, it's been about 4 years since I used antivirus software on Windows, maybe things have improved since then.


> It has been a hell of a long time since antivirus applications affected the performance of a machine that way

Do you have the benchmarks to support this claim? Unless things have improved considerably in the last year, simply doing a "git clone" took measurably longer even on a machine with an SSD. Microsoft Security Essentials had by far the lowest impact but it was still easily visible.


On one of my systems I do, but that is required by my client when I attach my device to their network. And my wife's desktop has MS security essentials.

Keep in mind that when the New York Times was hacked a while back, 50 kinds of malware were found, and only one of them was detected by multiple AV products.

So depending upon AV to protect you is fraught with peril.

Consider the possibility that adding AV to your system increases the attack surface. Does anyone remember the Michelangelo virus from a while back? A well-known firm's AV software caused more damage than the virus itself. (It wiped out the boot sector.)

Don't count on AV to protect you.


Total non-antivirus user here.

I worked for 5 years in the trenches of the anti-malware industry, and countless times I've seen antivirus software completely hose up computers and, worse, seen its own insecurities and Windows API hooks used directly to infect a system (I'm looking at you, AVG circa fall '09).

Most importantly, no antivirus seems to do a very good job of dealing with emerging threats and malware is rapidly getting more sophisticated than the AV vendors can cope with. The major problems these days all seem to come down to an insufficiently secured operating system.

The only real, effective antivirus is user education.


Not since a brief period in the 90s. You're much better off spending your time staying on top of updates and keeping plugins disabled in your default browser. As long as you don't run an email client which executes code[1], trying to enumerate badness is a losing game - it's too easy to bypass signatures, particularly since a malware author can simply test against the various AV clients before releasing something.

1. i.e. not Outlook managed by an enterprise IT department
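A toy demonstration of why enumerating badness loses, assuming the simplest possible signature (an exact file hash): the author only needs to change a single byte to evade it.

```python
import hashlib

original = b"MZ\x90\x00" + b"payload"  # stand-in bytes for a malicious binary
variant = original + b"\x00"           # trivially repacked: one byte appended

# Hash-based blocklist built from the sample that was caught:
known_bad = {hashlib.sha256(original).hexdigest()}

print(hashlib.sha256(original).hexdigest() in known_bad)  # the original is caught
print(hashlib.sha256(variant).hexdigest() in known_bad)   # the variant slips through
```

Real signatures are pattern- and heuristic-based rather than whole-file hashes, but the asymmetry is the same: the defender must anticipate every variant, while the attacker only has to differ from what's already listed.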


I do on my (Windows) work laptop, because the company that owns it justifiably insists I do. It's completely transparent, except for the odd days where an upgrade to VirtualBox or VMPlayer will introduce new "hardening" features, that the AV software will happily flag as a terrible threat.


It depends on the market share. Windows is the most used OS; that's why you see viruses on it. I heard OSX started getting some after becoming popular once again a few years back.

As long as Linux isn't too mainstream, it will be less of a problem. But you could still use ClamAV if you share some files with other computers.


"Sharing some files" seems vague though: i'm not sure if i correctly understand the attack vectors, but apart from running untrusted binaries (and as a normal user even that doesn't sound too scary -- or does it?), i guess it'd be via things like targeted exploitation e.g. buffer overflows or somesuch? Other than that it seems unlikely (but far from impossible of course) that an adversary could simply make my machine do evil stuff like spamming or leaking my files. (edit 2:) Also, i have to trust my distribution... That's actually the part i find most worrying.

edit: Also, i'm not sure i agree with you that Windows being the most widely used OS is the reason for the proliferation of viruses for that platform. As i understand it (not wanting to start a flamewar here, i genuinely don't know), it also suffers from some poor security architecture -- but maybe my information is outdated. But sure, the fact that "everyone" uses it makes it a more valuable target, of course.


At a high level, Windows and OSX share the same general security architecture: most stuff happens as a user, with root called in when necessary.

The single best advice to not getting infected is to not do stupid stuff.


Your Linux box could be used as a shared drive (Samba) without executing anything itself. It could be nice to scan files there if they will be used on other OSes.

As for Windows being unsafe, sure, it's easier to propagate stuff since you have root access. But like I said, viruses on OSX do exist.


I'm not sure i understand: do you mean i should scan my files, in case i share a file which would infect a Windows box? If so, i don't think i care. If you mean something else (i.e. somebody planting evil files on my "share"), please elaborate.

I also stated i do not run external-facing services. That applies to the Samba example, too (although i was fibbing: i allow keypair-only login via sshd).


> in case i share a file which would infect a Windows box?

yes, that's what I meant.


Nope. Haven't used A/V in 3 years now. On OSX, Windows or linux. When you're more tech friendly you know to sandboxie things that seem suspicious or test in a vm.


So you sandbox all the documents and images and movies and whatnots that come from untrusted sources? Because they all look suspicious? And after testing, how do you know you're infected?


Depends upon the source of the document; nowadays i can mostly throw a document into Google Drive or a web editor and view it there. PDF to HTML works well too. Movies are all from trusted sources, streamed via a Linux server... so if it's untrusted, the clip would most likely loop with a blurred-out screen saying i need a codec pack.

I don't do much locally other than write code. I use web IDEs, web editors, online markup, streaming via HTML5... Not much to download nowadays.


No.


Whelp, if that's not a succinct answer I dunno what is. I ended up asking the search engine what you thought on the topic [0], and I think it was informative. At least, it made me feel a bit less bad about my stance. (Confirmation bias / appeal to authority ahoy.)

[0] https://hn.algolia.com/?q=tptacek+antivirus#!/comment/foreve...


I'll take the bait: what kind of system would you run? Linux? No evil web browser plugins? A personal computer which you never use to browse the web (RMS style)? Anything more exotic?


I have a Macbook, like most of us. I turn the firewall on, and I use Chrome. That's about it.


To all?


"My guess is that none of the companies wanted to go public with an incomplete picture." -- or, see Assange's censorship pyramid: http://wikileaks.org/Transcript-Meeting-Assange-Schmidt#279


Here we see the other side of the "responsible disclosure" coin -- if the ethical white-hat security researcher is required to withhold publication of a critical vulnerability for a set amount of time, is there a corresponding deadline for timely publication as well? And if that deadline is not met by meaningful attempts at remediation or disclosure, is the researcher not compelled to publish the findings independently?

Obviously these companies failed miserably to meet any reasonable person's timeline of disclosure. One question is whether the extra time researching this malware reasonably would have produced additional worthwhile intelligence about its function and targets. If so, then the delay was "worthwhile". Another question is whether it's not better to simply release an incomplete picture to the security community (perhaps selectively) and let the larger hive mind go to work on finding and corroborating additional clues.

It seems like the firms chose the former; many HN readers would advocate the latter. So finally, the question remains whether such a forced disclosure would be perceived as an irresponsible "leak" based only upon the disagreement in methodology and interpretation of "responsible"? Would its withholding be considered likewise irresponsible? Can a single firm, a collection of firms, or the security research community at large meaningfully stay ahead of a dedicated state-funded attacker? (Probably, Probably, Probably not).

If a nation-state is producing malware, it logically will also be monitoring the channels of disclosure for evidence of its release and detection in the wild. But that's no reason to limit the resources being dedicated to protecting the public; it's egotism at best and collusion at worst.


I think the issue is that Windows is the biggest target because most of the 'malware' kids come from hacking communities. Most of those communities use VB.NET and C#; when i coded in both of those languages things seemed easy, and there are really only FUD packers on Windows to make the executable undetectable. It's easy to code a RAT program or anything else in VB.NET or C#.


I didn't see a compelling argument for why they should be publicized in the article.


Unfortunately, Schneier first guesses that in reality nobody came forward because they didn't have the full picture, rather than for the reasons the companies actually gave, and then he goes on to argue that the reason he guesses to be the real one is not a good reason.


Lack of a full picture is a consequence of the AV companies' stated reason of customer confidentiality. Unless they had a closed circle for sharing the info and analysis with each other, in which case keeping it all quiet is even more damning...

Psychologists will tell you that you shouldn't put too much faith in what people tell you about their reasons. Both because people are bad at introspection and/or try to put a spin on things, and because for most things a single reason doesn't even exist. (See also "Why did you buy product X", a question subject to much study).


You mean other than the obvious "trust" benefit? If "anti-virus" companies don't want to protect you against "certain" viruses for political reasons, why bother trusting them and buying their product?


They did protect against it (from what I understood); they just didn't make the existence of the virus public.

Frankly, I'm not sure how many of their clients (being one myself) care if they publicize anything, as long as they protect against it.


What would your alternative be? Not buying their products?


Yes. If the security product you're buying isn't providing you with security, what are you paying for?


If the free alternatives protect you equally well, yes.


> if the free alternatives protect you equally fine, yes.

Well, that's a Catch-22 and a half if there ever was one. Absence of detection doesn't mean absence of malware; it just means you haven't found out how badly you've been infected. The paid-for guys are halfway in the pockets of people who've paid more, and the free guys are lagging behind.


I don't care if they publicize it or not; what I care about is that they protect me against the malware.

I'm very disappointed by Kaspersky: I chose them specifically because they were Russian; I thought they would not be susceptible to NSA pressure.


From the article it appears that they did protect against it.

At least, that's what I understand from "all the companies had added signatures for Regin to their detection database".



