Hacker News
Poll: Do you use anti-virus software on desktop Linux?
46 points by OJFord on Jan 26, 2023 | 96 comments
No, I think it does more harm than good on Linux
165 points
No, I think we don't need it on Linux
87 points
No, but I think I probably should
28 points
Yes, I will comment (or upvote a comment) about what I use
12 points



I don't use it on any OS. There was a dark age of plentiful zero-days when it made sense to use antivirus (I suppose), but these days as a technical user you're pretty safe if you run an adblocker and don't download random binaries.


> There was a dark age of plentiful zero-days

Antivirus is not an effective way to protect against zero-day exploits. Antivirus is effective against known threats, but zero-days are new threats that antivirus programs are not designed to detect.


It's not really so black-and-white. It is not unusual, for example, for signature-based antivirus to detect the payload of a zero-day-based delivery mechanism. When they are packed in one file, this can incidentally protect you from the original exploit. This scenario is actually a lot more common than you might think, as simple economics means that a lot of malware authors will use different delivery mechanisms over time for the same payload. You see this a lot with botnets, for example, where there's a relatively small number of popular botnet agents that are delivered by multiple groups using multiple means.

In general, it's important to remember that malware involves multiple separate steps, typically today something like the initial exploit, a downloader, and persistence, which may retrieve additional payload binaries. Even if your antivirus is completely unaware of the original exploit, it may detect the downloader or persistent binary. This common problem (for malware authors) has led to work on things like fileless persistence, but those methods are more difficult and less reliable, so a lot of malware still needs to drop a persistent binary somewhere and use one of a fairly limited number of methods to get it to start again in the future. This is a huge opportunity for antivirus to detect a problem no matter the original exploit, and one of the things that antivirus is most effective at.
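That "fairly limited number of methods" also suggests a cheap defensive check you can run yourself: enumerate the usual user-level persistence locations and watch for new entries. A minimal sketch (the path lists are illustrative and nowhere near exhaustive):

```python
from pathlib import Path

# Common user-level persistence locations on Linux (illustrative only):
# XDG autostart entries, systemd user units, and shell rc files.
PERSISTENCE_DIRS = [".config/autostart", ".config/systemd/user"]
PERSISTENCE_FILES = [".bashrc", ".profile", ".zshrc"]

def persistence_candidates(home):
    """Return paths under `home` where something could arrange to run again."""
    home = Path(home)
    found = []
    for d in PERSISTENCE_DIRS:
        p = home / d
        if p.is_dir():
            found.extend(sorted(p.iterdir()))
    for f in PERSISTENCE_FILES:
        p = home / f
        if p.is_file():
            found.append(p)
    return found
```

Nothing returned is malicious per se; the value is in diffing the list over time and asking why a new entry appeared.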

There is also heuristic-based protection, and in practice few host-based security solutions are purely signature-based. Heuristic protection has significant limitations but can be effective, especially for common malware patterns like loading drivers (no longer as common on modern Windows due to restrictions on driver loading). Heuristic-based systems tend to make enemies of their users though since it's difficult to tune them to be at all effective without a noticeable false positive rate. You see this a lot with packer detection: a lot of AV products use heuristic methods to recognize common packers (obfuscators), with the result that some self-extracting executables and commercial obfuscated binaries will also be detected. There's a lot of interest in machine-learning heuristic detection, but the false positive issue limits its use so far.
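One concrete form of the packer-detection heuristic described above: packed or encrypted sections tend toward high byte entropy, so a scanner can flag files whose contents approach random data. A sketch; the 7.2 threshold is an arbitrary illustration, and is exactly where the false positives come from, since legitimately compressed or self-extracting executables also score high:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 is the maximum (uniform random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    # Packed/encrypted payloads tend toward high entropy. The threshold is
    # illustrative: real products combine this with many other signals.
    return shannon_entropy(data) > threshold
```

Plain text scores around 4-5 bits per byte, while compressed or encrypted data sits near 8, which is why entropy alone can't distinguish a packer from a zip file.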


While it is true that signature-based antivirus can sometimes detect the payload of a zero-day-based delivery mechanism, it is not a reliable or comprehensive method for protecting against malware. Zero-day exploits and advanced malware can evade signature-based detection, and I wouldn't rely on this method of protection. In addition, heuristic-based protection methods have limitations and a high false positive rate, making them less effective.


Nothing is a reliable or comprehensive method of protection on its own; that's why we employ defense in depth, including host-based security and software hardening.


There are virus "kits" that allow creating new binaries as often as needed. So for whatever lag time the AV folks have (typically days to weeks), you just generate something newer. The kits are sophisticated enough to use VMs, encrypted binaries, and obfuscation tricks shared with commercial software, so you can't blacklist all bad binaries in any kind of general way.

So there's an infinite supply of bad binaries, and AV companies are, by definition, behind. Basically selling snake oil that promises to help but never will.


That's why most AVs rely on behavioral detections rather than strictly file signature or hash-based detections.


> I don't use it on any OS

You don't even leave the built-in Windows Security (or Defender) running on Windows?

I haven't noticed any performance impact or false positives in the years it's been running, and it is supposed to be highly effective. Better safe than sorry I guess.

Although I have never had a virus in 30+ years.


Windows Defender definitely has a performance impact, sometimes to the point of freezing when plugging in a USB stick, for example (on low-spec hardware).

And you can disable it, but only with hackistry.

"Although I have never had a virus in 30+ years. "

And this might translate to: you never had a virus you were aware of.

Most professional viruses are quite silent and don't want to be noticed by doing noisy stuff. Their point? Spreading everywhere until they find a high-value target. But I doubt Windows Defender is a defence against those.


Afaik, modern malware will also do things like keylogging for passwords; staying silent is reserved neither for high-value targets nor for AV-evasion-tier malware (and it's relatively easy to achieve).


Yes, "high value" is relative.

But a criminal gang won't risk attention to steal a few bucks from a poor Linux hacker, who will potentially raise hell to find out how the theft happened.

Which is the reason Linux is more secure in the first place: there is simply plenty of easier prey elsewhere.


Won’t they? How do you know what an entire collective would do?


I know that from basic reasoning (what would I do if I wanted to make money via hacking?) and from the fact that almost every day there is news about someone being ransomed on Windows, but very, very rarely (if at all) on Linux (on the desktop). Lower numbers, sure, but also a higher IT affinity. Overall not worth the hassle.


It's been a few years since I ran Windows, but I remember some annoying behavior.

- Every time you open a network drive in Windows Explorer, it will (sometimes?) partially download the contents to scan for viruses. It was noticeable on wifi.

- If you run grep in a large repo (in the multi-GB range), it will scan each file before letting grep access it. Ripgrep is multithreaded, but that didn't matter since Defender seemed to be single-threaded. I could see it pegging just one core as it made sure none of those text files had a virus.

I ran Process Hacker, which had a tray icon you could quickly hover to see what's using CPU. I noticed Defender slowing things down often enough that it had to go.


> You don't even leave the built-in Windows Security (or Defender) running on Windows?

Is there even an option to disable it? I disabled what I could on my VM and it still eats up all my CPU and memory (I should maybe get a faster laptop, but it works fine for everything except this, and I only rarely need it to test something).


+1

I too would love to know how to disable it completely.


Adding C:\ to the whitelist is your best bet. I strongly advise against it unless you’re doing malware research, though.


WinAero Tweaker can disable it.


"Lisa, I want to buy your rock."


For my own (PC) machines, I don't run any sort of AV or anti-malware. For everyone else in my family, they run Defender and an anti-malware tool.


Yes, we use ClamAV at work.

The reason is maybe the least satisfying of all: there is a rule somewhere saying that all workstations have to have an antivirus, Linux or Windows, same rules. Since "apt install clamav" is easier than arguing with the rulemakers again, that's what we did. And it is also not completely stupid: Linux viruses exist, and detecting malware on unaffected systems is good too, because chances are that uncaught files will end up on vulnerable systems later. But really, the main reason we installed an antivirus was to comply with corporate rules.


Ah yes, the pleasures of getting ISO certification.


I've seen clamav used for PCI compliance too. Probably a lot of certifications that have some "security" component.


SOC-2 requires that you run an antivirus too.


No, it doesn't.

It's very important to distinguish between what is actually required for compliance, and what is being done in the name of compliance, and make sure that "compliance" isn't just abused to shut down discussions easily.

SOC2 does not have any criteria specifically requiring antivirus software.

The actual requirements are more generic; for example CC6.8: "controls to prevent or detect and act upon the introduction of unauthorized or malicious software". That doesn't even sound like an unreasonable requirement to me.

If your company reads "no computer without antivirus" into that, that's on your company (and what they put in their SOC2 Type 1). Of course AV manufacturers will gladly agree that it should be read that way, and auditors will be more familiar with that approach. But if you can achieve the same in a different way, that's ok too - you just have to write it down that way. And the auditors can be reasoned with - their focus is anyway more to verify that you can show evidence (e.g. screenshots) of actually doing what you wrote down.


Has it caught anything?


No, at least not on the desktop machines.

On servers, it picked up a few exploitable (but uninstalled) packages from our Debian mirror, and a few EICAR files. So, no actual threat averted.


We definitely have... but they were windows viruses / malware. Still, glad to see it was flagging something.


I think what we really need on Linux is:

* A better application firewall (like Little Snitch for macOS, OpenSnitch looks promising)

* Sandboxing by default (falling a bit behind macOS, bubblewrap is a good solution)

* Better package management (Nix is SOTA, but we need better tools to monitor upstream against malicious commits)

* Better monitoring tools (that take advantage of eBPF and report suspicious activity)


Not sure how good bubblewrap is, to my knowledge it only refuses unprivileged actions and doesn’t really have a way of “negotiation” between the sandbox and the app running. I do know that flatpak does have this option for at least the file picker dialog, which is a good direction, but ideally the mobile OS’s permission system should be adapted in some way.

My gripe with flatpak is that it mixes up an (imo bad) way of packaging with sandboxing.


Yes, I agree flatpak is a bad way of packaging. Note, bubblewrap is independent of flatpak.

In fact there are some proposals to add sandboxing to nix, which is the antithesis of flatpak, using bubblewrap.

Firejail is a more usable alternative and comes with very sane default rules, e.g. only allow Firefox to see the Downloads directory in home.

However, it has a much larger attack surface than bubblewrap [1].

[1] https://github.com/netblue30/firejail/discussions/4522


We are developing the Portmaster Application Firewall that has a couple nice privacy and security features, including network monitoring. Open Source. Linux & Windows. Android in progress.

https://safing.io/

If you check it out, we'd love feedback!


Why do you want an application firewall? I thought the reason folks ran those on Windows was because of proprietary, must-have software that opened ports with mysterious purposes that unresponsive vendors wouldn't explain or close.


Little Snitch is designed to protect you by limiting outbound traffic. The idea is to block all traffic and approve or deny application connections the first time they happen by creating rules.

Imagine you are running a compromised package installed with e.g. pip. This could provide a last line of defense when it tries to steal your data, if it's not supposed to make certain connections.
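A rough Linux approximation of that last line of defense, without a full application firewall: run untrusted code (e.g. anything pip-installed) as a dedicated account and default-deny its outbound traffic. A hypothetical nftables sketch, where "untrusted" is an account name assumed for illustration:

```
table inet outbound_filter {
    chain output {
        # Filter locally generated traffic; accept by default so the
        # rest of the system is unaffected.
        type filter hook output priority 0; policy accept;
        oif "lo" accept
        # Drop everything sent by processes running as "untrusted".
        meta skuid "untrusted" counter drop
    }
}
```

This is much coarser than Little Snitch's per-application prompts, but it gives the same default-deny posture for the code you trust least.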


The big issue I had with ClamAV is that the entire virus definitions database has to be kept in memory, which is significant on VMs with only 1 or 2GB of memory. Maybe there's a way to limit the definitions to only Linux malware, since IIRC the majority of the ClamAV database is for Windows viruses/malware.


A better poll option would be "Yes, it's stopped malware from running."

Has anyone ever caught a thing with it running? It's exclusively a checkbox item for security. Does nothing but take up RAM. Tell me otherwise.


ClamAV is worth running; it doesn't use many resources. Linux is popular enough now that it is an attractive target, especially with all the cloud VMs running out there.

If you run it behind a NAT, you'll need to run an internal virus database server, since they rate-limit downloads per IP. So fetch theirs once per day, and serve as much as you want internally.
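If I recall the config correctly, freshclam supports this setup directly via its `PrivateMirror` option: one host fetches the official databases on a timer and serves its database directory over HTTP, and every other machine points at it. A sketch of the client side (the hostname is hypothetical):

```
# /etc/clamav/freshclam.conf on internal machines (sketch).
# The mirror host runs freshclam normally once a day and serves
# /var/lib/clamav over plain HTTP.
PrivateMirror http://clamav-mirror.internal.example
```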


If you're in a situation where you need to use antivirus on linux I'd assume you're already owned


Linux desktop users are so trusting; the only reason half of them aren't owned yet is because they aren't being targeted.

Groups doing Windows malware are routinely doing things like reverse engineering vulnerabilities from binary diffs within hours of Microsoft releasing a patch. If they spent even a fraction of that skill and effort targeting Linux desktop users, plenty of Linux users would be owned fast. And unlike with Windows, the Linux community as a whole hasn't spent years evolving ways to respond to this quickly (with the notable exception of _offering_ security updates to address vulnerabilities quickly... through some channels and not others, and many users update twice a year at best).

How much software from hundreds of third-party repos (e.g. AUR) and third-party package managers (e.g. pip) do devs use? Taking over an abandoned package (maybe a dependency of a more popular package), or worse actually legitimately maintaining a package for half a year before adding the malware, isn't hard. And once the malware is on the computer, there are few safeguards for the actual data (malware can probably encrypt $HOME or exfiltrate ~/.ssh/*).

The Linux ecosystem is more than the kernel; it's everything other than the kernel that needs hardening. Especially the culture; the "it can't happen to us" attitude needs to go.


> Linux desktop users are so trusty; the only reason half of them aren't owned yet is because they aren't being targeted.

And the other half have ClamAV? Why _wouldn't_ they be targeted? Surely a large number of proprietary technologies are developed on Linux.


No, the other half would simply get lucky and the malware wouldn't reach them.

No malware can reach every system, especially in the Linux world which is so fragmented. Users will only use repos relevant to them - for example, Fedora users won't use AUR/PPA, malware on crates.io would only reach Rust developers etc. Also, it would be much harder to get malware into any distro's primary repos than dedicated "lower standards" repos like PPA/COPR/AUR, which many users simply don't use. And realistically malware authors will only be able to infect some packages, and for some time.


Adding to this: Linux users are diverse in the applications they use. Even Window Managers vary widely between users and distributions.

They are also vocal about changes so when someone notices something in their package it will be shared quickly.


Let me introduce you to a security topic called ‘compliance’.


owned by the man


Naive question: Is there information about how viruses and anti-viruses work? I have various linux servers (which seem not to be the point of this discussion) and a linux laptop machine I use almost exclusively for development. I worry about supply chain attacks (I do machine learning development in python and there have been various recent examples of package repositories having malicious content inserted.) Would any antivirus help with that?

What else do I need to be worried about using linux?


ML is pretty promiscuous (as are some other ecosystems), so supply-chain attacks are a realistic threat to a developer. Restricting the user that runs such code is helpful. But suppose it gets owned: can it call home? Can it escalate to root?

The latter might be via a local kernel compromise, but that's challenging if calling home is hard. Otoh, sudo is often installed...


ClamAV can detect programs that take part in DDoS attacks. The program did not attack the VM, just used it as part of a botnet.


So far, most classic AV systems are either so bad you're better off with nothing, or not advanced enough to do much (and thus not really helping with anything). It's not that there is no need; it's just that there is no realistic scenario where what's available and what it needs to solve match up.


I don’t even use it in Windows.


Unless you've specifically disabled Defender, you're using it.


The only Antivirus that has not yet bitten me is Microsoft Defender. I'm not usually one to praise MS, but this one eliminated a whole category of problems for me and many other people, especially non-technical folks. It works 99% of the time and they have their peace of mind and it doesn't mess up their system.

As for Linux... I was trialing something at my last job and it slowed compile times to 1/3 of the original speed, and did some other things. Glad I could save my coworkers the experience by writing a detailed report of why this is a bad idea for dev machines.


ClamAV is the only antivirus I know of for Linux.


It's all I've heard of too - though I suppose you could run one in Wine or a container, or even a traditional VM for that matter (just limited to scanning the mounted volume, not processes etc.) - I just wanted to keep the option open and didn't think there was any point splitting the 'yes' vote.


ClamAV runs natively on Linux. AV typically hashes files and matches them against a known database; it's not doing something that is only possible on Windows.
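The hash-and-match core of that is small enough to sketch in Python. This is a toy stand-in, not ClamAV's engine (which also has byte-pattern and heuristic signatures); the one "signature" below is the well-known SHA-256 of the EICAR test file:

```python
import hashlib
from pathlib import Path

# Stand-in signature database: SHA-256 digests of known-bad files.
# A real database holds millions of entries; this one is the EICAR test file.
KNOWN_BAD = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def sha256_of(path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root):
    """Yield files under `root` whose digest matches a known signature."""
    for p in Path(root).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD:
            yield p
```

The weakness discussed elsewhere in this thread falls straight out of the code: flip one byte of the payload and the digest no longer matches, which is why real engines layer pattern and behavioral detection on top.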


It also looks inside files, complementing my FreeBSD desktop.

PDFs, for example, can embed JavaScript.

Clam alerts on those with a malicious payload.

I am sure there are false positives, but I don't want anything embedded in my PDFs.

I use the xPDF reader and feel relatively safe. It does only the bare basics.

Nevertheless I like knowing what's lurking inside the PDFs on my disk.

Deleting any PDF with embedded JS is fine with me.

For that I salute you, ClamAV.


I know, I was saying you could, I suppose, run Avast or McAfee or whatever in those ways.


Recently hit v1 too


I don't. I guess I blindly trust the packages in Arch and read the PKGBUILDs in AUR. Never found any virus issues.


No. Antivirus software is a scam on any OS. But don’t tell that to the sheep — we don’t want to spook them.


Yes, for compliance reasons. I use ClamAV because I found it in the repos and it was the only functioning one.


What does more harm than good mean? What harm could there be but a slower machine?


Over-reliance and security holes.

Two issues that have been pitfalls on Windows: the user puts too much trust in said AV system, not understanding its limits and capabilities, and subsequently stops doing their own due diligence in the vast depths of the net.

Also, historically AV programs were themselves used to breach systems. They usually enjoy a high level of access, but the team behind these programs isn't magically better than the one behind any other piece of software. That makes them a target for attackers, who study the AV software to circumvent it and, as a result, may stumble on ways to use it as an attack vector.


I mean, there are several AVs on Windows that make you massively more vulnerable to attack in really stupid ways (like injecting laughably exploitable code into browsers).


Antivirus consumes money, staff time, CPU resources, disk I/O, etc.

You can have false positives and false negatives, which can cause even more problems and train people to ignore whatever the antivirus says.

The antivirus can cause security problems as well: things like man-in-the-middle proxies to scan HTTPS traffic that "forget" to check the certificates, which handily defeats the entire point of HTTPS.

Basically it makes security worse and consumes resources that would be better used elsewhere, like, say, on a whitelist.


E.g. some anti-virus software requires a kernel module; this adds additional attack surface.

E.g. historically, anti-virus engines have had bugs where, when they searched inside a .zip file, their .zip parser was susceptible to a buffer overflow that allowed a malicious file to run arbitrary code when scanned.

E.g. some anti-virus software has a daemon that listens on an exposed localhost port and receives RPCs; websites in your browser have been able to make requests to the anti-virus daemon.


The argument I was anticipating (having seen it before) was that it widens the surface area, it's another thing (and its packaging/distribution) to trust, so one might feel that there's more harm/risk from that than the good (finding other issues) they expect it to achieve.


If you work for a company that has cybersecurity insurance, they ask, point-blank, if all employee desktops have antivirus before they quote your premium. You just have to use it. It’s not a subject for religious dogma.



Would love to hear from the "I don't think we need it on Linux" folks. What is it about antivirus that we don't need? Is it the wrong solution to securing a workstation?


AV as a concept needs reconsidering: the problem is that attackers can test to see if their malware is blocked and tweak it until it isn’t, so there’s always a lengthy period where they can launch attacks which aren’t detected. AV also doesn’t help with the common case where something runs entirely in memory in an exploited process - the vendors will blather about behavioral checks but that doesn’t seem to do more than keep marketers employed.

Where I’d prefer to see time going is basically two areas: rather than trying to catch every possible bad thing, only allow known-okay binaries to run (the hard part being supporting software developers), and extensive sandboxing to catch up with Apple. It’s hard to block every possible bit of bad code but we can minimize a lot of the damage if, say, a malicious PDF file didn’t mean the attacker could just read AWS credentials or SSH keys.

The other benefit is that AV software has a history of security problems. Most of that is because the vendors still use C like it’s the 90s, and putting complex binary-decoding logic into a privileged context is a recipe for bugs.


Most software in use comes from repositories, and most of the time the user is not logged in as root. Any virus would have to exploit a system vulnerability; if that vulnerability is common and serious enough to cause harm, antivirus software will take longer to catch it than updates will take to fix it automatically.


> Any virus would have to exploit a system vulnerability,

I'm not sure if by virus you mean some specific definition, but malware can still result in a very long and painful day/week/whatever with just access to your home directory and nothing else.

What would happen if your ~/.aws folder was piped to pastebin? Even if you're using short-lived STS sessions with ephemeral keys, I imagine most people would still find themselves in a world of hurt.

How about sending interesting files from your browser's userdata directory? All your cookies, your browser's password manager, possibly copies of your third-party password manager's cache (even if it's all encrypted), copies of cached files, your Downloads directory.


Calling home or exfiltration is indeed a serious threat. Otoh, it's fairly straightforward to partition/reduce/sandbox environments in Linux. Do you need to touch AWS infrastructure from the same account, host, or VM as you read email or surf the web? Do these environments need full, direct internet access?


> it's fairly straightforward to partition / reduce / sandbox environments in Linux.

Perhaps in some distros, but not so much elsewhere.

> Do you need to touch AWS infrastructure from the same account, host, vm as you read email or surf the web?

In short: Yes.

> do these environments need full, direct internet access?

Not sure what you mean by the environment, but in general, yeah - a whole bunch of tooling these days is basically unusable without internet access.


What percentage of desktop Linux users do that? Most distros don't do any sandboxing and those that do typically have easy ways to run binaries outside of a sandbox.


>Most of the time the user is not logged in as root

Why does this matter? Most malicious things someone would want to do don't require root, e.g. VNC, DDoS, mic/webcam capture, token stealing, keylogging, ransomware, stealing SSH/PGP keys, adware, backdoored web browsers. And for the small percentage that do, you can just backdoor sudo or show a fake system-update dialog that captures the user's password, to get root whenever you want.


It's fairly easy to resist lateral movement in Linux...


Two of the most ubiquitous categories of malware today are ransomware and agents used to steal secrets such as web browser sessions. Because both of these categories interact with files the user has access to anyway, privilege separation (especially only the basic form of privilege separation traditional on Linux) is of little help. The attack surface is all owned by the user anyway. Both sandboxing (such as kernel capabilities) and mandatory access control (SELinux) are helpful in reducing this possibility, but both of these are relatively difficult to use and so not common on workstations.

It's also reasonably common for an exploit to become known by AV vendors and have signatures released before it's been widely patched. Turnaround time from a major exploit becoming known to the industry to a signature release by AV vendors can be as short as a day, especially with the significant intelligence sharing that now happens in the AV industry. AV vendors sometimes release signatures before the exploit is publicly known as a result of information-sharing agreements, although this is a touchy issue because the signatures themselves become a form of public release. While keeping software up to date tremendously reduces risk, there is still a window of opportunity.


> Any virus would have to exploit a system vulnerability

XKCD 1200[0] disagrees:

> If someone steals my laptop while I'm logged in, they can read my email, take my money, and impersonate me to my friends, but at least they can't install drivers without my permission.

[0]: https://xkcd.com/1200/


That's one of my favorite xkcd comics because it describes the (very dire) situation so well. Unfortunately the Linux userspace really doesn't seem to care about security even a tiny bit, as if we were still living in the early days of computing when you could naively trust everything. Fortunately open-source software is indeed well-mannered most of the time, but that is no reason to be delusional.

Mobile OSs are way ahead in terms of security, and the other two major desktop OSs also do at least some mitigation against potential attacks. Yet our .ssh folder, web cache, backups, everything can be read/written from the same user account one uses for npm-installing any random package, which has the potential to just encrypt your whole home directory.


I'm hopeful about efforts like bubblewrap, but widespread adoption is very tough. As long as policies are delegated (like AppArmor), I don't see that improving.

TPMs and passkeys are also a good refuge: just keep private material off the device.

What I'd like to see is a boundary between system-installed packages (which I implicitly trust, though I worry about malicious commits upstream, as others have noted) and other code, such as that installed via pip, npm, cargo etc.

While it's feasible for me to audit a single shell script, or a PKGBUILD from AUR, it's pretty much impossible for modern language package managers.


I just don't think it's effective. In fact, it can increase your attack surface [0,1].

[0] https://www.bleepingcomputer.com/news/security/antivirus-and...

[1] https://rack911labs.ca/research/exploiting-almost-every-anti...


Well, there are a couple of problems: A) virus companies feed the FUD to make it seem really important and worthwhile, B) virus-writing folks have figured out how to make an arbitrary number of binaries that the virus checkers won't catch for days or weeks, and C) if attackers are running binaries on your system, you have problems that antivirus is not going to catch or fix.

Virus companies seem to focus much more on marketing than technical excellence. They typically run with full privileges, regularly download code/rules from a central server, and are often written poorly. The industry seems awash in security issues: buffer overflows, false positives, false negatives, not checking signatures on downloaded rules/code, and breaking various APIs and network protocols by playing man in the middle. Even things like proxying SSL to scan traffic for bad downloads... but failing to check the cert.

So I see little value in running a closed-source daemon from an anti-virus company to catch binaries that no serious attacker would use anyway. I trust the binaries from the OS's repos MUCH more than the antivirus programs. Similarly, I don't trust things like IBM's BigFix, or the malware Gateway used to profit from tracking users and showing ads via the special "dock" that came installed on their Windows systems. They of course made it very hard to uninstall, since that maximizes their profits.

Generally it seems like the wrong approach. If you want to do it right, have a whitelist of approved binaries, ideally hooked up to your local mirror/repo so you can have approved signatures for all binaries BEFORE said binaries land on your Linux boxes. Spend whatever resources you would have spent on anti-virus on patching, reporting, monitoring, firewalls, training, etc.


Most of it comes from a philosophical perspective rather than a technical one. In most cases, viruses find their way in through software vulnerabilities, leaving deliberate (albeit unknowing) installation in the minority. With the open-source/free-software development cycle, it should be possible to eliminate most avenues of vulnerability.

So in short, the questions we should be asking are:

1. How do viruses find their ways in?

2. And, what can be done (as a user or developer) to prevent that?

These are obvious, I know, and the software devs for Windows aren't deliberately writing insecure software, but these questions are ones better seen from a behavioural point of view.


Never had linux antivirus do anything other than flag Windows viruses


LoL that made my day XD


ClamAV/FreshClam/uBlock Origin/SeaMonkey/XFCE4/FreeBSD

I like the deep dive Clam gives you inside files. Finds lots of PDFs with JS. Naughty little kiddies.


No anti-virus on any machine except my brother-in-law's laptop. He loves downloading "free" screensavers and the like.


At work we have to run Microsoft Defender on Linux. Seems pretty good. I don't even notice it.


Huh? You mean you install Windows Defender on Linux, or you scan the Linux partitions with Defender from Windows?

Was this irony, or has the world become even weirder?


> Was this irony, or has the world become even weirder?

Nope, it's definitely the latter. Microsoft released a version of Defender for multiple platforms, including macOS, Linux, Android, and iOS. IIRC there are personal and enterprise editions, both linked to 365 through "Microsoft 365 Defender".

I really can't imagine it doing much especially on iOS, it's there for the security checkbox/policy I guess.

Sources: https://learn.microsoft.com/en-us/microsoft-365/security/def... https://learn.microsoft.com/en-us/microsoft-365/security/def...


I don't even use an anti-virus on my windows PC.


So you disabled Windows Defender?


Ah, no I haven't disabled that. So it must be running.


ClamAV, because Pulse Secure VPN demands it.


No and I wouldn't use it on Windows. I used Windows before 2007 but never had problems with viruses. Mostly they come with pirated software.



