[Non-experts] mistakenly worry that software updates are a security risk.
I think this betrays a lack of thought about the risks to non-experts. Tons of malware masquerades as legitimate updates, and non-experts don't always have the knowledge to distinguish legitimate updates from malicious ones. Therefore, to non-experts software updates are a security risk.
Edit: And this is why Chrome's policy of updating automatically and completely silently is the right thing to do, and everyone else (Adobe, Oracle, Microsoft, looking at you) is doing it wrong.
I largely agree with you, but non-technical users have had the frequent experience of "I updated X and then X, Y, or Z broke." Sometimes they even have the causal arrow correct, too. Yesterday they had no problem with Word or hackers. Today, Word doesn't work, to protect them from hackers. "Thanks, geeks."
Or consider how non-technical users can come to associate installs/updates with Arbitrary Negative Consequences even without that being a reflection of reality.
Bingo Card Creator, back when it was downloadable, was accused of killing multiple hard drives every year. You and I know that is preposterous, but all the user knows is that the last consequential thing they did with their computer was install the update and now their machine is bricked.
(The user does not appreciate that the MTBF of laptop hard drives among BCC users is approximately 18 months, and that we'd accordingly be the last thing someone did before data loss at least once a day.)
At least one of these users contacted their IT department, whereupon they were warned in the sternest possible terms to never ever ever ever download anything from the Googles because that could erase all their memory and give their hard drive a virus. A lifetime of learned (and taught) helplessness like that adds up.
> And this is why Chrome's policy of updating automatically and completely silently is the right thing to do
Not everyone is hooked up to unlimited broadband 24/7. To anyone who is frequently jumping between capped satellite & 3G/4G networks, silent auto-updating software not only unexpectedly slows down your already non-ideal connection, but also eats up lots of your capped data.
Lots of services tend to forget about the users who aren't hooked up to broadband 100% of the time.
The proliferation of mobile devices and the convergence of mobile and traditional end user operating systems is solving this one. Android and Windows 8.1 both have the ability to mark an arbitrary WiFi network as metered, and many core services as well as third party apps will happily discriminate between unlimited WiFi and metered WiFi/cellular connections.
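Linux desktops grew the same knob: NetworkManager carries a per-connection metered flag that flag-aware applications (and NetworkManager's own background jobs) can honor. A rough sketch, where the connection name "Phone Hotspot" is just an example:

```shell
# Mark a connection as metered so flag-aware software treats it as expensive:
nmcli connection modify "Phone Hotspot" connection.metered yes

# Verify the setting (values are yes / no / unknown):
nmcli --fields connection.metered connection show "Phone Hotspot"
```

How much this actually helps still depends on each application bothering to check the flag before pulling down updates.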
Exposing the whole update process to the end user is like exposing the innards of the car's engine to the driver. There's no need to do that. They don't need to be aware of it. It should be just part of the daily magic to them, the stuff that keeps things running even though they don't understand how it happens or are not even aware of it.
Of course, the intricacies of the process should still be exposed to the technical users via various tools and APIs.
... but you can't avoid exposing the fact that the application substantially changed with no advance notice or control, because today's updates are not just security / bug fixes but also UI re-designs, major feature shuffling, etc.
Companies have an interest in forcing users to manually download updates because of ad revenue from the download page, and bundled software on the updater. It will take a lot to convince them to prioritize user security over money.
But are they doing silent updates? Any update that requires a reboot is not silent, and as far as I can see there's been no progress in silent updating since Windows 7. It is definitely possible to patch vulnerabilities in memory without rebooting my computer; heck, most malware will silently patch the vulnerability it used as part of the infection process; Microsoft just can't be bothered to do it themselves.
That's true; however Chrome restarts in a couple of seconds and restores most of your state. Also, all of the work of installing the update is done before it prompts you to do anything. None of that is true for Windows.
When you have 1.6 billion users, every time you waste 5 minutes of their time installing updates and rebooting, that wastes 190 human lifetimes' worth of man-hours. I doubt that Microsoft properly accounts for this when deciding how much effort to allocate to making updates less intrusive.
Most people will not stare at the screen for 5 minutes while it's updating. They will be doing non-computer tasks in the meantime. Also, this ignores that updates can be postponed[1] until a convenient time (at lunch? after work?), which means the lost productivity is reduced to the time it takes to restore the workspace.
[1] Even with Windows 10's forced updates, I still think it's possible to postpone updates, just not indefinitely.
This is the wrong objection. If only 10% of users lose 5 minutes each, that's... 19 lifetimes' worth of man-hours, which is still excessive. The real reason for not doing rebootless updates is that they're hard to implement, and done badly they create a risk of problems much bigger than losing 5 minutes (like data loss or security vulnerabilities staying open).
But these are not actual lifetimes. You can't kill people by updating a billion computers for the same reason you can't have a baby in one month with 9 women.
The problem with security updates is that they often include feature updates that are not security-related... Companies like Google appear to have too many engineers and implement changes that I would rather skip.
Well, upgrading often equals bloating, and not all of us want to be upgrading our computers every two years.
That, added to what's been said in this thread, means that no, it is not the right thing to do.
It's not just MS; I'm pretty sure every Ubuntu OS update from Hardy to Lucid destroyed 1) my video configuration, 2) one other large thing, and 3) 20 little things.
But are the security experts actually safer online?
The study seems to assume that they are. It may be a fair assumption, but it would be interesting to know if it actually is true or not. It would also help validate the security practices.
If it turns out that the security experts got infected just as much, or only slightly less than the non-experts, then following their practices might not be worth the effort...
> But are the security experts actually safer online?
s/security experts/technical users/
Yes, we are. Whenever you see a laptop full of malware, that's a non-technical user. We're not safe from it entirely, nobody is 100% safe, but we're in much better shape than the regular folks.
Not just being pedantic here for the sake of it, I think it would be good to know how much safer experts generally are. Just saying "we're better than them!" isn't very convincing or useful.
I'm skeptical too. "The experts", e.g. HBGary, get pwned worse than having a bunch of crapware on their desktop, but it's for the same types of mistakes:
"So what do we have in total? A Web application with SQL injection flaws and insecure passwords. Passwords that were badly chosen. Passwords that were reused. Servers that allowed password-based authentication. Systems that weren't patched. And an astonishing willingness to hand out credentials over e-mail, even when the person being asked for them should have realized something was up. [...]
Most frustrating for HBGary must be the knowledge that they know what they did wrong, and they were perfectly aware of best practices; they just didn't actually use them. Everybody knows you don't use easy-to-crack passwords, but some employees did. Everybody knows you don't re-use passwords, but some of them did. Everybody knows that you should patch servers to keep them free of known security flaws, but they didn't." [1]
Granted, this doesn't prove that experts are generally unsafe, and maybe somebody with a beef with these ones in particular would say they're just semi-knowledgeable salesmen cashing in on the cyber scare, but it lends some weight to the idea that too many experts aren't much safer.
Well, my example of HBGary was chosen because they were a firm specializing in computer security. But, as shown by Ars Technica (or claimed by Anonymous) they had some pretty bad security failures themselves.
You do have a point about programmers, or to generalize a bit, people who are more technically-inclined than average but who don't care/know about security. I shake my head seeing things like people flashing community-built Android ROMs with signature checks disabled, closed-source rooting tools, sideloaded APKs downloaded from dubious filesharing sites, "curl http://whatever | sh".
Very good question, but I'm not sure how you imagine this could be addressed.
Security experts may use a lot more password managers and want to use unique passwords, but you could say that's just because they have a lot more accounts than normal people. Most people have a couple accounts for social networking, their bank, perhaps a local library... experts usually work in the field, spend their days online in an office, and have lots of accounts for various tasks or activities.
The attack surface is very different: lots of accounts versus just a couple. Lots of time online browsing various sites versus browsing your average social network in spare time. Perhaps I'm overgeneralizing, but it probably matches a good percentage.
And then there is the definition of "safer" or "expert". Are you an expert when you have a degree in the field? When you've followed some online courses? When you work in the field? Or when you read a lot about security?
The truth is that we don't know if security experts are actually safer. Maybe the fact that they are experts makes them more confident that they can deal with an incident if such a thing arises. So it might be that they take more risks, but then are simply better at fixing things when they break.
We also don't have a ground truth for what users should do to stay safe online. Do updates work better than a strong password? Measuring the effectiveness of different security actions at scale is so challenging that we don't know how to do it.
Yes, plus it might be true that security experts visit riskier websites because of the nature of their research. It's a difficult task to try and balance these factors out to make a useful comparison.
It depends on what your definition of safety is. The perspective of this paper probably relates to compromised accounts and information leaks, not adware infections.
I haven't completely read the full paper yet (it's pretty big), but in a brief scan, I can't actually find any definition they use for staying safe. They talk about 'protecting their security online' and 'to stay safe online', however I didn't spot anything more specific.
As you point out, there are a variety of attacks and big differences between e.g. leaking a password or getting a virus infection. But since their highlighted techniques cover both of these attacks (virus scanners, password best practices), you have to assume that they are relating to a whole range of attacks.
We did not provide a definition of what "staying safe online" means. As a result, some participants might have thought more of protecting online accounts, while others focused on keeping their systems from getting compromised, etc. But coming from a non-technical user, the question would likely be framed just like that: vaguely, because users don't know what the biggest threats are or what they should defend against first.
The thing that software security people do that most normal people don't do is: browsing and accessing email in a virtual machine, not their actual machine.
For example, running Chrome in a Docker container. Why not? Drawbacks? Security risks? Feasibility?
I understand that users download things, but personally I can't recall doing that in recent memory, other than things like news/tech spec PDFs for later review. Moving downloaded files out of the browser's container would involve a fair bit of ceremony (physically selecting files/folders and dragging them out of the browser's "Downloads" folder onto the host's file system, disallowing saving files outside of that folder, and so forth), but it doesn't seem that bad.
What do most users do with a browser other than open the thing, browse websites, and download files for later?
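For concreteness, the Chrome-in-Docker setup asked about above is only a handful of flags. A rough sketch, assuming a Linux host running X11; the image name "my-chromium" is hypothetical (you'd build it yourself from a Dockerfile that installs Chromium):

```shell
# Run Chromium from a container, displaying on the host's X server.
# --security-opt seccomp=unconfined (or a custom profile) is commonly needed
# because Chromium's own sandbox uses syscalls that Docker's default
# seccomp profile blocks.
docker run --rm \
  --env DISPLAY="$DISPLAY" \
  --volume /tmp/.X11-unix:/tmp/.X11-unix \
  --volume "$HOME/browser-downloads:/home/user/Downloads" \
  --security-opt seccomp=unconfined \
  my-chromium
```

One caveat worth noting: sharing the X11 socket largely undoes the isolation, since X11 clients can snoop on one another, which supports the skepticism about how much a container actually buys you here.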
Isn't the Chrome sandbox a better assurance than Chrome-under-Docker?
Except for the obscurity angle of course (nobody writes exploits against Chrome-under-Docker).
To be fair, VMs are also mostly about the obscurity angle too, and if you do all your browsing in a VM, the cookies alone will make the attacker sufficiently happy that they will probably not care. People don't hack because they want root on the bare-metal OS; people hack because they want to steal data. If it's in a VM or container, then getting there is enough.
No. I'm barely on board with the pain/benefit of running an isolation VM. Containers provide so much less isolation than VMs, it's hard to imagine they're worth the inconvenience.
Because VMware has essentially abandoned their client virtualization software? (Workstation 11 is a paid minor bug fix.) While it's way better than VirtualBox, it's still annoying. Stuff like USB devices will randomly not work. You need more system resources, which heats up the machine, making it hot to the touch. Oh, and it crashes at times, too. With a Windows-on-Windows setup, I was having daily crashes. VMware doesn't seem to care and offers no support with the product (gotta buy a company support plan). They even had a KB article to the effect of "Known issue: Workstation crashes when you run Office 2007".
It's usable, just a bit annoying. I feel I have little option but to run Windows as a host OS in order to get the best driver/video/battery support, so VMware is essentially mandatory.
Most of the banking-information-malware I take a look at hooks multiple browsers from the OS. So the browsers would need to be isolated in a better way than just being different browsers.
Using an uncommon browser for something like banking (Opera or Vivaldi or something) would be a pretty good solution, because no one would have bothered to develop the hooks for them. The exception is keylogging malware, which is not uncommon but less popular now than the smarter approach of watching browser form submissions on known bank websites.
It would actually be less secure to run your browser in a Docker container. If an attacker can break out of the container, it has root access to your system.
> What do most users do with a browser other than open the thing, browse websites, and download files for later?
The download part is the scary one. An average user can't make the distinction between an OS message and malware disguising itself as one. Thus they download shit that wreaks havoc on their PCs. Besides, you could be infected by a compromised Flash banner, so you don't even have to download anything.
And I don’t think that containers are a feasible approach for the average user. I doubt VMs are either. Our best bet would be browsers running in an isolation context simulating a VM. But that would require more RAM than the average user has available.
If you're using a Linux distro already, you have to:
1. install virtual box from package manager
2. download .iso of some distro
3. install
4. update virtual machine
5. browse
Of these, someone who's installed their own OS sees only 2 slightly novel steps. So yeah, "trivial" is maybe the wrong word, but it's still easy.
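Roughly, the steps above on a Debian/Ubuntu-style host (package names are examples; the guest install itself happens interactively in the VirtualBox GUI):

```shell
# 1. install VirtualBox from the package manager
sudo apt-get install virtualbox

# 2./3. download an installer .iso from your distro's site, attach it to a
#       new VM in the VirtualBox GUI, and run through the installer

# 4. inside the freshly installed guest, pull the latest updates
sudo apt-get update && sudo apt-get dist-upgrade

# 5. do all your browsing from inside the guest
```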
You forgot step 1.5: spend forever signing the VM kernel modules so they can be started under Secure Boot. Such pain. Fortunately, once the signature's in the UEFI, upgrading from Fedora 20 to 22 just necessitates re-signing the new VirtualBox module without redoing the UEFI steps.
Just to be clear, do you browse and read email in a virtual machine, or is this statement referring to the behaviours of security researchers more generally?
If so, have you ever forgotten to use the virtual machine and instead browsed or read email on your host operating system? If so, what did you do?
Furthermore, is it possible to break out of the hypervisor and into the host operating system?
Your last question is really, really interesting. The answer is yes in at least one case (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-0923), but I'm not aware of any real in-the-wild behavior of this sort (that is, not just basically a PoC of that vuln).
Note that that vuln is related to shared directories - anyone using a VM for isolation should probably not use these. I recently heard an amusing story of another infosec professional who cryptowalled their MacBook by testing on a VM with over-generous folder shares. Of course this is an extreme example caused by a clear mistake, but it's food for thought.
Non-developers probably do everything that would be valuable to an attacker in the browser and/or via email. (Heck, what else do people do on a computer?)
You really don't want to hook up your company's network storage to the internet; running a browser without domain credentials should avoid that, in a typical enterprise environment.
What should I use as a VM environment/OS? Obviously I know GNU/Linux distros, but if all I'm going to be doing is using Firefox, I don't want an 8GB+ VM file hogging space on my normal system. Right now I only have a 256 GB SSD in my laptop, and I keep my VMs on an external hard drive, which takes up 1 of my 2 USB ports whenever it's plugged in. I would love to browse in a VM if I could get away without using a ton of space on my SSD. Also, if you have any suggestions for options that also support high-DPI, that would be great, but not strictly necessary.
Qubes OS. You can run Fedora, Debian, Whonix, or Windows in "seamless" mode with minimal effort. It has a composable networking architecture (e.g. it's easy to make all a VM's traffic go through Tor, whose traffic in turn goes over a VPN). Disposable VMs are a native feature. Template VMs reduce duplication of /; application VMs use a template VM with a copy-on-write / and their own persistent /home.
Qubes uses Fedora 20 for Dom0, so you get all the same graphics support as Fedora gives you.
I used to use Crunchbang whenever I wanted a lightweight, easy and modern GNU/Linux virtual machine, but they seem to have pulled the plug.
Debian is what I'm most familiar with, so that's my new place to go. Still quite big with a relatively basic install, but not as big as Ubuntu or Linux Mint.
Really lightweight stuff is Damn Small Linux (old) or Puppy Linux (never tried it), but I'm not sure how usable that is for day-to-day tasks.
I'm a bit surprised that using a virtual machine was mentioned nowhere in the paper.
Edit: I'm also surprised that we have such a large proportion of this discussion talking about something (using VMs) that is mentioned nowhere in the paper the thread is supposedly about. Perhaps if we were also discussing the (apparently) surprising fact that it was not mentioned it would make sense (to me, anyway).
This is true. I'm hoping the "browse in a VM" thing catches on with the public, even if it's for the "I want to hide my browser history from my significant other" sort of reasons.
Browsing makes sense, but why email? Unless you run an OS and email client that 1) renders HTML mail by default (making you vulnerable to browser engine attacks) or 2) makes it too easy to run executable email attachments, is an email client really higher risk than many other pieces of software?
Any time you provide a vector for an attacker, you run the risk of being exploited. Email is even worse than web browsing, because an attacker has the ability to send it to you, rather than waiting for you to go to their site.
I would never claim to know my email renderer so well that I was 100% confident it didn't have any attack vectors, particularly with all the crazy things you can do with Unicode nowadays. It's much more straightforward to just read it in a virtual machine; that way, even if you guessed wrong about the attack surface of your email client, the damage is contained.
Even more straightforward still to hire a Mechanical Turk to read your email to you over the phone, and there's no risk of viruses capable of VM breakout!
My personal work email flow involves clicking a lot of links. Email is quite tied to web browsing for me, so if I wanted to make sure that all of my browsing was done in a VM, and I wanted to make it convenient, I would do my emailing from the same VM.
I am personally concerned with the "patch, patch, patch" message. Stated that way, I completely agree with it. However, for many it is just "update, update, update."
I'm all for getting the latest security patches. Or any security patches, really. I'm growing tired of getting the latest possibly risky feature from a product because it is the only way I can get a security patch.
Just yesterday, Windows Update automatically installed a driver for my GTX 970. It broke OpenGL and I had to go to Nvidia's website to get their standard driver and reinstall it.
And since Windows 10 breaks the ability to block specific updates, I'll probably have to keep the installer around and reinstall it every damn time that Windows Update decides that the driver MS is distributing is better than the one from nvidia.com.
I'm a techy and I completely understand why users ignore updates. Either it's invisible and the user doesn't know it happened, or it breaks something with no obvious way to revert, or it arbitrarily changes things that were fine how they were. So their perception ends up being "every time it updates, things get worse."
I'm not sure you can blame it all on Nvidia. The versions that Microsoft ships are written by Nvidia, yes, but they're torn down to essentials with a bunch of features removed. Among those: Nvidia's various control-panel-type add-ons, and apparently some important OpenGL extensions that LWJGL relies on.
By going through WHQL, Nvidia gets to have better out of the box support on Windows, and Microsoft gets to ship a stripped down driver without Nvidia's control panel cruft and with better support for DirectX than OpenGL.
I don't see what Nvidia's motivation for the last part is unless Microsoft said "Don't bother including all of the OpenGL capabilities, DirectX is fine for basic drivers."
It frustrates me that EVERY day when I open the Pandora windows desktop app(lication), an Adobe AIR popup asks to install an update. EVERY day! I know Agile is hot and all, but is their AIR framework so fresh that they constantly have to fix things?
Brilliant. Measuring how well typical users understand/implement security measures has long been overdue.
Personally, I find Figure 2 (on Page 5) of the paper most interesting: it shows the difference between expert and non-expert mentioning certain practices -- which to me seems roughly equal to how under-/overappreciated that practice is.
The top contenders for underrated (i.e. used more frequently by experts compared to non-experts) are: System updates, 2-factor-auth, password managers, unique passwords and checking for https. Most overrated: antivirus, password changes, only visiting known sites and using strong passwords.
As a security community, we appear to have gotten the point across when it comes to antivirus and strong passwords. Anyone giving general advice should consider this and emphasize the "underrated" measures.
> The high adoption of antivirus software among non-experts ... might be due to the good usability of the install-once type of solution that antivirus software offers.
Or due to the fact that antivirus companies make money selling antivirus software to non-experts and have a long history of advertising it to them as a security solution.
One bit of advice that should be up there is to run an ad blocker and a flash blocker (not so relevant anymore now that FF started blocking by default). I know, I know, websites depend on ads for revenue. But ads are also a great way to deliver exploits, in addition to all the personal tracking ad networks do. Our number one priority is to protect ourselves, not to protect website revenue.
https://www.eff.org/privacybadger I prefer this over Ghostery. You can also, for the most part, replicate Ghostery by just downloading an appropriate filter list for uBlock Origin.
Security experts have more faith in their ability to avoid triggering a scenario where a virus has the chance to gain a foothold. That's why there's such an emphasis on patches: to plug the holes they can't attend to personally.
- many would prefer to whitelist trusted software than blacklist malware
- bypassing AV is trivial
- http://www.sevagas.com/IMG/pdf/BypassAVDynamics.pdf
- https://github.com/Veil-Framework/Veil-Evasion
That many security experts use GNU/Linux or *BSD is definitely a factor, but many people still use Windows.
I'm not a security expert, but I don't use any 'continuous scan' anti-virus on my Mac because I strongly suspect it would make the system more vulnerable, not less.
In my personal and professional experience, A/V is effective but not in the way you might think. A/V has a very good detection and quarantine rate for situations where the infection vector is bloody obvious. When you open the .zip attachment to that email and then go right on ahead and run the .exe inside, that's when a good A/V product will save you.
What I'm saying is that I view A/V as protection from users, not from malicious actors. In a corporate environment with mixed-skill users, A/V is key. On my own devices, I don't frequently get into situations where A/V would be effective, the threats that are more likely to get me are more sophisticated.
This isn't to say that I don't have Windows Defender enabled, but I don't see a value return in purchasing a commercial product.
It's not entirely surprising that the experts ranked "install software updates" #1, but it didn't even make the non-experts' top 5.
We, as an industry, still have a long way to go in making it easy and safe for consumers to keep their software up to date. Have you ever tried to explain to someone (outside the industry) which "click to install the latest version" messages are important to obey, and which are malicious?
This seems misleading. Good security geeks know that there isn't a single rule that works for everything, and this list might mislead the non-security crowd.
For example, 'software updates' are half the battle, but the other half of the battle is configuring your software to be more secure (browser sandboxing, NoScript, pop-up blockers, malware detectors, OS hardening).
All the rest of the security concerns are authentication-based, but there are very few accounts important enough to need serious protection. Banks and money transfer services, business accounts (taxes, professional services, eBay/Etsy merchants, etc.), followed by e-mail accounts, are probably the only really critical accounts most people have. You can hack my Facebook or my Huffington Post account; it doesn't really threaten my safety.
I think the one thing nobody does that would actually matter to them eventually is keep offline backups. Facebook might lose all your pictures and FB messages tomorrow. They have zero responsibility to keep that crap for you. If you do get hacked and someone deletes all your pictures, don't go crying to Facebook; they have enough problems.
At the end of the day, the biggest threat to your online safety in general is malware. Once malware is on your device it's game over.
Our systems can be hacked, expert or not. Minimize your online footprint. Do not keep years and years of email and other stuff on Internet-connected devices; back it up to external media.
A plug for our paper, also at the SOUPS conference. We tackled a similar topic, but with a different method and broader focus (how experts and non-experts in general conceptualize the internet as a system): https://www.usenix.org/system/files/conference/soups2015/sou...
It's great to see a large company like Google focusing on this kind of work though.
I've had my mom use lastpass for a couple of years, and just recently enabled grid multifactor auth (free). The main thing about their multifactor options is you can optionally "trust" a computer and only do multifactor on it once. So she won't have to ever use multifactor, but it's mandatory elsewhere which essentially keeps everyone else out while not changing her workflow.
Seems like another attempt by Google to get phone numbers from users. "Experts are using two-factor authentication; review your security settings and give us your phone number." Maybe I am a bit paranoid...
What kind of security experts are they talking to... My personal list of most important things to do:
1. Run a version of Linux ( Windows is simply insecure )
2. Use Firefox + NoScript and only ever temporarily allow JS to run as needed. ( JS is -not- safe and at any point in time there are at least a handful of zero-day exploits )
3. Use an offline password manager ( KeePass )
4. Use a secure anonymous non-logging VPN for all internet use
5. Use a paid private email account, not some free one
6. Use VMs for running software that may not be safe
Those sound good, but I'm shying away from Firefox at the moment for security reasons. I love their open-source approach and would prefer my browser to be open source.
However Firefox does not have tab sandboxing, extension sandboxing, or process isolation. These are pretty standard features in most browsers now (except for process isolation which seems to be Chrome only at present).