Simple solution for tech-savvy users: all system prompts should display a user-selected image. If the wrong image is displayed, you know it's a scam.
For example, when I install Windows 8 or Mountain Lion, one of the first prompts I must address is:
"Please choose an image to help you identify
valid system prompts"
The user is then presented with 10 images (a tiger, a house, a moose, etc.) from a library of 10,000 images.
The user decides to use the image of a tiger.
The next time the user gets a system prompt, if it doesn't show the picture of the tiger, they know it's a fake prompt.
Tech savvy users are not the main problem in malware.
The whole SiteKey/tiger-image solution only gives you an illusion of a solution. What happens when the system displays "System error, unable to display the image"? How will a convincingly written error message prevent your average gullible or below-average-competence computer user from logging in to a phishing site?
Think of how many things can go wrong on a computer. Think of every time when someone asked you why something works one way in this situation, but another way in another situation, and you had to use a technical explanation (excuse, really) for that inconsistency. Computing is full of that. Until we get to a place where people can actually TRUST and expect consistent behavior in their computing devices, the SiteKey/tiger will be well circumventable.
As far as I'm concerned, SiteKey is a brilliant business idea for satisfying the regulatory two-factor requirement, but a terrible idea in practice.
Take a cue from banks, and add a "confidence word". The user enters a special phrase such as "myspecialword". If "myspecialword" does not appear in the corner of the dialog box, they will know it's fake. I doubt there would be many technical issues that would prevent a simple phrase like that from displaying in the corner of the box.
What if it shows "PHP Parse error: syntax error, unexpected T_VARIABLE in ..." where the confidence word should be? Or better yet, "ConfidenceWord database is empty" - something pseudo-techy that clearly implies a temporary f#ckup on the bank's side.
The problem is not if the bank's site breaks; the problem is what happens when a phishing site displays "error: connection to ConfidenceWord database failed". What percentage of users will say "oh, the bank's site is messed up; let's go in anyway"? A high percentage.
I hardly believe any technical solution on the bank's website is going to prevent phishing sites from mimicking it. People have to learn to recognize phishing sites and phishing tactics in electronic communications, just like they have to learn to spot a fake ATM.
Frankly, I believe it's not something you can make happen. I remember a story here not long ago about honeypots in China: businessmen got a full briefing and warnings from MI5 before leaving the UK, and some would still leave their computers and smartphones powered on near the bed. I think it's the same with some users: they just don't learn and never will. (I have another theory that they don't want to learn anything about computers and expect them to magically read their minds, but I always end up cursing when I try to explain it, and besides, it's not the point :)
What I never understood is why an attacker couldn't just mirror the user's actions to the real site and scrape the confidence image or word from there to show on the phishing site. What am I missing?
I use (unfortunately) Bank of America online banking, and if I don't see the SiteKey, or really if there is any error at all during the sign-on process, then I leave and immediately start Googling for Bank of America security breaches in the news. If I don't find anything, then I try to log in again the next day.
Username: ____
Password: ____
Please note that as of June 7th, 2012, the system prompt
image identification system has been deprecated and is
being replaced with new security measures.
If you have any questions or require assistance, contact
technical support at support@bank.com
How many tech-savvy people would not be even a bit surprised by their bank legitimately doing something as ridiculous as this?
Some fairly large banks here in Norway have at times run with a not-completely-valid SSL certificate, making the bank login indistinguishable from a man-in-the-middle attack.
Answer from their phone support? "Oh yeah whenever you see that warning, just click 'allow' or 'ignore'."
My bank used to do this, and I never quite understood why. An attacker could easily mimic the site's behavior.
1. Attacker prompts me (or my grandmother) for login name.
2. Attacker gives login name to bank.
3. Bank serves proper image to attacker. Attacker stores image.
4. Profit.
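A minimal sketch of that relay, in plain JavaScript (all names and data here are made up; `bankImageFor` stands in for the real bank's image lookup, which a live attacker would reach over the network):

```javascript
// Toy simulation of the relay attack described in the steps above.
// bankImageFor() stands in for the real bank: given a username,
// it returns that user's security image (hypothetical data).
const bankImageFor = (username) =>
  ({ grandma: "tiger.png", alice: "moose.png" }[username]);

// The phishing site doesn't need to know anyone's image in advance:
// it simply forwards the username to the bank and echoes back
// whatever image the bank serves.
function phishingPage(username) {
  const scrapedImage = bankImageFor(username); // steps 2 and 3
  return {
    image: scrapedImage,           // victim sees the "right" image
    prompt: "Enter your password", // step 4: profit
  };
}
```

The victim sees their genuine security image on the fake page, so the image proves nothing about who is actually serving it.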
Yes, that type of security image is vulnerable to man-in-the-middle attacks but that is not what was proposed.
The parent poster suggested that all system messages include the security image. The user is not prompted for some sort of ID first; they're already using the computer and are presumed to be logged in.
This is the right way to use security images, IMO, although they're still not perfect as others in the thread have pointed out. The way you describe, which I believe BoA uses (just hearsay), is bad security.
It does in that if the real site properly stores a cookie that records that you've logged in from there before, the number of times that the user is asked for such questions goes down, increasing suspicion when the user actually IS asked for them.
Security is never about 100% guarantees. It's about reducing the exploitability and impact of weaknesses.
It mitigates a little. It should make you a little suspicious if the site suddenly starts complaining that you're accessing it from an unrecognized computer if you really haven't. I'd close the tab in that case.
"The user is then presented 10 images (a tiger,
a house, a moose, etc) from a library of 10,000 images."
I should have made this clearer: the ten images are chosen randomly from the pool of 10,000.
The question is: are there 10,000 images that are different enough that people won't be fooled? Say my picture is a green house, and a prompt has a picture of a red house. Will I accidentally think it's the right site key?
The good news is that most people won't even have a picture of a house as their site key, so it will protect a large percentage of users.
That is a fair attempt. The weak point of course is the bit of data which stores which image you chose. If the attacker is able to read that, then he can display the right image.
1) If the attacker can scrape the screen, they can detect which image you are using - securing the entire pipeline to the screen is hard.
2) 10,000 images is way too few.
Even if we can assume an even distribution of images, as an attacker I can serve the same image to all targets; 1 in 10,000 will now think that they are interacting with a trusted component.
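The arithmetic behind that, as a quick sketch (assuming, as above, a uniform pick from 10,000 images; the million-target figure is made up for illustration):

```javascript
// Back-of-envelope: with a library of 10,000 images chosen uniformly,
// an attacker who shows one fixed image to every target matches
// 1 in 10,000 users by pure chance.
const librarySize = 10000;
const matchProbability = 1 / librarySize; // 0.0001

// Against a million phished users, that's still a worthwhile haul:
const targets = 1000000;
const expectedVictims = targets * matchProbability; // ~100 users fooled
```

Phishing is a volume business, so even a 0.01% hit rate on the image check can be worth the attacker's while.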
[T]he obvious giveaways are used as a pre-qualifier, to ensure with the least possible effort that the ONLY people who respond to the scammers' initial mass mailings (and therefore have to be brought along individually during the later stages) are the absolutely most gullible, ignorant, susceptible, suckers they can find.
I thought that too, but it doesn't apply. This is malware. It doesn't need someone to be gullible beyond the click of the button. Scams, on the other hand, require actually convincing the mark to send money, which is why they need to be sure they have a gullible person on the hook.
I've thought about this quite a bit. As HTML applications continue to evolve, we should make them feel "appish". Things like selectable buttons take the user away from experiencing the app. Also, graph labels shouldn't be selectable.
One of the big ideas of the web is selectable content. However, UI elements shouldn't be included in this set.
A simple example is an image-cropping system. The user has to click and drag. If you don't disable the selection and the user clicks in just a slightly wrong way they can end up selecting or dragging the image, which looks totally wrong for someone who wanted to select a region of an image. Both actions have the same user input (click, drag, release) but your intent is that it have a very different behavior than the browser default.
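For cases like this crop tool, the usual fix is the CSS `user-select` property, scoped to just the widget (a sketch; the class names are made up):

```css
/* Prevent accidental text/image selection inside the crop tool only,
   leaving the rest of the page selectable as normal. */
.crop-tool,
.crop-tool img {
  user-select: none;
  -webkit-user-select: none; /* older WebKit browsers */
}

/* Also suppress the browser's native image-drag behavior,
   which fires on the same click-drag-release gesture. */
.crop-tool img {
  -webkit-user-drag: none;
}
```

Scoping it to the widget is the point: the crop interaction gets the custom behavior while everything else stays copy-pastable.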
I travel a lot and have to use UIs in several languages, and being able to copy the text on button labels (or graphs!) to paste it into a dictionary is very important to me. On some sites I have to fall back to using the web inspector and it's just as annoying every time.
For the Github example, you want to disable selecting the file list header and, bizarrely, you tend towards disabling selecting the file list itself. Considering the files and their meta information are Github's content, disabling copy-pasting file names and commit messages is incomprehensible to me -- that's the last thing I'd consider disabling. And if I want to copy/paste the entire file list, I might want to copy the list header along with it for the benefit of the recipient. I wouldn't disable the selection on anything on your Github example.
The graph label example is just as strange. I tried the linked Morris.js example, and I can't select the label text. How is that a better user experience? What if I want to IM a friend the 2011 Q3 numbers? What if I want to search for similar data? Both quintessential web actions.
I think breaking selection is almost as bad as breaking the back button, the cardinal sin of web apps.
If you are making a WebGL game, you do not want UI elements to get selected each time you try to rotate the camera.
I have used this property quite a few times for perfectly legitimate reasons. Any time that the user needs to click and drag to accomplish an action other than selecting text, you would want to use this.
You don't want the user to accidentally drag your images all over the screen when he is trying to click on them (happens daily when I test the games I work on).
If something asks you about update/downloading/etc., reject it. You decide what to do and when, and you type the URL into the browser, or go to the normal menu/dialog/tool for updating.
(This is partly why the Chrome browser is right and the normal approach is wrong: if/when it needs an update, it just does it.)
I think you're really on to something here. A root problem is that a lot of legitimate software communicates via random popups out of the blue, training users to just "do what the computer says".
People are used to the computer being in charge and commanding them. This is bad from a UX point of view, but now I see it also affects security.
Yet another reason popups of all kinds should be forbidden.
When all application-initiated communication comes from the OS notification area, this kind of dialog will make people wary. Which is a win.
The OP's point is that displayed content can be made to be indistinguishable from visual elements of the browser even for technically sophisticated users in the near future.
This reminds me of login spoofing of yesteryear. How do you know if the login prompt on a shared computer or terminal is really from the OS or is a user-level program trying to steal passwords?
The usual solution was to hit a special attention key--like the "break" key under UNIX or Ctrl/Alt/Del for Windows--that user-level programs could not intercept.
Could we use the same idea here? Holding the "break" key will highlight genuine messages from the browser or the OS.
A long time ago I wrote a small program that would mimic the entry point of a DEC terminal server, slow baud-rate screen refresh and all, and with the permission of the computer lab manager I installed it on a few PCs, next to the original DEC VT terminals that were actually connected to the server.
It didn't save any passwords or such; it just displayed some random funny nonsense message after the user entered a login and password, then looped back to the login prompt with a failure message.
Even with this obvious message that would warn an alert user about the suspicious terminal, we (my friends and the lab manager) got a few laughs when people coming to the lab and finding all the VT terminals taken would use the PCs to log in, trying several (many!) times before giving up, at which point we would tell them the truth. Mind you, these were people comfortable with VT terminals and the Unix CLI, and somewhat computer savvy!
Easy solution: logging in takes two passwords. After you enter your first password (the first 8 characters of your 16-character password), you are presented with an image of a tiger. You now trust the system (the picture of a tiger was your secret image). You now enter your second password (the remaining 8 characters of your 16-character password).
Great, you've now effectively reduced your password complexity to a measly 8 characters, while forcing the user to remember a 16-character password.
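A rough sketch of that reduction, assuming a 62-character alphabet (a-z, A-Z, 0-9) and that the scheme confirms each 8-character half independently (the image appearing tells the attacker the first half is correct):

```javascript
// Assuming a 62-character alphabet (a-z, A-Z, 0-9):
const alphabet = 62n;

// Attacking one 16-character password means searching 62^16 guesses.
const fullSearch = alphabet ** 16n;

// With the split scheme, the security image acts as an oracle for the
// first half, so the attacker brute-forces 8 characters, then the
// other 8: at most 2 * 62^8 guesses.
const splitSearch = 2n * alphabet ** 8n;

// The split scheme is weaker by a factor of 62^8 / 2, roughly 10^14.
const weakerBy = fullSearch / splitSearch;
```

The user pays the memorization cost of 16 characters while getting the brute-force resistance of roughly 8.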
If the user selects their 'secret image' from a known pool of images (as would probably be the case if this is at the OS-level), then the attacker just has to select one of those images (preferably a cute one) and then they know that at least some of the users they snag will have that as their security image.
SiteKey is completely susceptible to Man-in-the-middle (unless the user is a scrupulous cookie-manager and refuses to re-authenticate a computer more than once), so adds minimal value over regular SSL.
The problem is that the people who click on these also have lousy grammar and don't notice, don't care, or won't actually read all of the text.
There's only so much we can do if the end user refuses to think. I suspect a lot of these people will be migrating to locked down/walled garden devices soon anyway.
One of the programs I inherited once had been written by a programmer who loved alert boxes of the form "Are you sure you want to delete X?"
I was watching a user a month or so afterwards and noticed they just pressed Enter every time an alert box popped up: immediately, without reading and without thought.
Alerts on computers aren't there to be read any more. They're confusing annoyances that you just click Yes to. They're usually badly written, in that they tell a normal person nothing; they're without context; and they usually exist because a programmer was putting off making a decision.
As programmers we nagged our users too much; to turn around and blame them for not thinking is a sublime irony, given that we were the ones not thinking, constantly asking for reassurance that we weren't making a mistake.
Instead of using alert boxes to confirm that a user wants to perform a destructive operation, you should support undoing the change after it's done, perhaps for a limited time.
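A minimal sketch of that undo pattern (the class and method names are invented for illustration; a real app would also expire the trash after some time window):

```javascript
// Instead of an "Are you sure?" alert, deletion moves the item to a
// trash list, and an Undo action restores the most recent deletion.
class UndoableList {
  constructor(items) {
    this.items = [...items];
    this.trash = []; // most recently deleted item is last
  }

  // Remove the item immediately; no confirmation dialog.
  delete(item) {
    const i = this.items.indexOf(item);
    if (i !== -1) this.trash.push(...this.items.splice(i, 1));
  }

  // Restore the most recently deleted item, if any.
  undo() {
    const item = this.trash.pop();
    if (item !== undefined) this.items.push(item);
  }
}
```

The destructive action happens instantly, and the user who pressed Enter by reflex loses nothing: they just click Undo.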
I forget what the term for it is, but there's a principle that any dialog that's asking the user for credentials or authorization must be clearly delineated from the rest of the UI and thus "unspoofable".
The example I recall was a "ribbon" in the OS that slid out to reveal the dialog. If a dialog presented itself but the ribbon remained along the edge, you could immediately tell it was spoofed. Of course, this requires that the OS not allow untrusted code to reposition/hide the ribbon or present a full-screen display without prompting the user.
Another example: iOS grays out the background (including the status bar at the top) when presenting a modal password prompt. However, this could easily be spoofed by a full-screen native app. The only way to solve that is to require authorization to enter full-screen mode.
Browsers are improving. At least Chrome shows the URL at the top of all popup windows. Entering full screen mode requires user authorization.
That of course doesn't solve the OP's problem of spoofing a floating window purely inside a webpage, but that really needs to be solved at the OS level.
This assumes that users are computer-savvy enough to (a) expect an unspoofable dialog at all, and (b) know which parts are unspoofable and how they should look.
We tried this with "AOL Certified Mail", which had an unspoofable official chrome, and I don't remember any serious drop in phishing.
This is the reason why I hated the move of the notification bar in Internet Explorer 9 from the top to the bottom of the window. And the UI is too simple to be spoofed.
But then of course, to relieve users from the burden of making security decisions one needs the whole chain of authentication of executables, access control and a trust system to dispense privileges.
Except you don't confirm anything. A fake UAC doesn't have any magic powers, nor can it pass your click on to the real UAC.
The problem UAC solves is that you click on a harmless dialog, but suddenly an important dialog is swapped in under your mouse. A fake UAC can't do that.
That's what happens when programmability (JavaScript, Java, Flash, plugins...) is added to a presentation format (HTML). Maybe this is the right time to revive Gopher and give Usenet a fresh breath of life.
In moments of distraction I've had a couple of near-misses where I nearly clicked on malware.
But when I'm trying to explain to my dad how to know what to trust and what not to trust I realise it's completely hopeless. You can fake almost everything that a non-techie would know to check.
Ever think to link to a file (e.g. an Excel spreadsheet) on a forum, like you can for an image with imgur?
Other than Dropbox public URLs, the services that exist have so many images with the word "Download" in the resulting link, all of which look exactly like a UX element, that you have to click about half of them or play Sherlock Holmes to uncover the real download link. It's like a scratch-off lottery.
You could get a hosting account (e.g. NearlyFreeSpeech or S3) and hotlink to them. I guess people don't often intentionally click ads when downloading files, unlike viewing images, so setting up a free file host isn't profitable.
The spelling heuristic doesn't work very well in much of the world. I live in a country where English is the primary language of commerce, government, etc. but only very few people (I think less than 5%) speak it at home. So the people writing the genuine banking websites, etc. are almost as likely to make mistakes as the phishers.
He's right. Most of the time, the things that tip me off are the misspelled words and poor grammar; also, the conflicting information. For example, getting an email message from Chase Bank with a signature from a Wells Fargo employee. A lot of people are one well-versed phisher away from losing a lot of time and money.
There was a story on HN not too long ago where a stereotypical 419 scammer explained why the schemes haven't gotten more sophisticated. Basically, it's a waste of time and resources targeting people who aren't either senile, naive, highly religious, or just plain morons. Using proper grammar and a well-thought-out background story will just bring your scam to the attention of people who might have the ability to investigate or otherwise interfere with it.
Put another way, if you want to steal a million dollars, do it by stealing $100 from 10,000 people. Much safer than stealing $100,000 from 10 people.
Once I got a letter from Bank of America saying they had noticed weird activity on my home equity line of credit, with a weird phone number to call. No one answered that number, and I don't have an equity line.
Back when "Verified by Visa" first came out, and I first saw such a page, I called my credit card company to see what was up.
The customer service people at the card had no clue what was going on. They'd never heard of it either. They told me they'd escalate the question to a manager and call me back, but they never did.
This is a rather sophisticated scam indeed. Unfortunately, you don't need that level of sophistication when it comes to non-tech users, or as the OP puts it, his "Mom", because for them even a banner ad that says "you have one message waiting" or the ever-popular "emoji in email" works equally well.
I've suggested for years now that someone could make a killing selling copywriting services to spammers. Poor spelling, bad fonts, random crap, etc - these are all the hallmarks of spam which makes it easy to classify as spam. Well-written, intelligent-sounding, professionally-produced spam would likely get past more filters, and be harder for people to dismiss out of hand, and likely get more sales.
A suggestion to browser vendors: add a key combo that will turn all of the screen real estate managed by your browser into yellow diagonal stripes.
Then we just have to educate users to press this panic button whenever something that looks like a popup is on screen. If it's a real popup, it'll do the modal flash thing; otherwise the browser -- and everything in it -- turns yellow.
Malware authors have learned to spell: many apps I install under the mistaken assumption that they will not run background services or send information back to the company in fact do.
I also note that Dell's laptop division has a number of malware authors hard at work.
Yeah exactly. I always set my theme to the most minimal, "classic", non-effects, whatever options I can turn off. That way this kind of stuff really stands out.
Don't see why this was downvoted. Once your opponent is executing arbitrary JavaScript on your browser, there's no real reason for them to try tricking you into clicking a link when they can just use one of the outstanding security vulnerabilities for your browser to install malware.
Looks like trying to move the "popup" is a great way to defeat this kind of thing for now.
Me: Okay mom, if you ever get a popup that you were not expecting, try to move it outside of the browser before clicking on it. If you can't, it's fake.
mousedown is an event yes. So is mouseover. I'm not even sure why any action is necessary though. If you can present the popup, you can just feed a <script> tag to the browser and do whatever the click was going to do, right?
But to the advantage of the good guys, anybody with the brains and discipline to do a better job of this kind of thing, is much more likely to be able to make a better living honestly, than through fraud and deception.
I don't think the business of selling "single mom makes $700/day online" business plans is an honest living, but judging from the amount of ads and comment spam at least a few people are making a living at it (and google of course takes a nice cut as well). From what I've read, even though it converts better than online pharmacies and rogue anti-virus, you can get more traffic to the latter two.
Malware is a numbers game. All you need is a few thousand older people who don't know any better, and you can have your password sniffer / botnet up and running in no time.
While poorly written English is a red flag to some of us, not all computer users are native speakers of English, even in English-speaking countries, and they are much less likely to notice usage and spelling errors.
Well, you can already do that through extended validation. I'm not sure requiring it would be desirable, and doing this would need a better reason than it being better than nothing.
The philosophy of HTML5 seems to be allowing applications to do a lot of things that don't require much trust to be placed in them (and most applications don't need much), rather than security through asking the user's permission (e.g. most desktop OSs), when users are unlikely to have much idea which developers they should trust, or through accountability/review (e.g. iOS), which adds barriers to entry.
See SiteKey: http://en.wikipedia.org/wiki/SiteKey