Reflected File Download: A New Web Attack Vector (drive.google.com)
285 points by lelf on Nov 5, 2014 | 76 comments



So to summarize:

(1) You can use semicolons to get some web services to ignore the end of a request URL and respond normally, while tricking browsers into downloading the response as a file with an arbitrary name. This allows you to send a victim to a mainstream site (Google or Bing, e.g.) and have them end up with a file with the name of your choice in their Downloads folder.

(2) If the web service responds with user-submitted data, you can potentially get the contents of that file to be a valid executable. For example the author demonstrates a JSON response that is also a valid Windows shell script.

(3) By combining these two exploits, the author speculates that you can trick users into executing files that they wouldn't execute if they were hosted at g00gl3.com or similar.

The last part I'm not totally convinced of -- are there examples where attackers gain a big advantage by having a downloaded file come from a trusted URL?

Even setting that aside, the first two parts are pretty neat, and I wouldn't be surprised if there are other interesting ways to exploit them.
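
To make (1) and (2) concrete, here is a rough sketch of the idea as I understand it from the paper (the endpoint, URL and payload below are made up for illustration; the paper's real examples differ in detail):

    import json

    # Hypothetical attack URL: path parameters after ";" are ignored by the
    # server but used by the browser to pick the download filename.
    url = "https://vulnerable.example/api;/ChromeSetup.bat;/ChromeSetup.bat?q=%22||calc||"

    # Hypothetical reflected response: the API echoes the query back inside
    # an otherwise normal JSON body.
    body = '{"q":"\\"||calc||","results":[]}'
    json.loads(body)  # still valid JSON, so the endpoint serves it happily

    # Saved as ChromeSetup.bat and double-clicked, cmd.exe reads the same
    # bytes very differently: the leading {"q":"\" fragment fails as a
    # command, and the "||" fallback operator then runs the injected calc.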


> are there examples where attackers gain a big advantage by having a downloaded file come from a trusted URL?

Yeah. I hope Adobe is all over this.

It's not hard for me to imagine a shady website that offers streaming videos prompting users that they need to update Flash, then redirecting the user to an adobe.com URL that downloads an installer. I bet even some savvy HNers could fall for that.

Or how about a similar attack on enterprise users by prompting them to update Adobe Reader.


Yes, yes I might, especially if it goes to a download-esque page and I'm not looking too closely.


> Are there examples where attackers gain a big advantage by having a downloaded file come from a trusted URL?

Some operating systems, like Mac OS X, will tag downloaded files with the domain they were downloaded from. A prompt asking the user whether they want to download a file that "was downloaded from google.com" will sound much more convincing than one with an unrecognizable domain name.


But with the proliferation of domain names (.business etc) anybody can have a convincing name?


People take alt-tlds seriously? Even older TLDs like .info or .biz seem seedy and low-rent compared to .com


I think you greatly overestimate the degree to which non-technical people understand domain names and TLDs. There are a lot of people who think "www." goes on the front of their email address.


Yeah, but to avoid detection, botnets register random character domain names that are not going to appear legitimate, so this would be a nice tool in their arsenal.


"Google downloaded a file for me. That's never happened before. Oh well, guess I better run it!"


You have to understand that, seen from the perspective of non-technical users, the Googles do weird unpredictable things all the time.


Exactly. People who have a hard time understanding this should maybe spend some time helping non-technical users with their computers and pay careful attention to how they interact with them.

Help a friend clean up their adware-infested Win7 laptop. Just show them how to remove unwanted browser extensions, and use PC Decrapifier to mass-uninstall the crapware. Nothing too fancy; it will take the better part of an afternoon or evening anyway, because 1) these computers will be slow and, most importantly, 2) you're going to let them do all the clicking and typing (they will learn a lot, even if it's only more confidence in using their machine).

I don't like doing this because it always takes way more time than I planned, but if you do it right, the speed difference will make them really really happy and thankful for months :)

Anyway the point is, if you pay careful attention, you will first-hand notice all the idiosyncrasies with which non-tech people use their machines. It's fascinating, in a way.


> Help a friend clean up their adware-infested Win7 laptop.

If you do this, then you become the "go to guy" whenever they have a problem - there is precious little appreciation of the amount of time and effort it takes to clean up a system.

I now claim "it's a specialization" and give out the contact info of local people who do this for a living. After the end-user has to drop a couple of bills every few months to get the dancing gorilla removed, they finally begin to pay attention - otherwise they treat the free advice you gave them as valued at what it cost.


I understand this worry. I only occasionally do this sort of thing for friends that I know (or expect) to appreciate the amount of time and effort enough to not consider me just a "go to guy".

Yes this sort of clean-up job costs at least 3 hours or so (because the machine will be slow).

So I make sure whoever I'm doing it for is present during this time. I'm not going to sit in a cold home office room battling spyware alone (that's setting yourself up for the scenario you describe). It's also not very difficult work (or interesting), so I can easily do it while having a beer or a smoke, chatting, enjoying music, having dinner with my friends. Often that means there's more than one tech-savvy person around, and we can take turns pressing the "Next" and "Are you sure?" buttons, and have some fun making up weird stuff for the occasional "Please tell us why you're no longer using Power Clicky Pro Live Updater" feedback forms. In the meantime I give them some general computer advice (Windows key shortcuts you thought everybody knew), replace Acrobat with SumatraPDF, WinRAR with 7-Zip, etc.

In return I can call upon them for other favours. As I said, often I get the occasional "thank you our laptop is still much faster", months afterwards.

If they won't appreciate what you do, the time you spend applying your knowledge on their problem, then by all means, don't do it. Compare it with a friend helping you out with some technical DIY task at home, applying their knowledge, time and tools for your benefit. Does that automatically make them the "go-to guy" for fixing your sink or toilet? Just make sure people understand what you're doing for them is in the same category.

If you find that hard to explain, or make clear, then don't do it. Good call on giving them contact info for local shops that will do it for money, it's a great alternative, better than nothing. But just like some random friend who knows plumbing or electricity, even if that shop's hourly wage x time spent is perfectly fair (and it's often cheaper than that), I still have a weird feeling telling my friends to pay $75 (or whatever) to get their machine cleaned.


Is there a particular decrapifier you recommend?


It's literally called "PC Decrapifier": http://pcdecrapifier.com/ :)

It's basically a multi-uninstall tool, with a sort of crowd-sourced knowledge-base to classify installed programs into two categories "stuff you probably want to remove/don't need" and "everything else".

I like how it's very straightforward and pretty much "does one thing and does it well" (as opposed to being also a registry-cleaner, resident whatnot-shield, defragmentizer, antivirus RAM scrubber, etc etc).


Unfortunately I could easily see that happening.


The confidence game here is the same as any other.

1> Google is a legit, law-abiding, legally accountable entity

2> Because of (1), the download likely has the attributes associated with Google, rather than those more commonly associated with "bad guys"

3> The probability of Google being spoofed seems low enough that nobody bothers to question the premise or conclusion of (1)

4> Smart people therefore do dumb things as a result of (3)

5> Smart people doing dumb things is a lucrative proposition, because smart people have money/wealth


Send it to their Gmail account. Tell them to download the attached file, which will "come from google.com."


It comes from https://mail-attachment.googleusercontent.com, so it's not super useful for the sort of attacks that this approach would be used for.


The important point to me is "some web services", which translates to 'some web servers', but which? Through a cursory browse I can't find one.

Correction: this seems to rely almost entirely on client-side content-type sniffing, provided the Content-Disposition is 'attachment'.


Mac OS X, for instance, lets you know where a binary was downloaded from when you launch it for the first time. (3) would help people trust the binary more when actually choosing to launch it.


I think the author is claiming that clicking on https://www.google.com/s;/ChromeSetup.bat;/ChromeSetup.bat?g... results in a file ChromeSetup.bat being downloaded, but in Chrome and Firefox the file downloaded is f.txt.

Has anyone tried this on other browsers?

EDIT:

Here is the portion of the paper explaining why this no longer works:

"However, a common implementation error could result in Reflected File Download from the worst kind. Content-Disposition headers SHOULD include a "filename" parameter, to avoid having the browser parse the filename from the URL.

This is the exact problem that multiple Google APIs suffered from until I reported it to the Google security team, leading to a massive fix in core Google components."


The author mentions a mitigation of specifying a filename in the Content-Disposition header, which that particular url actually does:

    Content-Disposition: attachment; filename="f.txt"
Perhaps Google has fixed the problem for that URL -- I would hope the author contacted them in advance.
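
For reference, setting that header explicitly is a one-liner in most frameworks. A minimal sketch in Flask (my own illustration, not from the paper):

    from flask import Flask, jsonify, make_response

    app = Flask(__name__)

    @app.route("/api/suggest")
    def suggest():
        resp = make_response(jsonify(results=[]))
        # Explicit filename, so browsers never derive one (e.g. "ChromeSetup.bat")
        # from path parameters in the request URL.
        resp.headers["Content-Disposition"] = 'attachment; filename="f.txt"'
        # Commonly recommended alongside it, to stop content-type sniffing.
        resp.headers["X-Content-Type-Options"] = "nosniff"
        return resp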


Stories like these make me never want to build an HTTP web service again. HTTP(S) is just way too complicated for me to ever be confident I've done everything right. It's getting to the point where web services are like crypto: only experts should touch them.


Being aware of exploits and protecting against them comes with the territory. Luckily there are things like owasp.org to help developers keep up on web security. However, security is hard and it can't be done absent-mindedly. There is no getting around that.


If the standards were more strict, some of these issues would not exist. I see this as exploiting a lot of slop in protocols. It should not be possible to interpret a URL as anything but a URL, yet here it's being reflected back and interpreted as something else entirely.


It has nothing to do with the standards being strict; the standards can be as strict as they want. If the standards are strict and useless, no one will follow them, instead implementing something less strict and more useful.

For example, when downloading a file from a website, what default name should you use for it? There is a header to tell you, but not every page supplies such a header; so the browser needs to do something. It chooses to pick the last component of the URL as that filename. However, URLs are somewhat more complex than you might expect, so this becomes more complicated and can lead to attacker-controlled ways to manipulate this filename.

Now, you could make a more strict spec, for example by forbidding downloading files unless the filename is properly specified, or forbidding using any kind of default filename and making the user choose it themselves, or something of the sort. But if any browser vendor implemented this more strict spec, they would instantly annoy a lot of users who would find things breaking that used to work, and they would be likely to switch to another more permissive browser.

Security, compatibility, and robustness are hard factors to balance. Just blaming this on "slop in protocols" is a vast oversimplification.


>> For example, when downloading a file from a website, what default name should you use for it? There is a header to tell you, but not every page supplies such a header; so the browser needs to do something. It chooses to pick the last component of the URL as that filename.

Yeah, and that's slop in the protocol. If the header were required, everything would still work; web sites would just have to fill in the header. What's easier to do: comply with a protocol where your site breaks if you don't, or have Swiss cheese and then make site developers learn a bunch of security best practices and hope they get it right?

Also in there is the good old "this site wants to blah blah" prompt that asks the user to decide. If you have to ask, the answer is "No! fix your site so it's not on the user to decide". Broken certificates? Not my problem, the browser should just say "sorry, site security is busted" and leave it at that. It's an old debate, but AFAIC there is no debate, only laziness.


I get that. I just dread the days when malpractice for programmers is as common as it is for doctors. I like building functionality, not fortresses.


Yes, the author mentions that in his paper. He contacted both Google and Microsoft; sounds like Google rolled out fixes before publication, while Microsoft is still working on them:

  On March 2014, I reported a security feature bypass to 
  Microsoft which enables batch files (“bat” and “cmd” 
  extensions) to execute immediately without warning the 
  user about the publisher or origin of the file. Hence, 
  RFD malware that uses the bypass will execute
  immediately once clicked.

  ...

  Microsoft is working on a Defense-in-Depth fix to solve 
  this issue.
And:

  This is the exact problem that multiple Google APIs 
  suffered from until I reported it to the Google security 
  team, leading to a massive fix in core Google components.


§2.3.2 mentions that the author reported the security problems to Google, and Google fixed their APIs.


I wonder if this can be mitigated by marking your JSON actions as HTTP POST only. Since this utilises HTTP GET, the request would never be actioned, and since JSON APIs use HTTP POST almost exclusively, it wouldn't break existing code.


It's not so hard to build a form and submit it automatically with JavaScript in order to get a user's browser to do a POST request to any URL you want.


That form wouldn't be on the same domain and therefore would hit CSRF protections.


Yes, if you require a user-specific random token in the request, the exploit doesn't work. But that's independent of GET/POST and not what you said in your earlier post.
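
Roughly what that looks like server-side, as a minimal Flask sketch (hypothetical endpoint, my own illustration): without the per-session token the request is rejected outright, so nothing attacker-controlled gets reflected into a downloadable response.

    import secrets
    from flask import Flask, abort, jsonify, request, session

    app = Flask(__name__)
    app.secret_key = "dev-only-secret"  # session signing key; replace in real use

    @app.route("/")
    def index():
        # Issue a per-session token that the real front end embeds in its forms.
        session.setdefault("csrf_token", secrets.token_hex(16))
        return "ok"

    @app.route("/api/action", methods=["POST"])
    def action():
        # A forged cross-site request (GET or auto-submitted POST form) won't
        # carry the session's token, so its input is never echoed back.
        if request.form.get("csrf_token") != session.get("csrf_token"):
            abort(403)
        return jsonify(ok=True)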


On Safari 7 on OS X, the file is downloaded as f.txt.json.


I read somewhere recently that Google used to be affected but have since patched their servers.


Ahh...that makes sense


I can confirm that on Safari, the file downloaded is f.txt


during the RFD research I discovered that all [Windows security] warnings are dismissed if one of the following strings appear in the filename:

- Install

- Setup

- Update

- Uninst

That's pretty amazing – is this still the case? It's obviously a deliberate decision, and seems to totally negate the value of those warnings.


With programs that need UAC elevation, there's no "Internet zone" warning because there's already the UAC warning, and it would be rather annoying to have to press "Yes, really" on two warnings per program. I guess if you disable UAC, it's possible that you get no warning at all.


Do programs with names matching that pattern automatically request UAC elevation? Because the author doesn't mention that he received a UAC warning. If you are able to name an executable that way, and not request UAC elevation, and therefore bypass the warning, it sounds like an issue.


Yes, they do.

However, anyone running something called ChromeSetup.bat would expect a UAC warning to come up since they are expecting to install something anyway.

I've actually run in to this issue myself when I had a program called "Patcher.exe" (an internal dev tool) that didn't require UAC elevation. Turns out that name was on the list. You can include a manifest in the executable to say that you explicitly don't require UAC elevation to prevent that.


And page 18 of the document is even scarier: the name of the program is not even displayed -- the (bad) logic was obviously "normal users wouldn't know the difference."


The gist:

"The URI specification[1] defines the ability to send parameters in the path portion of the URI by inserting the semicolon character (before the query portion that starts with a question mark "?"). Many Web technologies support this feature [a.k.a. "path parameters"].

In simple words, if a web server accepts path parameters it does not really consider them to be a part of the path, which means we can inject any content, as it will be ignored. However, when it comes to determine the filename of a download the vast majority of Web browsers (all browsers but Safari) parse and set a filename from path parameters."

[1] http://tools.ietf.org/html/rfc3986#section-3.3

A fairly obscure feature of URIs, apparently handled correctly by some web servers, but overlooked by most browsers. Argh. Again.
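
A toy illustration of the mismatch (my own sketch; real servers and browsers each have their own parsing quirks): a path-parameter-aware server routes on everything before the first ";", while a naive filename heuristic grabs the last path component.

    from urllib.parse import urlsplit

    url = "https://example.test/s;/ChromeSetup.bat;/ChromeSetup.bat?q=x"
    path = urlsplit(url).path

    resource = path.split(";", 1)[0]    # '/s' -- what the server actually serves
    filename = path.rsplit("/", 1)[-1]  # 'ChromeSetup.bat' -- what a naive
                                        # filename heuristic would pick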


I found this presentation a bit more helpful for understanding the concept:

https://www.blackhat.com/docs/eu-14/materials/eu-14-Hafif-Re...



A similar technique is used to build the payload, but the linked paper has a more sophisticated technique for setting the target filename arbitrarily, without having to somehow craft a link with a "download" attribute on a target website.


I really enjoyed the tone of this paper. If only more technical articles could be written in a matter-of-fact voice like this.


The linked document describes all the obvious parts that have been known forever, but doesn't mention the interesting part: which web services respond to user input (the URL) by serving a previously nonexistent (server-side) document with a name derived from the URL.


There are a lot of URLs that echo back content from the path/parameters. google.com/s/whatever mentions that /s/whatever is not found, for example.


It seems browsers are making a poor assumption here: that if HTTP/HTML say to download, the browser should immediately begin downloading the file to the user's computer.

The content-disposition filename is an effective hack to fix RFD. But as other commenters pointed out, just linking to evil.com/worm.jpg.exe achieves a similar effect to RFD, and can be just as effective on many users.

Windows has failed to warn users about what is happening when random executables are run (and RFD attacks that in particular). They should improve on this.

Perhaps the browsers should also change their behavior? They could prompt users with information about what is happening when a protocol specifies that a download should begin.


If the downloaded payload would auto-execute without warning, then this would be serious. Otherwise (if it needs intervention) it feels like a far-fetched threat.

1) Aren't the people who would execute files that randomly download exactly the people who can never find the files they download?

2) Aren't the people who execute random stuff from the Internet also the people who won't be able to tell whether a URL feels trustworthy or not?

So by 1) you could just as well serve funny.jpg.exe to the victim, and by 2) you can reach a wide enough audience by serving it from your bad guy domain rather than trying to masquerade as Google.


I think the point of this exploit is that the download does not need to feel random from the user perspective.

A user can be prompted to update Flash or Chrome itself and then be served a (somewhat) legitimate-looking file from the respective website.


> Having the ability to control some of the content that is returned by the server in the response body is crucial for an RFD exploit to be successful.

This sounds like an XSS attack against downloaded files as opposed to rendered HTML.


The bit about the semicolon separator was new to me. Are there many web services using the semicolon to send parameters?

In any case, it seems that the real bug is that browsers don't properly recognize `;` as a separator and can derive the resource name from what comes after. That's definitely a problem; it would be crazy if, for example, you could craft a querystring ending with "&/file.bat" and the browser would parse it as a file download.


Parameters I'm not sure, but there was a hot minute back before Rails 2.0 shipped where it was using them:

https://github.com/rails/rails/commit/0cac2806a6fd9f1f63cdce...

That 2007 commit rolled back to just using slashes.


I'm sure there are some sites, but even if the percentage is in the low single digits (i.e. a smallish but still very significant percentage), I still think that the browser is probably the right place for this to be fixed.

Getting everyone to go through every part of their app and properly harden their URL routing to protect against this seems unlikely to happen - it's simply too much work for many companies.


I'm not sure I understand how this is "worm"-able - it still requires the user to manually execute the downloaded file? How is this any different from pasting a link to a "lol.jpg.exe" malware?


Compare "Are you sure you want to run 'lol.jpg', downloaded from hackers.com a minute ago?" With "Are you sure you want to run 'Windows Security Update 3.1', downloaded from update.microsoft.com a minute ago?". It would be even greater if that second alert showed that a certificate guarantees the file to come from a Microsoft site (would it, if this attack succeeded?)

The more you make your malware look legit, the more likely people are to fall for it. It's not a huge difference, but I guess more people would fall for the latter.

[and of course, it is unlikely that microsoft.com is susceptible to this attack. I don't even know whether it works anywhere at all anymore (from a comment elsewhere in this thread, Google fixed it on their site)]


Yeah it sounds a bit exaggerated to me, but it's still nice work. It definitely abuses the system and spoofs things that should not be possible to spoof, but it's not as big as I was first afraid it might be after reading just a few lines. It won't silently worm through your social network if you don't execute things that randomly start downloading. However if someone targets you, sends you a link to a .exe or .bat from your own company's website with a good story... yeah that is tempting to click.


I believe because the download "actually" (via reflection) comes from google.com.


You could craft a URL that makes your browser download a file named 'chromesetup.exe' that comes from a Google URL but is a worm.


"The user executes the file which contains shell commands that gain complete control over the computer."

Perhaps someone could verify the following.

If a user is logged in without privileges (on a Mac, for example, a "standard user" rather than the admin user), then there is no way (is there?) to "gain complete control over the computer" without entering an admin username and password later in the process.

Typically I operate two (or more) logins under OSX. One is "standard" user and one is "admin" user. I only browse under "standard" user never under "admin" user. To me "admin" user really serves no purpose but needs to be there for obvious reasons.

This way I always have to enter the name of an admin user in order to install or make any system changes.

Further, from the command line I would need to do:

    su <admin user name> [password]

and then

    sudo -s [password]


Lots of interesting things can be done without root.

The author gives an example where he quits and re-launches Chrome with the flag "--disable-web-security", which disables the same-origin policy. He then launches Chrome pointed at a webpage that steals your Gmail session cookies.

Most of the useful things you do on your computer, accessing all of your data, etc., don't require root.


`cat ~/.ssh/id_?sa | ...`

...but we all use unbruteforcable passphrases, right?


The obvious one is that you think you're running "su" but you're really running some other command, because your PATH is ~/.trojans:/bin:/usr/bin. They may not have immediate control, but they'll get it eventually.


Just FYI, PATH is always reset by su to prevent exactly this. Same with LD_LIBRARY_PATH and other security-critical environment variables.


That's after you run su. Just to be clear, I'm talking about an attacker fiddling with your path so you run fake-su, stealing your password, then calling su and making it look like nothing shady happened. By the time su is running, it's far too late for it to do anything.


An attacker can use a privilege escalation attack to execute as superuser, so even though there should be no way to "gain control of the entire computer" without the admin password, in practice this isn't the case.


Actually it is. Once you are root via sudo you can do practically whatever you want, including opening a backdoor, adding another admin user, forwarding ports, etc.


If you think you're downloading an installer you might very well enter your password in the pop-up dialog, since many installers ask for admin privileges and people are conditioned to think that is normal.


Chances are that you are storing any data you care about in a way that's accessible to your user.

If that data is the target of the attack, user privileges alone won't help you.


Where is the file downloaded from if it wasn't uploaded to the targeted site?


Oh I see, the content of the file is in the url... Nvm my question!


Don't click on that link, it might contain a virus!



