"When you allow third parties to run script on your site, you’re entirely beholden to them; they can run anything they like in the context of your site"
I've seen a fair few Internet banking web sites pulling scripts from over a dozen third parties, mostly for tracking and advertising, but even for trivial things like social media. On their customer login pages. It's beyond me how they can consider this to be an acceptable risk.
The issue here isn't so much that they can run anything they like - it's not implied that the provider intentionally put a cross-site scripting vulnerability into the code. The issue is that you don't get to audit the third party code for vulnerabilities, and vulnerabilities can be introduced without your knowledge.
But that is part of the tradeoff you make when you agree to accept money from someone to add their code to your site.
If someone offers you a hundred bucks to carry a bag onto a plane for them, you'd be right to be suspicious. Ah, but it's not just a random stranger. You were introduced to them by your buddy, the ad network. So that's okay then. Your ad provider wouldn't be dealing with anybody who was actually criminal. But what about negligent? Did they pack the bag themself? Have they let it out of their sight?
When you run ads on your site, remember which direction the money is flowing in, and remember who is the customer.
Ad networks are trust brokers. The ad provider wants to know their ad really was shown to a real person in a real web browser, who really does meet the demographic profile they are paying to target. They do not trust you, the publisher, at all. And they are paying.
You, as the publisher, are getting money from the ad network to compensate you for the inconvenience of surfacing the advertiser's content on your site. And of proving that you aren't defrauding them. To do that, you are going to give up some control.
And that includes control over whether the code you're serving is secure.
I've been saying this for years. It's not even just security: you're also potentially leaking all your visitor stats to a third party (IP, user agent, the pages they visit (via the Referer)), and effectively giving jquery.com a third party supercookie over thousands of domains.
I surf with third-party cookies and referers disabled (the latter via RefControl), and these should be the defaults.
I've been called an idiot (even here on HN) for being paranoid about loading scripts from all over the web. I think it's a losing battle and my side is going the way of the dinosaurs.
As tools like uBlock and uMatrix get widespread adoption more and more people are realizing how much extraneous junk is being loaded by webpages. I think your side is slowly but surely gaining followers.
I use uMatrix, and I have already whitelisted loading scripts from common CDNs as global rules. They're everywhere and I found myself just constantly whitelisting them anyway.
The trend for a long time was to cite using a CDN as a best practice, but no one ever calls out the downsides when making such statements. In this case, you lose control of the code and allow third-party access to your users' browsers.
To be fair, it has some positives: like said CDN being able to patch somelib.js to fix a security issue and thereby protect thousands of sites at once.
At the moment though, proposed solutions to trusting third parties with your JavaScript, like the W3C proposal to put cryptographic hashes into your <script> tags, don't even consider these potential positives. So we're likely going to end up with the worst of both worlds.
If anything this is just one more facet of the weakness of the web as an app platform. (Real solution: all sites serving client-side libs should use package management, and scripts should be digitally signed by their authors.) As it stands it's far too common for people to just unzip WordPress or whatever into their docroot, so server-side code doesn't even get updated, let alone client-side code.
Maybe, but hotlinking images has historically been considered rude at best, and theft at worst. And most people still host their own CSS because it tends to come with whatever app, or theme, they're using.
In any case, the defaults I live with cover all the other web resources too.
How would that help? The "integrity" value is provided by the now-hacked server, and thus can never provide extra security? Or am I missing something?
The integrity value is part of the script tag. You add a hash of the script content as a property of the script tag, and the browser only executes if it matches.
I hacked together something that implements this sort of behavior in a script loader a while ago, if you're interested: https://github.com/ryancdotorg/VerifyJS
So, though this does let you safely offload the hosting cost of bigger third-party scripts, it means giving up some of the advantages, like dynamic updates (as the hash would no longer match), right? It would therefore not work when ad providers want to serve content they fetch dynamically from others?
Many of the CDNs will let you reference a specific version of the script. If you didn't do this, and there were an update, the script wouldn't load and you'd have to update your site. My script allows callbacks to be specified for a bad hash, so you could be notified of this, and the subresource integrity draft also mentions this as a good idea.
It seems not uncommon for ad networks to dynamically load further scripts/content, which would not fall under the hash. You can just sandbox them off in an iframe, though.
An obvious extension would be signed scripts, which would re-enable trusted updates of a script in a CDN, but there is the question of how that would be implemented.
No need; the browser will block http:// resources included from https:// pages (mixed content). Including from an http:// page? Then a compromised jQuery CDN is the least of your worries.
In short: no, a compromised DNS record alone is not enough.
>"they can run anything they like in the context of your site"
Some of them actually depend on that. We've seen weird stuff show up on our sites because we made a deal with a company that sells ads for us. Suddenly we're pulling in stuff from companies we've never heard of, because the ad reseller made a deal with a third party, which in turn outsourced part of its infrastructure to a fourth company. It's hours of detective work when you need to figure out who has misconfigured HTTPS.
If you're going to put adverts on your site, always put them within an iframe, pointed at a separate "adverts" only domain. This will ensure they can't execute javascript within your own website context.
> Is it violating program policy if I place ads on iframe webpages in my software?
> Yes, it does violate our policies. Firstly, you’re not allowed to place ads in a frame within another page. Exceptions to our policies are permitted only with authorization from Google for the valid use of iframes. Secondly, you’re not allowed to put ads in your software, e.g., if you control both a website with ads and an app that loads that website, we will take action against it.
Does it have to have a different domain or would a `src`-less iframe also work? You'd have to write the ad code into the iframe from the outer page, but that's not hard.
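For what it's worth: a plain src-less iframe is same-origin with the parent, so scripts written into it can reach back via window.parent; the isolation comes from a separate origin or from the sandbox attribute. A hedged sketch of both variants (the domains are placeholders, and srcdoc support was still patchy in older browsers):

```html
<!-- Separate-origin ad frame: the ad's scripts can't touch the parent DOM. -->
<iframe src="https://ads.example.invalid/slot1.html"
        sandbox="allow-scripts"
        width="300" height="250"></iframe>

<!-- srcdoc variant: with sandbox and without allow-same-origin, the content
     gets an opaque origin, so even inline ad markup written here stays
     isolated from the parent page. -->
<iframe sandbox="allow-scripts"
        srcdoc="&lt;script src='https://ads.example.invalid/ad.js'&gt;&lt;/script&gt;"></iframe>
```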
The throwaway comment at the end of that article on how ad networks are a cesspit really spoke to me. If it weren't for the abundance of "Recommended Stories" and "From elsewhere on the web" crap selling weight-loss pills and clickbait, I'd be far less inclined to run an ad blocker.
The fact that these ads disguise themselves as content that the site owner is recommending is particularly insidious, since it will likely encourage people to click through thinking that they can trust the content.
In retrospect it seems he could have saved himself a Fiddler session if he had just opened the browser's console debugger and used `?"-(function(){debugger}())-"` in the URL instead of `?"-prompt()-"`. (I would not have guessed this either, but it may come in handy next time.)
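To make the payload concrete: both probes work by closing a JavaScript string that the server reflects the query value into. A toy sketch of the vulnerable pattern (the variable names are made up):

```javascript
// Attacker-controlled query value, reflected verbatim into an inline script.
const qs = '"-prompt()-"';

// The server-side template naively interpolates it inside a string literal:
const inline = 'var q = "' + qs + '";';

console.log(inline); // var q = ""-prompt()-"";
// The injected quotes close the string, and prompt() is then evaluated as
// part of a subtraction expression -- arbitrary code in the page's context.
```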
I'm not so sure of that. Advertisers need to see the connections going through their own servers; otherwise they are exposed to massive fraud from the websites they advertise on. So proxying their script might not suit them at all.
There's a limit on the number of simultaneous requests per domain with http1, which will not be present in http2 [0]. This limit meant that for best performance, static files should be served from multiple (sub)domains.
It's 2015, though, not 1996. If you seriously can't afford to send your visitor a few-dozen-kilobyte file once in a blue moon (because you do have far-future expire dates, right?), you've got bigger problems than CDN'ing your jQuery is going to solve.
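The "far-future expire dates" aside refers to long-lived caching headers, so the file is downloaded once and then served from the browser cache on every later visit. An illustrative (assumed) nginx fragment:

```nginx
# Let browsers cache static JS/CSS for a year; with versioned filenames,
# a new release simply gets a new URL and bypasses the old cache entry.
location ~* \.(js|css)$ {
    expires 1y;
    add_header Cache-Control "public";
}
```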
If you're serving megabytes of JS such that that is enough to matter... no, it's still true, if that's not sustainable you've got bigger problems than a CDN is going to solve. Even on very JS-heavy sites, the amount of your bandwidth taken up by JS shouldn't be that large on a properly-configured site. (Yes, I can construct some rare exceptions... you've got a demo site for WebGL and your average viewer hits you once with no cache, grabs megabytes of JS and textures, then moves on never to return. But they are rare, even if you can construct them in your head.)
I've resisted using an ad-blocker for years because I'm happy for the sites I visit daily to earn revenue that way, and for many it's the only way they can. I limited myself to running Privacy badger and blocking Facebook/Twitter tracking cookies, that kinda thing.
But this is the straw that's broken my camel's back and it spoils things for those of us who don't mind a few ads here and there. uBlock now installed, sod the ad networks.
Whenever I talk to people who work for ad networks or similar companies, I'm invariably struck by how little technical knowledge they possess. If you work for a company that sells internet services, you should at least have some basic understanding of how the internet works.
As far as I can tell, adsafeprotected isn't actually for your or your visitors' protection, but for the advertisers': it seems to run a huge gob of incredibly slow scripts to "ensure" visibility, i.e. that there is actually an eyeball on the ad and that it's not hidden or collapsed or something.