It's not a bad solution, but I don't think it's the best solution. Obviously, the thing that sticks out as being the most inefficient is that the small (or default) image is always loaded. This means that for tablets and desktop browsers there will be an extra image request that is never used. Not too bad, but on a page with a lot of images this could add up.
Another thing I would suggest the team think of is not loading images by browser width only. If we're tying these libraries to the idea that they improve performance by optimizing which images are loaded - so that you only transfer the necessary amount of KB per page view - browser width is a bit removed from what you want. What you actually want to measure is the user's network speed (which can be done with libraries like Foresight.js : https://github.com/adamdbradley/foresight.js). Loading large images on a slow network isn't going to be good for performance. By using both browser size and network speed, you can optimize images for mobile devices over 3G or those connected to WiFi. Or desktops on broadband versus desktops on 56k modems.
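For what it's worth, the basic idea can be sketched without any library: time a small probe download to estimate the connection, then combine that with viewport width when picking a source. This is only a rough illustration of the concept, not Foresight.js's actual API; the file names, threshold, and data attributes below are invented.

  <img id="hero" alt="" data-src-small="hero-small.jpg" data-src-large="hero-large.jpg">
  <script>
    // Rough sketch only (not Foresight.js): estimate bandwidth by timing a
    // small probe image, then pick a source based on speed AND viewport width.
    var start = new Date().getTime();
    var probe = new Image();
    probe.onload = function () {
      var elapsed = new Date().getTime() - start;   // ms to fetch the probe
      var fast = elapsed < 150;                     // invented threshold
      var wide = window.innerWidth > 768;           // invented breakpoint
      var img = document.getElementById('hero');
      img.src = (fast && wide) ? img.getAttribute('data-src-large')
                               : img.getAttribute('data-src-small');
    };
    probe.src = 'probe-10kb.jpg?nocache=' + Math.random(); // avoid a cached result
  </script>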
Checking network speed is actually a great idea - we'll investigate this. The reason we use the img src (which, as you noted, means you might load two images) is twofold: first, if your image is the same aspect ratio, you get an immediate render of something before the better image comes in (without which you'd see a really nasty reflow); second, it guarantees you'll get something that works if JS is disabled or unavailable.
Please submit pull requests or issues for ways we can make this better, we're all ears.
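In markup, the pattern being described looks roughly like the following; the paths and queries are placeholders, and the exact query syntax (raw media queries vs. named sizes) depends on the Foundation version, so check the Interchange docs:

  <!-- Sketch of the pattern described above: a real src as the no-JS /
       pre-swap fallback, plus alternatives in data-interchange. -->
  <img src="small.jpg"
       data-interchange="[medium.jpg, (only screen and (min-width: 641px))],
                         [large.jpg, (only screen and (min-width: 1025px))]">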
This solution does not work, as the swap on DOM load happens way too late and the image replacement causes a nasty reflow.
The RWD images problem is much more complicated than this.
You have to consider things like:
- DOM reflow (img tag without width/height)
- requests (additional request to load default image)
- internet speed (you can use WiFi on your phone, or 3G on a laptop)
- choosing breakpoints
I wrote a blog post about this some time ago: http://gondo.webdesigners.sk/responsive-images/
Apparently I'm not the first one to think of using the base tag. I came across some old video (I don't remember the link anymore) where the idea was rejected because of a "browser bug". Unfortunately I never found out what bug the guy was referring to, so I can only assume it has since been fixed.
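For reference, the base-tag trick usually looks something like the sketch below: a tiny inline script in the head picks an asset directory before any img tags are parsed, so the browser never requests the wrong size. The breakpoints and directory names are invented, and the main drawback is that base affects every relative URL on the page (stylesheets, scripts, links), not just images.

  <head>
    <script>
      // Sketch of the <base> approach: choose an image directory up front,
      // before the parser reaches any <img>. Breakpoints/paths are made up.
      // Caveat: <base> changes resolution of ALL relative URLs, not just images.
      var dir = (screen.width > 1024) ? '/img/large/' :
                (screen.width > 480)  ? '/img/medium/' : '/img/small/';
      document.write('<base href="' + location.protocol + '//' + location.host + dir + '">');
    </script>
  </head>
  <body>
    <!-- width/height attributes reserve space and avoid the reflow
         mentioned in the list above -->
    <img src="photo.jpg" width="800" height="600" alt="">
  </body>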
This looks great! How do I specify a media query for different screen densities? For instance, if I wanted to serve a 2880px wide image to a 15" Retina MacBook Pro and a 1440px to a 'normal' computer?
There's a media query available (with prefixes, right now) for pixel density – a retina MBP reports it as 2 whereas a standard one reports it as 1. So you can have a media query for a pixel density >= 2, etc.
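In CSS terms (for background images, at least) that looks roughly like this; the prefixed forms are what's needed right now, and min-resolution is the unprefixed standard. File names are invented:

  <style>
    /* Rough example: serve a 2x asset to high-density screens. */
    .logo { background-image: url(logo.png); background-size: 100px 40px; }

    @media only screen and (-webkit-min-device-pixel-ratio: 2),
           only screen and (min--moz-device-pixel-ratio: 2),
           only screen and (-o-min-device-pixel-ratio: 2/1),
           only screen and (min-resolution: 192dpi) {
      .logo { background-image: url(logo@2x.png); }
    }
  </style>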
Right, which is what concerns me. So is this the expectation?
<img src="small.jpg"
     data-interchange="[normal.jpg, (only screen)],
                       [medium.jpg, only screen and (min--moz-device-pixel-ratio: 2),
                        only screen and (-o-min-device-pixel-ratio: 2/1),
                        only screen and (-webkit-min-device-pixel-ratio: 2),
                        only screen and (min-device-pixel-ratio: 2),
                        only screen and (max-width: 749px)]">
Their demo page fails to take into account pixel density, which is a shame since serving the right image to the right device is their entire justification.
Obviously this is client-side; does that make it better? Timing my page load around a set of loading images seems counterintuitive; there's lots of other stuff to worry about.
Could someone explain to me why this is better/easier than just using CSS media queries myself?
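For comparison, the "just use media queries" approach only really works when the image is a CSS background; a content <img> has a single src that a stylesheet can't swap, which is the gap these JS libraries try to fill. A rough example (class names and breakpoints invented):

  <style>
    /* Works fine for decorative/background images... */
    .banner { background-image: url(banner-small.jpg); }
    @media (min-width: 768px)  { .banner { background-image: url(banner-medium.jpg); } }
    @media (min-width: 1200px) { .banner { background-image: url(banner-large.jpg); } }
  </style>

  <!-- ...but a content image has one src, which CSS alone can't change: -->
  <img src="photo-small.jpg" alt="Photo">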
Also:
> Whatever image you put inside the src of the image element will render by default. Then, the Javascript will progressively load larger images based on media queries that you pass into the data-interchange attribute.
So, for larger screens, there would be many requests for a single image? For example, if I drop a mobile-optimized image into the src and then view the page on a retina macbook, wouldn't this mean many image requests from the "mobile" version up through "full-size retina"?
I'm guessing it's built to solve the problems of serving HD images via a CMS?
For static, hand-coded images media queries will suffice, but once you start dynamically serving images it gets tricky fast, especially when faced with users who don't know or care about HD image issues.
Combined with some code to scale images to the correct size on the fly (with caching, etc) I think it could be pretty useful.
> So, for larger screens, there would be many requests for a single image?
The demo page doesn't seem to work that way. In fact, if you resize the window gradually, it won't load new images at all. You have to wait for a second after resizing for it to bother loading the new image.
If you take a look at the Foundation source, the Interchange event responsible for swapping images in and out is set up with a 50-millisecond throttle. That's probably why you're seeing a slight delay. Additionally, they seem to cache previously loaded images to speed up future replacements.
From a practical standpoint, this is designed for device-specific use, where the screen width is less likely to change than in a desktop browser window. So you'll typically see the initial HTTP request for the small-resolution image, followed by one request for the appropriately sized image for whatever device/screen you're using; you're unlikely to see the flood of requests you can trigger by wildly resizing a demo page.
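The throttle-plus-cache behavior described above can be sketched in a few lines. This is not Foundation's actual Interchange source, just the general idea; the data attributes, breakpoint, and 50ms interval are stand-ins.

  <script>
    // Sketch of a throttled swap with a cache of already-fetched URLs.
    var preloaded = {};   // URLs we've requested at least once
    var pending = null;

    function bestUrl(img) {
      // Hypothetical data attributes, for illustration only.
      return window.matchMedia('(min-width: 768px)').matches
        ? img.getAttribute('data-large')
        : img.getAttribute('data-small');
    }

    function swap() {
      var imgs = document.querySelectorAll('img[data-small][data-large]');
      for (var i = 0; i < imgs.length; i++) {
        var url = bestUrl(imgs[i]);
        if (imgs[i].src.indexOf(url) === -1) {
          if (!preloaded[url]) {          // warm the cache once per URL
            preloaded[url] = new Image();
            preloaded[url].src = url;
          }
          imgs[i].src = url;
        }
      }
    }

    window.addEventListener('resize', function () {
      if (pending) return;                // throttle: at most one swap per 50ms
      pending = setTimeout(function () { pending = null; swap(); }, 50);
    });
    swap();
  </script>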
I'm a huge fan of Zurb and their work on Foundation, and this looks like a good stopgap implementation. I just hope it isn't long until there is a standard implementation everyone can settle on; we don't want to end up with http://xkcd.com/927/
I also am a huge fan of Zurb and Foundation but I'm not sure why this implementation is any better than picturefill. I still have to write a whole bunch of stuff I shouldn't have to write.
Markup for an image should not be several lines long.
I agree it shouldn't; this is why I call it a stopgap measure. Beyond the fact that we both dislike extra markup, I do find this solution cleaner than picturefill, however: at least there are no extra divs inside the markup itself, just a data attribute.
Unfortunately the browser vendors get to make this up as they go along, and they seem unwilling to take into consideration the many developers who favor the <picture> element because it's so much easier to write.
Naturally, if/whenever it is implemented we'll all scuttle back to our keyboards and play with srcset.
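For the record, the markup those two proposals describe looks roughly like this (details were still in flux, so treat the exact syntax as approximate; file names are placeholders):

  <!-- The <picture> element proposal, roughly: -->
  <picture>
    <source media="(min-width: 1024px)" srcset="large.jpg 1x, large@2x.jpg 2x">
    <source media="(min-width: 480px)" srcset="medium.jpg 1x, medium@2x.jpg 2x">
    <img src="small.jpg" alt="">
  </picture>

  <!-- The competing srcset attribute on a plain img, roughly: -->
  <img src="small.jpg" srcset="small.jpg 1x, small@2x.jpg 2x" alt="">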
It looks like the cool part of this is not serving or making the images, but in the JS to request a specific pre-made image. imgix seems to be more about making new images that fit a page or device exactly. You could probably use them together: zurb would request a specific image size and imgix would generate and serve the image on the fly. But imgix seems to have code for that already, and it's more fine-grained since you don't have to specify your sizes ahead of time.
The two go together like peanut butter and jelly. As an imgix customer, I can tell you that the service is a godsend. It looks like you would just pass your imgix URLs into the data-interchange attribute.
Or for bonus points, fork Interchange and have it pipe every image request through Imgix.
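The first option would presumably look something like this; the imgix domain, widths, and named queries here are made up for illustration, so check imgix's URL API and the Interchange docs for the exact forms:

  <!-- Hypothetical combination: one master image on imgix, with the size
       requested per breakpoint via URL parameters. -->
  <img src="https://example.imgix.net/photo.jpg?w=320"
       data-interchange="[https://example.imgix.net/photo.jpg?w=320, (small)],
                         [https://example.imgix.net/photo.jpg?w=768, (medium)],
                         [https://example.imgix.net/photo.jpg?w=1440, (large)]">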
I just heard about this today... the pricing seems a little off-kilter unless I'm understanding it incorrectly. How prohibitively expensive would this be for an SMB to implement?
The tech behind the service sounds fantastic, I'd love to be able to use them.
I don't have a lot of experience with imgix, but http://filepicker.io will resize your images on demand server-side, and it's all hosted in your S3 bucket, with pretty reasonable pricing.
I haven't personally tried it yet, but it seems the best option out there at the moment is the Capturing polyfill by Mozilla (https://hacks.mozilla.org/2013/03/capturing-improving-perfor...).