Does anyone know if the browser will download a higher-res image once the user zooms in? E.g., in Chrome on OS X, if I zoom from 100% to 130% on a non-Retina display, will it download the 1.5x version and downsample it? Likewise, on iOS, if the page initially loads zoomed out, what will be downloaded? "Retina" seems almost meaningless from a web perspective when people are zooming in and out all the time.
I've never really understood the attention paid to exact pixel alignment, and retina versions of images, when zoom levels are all over the place these days on all sorts of devices.
The browser can do whatever it wants (read: whatever's best for users) in this case. Current Blink (Chrome/Opera) code wouldn't download the higher-res image for 'x' descriptors, but that may change in future versions, perhaps tied to user preferences, network info, and other conditions.
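For reference, a minimal sketch of the 'x'-descriptor markup being discussed (file names are made up); nothing in the syntax forces the browser's hand, which is why the zoom behaviour is left to the implementation:

    <!-- the browser picks a candidate based on device-pixel-ratio,
         and per the spec may also factor in zoom, bandwidth, etc. -->
    <img src="photo.jpg"
         srcset="photo.jpg 1x, photo@2x.jpg 2x"
         alt="Example photo">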
Honestly, we're (collectively) never going to adopt this on a large scale. Everything I read in this article is just horribly complicated. I don't have a solution, but none of these are the right one. There's so much complexity going on that no developer will get it right without devoting a massive amount of time and effort just to load the "right" image at the "right" time.
Honestly, I need to support retina screens for mobile and desktop. For non-retina screens, getting a larger image doesn't really matter. Non-retina mobile devices are rapidly disappearing. So I'm comfortable delivering higher-resolution images to all devices.
I optimize our images by making them as small as possible and using lazy loading (a rough sketch of the pattern is below). These two techniques are more than adequate to suit our needs right now.
I am (loosely) of the opinion right now that this is an answer in search of a problem.
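A rough sketch of the lazy-loading idea mentioned above, assuming an IntersectionObserver-style approach (element names and paths are invented, not the actual setup):

    <!-- real URL lives in data-src; it's swapped in once the image
         approaches the viewport -->
    <img data-src="large-photo.jpg" alt="">
    <script>
      var imgs = document.querySelectorAll('img[data-src]');
      var observer = new IntersectionObserver(function (entries) {
        entries.forEach(function (entry) {
          if (entry.isIntersecting) {
            entry.target.src = entry.target.getAttribute('data-src');
            observer.unobserve(entry.target);
          }
        });
      });
      for (var i = 0; i < imgs.length; i++) {
        observer.observe(imgs[i]);
      }
    </script>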
It is worth noting that the file size of JPEGs doesn't increase as quickly as one might expect when the dimensions increase. It's not uncommon to see a 2x image (so 4x the number of pixels) result in a file that's only ~200% larger.
An example I just tried was 38kb at 1x and 105kb at 2x.
> For non-retina screens, getting a larger image doesn't really matter. Non-retina mobile devices are rapidly disappearing. So I'm comfortable delivering higher-resolution images to all devices.
Must be nice to only have big-city first-worlders as users :)
I don't think this is a solution either. In most cases we don't need multiple images of vastly different sizes. Sizing images can be handled entirely through media queries. We aren't using multiple versions of each image, so there's no need for a js shim.
> Sizing images can be handled entirely through media queries.
Unless something's changed since the last time I looked, media queries only know about the viewport, so they don't gracefully handle a block that sits in a small sidebar versus in the "central" content section. The sidebar images could actually grow when going from a big to a small screen (because content is linearised and the "sidebar image" now takes the whole width of a small display, rather than a small part of a big one).
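For what it's worth, the 'w' descriptor plus sizes syntax in the spec is aimed at exactly this case; a sketch with made-up breakpoints and file names, telling the browser the image occupies a quarter of the viewport on wide screens but the full width once the layout linearises:

    <img src="small.jpg"
         srcset="small.jpg 400w, large.jpg 1200w"
         sizes="(min-width: 600px) 25vw, 100vw"
         alt="Sidebar illustration">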
There's a simple rule I follow to solve that problem: if it goes in the sidebar (or a toolbar) it should be an SVG. It's the only way to ensure it will fit and scale no matter the screen size or device. With media queries you can even ensure that the icons get spaced properly (you need more space between them on smaller screens; rough example below).
Pictures are the only problem that needs to be solved at this point.
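Something like this is what I mean by spacing via media queries (class name and breakpoint are invented for illustration):

    <style>
      /* wider gaps between toolbar icons on small screens */
      .toolbar-icon { margin: 0 4px; }
      @media (max-width: 480px) {
        .toolbar-icon { margin: 0 12px; }
      }
    </style>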
> Unfortunately some browser vendors were reluctant to add new content negotiation-based solutions, because of past bad experience with this kind of solutions.
Can anyone provide more details on what they're talking about? I'm not too interested in naming the browser vendors (which they were clearly trying to avoid) -- I'm more interested in the details of the 'bad experiences': what do (some) people think went wrong, and with which particular aspects of content-negotiation-based solutions in the past?
There are various people at various vendors (mostly Mozilla and Apple) who oppose conneg-based solutions. I didn't mean to be vague, I just didn't think it was very interesting :)
It's important to note that the opposition is not unanimous, and things may change in the future.
That would add unnecessary data overhead as clients would get image data they may not need, e.g. a low-resolution device would not want a high-DPI version of all images on the site.
> A subset video bitstream is derived by dropping packets from the larger video to reduce the bandwidth required for the subset bitstream. The subset bitstream can represent a lower spatial resolution (smaller screen), lower temporal resolution (lower frame rate), or lower quality video signal.
The `vw` unit isn't specific to responsive images; it's a new unit type that reflects the viewport width (100vw = 100% of the browser's viewport). There's also a less well supported `vh` unit, all specified under CSS Values and Units Module Level 3 (http://www.w3.org/TR/css3-values/).
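A quick illustration, with invented class names (`vh` behaves the same way against the viewport height):

    <style>
      .full-bleed { width: 100vw; }               /* 100% of the viewport width */
      .half-wide  { width: 50vw; height: 30vh; }  /* half the width, 30% of the height */
    </style>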
I used to rely on this, until some browsers suddenly stopped supporting it. It's a nice idea, but I recommend using retina.js (https://imulus.github.io/retinajs/) for consistency.
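If I remember the convention correctly, retina.js just looks for a matching "@2x" file next to the original and swaps it in on high-DPI screens; roughly (paths are hypothetical):

    <img src="/images/logo.png" alt="Logo">
    <!-- on a high-DPI display the script requests /images/logo@2x.png
         and uses it if the server has one -->
    <script src="/scripts/retina.js"></script>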
...and if you don't want to pre-scale/crop all those different image sizes, you can have Pixtulate (http://www.pixtulate.com) generate them on the fly from one high resolution image.
Two things: Being half the comments in this thread (2/4 as of right now) makes it seem like you're spamming. And you spelled "tailored" wrong on your site.