Native Responsive Images (opera.com)
76 points by adamnemecek on Aug 24, 2014 | 26 comments



For things like retina-aware images, e.g.:

    src="cat_500px.jpg" srcset="cat_750px.jpg 1.5x, cat_1000px.jpg 2x"
Does anyone know if the browser will download a higher-res image once the user zooms in? E.g., in Chrome on OS X, if I zoom from 100% to 130% on a non-Retina display, will it download the 1.5x version and downsample it? Likewise, on iOS, if the initial zoom setting is "zoomed out", what will be downloaded? "Retina" is almost meaningless from a web perspective when people are zooming in and out all the time.

I've never really understood the attention paid to exact pixel alignment, and retina versions of images, when zoom levels are all over the place these days on all sorts of devices.


The browser can do whatever it wants (read: whatever's best for users) in this case. Current Blink (Chrome/Opera) code wouldn't download the higher-res image for 'x' descriptors, but that may change in future versions, perhaps tied to user preferences, network info and other possible conditions.


<meta viewport> has existed for years, though.
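
For reference, it's a one-liner; the values below are just the common defaults:

    <meta name="viewport" content="width=device-width, initial-scale=1">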


Honestly, we're (collectively) never going to adopt this on a large scale. Everything I read in this article is just horribly complicated. I don't have a solution, but none of these are the right one. There's so much complexity going on that no developer will get it right without devoting a massive amount of time and effort just to load the "right" image at the "right" time.

Honestly, I need to support retina screens for mobile and desktop. For non-retina screens, getting a larger image doesn't really matter. Non-retina mobile devices are rapidly disappearing, so I'm comfortable delivering higher-resolution images to all devices.

I optimize our images by making them as small as possible and using lazy loading. These two techniques are more than adequate to suit our needs right now.
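
The lazy-loading half is just the usual placeholder pattern; roughly the markup below, with a few lines of JS swapping data-src into src once the image scrolls into view (attribute and class names here are only illustrative):

    <img class="lazy" src="placeholder.gif" data-src="cat_1000px.jpg" alt="A cat">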

I am (loosely) of the opinion right now that this is an answer in search of a problem.


It is worth noting that the file size of JPEGs doesn't increase as quickly as one might expect as the dimensions increase. It's not uncommon to see a 2x image (so 4x the number of pixels) resulting in a file that's only ~200% larger.

An example I just tried was 38kb at 1x and 105kb at 2x.


> For non-retina screens, getting a larger image doesn't really matter. Non-retina mobile devices are rapidly disappearing, so I'm comfortable delivering higher-resolution images to all devices.

Must be nice to only have big-city first-worlders as users :)


Yeah I hear ya. Literally just going based on our traffic numbers though. For the vast majority of our audience, this technique is just fine.


Yeah...we thought of that too:

https://github.com/pixtulate/pixtulate.js


I don't think this is a solution either. In most cases we don't need multiple images of vastly different sizes. Sizing images can be handled entirely through media queries. We aren't using multiple versions of each image, so there's no need for a js shim.
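
By that I mean something along these lines (breakpoint and selectors are just illustrative):

    .content img { width: 100%; }
    @media (min-width: 600px) {
      .content img { width: 50%; }
    }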


> Sizing images can be handled entirely through media queries.

Unless something's changed since the last time I looked, media queries only know about the viewport, so they don't gracefully handle a block in a small sidebar versus a "central" content section. The sidebar images could actually grow going from a big screen to a small one (because content is linearised and the "sidebar image" now takes the whole width of a small display, rather than a small part of a big display).
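
For what it's worth, that layout-dependent case is exactly what the sizes attribute in the article is meant to express; a rough sketch (filenames hypothetical):

    <img src="cat_500px.jpg"
         srcset="cat_500px.jpg 500w, cat_1000px.jpg 1000w"
         sizes="(min-width: 600px) 20vw, 100vw"
         alt="A cat">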


There's a simple rule I follow to solve that problem: If it goes in the sidebar (or a toolbar) it should be an SVG. It's the only way to ensure it will fit and scale no matter the screen size or device. With media queries you can even ensure that they get spaced properly (need more space between icons on smaller screens).
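
The spacing bit is just a couple of lines of CSS; the selectors here are made up:

    .toolbar .icon { margin: 0 4px; }
    @media (max-width: 480px) {
      .toolbar .icon { margin: 0 12px; }
    }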

Pictures are the only problem that needs to be solved at this point.


> Unfortunately some browser vendors were reluctant to add new content negotiation-based solutions, because of past bad experience with this kind of solutions.

Can anyone provide more details of what they're talking about? I'm not too interested in naming names of browser vendors (which they were clearly trying to avoid) -- I'm more interested in the details of the 'bad experiences': what did (some people) think went wrong, and with which particular aspects of conneg-based solutions in the past?


The canonical link on that topic is http://wiki.whatwg.org/wiki/Why_not_conneg

There are various people at various vendors (mostly Mozilla and Apple) who oppose conneg-based solutions. I didn't mean to be vague, I just didn't think it was very interesting :)

It's important to note that the opposition is not unanimous, and things may change in the future.


Thanks!

What's interesting to some is not to others. :) You could add a link to that doc from the relevant sentence if you wanted, woo, the web.


I can see a simplified <picture> element possibly working, but srcset just seems like I'm writing inline stylesheets again.
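
For comparison, the <picture> markup from the proposal looks roughly like this (filenames hypothetical):

    <picture>
      <source media="(min-width: 600px)" srcset="cat_wide.jpg">
      <img src="cat_square.jpg" alt="A cat">
    </picture>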


What about a proper image format with support for multiple resolutions? Similar to the way it's done in SVC (http://en.wikipedia.org/wiki/Scalable_Video_Coding)


I looked into that in http://blog.yoav.ws/2012/05/Responsive-image-format

But image formats are hard, since they require an ecosystem, investment from large companies, etc. Maybe in a few years...


That would add unnecessary data overhead as clients would get image data they may not need, e.g. a low-resolution device would not want a high-DPI version of all images on the site.


Please read the Wikipedia article the OP linked.

> A subset video bitstream is derived by dropping packets from the larger video to reduce the bandwidth required for the subset bitstream. The subset bitstream can represent a lower spatial resolution (smaller screen), lower temporal resolution (lower frame rate), or lower quality video signal.


Then we are not talking about an image format but instead a network protocol.

Inventing a new protocol when the problem can be solved by serving the proper markup seems misguided.


Did I not read that carefully enough, or did the article really not explain what that "vw" unit is/stands for?

And what does the "calc(33vw - 100px)" do, apparently in void context?

Somehow I feel more confused after reading the article.


I believe it stands for "viewport width", which is specified as a percentage of the width of the browser viewport.

Google has a first-class definition, too: https://www.google.com/search?q=css+vw+unit


The `vw` unit isn't specific to responsive images; it's a new unit type that reflects the viewport width (100vw = 100% of the browser's viewport). There's also a vh unit, which isn't as well supported; both are specified in the CSS Values and Units Module Level 3 (http://www.w3.org/TR/css3-values/).

The `calc()` function is a CSS function that lets you calculate values like 100vw - 100px, similar to what you can do in JavaScript. See more: https://developer.mozilla.org/en-US/docs/Web/CSS/calc
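
In the article it presumably appears inside a sizes attribute, telling the browser how wide the image will be laid out so it can pick an appropriate source; a rough sketch (filenames hypothetical):

    <img src="cat_500px.jpg"
         srcset="cat_500px.jpg 500w, cat_1000px.jpg 1000w"
         sizes="calc(33vw - 100px)"
         alt="A cat">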


I used to rely on this, until some browsers suddenly stopped supporting it. It's a nice idea, but I recommend using retina.js (https://imulus.github.io/retinajs/) for consistency.


...and if you don't want to pre-scale/crop all those different image sizes, you can have Pixtulate (http://www.pixtulate.com) generate them on the fly from one high resolution image.


Two things: Being half the comments in this thread (2/4 as of right now) makes it seem like you're spamming. And you spelled "tailored" wrong on your site.



