Hacker News

For the technical side, instead of the historical one: http://responsiveimages.org/

An example from the homepage:

  <picture>
    <source media="(min-width: 40em)" srcset="big.jpg 1x, big-hd.jpg 2x">
    <source srcset="small.jpg 1x, small-hd.jpg 2x">
    <img src="fallback.jpg" alt="">
  </picture>



I understand from the article that the img srcset was somehow horrible, but the following (presumably the WHATWG proposal) looks more intuitive to me:

  <img src="small.jpg" srcset="large.jpg 1024w, medium.jpg 640w">
Can someone explain the drawbacks?


The syntax was confusing, and it still didn't cover all use cases.

To authors it wasn't clear whether "w" declared the width of the image, or a min-width or max-width media query.

It's a media query, but doesn't look like one. Values look like CSS units, but aren't.

On top of that, the interaction between "w", "h" and "x" was arbitrary, with many gotchas, and everybody assumed it worked differently.

With <picture> we have full power of media queries using proper media query syntax.

srcset is still there in a simplified form with just 1x/2x, and that's great, because it is orthogonal to media queries (the "art direction" case) and is processed differently (the UA must obey media queries to avoid breaking layouts, but can override srcset to save bandwidth).


How does the browser know to grab the 1x version or the 2x version?


I assumed 2x was for Retina-like screens. The browser already knows its display density, and it's exposed to scripts via window.devicePixelRatio.

If I understood your question correctly.
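
If it helps, a rough sketch of the idea (this is illustrative, not the actual spec algorithm; `pickCandidate` is a hypothetical name):

```javascript
// Illustrative sketch: pick the srcset candidate whose density
// descriptor (the "1x"/"2x" part) is closest to the device's
// pixel ratio. In a browser, dpr would be window.devicePixelRatio.
function pickCandidate(candidates, dpr) {
  return candidates.reduce((best, c) =>
    Math.abs(c.density - dpr) < Math.abs(best.density - dpr) ? c : best
  );
}

// On a 2x ("Retina") display, the 2x asset wins:
const chosen = pickCandidate(
  [{ url: "small.jpg", density: 1 }, { url: "small-hd.jpg", density: 2 }],
  2
);
// chosen.url === "small-hd.jpg"
```

The spec deliberately leaves the UA free here, so a browser on a metered connection could legitimately pick the 1x asset even on a 2x screen.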


The browser knows about the device it's running on, and specifically its display density.


File names can have spaces, and a JPEG URL can legitimately end in "640w", which makes the format ambiguous. It's really weird to see a tiny DSL invented inside a DOM attribute.


This all seems horrific. Why can't HTML be properly extended to support attribute arrays or dictionaries as values? Having a value encode a DSL is so messed up. This is yet more to parse...

HTML keeps getting pulled in so many directions. I wish XHTML had won. It was modular, pluggable, extensible, and semantic. The last bit might have eventually made entering the search space easy for new competitors, too.


Fully agree. XHTML was a sane way to have a proper application development model, instead of this documents-as-applications hack.

But the browser vendors failed to make a stand against bad HTML.


'bad HTML' could easily have just been an ego clash and pissing contest between developers of competing browsers. It was arguably more difficult to implement than just well-structured syntax.


this is why attributes are really a stupid ass way to do things

  <img>
    <srcset>
      <source><width>1024</width><src>large-image.jpg</source>
      <source><width>512</width><src>small-image.jpg</source>
    </srcset>
    <src>image.jpg</src> <*>fallback</*>
    <alt>My image</alt>
  </img>


That isn't well formed, you're missing two </src>.

I dislike XML, the confusion between attributes and sub elements is one of the worst bits.


"1024 large-image.jpg 512 small-image.jpg image.jpg fallback My image"

That is what your code would look like to browsers that didn't know about the new elements. HTML is defined such that browsers can ignore unknown elements for compatibility and still display the text. Using contents for the metadata means that browsers need to know about the elements to at least hide the text.


Holy crap that's verbose.

This is why *ML is a stupid-ass way to do things. "the problem it solves is not hard, and it does not solve the problem well."


"Attributes are stupid" is also Maven's approach, but this results in unnecessarily verbose XML files.


<imgset w1024="large.jpg" w640="medium.jpg" />


not practical since you'd have to define attributes for every conceivable size in the spec and that's just asking for trouble. e.g. w2048, h1024, w320, w240,h320, wPleaseShootMe :)


But now it's a PITA to properly handle and escape for any toolset that doesn't have good XML support. Imagine people starting to put <![CDATA[ ]]> blocks into this.


I meant if I had designed it from the start. Then everything is a tag, no attributes, no quotes, equal signs, etc.


How about JsonML; i.e. XHTML but in JSON format to make it less verbose / further improve integration with javascript?


JsonML is pretty efficient when auto-generated from HTML source. I use it as an intermediate form for client side templates ( http://duelengine.org ) but I don't write it by hand. Its regularity makes it a perfect candidate for output from a parser/compiler.
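
For the curious, the <picture> example from upthread would look something like this in JsonML (each element is an array of tag name, optional attribute object, then children):

```javascript
// The <picture> example expressed as JsonML:
// ["tagName", {attributes}, ...children]
const picture =
  ["picture",
    ["source", { media: "(min-width: 40em)",
                 srcset: "big.jpg 1x, big-hd.jpg 2x" }],
    ["source", { srcset: "small.jpg 1x, small-hd.jpg 2x" }],
    ["img", { src: "fallback.jpg", alt: "" }]
  ];

// Plain JSON, so it round-trips through the usual machinery:
const wire = JSON.stringify(picture);
```

Less verbose than the XML equivalents above, but as the sibling comments note, writing this by hand gets old fast.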


You'd want to kill yourself pretty quickly.

JSON is great as an interchange format, but there are many reasons editing it by hand is painful, lack of comments and lack of newlines in strings not being the least of them.


There's no syntactic difference between an attribute, an object and an array.


you can't nest tags into attributes


I really dislike your approach.


XML Parsing Failed


I really hate when my code doesn't compile. If my code is wrong, the compiler should just figure out what to do.


You hit the nail on the head.

HTML5 got one thing right though: standardization of the DOM failure behavior. As an implementation detail of their design, they went with "sensible recovery" for failures over stricter failure modes.

In going with the WHATWG over the W3C, we ultimately chose "easy to author, (slow to evolve) living standard" over "strictly typed yet developer extensible". I was disappointed, but it's good for some parties I suppose. (It certainly keeps the browser vendors in charge of the core tech...)

The W3C over-engineered to a fault. They had a lot of the right ideas, but were too enamored by XML and RDF.


It wasn't really a choice in favor of "easier to author." It was a choice in favor of "will this actually get implemented, or just be fancy theorycraft?"

No browser vendor was going to ship new features only in XML parsing mode, because that was author-hostile enough that it would lose them authors, and thus users. (Browser game theory.) The choice of HTML over XML syntax was purely practical, in this sense.


HTML5 got one thing right though: standardization of the DOM failure behavior. As an implementation detail of their design, they went with "sensible recovery" for failures over stricter failure modes.

It was browsers that did that in the first place. HTML5 just standardized the exact behavior on failures.


Incorrect. HTML5 synthesized the exact behavior that was closest to the majority of browsers. But not all browsers agreed (e.g. Mozilla would change its HTML parsing behavior depending on network packet boundaries), so there was still effort aligning with the newly-specced common parsing algorithm. At the time there was much skepticism that such alignment was even possible.


> Mozilla would change its HTML parsing behavior depending on network packet boundaries

I want to know more...



Which is what I said, right?

HTML 4 - vendors implemented the spec incongruently and failed in their own special ways.

XHTML strict - standard parsing rules with strict failure mode.

HTML 5 - standard parsing rules, suggested (but not required) rendering behavior for browser uniformity, and well-defined failure behavior.


> I really hate when my code doesn't compile. If my code is wrong, the compiler should just figure out what to do.

There's something you're overlooking in the above. If a compiler was smart enough to know what to do with your erroneous code and compile in spite of the errors, that would be the end of programming and programmers.


I'm pretty sure that comment was sarcasm. It's a complaint about how HTML5 isn't just specified to fail on bad input, but instead gives rules on how to recover.


And that would be a good thing!


Well .... now that you mention it ... yes, it would. :)


Sarcasm? I can't tell anymore :/

I love it when my code doesn't compile (i.e. if I've made a mistake). Much worse is when something tries to be "intelligent" and makes my code do something I never asked for; then I spend hours trying to figure out what the issue is (assuming I've noticed) rather than seeing that I made a mistake and fixing it.


Yes, I was being sarcastic. Web designers should stop whining and write proper markup code.


The problem has nothing to do with web devs but rather that no one wants to use a browser that spits out "error 5" on malformed HTML, which is necessarily what you're implying. The other option is to do your best with the bad HTML, and now we're right back where we started, regardless of how "strict" you make the rules.


Here, don't confuse what was XHTML1 with XHTML2.


I'm speaking more in terms of the goals the markup dialects had, irrespective of the ultimate implementation. I think we can all agree that those suffered from misguided engineering choices (bloaty XML culture).

Responsive images could have been an XHTML module with a javascript implementation. The browser vendors could catch up and provide native implementations in their own time, but that would not postpone immediate usage.

If it were done right, anyone could have defined a markup module/schema with parsing rules and scripting. The evolution of those extensions would have been pretty damned fast due to forking, quick vetting/optimization, etc. It would have been well timed with the recent javascript renaissance, if it had happened. It might have meant browser vendor independence at the level of the developer.

HTML should really have been modular with an efficient, lightweight core spec. It should have also paid lots of attention to being semantic so that others could compete with Google on search. I am still curious if that's why Google got involved in the WHATWG. I'm rambling about things I don't know about though...


> Responsive images could have been an XHTML module with a javascript implementation. The browser vendors could catch up and provide native implementations in their own time, but that would not postpone immediate usage.

This is exactly what happened, except without the XHTML nonsense. JavaScript polyfills of the picture element were created and in use before native implementations eventually caught up. (And native implementations are very necessary, in this case, because they need to hook in to the preload scanner, which is not JS-exposed.)

More generally, custom elements and extensible web principles in general enable all of this. Again, without XML being involved.
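
To sketch the core of what those polyfills did (purely illustrative; `selectSource` is a hypothetical name, and real polyfills like picturefill handled much more, including resize events and the older proposed syntaxes):

```javascript
// Illustrative core of a <picture> polyfill: walk the <source>
// candidates in document order and return the first whose media
// query matches; a <source> with no media attribute always matches.
function selectSource(sources, matches) {
  for (const s of sources) {
    if (!s.media || matches(s.media)) return s.srcset;
  }
  return null;
}

// Wiring it up in a browser would look roughly like:
// const img = picture.querySelector("img");
// const sources = [...picture.querySelectorAll("source")].map(s => ({
//   media: s.getAttribute("media"),
//   srcset: s.getAttribute("srcset"),
// }));
// img.setAttribute("srcset",
//   selectSource(sources, mq => window.matchMedia(mq).matches));
```

The catch, as noted above, is that this only runs after parsing, so it can never beat the preload scanner, which is why native support still mattered.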


Spaces can be escaped as %20 in URLs. I do agree that the domain specific language is weird though, and would even require new DOM APIs to manipulate it directly (like the style attribute does).


Surely the way one would represent this in XML, rather than `srcset="foo 1x, bar 2x"`, which strikes me as odd, would be:

   <picture>
      <srcset media="(min-width: 40em)">
         <source size="1x" src="big.jpg" />
         <source size="2x" src="big-hd.jpg" />
      </srcset>
      ...another srcset...
      <img src="fallback.jpg" />
   </picture>
Fractionally more verbose, but really a lot less fiddly.


The article states that the final problem on the Boston Globe redesign (meant as a proof of concept for responsiveness) was the image-prefetching feature that speeds up rendering, which happens before HTML parsing. Thus they needed a way for browsers to parse that information separately, ahead of time.

I guess it should be possible, though, for a browser to parse an HTML fragment rooted at the picture tag, and then plug that subtree back into the full document tree once it is constructed. Or is it simpler to scan for picture/img attributes? There's also this whole implicit tag-closing business in HTML... how do we know where to stop parsing a fragment? At least attribute values stop at the end of a string literal, or at a tag end. Perhaps that's the reason they went for a DSL in attributes.

I agree with you though; your way is cleaner, and perhaps XHTML could use that approach in the future?


Quite, and it has the enormous advantage that I can extract all data using nothing more than an XML parser, rather than having a two stage [parse XML -> parse embedded DSL] parser for special cases. Even the media aspects of the srcset could probably be better expressed (with more verbosity though) as a standard XML structure.

I really wish it was - though I'm far from a fan of XML for most cases, it does work rather well for this when used as intended...


Verbosity was a huge argument against <picture>. People were ridiculing it with complex use cases that required awful amounts of markup.

Hixie was against using elements, as it's harder to spec (attribute change is atomic). Eventually <picture> got a simplified algorithm that avoids tricky cases of elements, but at that point srcset was a done deal.

At least we've got separate media, sizes and srcset instead of one massive "micro"syntax.
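
For reference, a sketch of how those separate pieces combine in the eventual syntax (file names and breakpoints here are illustrative):

```html
<img src="fallback.jpg"
     srcset="medium.jpg 640w, large.jpg 1024w"
     sizes="(min-width: 40em) 50vw, 100vw"
     alt="">
```

The media query lives in sizes, the candidate widths in srcset, and the browser does the arithmetic, rather than one attribute trying to encode all three.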


Why, it's just the web being hack upon hack in this story of bending documents into an application framework.


Would this be the first element that actually varies based on media size? Seems like a strange precedent.



