Also, being slow in IE is a good thing. Maybe people will switch to a better browser.
I also think it's a rare case where your bottleneck is speed of updating the dom. IMHO, having clean non-ugly js code is more important in most cases.
2x-3x improvement is not "negligible" if you're doing heavy caching + js replacement.
> Also, being slow in IE is a good thing. Maybe people will switch to a better browser.
I'm going to assume you're kidding.
> I also think it's a rare case where your bottleneck is speed of updating the dom.
Don't forget that your average user isn't on a brand-new 2.4GHz MacBook Pro. The average computer is -slow-.
> IMHO, having clean non-ugly js code is more important in most cases.
Something that's fast, predictable, readable, and one line seems great to me. Being W3C-compliant has nothing to do with how "clean" or "non-ugly" it is.
I'm not trying to debate you because I like getting into silly syntax wars. I'm trying to point out that no matter how ugly, noncompliant, or un-"future proof" something is, doing the simplest thing that works now and in the foreseeable future shouldn't be vilified. In this case, innerHTML is, and will be for a while yet, a great way to insert elements into the DOM.
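To be concrete, this is the sort of comparison I have in mind (element names are made up):

// one line:
list.innerHTML += "<li class='new'>hello</li>";

// the "proper" DOM way:
var li = document.createElement("li");
li.className = "new";
li.appendChild(document.createTextNode("hello"));
list.appendChild(li);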
2x-3x is negligible when the overall time taken to build a 50x50 table is 30ms. I don't really care if it's 30ms or 60ms. Either is fast enough unless you're doing crazy masses of DOM manipulation.
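(Roughly the kind of measurement I mean; the element id is made up and the numbers obviously vary by browser and machine:)

var t0 = new Date().getTime();
var html = "<table>";
for (var r = 0; r < 50; r++) {
  html += "<tr>";
  for (var c = 0; c < 50; c++) html += "<td>x</td>";
  html += "</tr>";
}
html += "</table>";
document.getElementById("out").innerHTML = html;
alert((new Date().getTime() - t0) + "ms"); // tens of milliseconds either way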
I wasn't kidding in the least about IE. IE is a small minority on my webapp, thank goodness. If they get a degraded user experience, then that's MS's problem to fix.
"fast, predictable, readable, and one line"
I guess you haven't run into all the issues with innerHTML: tables, event handlers being borked, etc. HTML entities are ugly and have no place in code IMHO.
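To be concrete about the tables/handlers point (ids are made up):

// older IE: innerHTML is read-only on table sections, so this just throws:
document.getElementById("results-body").innerHTML = "<tr><td>hi</td></tr>";

// and appending via innerHTML re-parses the parent, so existing handlers vanish:
var btn = document.getElementById("save");
btn.onclick = function () { alert("saved"); };
btn.parentNode.innerHTML += "<em>updated</em>"; // old button node replaced; onclick is gone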
I guess some of it also comes down to mixing languages. I'd much rather write javascript in a single file. Having HTML fragments dotted around the place is, in my view, very, very ugly.
If users get a degraded experience, I've always thought it was my responsibility to do what I could to make it better for them. If that means I can't read the code easily, then so be it.
Agreed, I'm surprised they decided to use a non-standard attribute. But it is a simple and elegant technique if you want to save the server from doing extra work it doesn't need to.
Sure, but it's just as easy for the server to output a javascript structure with the data in it, really. E.g.:
<script>
var data = {time1: "Nov 5, 1955", time2: "Jan 1, 1970"};
function init() {
  // Update the spans with the "ago"s
}
</script>
<span id="time1"></span>
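(Filling in init() roughly, just to show the shape of it; the "minutes ago" wording is only an example:)

function init() {
  for (var id in data) {
    var then = new Date(data[id]).getTime();
    var mins = Math.round((new Date().getTime() - then) / 60000);
    document.getElementById(id).appendChild(document.createTextNode(mins + " minutes ago"));
  }
}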
Using the DOM to store program data just doesn't seem nice to me.
>> Using the DOM to store program data just doesn't seem nice to me.
I disagree. I think the DOM is where data should be (look at meta tags, for example). Storing data JSON-style or in XML-like attributes sounds a bit like repurposing things.
Also, according to Steve Souders, sprinkling script tags everywhere (which I suppose is the easiest way to get the JSON approach working in a site of arbitrary complexity) is not a good idea in terms of loading speed.
What if your data contains HTML entities, newlines, Unicode, etc.?
HTML is a hornets' nest, whilst js is pretty sane.
I would never advocate sprinkling script tags everywhere :/ 1 is enough usually. (I would obviously, in this case, have 1 script tag at the end of the body, with the data structure and the function to initialize the span tags.)
As far as encoding goes, my experience has actually been worse on the javascript side, specifically with French text and regular expressions buried in 3rd-party libraries. Compared to that, grabbing data from the DOM using DOM text node methods is a walk in the park.
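i.e. something like this (the markup and prettify() helper are hypothetical):

// <span id="time1" title="Nov 5, 1955">Nov 5, 1955</span>
var el = document.getElementById("time1");
var raw = el.getAttribute("title") || el.firstChild.nodeValue;
el.firstChild.nodeValue = prettify(raw); // whatever formatting you like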
The thing with the json solution is that once we have things being ajaxed in, it becomes very hard to maintain the code. Has id="date7" been used? What if I add a comment while the "shoutbox" feature is updating?
Having the data in HTML would let us use something like Jquery's livequery and it would work regardless of how the data and how much data got into the page (plus we get the benefit of the no-javascript scenario I mentioned before).
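Something along these lines, assuming the livequery plugin and the raw date sitting in the title attribute (timeAgo() here is a stand-in for whatever formatting function you use):

$("span.timeago").livequery(function () {
  var $el = $(this);
  $el.text(timeAgo($el.attr("title")));
});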
The solution I proposed was a simple inline script at the end of the HTML. The JS data is generated at the same time as the HTML output. The IDs all match, and each is used once.
I've never seen any issues with encoding in js, so I'm not sure what you mean about french text, regexps...
What I'm saying is that once a site becomes more complicated (if the data comes in after page load, in an ajaxed popup, for example), that simple solution becomes harder to maintain (or to implement without wasting bandwidth). To be fair, if you don't work on complex web apps, you'll probably never run into this.
>> I've never seen any issues with encoding in js, so I'm not sure what you mean about french text, regexps...
Good for you. I hope you never see these types of bugs, they are nasty to troubleshoot.
I found that strange too. I'd probably have the date as a text node, so that there is still time information visible in the rare case when javascript is turned off.
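i.e. something along the lines of (purely illustrative):

<span class="timeago" title="2008-07-17T09:24:17Z">July 17, 2008</span>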
There are a lot of good alternatives proposed and debated on that page, so I'm confident that they've come up with the best way to represent the date in the DOM. Additionally, it's readable from JavaScript-less environments and uses only standard elements and attributes.
How far can we generalize this? If an application has millions of users, can pushing just a little of your server-side load into their browsers end up saving real money in server/scaling costs?
More evilly, how much computation/storage capacity could Google get by pushing work off onto Gmail users' machines? Of course, a single computer isn't reliable, but with millions of users it's a simple matter of failure/redundancy probability. A distributed file system in browsers' Gears databases would work just like GFS, but with higher failure rates.
To address your first point, GitHub is already doing some of this: they've reduced server load by lazy-loading commit data when cached data isn't available.
At work we do a fair amount of client-side form validation. While we have server-side validation too, doing it in the browser as well vastly reduces the number of times invalid inputs make it through to the server and have to be expensively processed. Of course, this creates other problems (making sure the client- and server-side validation routines precisely mirror one another, for example), but for some use-cases it's worth it.
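For example, the sort of check that never has to hit the server (field names are made up; the server re-runs the same rules on submit):

function validateSignup(form) {
  var errors = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email.value)) errors.push("email");
  if (form.password.value.length < 8) errors.push("password");
  return errors; // only submit if this comes back empty
}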
Those 2 things really have no place anywhere IMHO.