I always wondered why this was never implemented. It seems like such a basic piece of functionality, one that would let you comfortably build websites in a modular way without scripts or server-side processing.
The last example reminds me of the once-proposed html imports feature. It was being pushed by Google back when Polymer was a thing and would've given us actual web components delivered as html documents with full support for html, css, and js just like we use normally. If I remember correctly there were a few issues with the approach that needed to be resolved and nobody stepped up to make it a reality. So instead we got the custom elements spec years later which only implemented a small percentage of what made html imports useful.
xslt is a native template system that has includes: <xsl:copy-of select="document('/header.html')"/>
xhtml/xslt still works today (though browsers are stuck on v1). It's a declarative template system with a pretty powerful pattern matching system (xpath). You can also extract fragments from external documents or store them in variables, e.g. <xsl:copy-of select="document('/content.html')/body/div[2]/ul/li/a/@href"/> will copy all urls from a list inside the 2nd div. Or instead of the second div in the body, you could search for a div with an id, or specific attributes, or specific children, or whatever pattern you want.
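Roughly, a data document plus stylesheet pair might look like this (untested sketch; file names made up, and /header.html would have to be well-formed XML for document() to load it):

    <?xml version="1.0"?>
    <!-- page.xml: the data document -->
    <?xml-stylesheet type="text/xsl" href="/site.xsl"?>
    <page>
      <title>Hello</title>
    </page>

    <!-- /site.xsl: the template applied by the browser -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/page">
        <html>
          <body>
            <!-- pull in the shared header fragment -->
            <xsl:copy-of select="document('/header.html')"/>
            <h1><xsl:value-of select="title"/></h1>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>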
Why webdevs forsook xml will always be a mystery to me. Particularly given that they now use things like JSX.
I can only speak for myself, but the few times I encountered xslt in the past it was in the context of bulky and verbose SOAP services. The syntax felt off to me (and still does).
I think this was at the time I was learning jquery and using css selector syntax for finding elements, and the difference between that and xpath was pretty stark.
I basically (probably with insufficient knowledge) filed xslt away with VB and tables-for-layout as old, complex, and soon to be replaced.
The syntax felt off to Juniper, too, so they made their own: https://juniper.github.io/libslax/slax-manual.html. Loses homoiconicity to some extent, sure, but outputting XSLT using XSLT never was particularly pleasant.
I just recently moved my personal homepage to XML + XSLT. Not a pro at it at all though. I did enjoy the process, I like the idea of building the website out of data + templates, but that's not unique to XSLT.
Besides some papercuts / browser bugs like "refreshing the page desyncs the inspector", the main issue I find with it is I don't see how I can modify things dynamically?
By the time JS runs, it's on the resulting HTML DOM, not the XML source, so I can't add elements dynamically respecting the XSLT stylesheet?
Also, XML must be well-formed and have a single root, so I can't stream it out of the server piecewise?
With those 2 limitations, I don't see any advantages over any other server-side templating language, it just duplicates work that could happen once in a SSG or at least cached in a CDN, onto every client computer.
Right, unless you run a javascript xslt processor, you can't do things after page load. There's at least one javascript implementation that lets you do data binding to do dynamic page updates, but I'm not familiar with it.
The main advantage for simple things is that you don't need an application server or compiler. It's all static files, and it's all built right into the browser so easy to get started with. For less simple things, it should be easier to cache since you can separate out user data and have the bulk of the information you send be the static template (which the user can cache).
I suppose maybe that last point is why people use javascript frameworks for static pages (so they can send a static, cached js file and then small payloads for user data), which seems like overkill to me.
It would be nice if browsers supported streaming xml. Of course streaming xml exists (e.g. XMPP). It's not really any different than json: the spec said you need to have a single root object, and then someone wrote a spec that says you don't need to do that.
One of the cool things that you can do with client-side includes and shadow DOM is render the included HTML into a shadow root that has <slot>s, so that the child content of the include element is slotted into a shell implemented by the included HTML.
This lets you do things like have the main page be the per-page content and the included HTML be a heavily cached site-wide shell, and then another per-user include with personalized HTML - all cached appropriately.
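Roughly the shape, with a made-up <html-include> element standing in for whatever the include mechanism ends up being (the fetched document becomes the shadow root, and the include element's own children are slotted into it):

    <!-- the per-page document -->
    <html-include src="/shell.html">
      <main slot="content">Per-page article text…</main>
      <div slot="user"><!-- could itself be a per-user include --></div>
    </html-include>

    <!-- /shell.html: heavily cached site-wide chrome, exposing slots -->
    <header>Logo, global nav…</header>
    <slot name="content"></slot>
    <aside><slot name="user"></slot></aside>
    <footer>…</footer>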
> This lets you do things like have the main page be the per-page content and the included HTML be a heavily cached site-wide shell, and then another per-user include with personalized HTML - all cached appropriately.
If you can do your includes at page load time, you've been able to do this for 25 years with xslt; it's been built into the browser almost since the beginning of the web!
I've played around with doing exactly that: include a `/myinfo.xml` document that has information about your user, and then the rest of the page template just grabs `$myinfo/user/@name` or whatever wherever it needs. The neat thing is it has graceful degradation by default: if the request fails, then your include will be empty and you can treat it as a logged out page. So you can e.g. display the username and logout button in the top right if logged in, or a login button if not. You can also e.g. include a CSRF token in your info document and plop that into any forms in your page by just doing `value="{$myinfo/csrf-token}"` or whatever.
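A rough sketch of what that looks like (assuming /myinfo.xml is something like <user name="ada"><csrf-token>…</csrf-token></user>):

    <!-- pull the (possibly absent) user document into a variable -->
    <xsl:variable name="myinfo" select="document('/myinfo.xml')"/>

    <xsl:choose>
      <!-- logged in: greet the user and carry the CSRF token into forms -->
      <xsl:when test="$myinfo/user">
        <span><xsl:value-of select="$myinfo/user/@name"/></span>
        <form method="post" action="/logout">
          <input type="hidden" name="csrf" value="{$myinfo/user/csrf-token}"/>
          <button>Log out</button>
        </form>
      </xsl:when>
      <!-- request failed / empty document: degrade to the logged-out view -->
      <xsl:otherwise>
        <a href="/login">Log in</a>
      </xsl:otherwise>
    </xsl:choose>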
The "template" use case was a very common one from iframes/framesets in the early 2000s, and the markup often looked a lot like your desired example.
My memory's hazy, but I remember it becoming more and more complex to maintain as the web's security and performance model evolved, as you needed to manage and secure all of these disparate, discrete DOM trees in the same page context.
Wouldn't what you suggested just have been "reinventing" frames/framesets if you added the ability to arbitrarily pull in templates via the network?
> I always wondered why this was never implemented
Wasn't this in HTTP/1.1 (1997), where you could send chunks to anything that could have a target attribute? (As multipart HTML relied on http chunks, which suffered from poor support in the Windows http stack, this quickly vanished, and in the 2000s support was stripped from browsers; I think Chrome was the first to do so. Personal note: I once wrote a chat server which relied on this technology – and it worked pretty well, outside IE.)
Another way this had once been supported was in Netscape 4.0's layers (1997), which could have a `src` attribute and worked just as you would expect. (However, support for this vanished within the first few iterations, about the same time JS-styles were switched to CSS. It had definitely been a feature in the NS 4.0 prerelease versions. Especially the capability to re-link the `src` attribute via JS or a link target was nice.) NS 4 also featured extended HTML entities, which could render JS expressions in place and also provided a mechanism for conditional comments – so, had we followed that route, HTML would already have been a templating language.
Probably because HTML specs were quasi frozen for a long time. As another comment said, XSLT supports it. XHTML conformance would have made the web development world stricter, and more flexible. Thank you Microsoft for holding the web back.
Funnily enough, I used SSIs for the first and last time around 2020. The project I worked on had a heavily cached landing page (for performance reasons), but needed a way to swap out the homepage banners based on product releases, and the easiest way around that was to output an SSI directive for a separate banners endpoint.
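For anyone who hasn't seen them, the directive is just a special comment the server expands before sending, along these lines (path made up):

    <!--#include virtual="/fragments/banners.html" -->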
The lack of native HTML includes is a big reason why I picked up just enough PHP to write functional pages back in the early-mid 00s. Not having to manually propagate changes to e.g. navbars across a whole site was mind blowing.
I’m all for standardizing JSX, but I think we should be clear that its only similarity to E4X is syntax. The latter corresponds directly to the browser/HTML/XML DOM, whereas the former corresponds to nothing in particular except the semantics you might give it. I’d also be interested in a revival of E4X as a potential implementation of standardized JSX, but I’d be disappointed (and I think many JSX users would be too) if the standardization coupled the two completely.
All of the libraries you’ve listed offer features above and beyond what we’re talking about here. Plus, web components exist, they do most of it and yet the libraries still exist.
Yes, many frameworks including svelte took heavy inspiration from various parts of the Web Components specs, including named slots. So, this is the original thing that svelte and others copied.
That was like what ESI (Edge Side Includes) from Akamai allowed. Perhaps had the founder[1] of Akamai not been killed in the World Trade Center attacks, we'd have had that in the standard...
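The markup was basically an include tag that the edge server expanded in place, something like (fragment path made up):

    <esi:include src="/fragments/header.html"/>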
Includes are different than this. Pretend the template is site header, site nav, body content, and footer; each being a slot. The server then streams body content, site header, site nav and footer in that order. Now more important content gets rendered first without any JS.
This technique would be a lot more usable though when shadow roots become open style-able. It is kinda funky having to apply your CSS separately to the layout regions that live in your shadow dom.
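Roughly, with a declarative shadow DOM shell (untested sketch; slot and element names made up):

    <body>
      <!-- shell sent first: declares where everything will go -->
      <template shadowrootmode="open">
        <header><slot name="site-header">loading…</slot></header>
        <nav><slot name="site-nav">loading…</slot></nav>
        <main><slot name="content">loading…</slot></main>
        <footer><slot name="site-footer">loading…</slot></footer>
      </template>

      <!-- then streamed in importance order; each snaps into its slot -->
      <article slot="content">Body content, sent first…</article>
      <div slot="site-header">Site header…</div>
      <div slot="site-nav">Site nav…</div>
      <div slot="site-footer">Footer…</div>
    </body>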
Real fun begins when you realize that the streamed content can react to interactions the user submits from the streamed page: a form targeted at a hidden iframe, or any other means of issuing an HTTP request back to the server, gives it data it can react to in the next streamed chunk. This way you can have, for example, a "CSS only chat" [1], or basically anything, since the chunk with new data can contain style directives hiding the old data…
That's cool and definitely the future of HTML streaming; it simplifies things a lot. JS-enabled out-of-order streaming leads to SEO problems, and frameworks usually need to come up with workarounds - detect bots and turn off streaming in that case. With this technique no workaround is needed, so there's less to worry about.
There already is one in ARIA that supports this: aria-owns [1] and, to some degree, aria-flowto [2].
But as with everything in ARIA, it always depends on real-world screen reader support, which is very poor. You have to work with what actually works, unfortunately.
Imo, changing the source order just to change the streaming order for a tiny performance gain is not worth breaking the UX for people with accessibility needs.
tabindex only affects tab order (of interactive elements), not the order that a screen reader navigates and reads out content (including non-interactive elements)
Let's say I streamed the first chunk with a little JavaScript, would that JavaScript be able to listen to future chunks coming in? In a real application it is crucial to be able to update your frontend state once new data is ready.
Another approach would be to correlate observation with specific custom elements. You can define them in the first chunk, then use their connectedCallback method to observe them as they arrive. The benefit of an approach like this is that the custom element lifecycle APIs are synchronous (which is also the drawback of the approach, and where MutationObserver is more appropriate if that synchrony is a concern).
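A minimal sketch of that (the element name here is made up; the first chunk defines it, later chunks drop one in next to their content):

    <!-- first chunk -->
    <script>
      customElements.define('chunk-marker', class extends HTMLElement {
        connectedCallback() {
          // fires synchronously as the parser attaches each marker
          document.dispatchEvent(new CustomEvent('chunk-arrived', {
            detail: { name: this.getAttribute('name') }
          }));
        }
      });
      document.addEventListener('chunk-arrived', (e) => {
        console.log('chunk ready:', e.detail.name);  // update app state here
      });
    </script>

    <!-- in a later chunk, streamed alongside its content -->
    <chunk-marker name="sidebar"></chunk-marker>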
Perhaps the most horrible JavaScript I ever wrote did something like this, and has been very useful for over a decade and still going strong.
It's a log file that starts with a header: just a script tag with some JS, followed by a special string (I used a special HTML comment as a boundary indicator). The server then appends its logging to this file in a web hosted directory.
The JS periodically does an XHR of its own location.href, polling itself. It then splits on the special boundary string, thus collecting the latest log data, parses it to generate pretty/coloured/linked HTML, and updates the current page accordingly (and controls scrolling if required).
This gives you a log file that's continuously being appended to, but can be visited in the browser using any static file serving and automatically updates as it's populated.
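Roughly the shape of it (pretty-printing left out, fetch used here instead of XHR, and the boundary string built by concatenation so the script doesn't match its own source):

    <!-- log.html: header written once by the server, raw log lines appended after -->
    <meta charset="utf-8">
    <pre id="out"></pre>
    <script>
      // concatenated so the split below doesn't match this script itself
      const BOUNDARY = '<!--LOG-' + 'BOUNDARY-->';
      async function poll() {
        const text = await (await fetch(location.href, { cache: 'no-store' })).text();
        // everything after the boundary is the appended log data
        document.getElementById('out').textContent = text.split(BOUNDARY)[1] || '';
        window.scrollTo(0, document.body.scrollHeight);
        setTimeout(poll, 2000);
      }
      poll();
    </script>
    <!--LOG-BOUNDARY-->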
I'd assume that the only reason one would need to know if chunks were incoming is to indicate a loading state in disparate parts of your webpage, e.g. chunk B has some widget that indicates chunk A is loading. In this case, you can probably work off the assumption that loading is the default initialization state, and use JS to communicate to the disparate parts of your page once on chunk A load. There may be something different that you're thinking about here though.
Neat, but what would be the intended use case for something like this? I can imagine a scenario where one part of the UI may take extra time to load, so you would defer it to let the other parts load first. Is there another reason I'd want this?
I used similar streaming HTML for my RSS reader. When you import subscriptions it can take a while as each feed is fetched so I use streaming HTML to show the progress of each feed as it is imported. This provides meaningful progress updates to the user.
However, previously the limitation was that the content needed to be more or less in order (you could use some CSS tricks, but they were limited). Using this trick I would be able to render the full list of pending feeds and then insert a result as each finishes being fetched.
In fact it seems that for each slot the last element will be used. So you can even create live-updating pages based on this, which is really cool. For example, imagine you had a score ticker. You could push an update to that region of the page every time the score changes.
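If that observation holds (the latest element for a given slot is what gets shown), the server could keep the response open and just re-send the slotted element whenever the score changes, something like:

    <span slot="score">HOME 0 – 0 AWAY</span>
    <!-- … connection stays open; some time later the server appends: -->
    <span slot="score">HOME 1 – 0 AWAY</span>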
I think this is definitely a niche use case, but it is very nice to support this without JS. Of course in all but the simplest use cases JS may still be the right solution. For example if the connection drops for any reason there is no mechanism for showing the user an error, let alone retrying.
Dealing with loading in a smart way (even small amounts of loading), without extra JS overhead, is the fundamental use case.
For comparison, basically all of the bajillions of man-hours that have gone into React Server Components [1], and all the many, many complications and footguns inherent in their approach, are trying to solve the same problem.
then watch the bottom of the page as the new `<span slot=...>` elements come in. As they arrive, the browser is, well, slotting their contents into the corresponding slot in the template.
<slot> has been around for a while in Web Components; the relevant feature here is probably the (much newer) shadowrootmode attribute: https://caniuse.com/?search=shadowrootmode
But you can already do a lot with regular streaming and using CSS Grid, inline <script> tags, etc to move stuff around the page.
The site loads pretty slowly for me [1], and by the time the site has fully loaded the list has already been completed; there is no streaming from what I can see. Is Firefox the reason here?
The site[0] works for me in Edge and Firefox, as intended. The slow loading is the point: you should see it update in real time, which doesn't happen for you for some reason. As the other comment suggests, maybe something on your network doesn't like tiny (~42 character) HTML fragments.
Also for me, both on Firefox and Edge - I'm wondering if it's to do with being behind a corporate proxy that MITMs everything and might be buffering rather than passing through partial responses, or has a much higher threshold before streaming?
If that is the reason, it'll provide an (even more than usual) poor experience for those behind such a proxy.
The JavaScript code in the article is server-side and not essential - it's just one way to take advantage of open declarative shadow roots, and it could be written in PHP or anything else. It's showing how you can output your HTML in a very different order, with arbitrary delays, and still get the page layout you expect.
The downside here is the need to proxy everything through the same server. That will add latency and possibly throttle bandwidth if that data is coming from third parties.
The scenario where this could be good is something where there are background jobs running behind the service that would have been API-ed through the same web server anyway.
But I have an ingenious solution for the best of both worlds! An Iframe.
What I have been wanting for decades is a native template system in HTML which supports urls. Similar to the script tag, but which loads html:
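Something along these lines, purely hypothetical markup:

    <!-- hypothetical: no such element exists natively today -->
    <include src="/partials/header.html"></include>
    <main>Page-specific content…</main>
    <include src="/partials/footer.html"></include>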