I always wondered why this was never implemented. It seems like such a basic piece of functionality, one that would let you comfortably build websites in a modular way without scripts or server-side processing.
The last example reminds me of the once-proposed html imports feature. It was being pushed by Google back when Polymer was a thing, and it would've given us actual web components delivered as html documents with full support for html, css, and js, just like we use them normally. If I remember correctly, there were a few issues with the approach that needed to be resolved, and nobody stepped up to make it a reality. So instead we got the custom elements spec years later, which only implemented a small percentage of what made html imports useful.
xslt is a native template system that has includes: <xsl:copy-of select="document('/header.html')"/>
xhtml/xslt still works today (though browsers are stuck on v1). It's a declarative template system with a pretty powerful pattern matching system (xpath). You can also extract fragments from external documents or store them in variables, e.g. <xsl:copy-of select="document('/content.html')/body/div[2]/ul/li/a/@href"/> will copy all the urls from a list inside the 2nd div. Or instead of the second div in the body, you could search for a div with an id, or specific attributes, or specific children, or whatever pattern you want.
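For anyone who hasn't seen the browser-side setup, it's roughly this (file names invented here): the XML document points at a stylesheet, and the stylesheet pulls fragments in with document().

```xml
<!-- page.xml -->
<?xml-stylesheet type="text/xsl" href="/site.xsl"?>
<page><title>Hello</title></page>

<!-- /site.xsl -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <body>
        <!-- the included file must itself be well-formed XML/XHTML -->
        <xsl:copy-of select="document('/header.html')"/>
        <h1><xsl:value-of select="title"/></h1>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```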
Why webdevs forsook xml will always be a mystery to me. Particularly given that they now use things like JSX.
I can only speak for myself, but the few times I encountered xslt in the past it was in the context of bulky and verbose SOAP services. The syntax felt off to me (and still does).
I think this was at the time I was learning jquery and using css selector syntax for finding elements, and the difference between that and xpath was pretty stark.
I basically (probably with insufficient knowledge) filed xslt away with vb and tables-for-formatting as old, complex, and soon to be replaced.
The syntax felt off to Juniper, too, so they made their own: https://juniper.github.io/libslax/slax-manual.html. Loses homoiconicity to some extent, sure, but outputting XSLT using XSLT never was particularly pleasant.
I just recently moved my personal homepage to XML + XSLT. Not a pro at it at all though. I did enjoy the process, I like the idea of building the website out of data + templates, but that's not unique to XSLT.
Besides some papercuts and browser bugs like "refreshing the page desyncs the inspector", the main issue I find with it is: how do I modify things dynamically?
By the time JS runs, it's on the resulting HTML DOM, not the XML source, so I can't add elements dynamically respecting the XSLT stylesheet?
Also, XML must be well-formed and have a single root, so I can't stream it out from the server piecewise?
With those 2 limitations, I don't see any advantages over any other server-side templating language; it just duplicates work that could happen once in an SSG, or at least be cached in a CDN, onto every client computer.
Right, unless you run a javascript xslt processor, you can't do things after page load. There's at least one javascript implementation that lets you do data binding to do dynamic page updates, but I'm not familiar with it.
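(To be fair, browsers do expose the built-in engine to script as XSLTProcessor, so you can re-run a transform after page load yourself; a rough sketch, with made-up URLs and target id:)

```html
<script type="module">
  // re-apply the stylesheet from script; URLs and #target are placeholders
  const parse = (text) =>
    new DOMParser().parseFromString(text, 'application/xml');
  const [xml, xsl] = await Promise.all([
    fetch('/data.xml').then((r) => r.text()),
    fetch('/site.xsl').then((r) => r.text()),
  ]);
  const processor = new XSLTProcessor();
  processor.importStylesheet(parse(xsl));
  document.querySelector('#target')
    .replaceChildren(processor.transformToFragment(parse(xml), document));
</script>
```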
The main advantage for simple things is that you don't need an application server or compiler. It's all static files, and it's all built right into the browser so easy to get started with. For less simple things, it should be easier to cache since you can separate out user data and have the bulk of the information you send be the static template (which the user can cache).
I suppose maybe that last point is why people use javascript frameworks for static pages (so they can send a static, cached js file and then small payloads for user data), which seems like overkill to me.
It would be nice if browsers supported streaming xml. Of course streaming xml exists (e.g. XMPP). It's not really any different from json: the spec says you need a single root object, and then someone wrote a spec (JSON Lines, for instance) that says you don't.
One of the cool things that you can do with client-side includes and shadow DOM is render the included HTML into a shadow root that has <slot>s, so that the child content of the include element is slotted into a shell implemented by the included HTML.
This lets you do things like have the main page be the per-page content and the included HTML be a heavily cached site-wide shell, and then another per-user include with personalized HTML - all cached appropriately.
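A minimal sketch of that pattern, assuming a hypothetical `<html-include>` element (the element name, attribute, and paths are all invented here):

```html
<html-include src="/shell.html">
  <main slot="content">Per-page content…</main>
  <section slot="user">Per-user, personalized content…</section>
</html-include>

<script>
  // hypothetical element: fetch the (heavily cached) shell into a shadow
  // root; the element's children are then slotted into the shell's <slot>s
  customElements.define('html-include', class extends HTMLElement {
    async connectedCallback() {
      const shadow = this.attachShadow({ mode: 'open' });
      const res = await fetch(this.getAttribute('src'));
      shadow.innerHTML = await res.text();
    }
  });
</script>
```

where /shell.html would contain the site-wide markup plus `<slot name="content">` and `<slot name="user">` elements.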
> This lets you do things like have the main page be the per-page content and the included HTML be a heavily cached site-wide shell, and then another per-user include with personalized HTML - all cached appropriately.
If you can do your includes at page load time, you've been able to do this for 25 years with xslt; it's been built into the browser almost since the beginning of the web!
I've played around with doing exactly that: include a `/myinfo.xml` document that has information about your user, and then the rest of the page template just grabs `$myinfo/user/@name` or whatever wherever it needs. The neat thing is it has graceful degradation by default: if the request fails, then your include will be empty and you can treat it as a logged out page. So you can e.g. display the username and logout button in the top right if logged in, or a login button if not. You can also e.g. include a CSRF token in your info document and plop that into any forms in your page by just doing `value="{$myinfo/csrf-token}"` or whatever.
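In sketch form (the structure of myinfo.xml is invented here; the expressions are the ones above):

```xml
<!-- near the top of the stylesheet -->
<xsl:variable name="myinfo" select="document('/myinfo.xml')"/>

<!-- inside a template: logged-in vs. logged-out header -->
<xsl:choose>
  <xsl:when test="$myinfo/user">
    <span class="username"><xsl:value-of select="$myinfo/user/@name"/></span>
    <a href="/logout">Log out</a>
  </xsl:when>
  <xsl:otherwise>
    <a href="/login">Log in</a>
  </xsl:otherwise>
</xsl:choose>

<!-- CSRF token dropped into a form via an attribute value template -->
<input type="hidden" name="csrf" value="{$myinfo/csrf-token}"/>
```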
The "template" use case was a very common one from iframes/framesets in the early 2000s, and the markup often looked a lot like your desired example.
My memory's hazy, but I remember it becoming more and more complex to maintain as the web's security and performance model evolved, as you needed to manage and secure all of these disparate, discrete DOM trees in the same page context.
Wouldn't what you suggested have just been "reinventing" frames/framesets if you added the ability to arbitrarily pull in templates via the network?
> I always wondered why this was never implemented
Wasn't this in HTTP/1.1 (1997), where you could send chunks to anything that could have a target attribute? (As multipart HTML relied on http chunks, which had poor support in the Windows http stack, this quickly vanished, and in the 2000s support was stripped from browsers. I think Chrome was the first to do so. Personal note: I once wrote a chat server which relied on this technology, and it worked pretty well, outside of IE.)
Another way this was once supported was in Netscape 4.0's layers (1997), which could have a `src` attribute and worked just as you would expect. (However, support for this vanished within the first few iterations, around the same time JS-styles were switched to CSS. It had definitely been a feature in the NS 4.0 prerelease versions. The capability to re-link the `src` attribute by JS or by a link target was especially nice.) NS 4 also featured extended HTML entities, which could render JS expressions in place, and it provided a mechanism for conditional comments – so, had we followed that route, HTML would already have been a templating language.
Probably because the HTML specs were quasi-frozen for a long time. As another comment said, XSLT supports it. XHTML conformance would have made the web development world stricter and more flexible. Thank you, Microsoft, for holding the web back.
Funny enough, the first and last time I used SSIs was around 2020. The project I worked on had a heavily cached landing page (for performance reasons) but needed a way to swap out the homepage banners based on product releases, and the easiest way around that was to output an SSI directive for a separate banners endpoint.
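The directive in the cached page looked roughly like this (the endpoint path here is made up):

```html
<!--#include virtual="/banners" -->
```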
The lack of native HTML includes is a big reason why I picked up just enough PHP to write functional pages back in the early-to-mid 00s. Not having to manually propagate changes to e.g. navbars across a whole site was mind-blowing.
I’m all for standardizing JSX, but I think we should be clear that its only similarity to E4X is syntax. The latter corresponds directly to the browser/HTML/XML DOM, whereas the former corresponds to nothing in particular except the semantics you might give it. I’d also be interested in a revival of E4X as a potential implementation of standardized JSX, but I’d be disappointed (and I think many JSX users would be too) if the standardization coupled the two completely.
All of the libraries you've listed offer features above and beyond what we're talking about here. Plus, web components exist and do most of it, and yet the libraries still exist.
Yes, many frameworks including svelte took heavy inspiration from various parts of the Web Components specs, including named slots. So, this is the original thing that svelte and others copied.
That was like what ESI (Edge Side Includes) allowed from Akamai. Perhaps, had the founder[1] of Akamai not been killed in the World Trade Center attacks, we'd have had that in the standard...
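For reference, an ESI include looks like this (the fragment URL is invented):

```html
<esi:include src="/fragments/banner.html" onerror="continue"/>
```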
Includes are different from this. Pretend the template is site header, site nav, body content, and footer, each being a slot. The server then streams body content, site header, site nav, and footer, in that order. Now the more important content gets rendered first without any JS (see the sketch below).
This technique would be a lot more usable, though, when shadow roots become openly stylable. It is kinda funky having to apply your CSS separately to the layout regions that live in your shadow dom.
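A sketch of the streaming idea using declarative shadow DOM, so no JS is needed (the element name and slot names are invented); display order is decided by the slot order in the template, not by the source order:

```html
<page-shell>
  <template shadowrootmode="open">
    <!-- layout order: what the user sees, top to bottom -->
    <slot name="header"></slot>
    <slot name="nav"></slot>
    <slot name="content"></slot>
    <slot name="footer"></slot>
  </template>
  <!-- stream order: the important content arrives and renders first -->
  <main slot="content">…</main>
  <header slot="header">…</header>
  <nav slot="nav">…</nav>
  <footer slot="footer">…</footer>
</page-shell>
```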
What I have been wanting for decades is a native template system in HTML which supports urls. Similar to the script tag, but one which loads html, something like this hypothetical markup:
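```html
<!-- hypothetical syntax -->
<include src="/header.html"></include>
```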