Is there a fundamental difference from template systems such as ERB? Because I'm having a hard time seeing it (except for the academic prose, and the subtle horror XML always causes me).
It's a language powerful enough to let you design a custom template system, i.e. it's second-order meta-programming, pushing the leverage much farther than most people would even consider offhand. This kind of thing doesn't matter so much when you have a system already described within a few thousand lines, but it pays off as more of the code consists of business logic and data models - stuff that has its own programming ruleset.
An internal, interpreted DSL can be written to do something similar, but I think the point of this tool is to ease the process of writing the DSL itself, and to make the resulting runtime efficient.
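For a flavor of the difference, here's a minimal sketch of my own (the model elements "method" and "public" are invented, not from the docs). Unlike an ERB template, the script can walk the model, filter it, and decide which files to emit at all; dot-prefixed lines are GSL script, everything else is literal template output:

    .for class.method where defined (method.public)
    .   output "$(method.name).h"
    void $(method.name) (void);
    .endfor

Each public <method> in the model gets its own generated file; swap the template body and the same model drives a completely different target.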
It was a good night... I skipped that article, as I avoid unnecessary negativity on subjects I already know about. To heck with a depressing piece, ya know? Then I clicked the link to find Pieter wrote it and is about to die. I had hoped I'd meet him one day, as some awesome project or paper full of insights would surely have come of it. Couldn't be any other way with him: his work was like the enterprise version of Bernstein, a series of techs balancing effectiveness, correctness, and speed unusually well. His work was a steady run of attempts at The Right Thing mindset, with quite some impact.
I'll miss him. (sighs) Wrote that here in case he sees the higher ranked article. He said he didn't want negativity. ;)
That readme is a massive intro that doesn't explain very much at all.
tl;dr: Define a model in XML, provide it a template script, and it'll output some text.
Sounds like XSLT? Yep. With a custom DSL taped on the side for scripting.
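Concretely, the flow is something like this - a toy sketch of my own, so the file names and attributes are invented, not from the README. The model is plain XML:

    <class name = "animals">
        <item name = "cat" sound = "meow" />
        <item name = "dog" sound = "woof" />
    </class>

The script mixes GSL commands (the dot-prefixed lines) with literal template text:

    .output "animals.txt"
    .for class.item
    The $(item.name) says $(item.sound).
    .endfor

If I have the invocation right, something like gsl -script:animals.gsl animals.xml writes animals.txt with one line per <item>: "The cat says meow." and so on.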
Also...
Template-driven code generators that use symbolic insertion to inject meta-data into a template. This technology makes it much easier to write new ad-hoc templates. Typical examples are any technology that produces dynamic web pages.

Scripted code generators that use a high-level language to manipulate meta-data and then inject it into templates. This technology makes it much easier to write new ad-hoc code generators. Typical examples are XSLT, GSL and some other scripted code generation languages.
These are just arbitrary terms you're inventing.
Lots of template engines support the ability to manipulate the data before injecting it into a template.
Having a special, limited little DSL for scripting is an antipattern, not a good thing.
Lots of people use translation tools (CoffeeScript, TypeScript, Babel, Sass, etc.) that map from one language to another 'similar' language, even to other code, and it works well.
I think we can comfortably say that these days, people are pretty happy with the idea of code generation; ...but I think few people will find anything in GSL that is not covered better elsewhere.
Can you imagine modeling your TypeScript in XML so you can generate some JavaScript from it? Ugh.
Perhaps there's some use for this sort of stuff in protocol serialization, but you have to do so much of the heavy lifting yourself that I'm not sure why you'd bother over an existing solution like protocol buffers.
See my links in my main reply to the OP to get the big picture. As far as XSLT goes, did you mean the 1985-1995 work he did was "just a 1998 technology?" Or the other way around? ;)
Note: I used XML and XSLT back when they first came out. I had to write custom tech in Perl, etc. to get them to work, and speed was not in the description. While they were garbage, his methods produced the Xitami web server, which kicked all kinds of butt. Too bad he switched to XML-based tech, as that was my only gripe with it.
Look, this can be hard to grasp. In zproject we write models of our library APIs. From these we get stuff as exotic as bindings in Java, Python, Ruby, and Node, plus all the scripts to build them.
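Roughly, such an API model looks like this (simplified and from memory, so don't treat it as zproject's exact schema):

    <class name = "zdemo">
        A minimal demo class
        <method name = "hello">
            Say hello to someone
            <argument name = "who" type = "string" />
            <return type = "integer" />
        </method>
    </class>

From that one file, the GSL templates can emit the C header, the Java/Python/Ruby/Node binding code, and the build glue, all kept in sync.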
Think of it as a generic templating library/system. It has no knowledge of the output, or the differences between targets. The benefit is that you write one "unified" model, or some other sort of input, then write the translation to different languages (it can be just one if you want), and have it generate everything for you automatically based on the rules you coded into the translation logic.
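A hedged sketch of the "one model, several translations" idea (again my own invented example, not from any real project). A tiny struct model:

    <struct name = "point">
        <field name = "x" type = "int" />
        <field name = "y" type = "int" />
    </struct>

And one script that emits both a C header and a Python class from it:

    .output "$(struct.name).h"
    typedef struct {
    .for struct.field
        $(field.type) $(field.name);
    .endfor
    } $(struct.name)_t;
    .output "$(struct.name).py"
    class $(struct.name):
        def __init__ (self):
    .for struct.field
            self.$(field.name) = None
    .endfor

Add a third .output section and the same model grows a third target; the model itself never changes.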
"I'm not sure why you'd bother over an existing solution like protocol buffers."
For context, protocol buffers did not exist when GSL was devised in 1995 (3 years before Google was founded). So in fact GSL _was_ the existing solution when Google came up with protocol buffers.
That GSL is still going strong 21 years after inception is a pretty solid testament to its utility.
There is more to GSL than XML processing. GSL was used to write ZeroMQ along with many other pieces of software. GSL can also be used to write FSMs (finite state machines), and an earlier variant called Libero was used to write the very well regarded Xitami multithreaded web server.
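The FSM flavor looks schematically like this - a from-memory, zproto-style sketch, so the exact element and action names may differ from the real tooling:

    <class name = "hello_server">
        <state name = "start">
            <event name = "HELLO" next = "ready">
                <action name = "send" message = "WORLD" />
            </event>
        </state>
        <state name = "ready">
            <event name = "GOODBYE">
                <action name = "terminate" />
            </event>
        </state>
    </class>

A GSL script then walks the states and events and emits the dispatcher in C (or whatever target you template), so the protocol logic lives in one reviewable file.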
I just installed it yesterday so haven't been able to fully suss out how I will use it.
Hintjens also wrote the CLASS style guide / subset of C usage, which is used for ZeroMQ etc. I don't program in C and thus can't offer an informed opinion about it - but perhaps another HN'er can chime in on its quality.
The ZeroMQ core library does not use gsl. We use it in two projects, zproto and zproject, which are nice examples of what is possible without going too meta. I started explaining zproject in my [Scalable C](http://Scalablec.org) book.
Are there useful heuristics on where GSL is most applicable? Where is it relevant and powerful and not too meta, with just the right amount of abstraction to represent a problem and its solution?
For example, "GSL is likely a good choice at <abstract problem description>, such as <some very concrete examples>. And it may not be appropriate for <abstract problem description>, such as <other concrete examples>. Because of <some specific reasons>."
I looked through the GSL website and also Scalable C and wasn't able to find any descriptions that place limits on the applications of GSL. A description of what a complex something isn't, from an expert, is almost as important as what that thing is.
Your writing style in Scalable C is incredible and enlightening. I will be studying and referencing that book, even if I'm not directly coding C.
Given how much focus he places on code generation and metaprogramming, it's ASTounding that there is not a single reference to Lisp to be found in that wall of text. I agree with a lot of what he says, but his total ignorance of Lisp metaprogramming (it must be ignorance rather than deliberate omission) weakens a lot of his arguments for GSL specifically.
Back in 1982, Synon Inc. introduced its Synon/2 CASE tool based on similar ideas. The product targeted minicomputers using green screens. The same model could produce target code in COBOL, RPG III, or PL/I. We then toyed with the idea of meta-models which could model vertical applications such as accounting systems or inventory control.
We believed that the next generation of modeling would use logic programming with constraint solving to make smart models (at the time, the Japanese 5th-generation project was underway). Alas, the project proved too difficult and was abandoned. Perhaps it is time to revisit the idea.
I submitted GSL to the list of static website generators a while ago. It was one of the few static generators written in C, which is nice for small embedded systems.
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...
Their website is a slide-show demonstrating their amazing work:
http://www.imatix.com/
Generating servers from state machines and such:
http://hintjens.com/blog:75
SMT kernel for portable, multi-threaded, fast code:
http://legacy.imatix.com/html/smt/
Web server (old and new):
https://en.wikipedia.org/wiki/Xitami
http://xitami.wikidot.com/main:start
One of the best middleware ever:
http://zeromq.org/