So, the situations in which I would use LaTeX rather than something cruder but friendlier are these: (1) I am typesetting something with nontrivial mathematical formulae in it. (2) I want the excellent typographical quality that comes from, e.g., the very nice Knuth-Plass dynamic-programming algorithm for line breaks. (3) I want to use some clever thing someone has implemented in LaTeX (say, to add Feynman diagrams to what I write). (4) I want to interoperate with other people who are using LaTeX. (5) I want to typeset something big and complicated that triggers misbehaviour in things like Word.
This system, at least so far, (1) doesn't do formulae, (2) doesn't have Knuth-Plass line breaking, (3) isn't compatible with LaTeX, (4) isn't compatible with LaTeX, and (5) takes almost 1s/page to do its thing.
It may very well be an excellent system, and deserve to take over the world of typesetting -- but at the moment it surely shouldn't be called "a modern LaTeX".
I would add that the original title, "Introducing RinohType, the Python document processor", is much better than that of the HN submission. It is not a "modern LaTeX", let alone a "LaTeX rewrite" (which is what the "in 6500 lines of Python" implies).
Yes, I plead guilty to tweaking the title a little for marketing purposes. I'm sorry if that got some people overexcited.
RinohType should eventually be able to replace LaTeX though, so it is not that far-fetched either. Also, the first paragraph of the article should help temper that excitement.
Well, I think we all expected a "modern LaTeX" based on the title, not something that's much closer to a PDF producer than a LaTeX equivalent. The title is the issue.
I might want to use LuaTeX, if I knew what it was. It seems they can't be bothered to actually say what it is on their website, in any of their links, or in the introduction chapter of their manual.
I therefore take it that no-one who actually writes ever uses it.
Not sure if you're trolling, but luatex.org explains it right in the first paragraph:
LuaTeX is an extended version of pdfTeX using Lua as an embedded scripting language. The LuaTeX project's main objective is to provide an open and configurable variant of TeX while at the same time offering downward compatibility.
No, I'm not trolling. I don't know what pdfTeX is either. Why on earth are you telling me what a project does in terms of another project I'm not familiar with?!
Edit: that's what I meant by "none of their links"; the pdfTeX page doesn't say what it does either!
Maybe it's not what you intended, but to me you seem to be proud of your ignorance about a piece of software that's part of computer science history. Instead of writing that comment, you could have learned something by reading about TeX on Wikipedia.
I know what TeX is. I should know what luatex is because I've been to their website, read through some links and even read through the introduction to their manual, and still I don't know.
And just listen to yourself: if I want to know what luatex is I shouldn't find out on their own website, I should have to go to Wikipedia?!
Do you know what it is? You've managed to reply to me twice and in neither of those replies have you simply stated: "luatex is software that does x. You'd want to use it to do y."
Coming from a position such as yours, here's how I interpreted it:
pdfTex will take TeX input and output PDF (and apparently other file formats as well).
LuaTeX should do the same (being a drop-in replacement), the difference being that it can also use Lua scripts to control how the output is formatted (versus everything being written in TeX).
Ok, well, maybe I don't understand TeX at all, because I thought that's exactly what it did.
EDIT: Ahh, wait, I have it! When they say "extension" they mean "implementation"! They actually replace TeX as an engine. Is that right? So, LuaTeX is just an implementation of TeX written in Lua?
Because people do not care about their TeX interpreter. Today, nearly every LaTeX writer uses pdfTeX. If they switch to LuaTeX nothing changes, because LuaTeX is (intended to be) fully backwards compatible. It's when you start writing your own packages that LuaTeX becomes interesting.
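To make that concrete, here is a minimal plain-TeX document exercising LuaTeX's \directlua primitive, which lets you compute things in Lua mid-document (just an illustrative sketch; compile with "luatex hello.tex"):

% hello.tex -- run: luatex hello.tex
Two plus three is \directlua{tex.sprint(2 + 3)}.
\bye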
If you're curious, I wrote implementations of the Knuth-Plass line breaking algorithm in ruby [1] and clojure [2]. Nothing you'd want to use in production but it was a fun experiment.
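The core of it is a surprisingly short dynamic program. Here's a heavily simplified sketch in Python (fixed-width characters, single-space glue, cubic badness; no stretch/shrink or hyphenation, which the real algorithm handles):

def break_lines(words, width):
    # Knuth-Plass-style optimal breaking: minimize the total badness
    # over all lines, rather than filling each line greedily.
    n = len(words)
    INF = float("inf")
    cost = [0.0] * (n + 1)   # cost[i]: best total badness for words[i:]
    split = [n] * (n + 1)    # split[i]: index after the line starting at i
    for i in range(n - 1, -1, -1):
        cost[i] = INF
        line_len = -1        # compensates for the space added below
        for j in range(i + 1, n + 1):
            line_len += len(words[j - 1]) + 1
            if line_len > width and j > i + 1:
                break
            # the last line is not penalized for running short
            badness = 0.0 if j == n else abs(width - line_len) ** 3
            if badness + cost[j] < cost[i]:
                cost[i], split[i] = badness + cost[j], j
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:split[i]]))
        i = split[i]
    return lines

print("\n".join(break_lines(
    "Those who cannot remember the past are condemned to repeat it".split(),
    16)))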
Microsoft Word (alas) or one of its generally even worse competitors. For business documents that need sharing with other people who have to use Microsoft Word so they can share them with other people who, etc.
TeXmacs, a more-or-less-WYSIWYG document editor with approximately TeX-compatible formulae and suchlike. I actually use this much more often than TeX or LaTeX.
Plain text :-).
HTML (either edited by hand -- horrible but in some contexts less horrible than (La)TeX -- or using a WYSIWYGish tool like BlueGriffon).
Someone mentioned Markdown. I've somehow managed to avoid ever using Markdown (not counting things like Stack Overflow and occasional blog comments) but it certainly belongs in the list.
While I understand you generally shouldn't rewrite software from scratch, maybe TeX should be one of the few exceptions to this rule?
I think the rule is better stated as, "It should be a long time before you choose to rewrite usable software from scratch. But after 35 years, you can maybe consider it." TeX is great, but it is definitely showing its age, and it shows a lot of the inconsistencies many (other) programming languages of its age also exhibit. Besides, it's not really a rewrite; this is a new piece of software that does something similar.
Time is pretty much irrelevant, at least directly. The time to rewrite software is when one of the following is true (which are really the same thing from different viewpoints):
1) It has critical defects in serving its intended usage domain that cannot be resolved by incremental updates because they are caused by fundamental architectural features, or
2) You need something to be used in a similar-but-not-identical domain to the originally intended domain of the software, to which it cannot be incrementally adapted due to its fundamental architecture.
And, really, if the software is designed well for maintainability, either of these should be rare: if it is loosely coupled, you can change the low-level implementation details that need to change without touching the high-level organization, or change the high-level organization while preserving the low-level implementation details that are staying the same, in an incremental change.
That's a double-edged sword. On the other hand, TeX isn't changing much, which, given its problem space, is necessary: (La)TeX can still typeset documents written in the 90s. If you started shaking those foundations, you would probably be in a world of pain.
The XML format is a non-starter for me. It adds too much cruft to the text; it requires putting everything inside of open and close tags, including paragraphs.
That's what I thought too. However, there's a plethora of tools that can generate (and parse!) XML for you, e.g. HAML or Markdown. I write all my TeX docs in Markdown and then convert them to TeX, but stuff like figure placement has to be done in weird comments containing TeX, which would be easier with RinohType.
Plus it's already been done to death in XML with XSL-FO, which does the job quite nicely. I don't think it'd be too tricky to rig up an XSLT stylesheet for the XML format here to produce FO and run that through Apache FOP.
LaTeX is much nicer to hand edit, and if you're not hand editing you may as well use Word or something.
That's the thing, though: environments are not the norm.
I'm writing a document. I want the default text entry to be that document. There's already a lot of noise in my Latex sources; I want less, not more. (And I recognize that we actually agree on this.)
This looks great - hopefully it wouldn't be too hard to implement Markdown input as well. I really want to be writing my academic papers in Markdown or some other simple markup (my papers usually have a surprisingly simple structure) and still have them look nice. Making it _much_ easier to create nice-looking templates would be great (maybe even a GUI editor for templates).
However, I would also love to see more movement towards flexible, semantic document formats for, say, academic papers... I want to be able to read papers in my preferred style, whether on my iPad, Kindle or Mac, but still with good support for citations, highlighting, etc. Maybe epub is the way to go; I'm not sure.
Pandoc does almost exactly what you want. Much of my academic writing is written in markdown and then pandoc converts that to LaTeX. For conference papers it doesn't work as well (due to the conferences having specific style requirements) but for notes and presentations it works wonderfully.
Markdown is very limited, and you will soon hit a wall when you move from writing a paper to writing a thesis. For example, if you want to send something to the table of contents that is not the section title itself:
\section[short]{Long Title}
It is also possible within pdfLaTeX or LuaLaTeX to define your own shorter commands.
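And for a ToC entry that isn't attached to any sectioning command at all, standard LaTeX has \addcontentsline; for example (the entry text here is just a placeholder):

\addcontentsline{toc}{section}{Acknowledgements}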
A GUI will eventually slow you down, and it cannot possibly cater for the hundreds of commands in commonly used packages.
You could look at Org Mode.
You can write your document much as you would with Markdown, but then export to many other formats (including LaTeX, and PDF via pdfTeX).
I think Markdown and semantic markup are nearly incompatible, unless you want some kind of highly hacked version of Markdown. You might look into Textile though, which looks similar to Markdown but has built-in support for annotating with HTML classes and IDs and whatnot.
Ah nice! I considered something like this when writing my honours project, but ultimately moved on to other things. I really hope you see this through - but I'd recommend implementing Knuth-Plass basically straight away, I reckon it's what makes the difference!
I looked at the sample output (http://www.mos6581.org/files/intro_template.pdf) and is it just me or does the vertical spacing seem off? (for example, look at the tables and how the text goes right up to kiss the horizontal bars).
To me the sample output doesn't produce any aesthetic pleasure at the moment; I can't imagine using it even for the simplest material, given the choice between it and LaTeX. I hope it will improve.
The title is definitely misleading: "modern LaTeX" it still isn't.
For me, using PDF.js in Firefox for Android, it looks even worse; most of the text is garbled junk characters (only the code blocks appear correctly). It's substituting characters and using a different font which PDF.js could not load. (Another renderer displays it correctly.) Using character substitution as a standard technique is a really bad idea, and it's the only thing preventing me from taking a deeper look with a view to using this as a backend for a play-script/musical/opera-libretto format that I wrote; it makes the document inaccessible to things like screen readers and prevents ready copying of text and search. Is there a good reason why substitution is used? If you're amenable to removing it, I'd probably be pleased to help out with the project.
(This script format tool I wrote is also written in Python, and at present I'm writing to HTML and using wkhtmltopdf to produce a PDF, but that tool is making a mess of the text kerning so that the final result isn't as much of a pleasure to read from as it should be; your tool, however, is producing what are in my eyes quite nice-looking documents---I can easily overlook minor spacing issues at this comparatively early stage.)
Wow, that really is hideous, e.g. the table at the beginning of section III. I wish this guy the best, but TeX was written by a very smart and utterly obsessive man, and typography is an ancient and complex field. I doubt this will work out.
Note that this is a WIP version; you should not judge RinohType on the basis of the example documents for now. I hope you'll understand that at this point I'm focusing on functionality, not on style.
The table rendering got messed up when I rewrote the line spacing code. Fixing the table rendering would just require an adjustment of the table style definition.
"you should not judge RinohType on the basis of the example documents for now. I hope you'll understand that at this point I'm focusing on functionality, not on style."
Then please be more honest. It's not "A modern LaTeX in 6500 lines of Python" -- it's 6500 lines of Python implementing a typesetting system whose output is vastly inferior to TeX's, or even Microsoft Word's. It may have a syntax more comfortable for Python or Ruby programmers. Just say what you've done.
"There is no strict separation of content and style. This is mostly an issue for publishers that want to ensure a consistent style across articles in a journal. With LaTeX, academic authors can always reduce the margins or change the interline spacing to be able to squeeze in more half-truths."
In my research group, there's a story that one of our collaborators once tweaked the interline spacing to get under the page limit. Apparently the end result was that you could riffle through the printed proceedings and tell where his article was, just because the pages were darker.
So yeah, it probably happens. If it's half an hour before the submission deadline and there are four lines left on the page you have to cut, you'll do what you have to to get the job done. =)
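(For the record, the usual squeeze is just a couple of preamble lines; both of these are standard LaTeX commands, with values picked purely for illustration:)

\usepackage[margin=2cm]{geometry}
\linespread{0.97}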
I know lots of people who do it with conference papers (including those published and nominally typeset and printed by Springer), and it's easy to spot. I haven't seen anyone get away with obvious tweaks in journal papers.
When my papers get too long, I take it as an opportunity to obsess over superfluous words and sentences.
And more to the point, shouldn't this be caught by a sub-editor prior to publishing? I find it incredibly hard to believe that publishers would blindly publish what they're sent without first checking it...
> It is worth pointing out the other gripes I have with (La)TeX at this point. I will probably regret this later (I have the impression TeXies can be quite fanatical)
Probably. I applaud efforts to try to rethink typesetting, and this is clearly a substantial accomplishment. Tex is, however, an outstanding design, both in a number of its aspects and as an overall design, and a lot of later effort has gone into remedying its limitations.
> TeX is not transparent. It is a huge, complex system ... With hundreds of megabytes and seemingly millions of files for a typical TeX installation, I have no idea what is going on when TeX processes a document.
Knuth's contribution is, in fact, quite small in terms of Mb. This covers just the Computer Modern Roman fonts and the eufrak maths fonts, and Plain Tex. Most of a system such as a full Texlive install is made up of fonts. It covers a huge number of specialist journal styles, and support for the needs of specialist fields (linguistic glosses, setting code, chemical notations, Feynman diagrams, etc.). The complexity is that of a library, and kpathsea is its index system.
I have the idea that maybe you think a package whose value you don't see is fluff: there is fluff in Texlive, but probably much less than you think.
> The arcane TeX macro language is not accessible to a broad audience. I believe this is why most LaTeX-generated documents you come across have exactly the same (retro) look; very few people are capable of creating new document styles.
There is a point here, but I think the reason is that most people don't want to fuss about with style. CSS makes it easy to experiment with page styles, and still most web pages either follow a standard formula or look very ugly. Understanding markup conventions is not usually going to be the biggest obstacle to achieving good design.
> TeX is not very modern
There is not a real problem outlined here: you could equally say that the basic layout engine has proven itself by being capable of accommodating all these later technologies. The Office 2007 team even rewrote its formula layout engine so that it would conform to that described in the Tex book, and employed Knuth as a consultant.
I'd put it this way: Knuth's code combines a very deep insight into the nature of the requirements of computer typesetting with outstanding implementation skills. The basic design has not really been surpassed.
> TeX’s warnings/errors are often very cryptic. It can sometimes take a long time to figure out what’s wrong.
Yes, very much so. This is probably the best reason to consider alternative document description languages. But note that Tex is very successful at allowing "code" invocation to be mixed with text - I'd like to see rivals that challenge Tex on this point.
> But why do I even need to mess around with all these extension packages when all I’m doing is writing a simple article? Doesn’t this mean that LaTeX should include at least some of the most commonly used packages by default?
The Latex3 team concede this. Context, a rival Tex-based system, does not have this problem.
> This might largely be a solved problem by now, but I remember often running into input and font encoding issues with LaTeX in the past.
Use Xetex or Luatex, and standardise everything on Unicode; these problems go away.
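For example, a minimal Unicode-clean preamble under XeLaTeX or LuaLaTeX (fontspec is the standard package for this; the font name is just an example):

\usepackage{fontspec}
\setmainfont{TeX Gyre Termes}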
> There is no strict separation of content and style.
I'd put it this way: the system offers the possibility of such a separation, but does not enforce it. This is bad if you often have to work with other people's Latex, although in practice, you learn how to deal with it.
The Tex-based algorithm for formula layout used from Office 2007 on is documented in http://www.ntg.nl/maps/38/03.pdf (some of those links have fallen to linkrot).
Very nice project! However, for everybody listening:
I believe that in a project like this, indeed in every project, you should post some kind of document online explaining the overall design, the biggest problems you encountered and how you fixed them, and the limitations of your design choices and why you made them.
Why?
1) Because if I want to code something similar, I don't have to read all your code to understand what is going on; I just need to read a simple(r) English document where you explain it.
2) Because anyone else who wants to get involved in your project will have a much easier/happier time: they won't spend days just hacking around, they'll understand the project in the first couple of hours, and by the third hour they'll be ready to start coding some useful feature or refactoring the code.
In the end, such a document makes it easier for everybody to get involved.
I originally planned to release RinohType into the wild only after I had finished a full review, refactoring and documentation of the code. I also wanted to write some API documentation and a small tutorial, all in the spirit of making a good first impression. Turns out all this takes a huge amount of time! So I've decided to dump the code in its current state onto GitHub, write a blog article about it, and await some valuable feedback while I resume my refactoring and documenting chore.
If you decide to play around with the code, please bear in mind that this is just a preview; your experience might not be as smooth as I intend it to be eventually.
The rendering of the test document is very strange in FF on RHEL 6 on Linux. Acrobat also gave me garbage for a few seconds and then somehow spontaneously figured it out.
I'm skeptical of these sorts of enterprises, but I'm glad the author took the challenge and I look forward to seeing where it goes.
I've noticed these problems on Linux too... On Windows, the PDFs render correctly in SumatraPDF, Adobe Reader, PDF-XChange Reader and IIRC also Foxit PDF Reader.
But I'm sure there's still a bug in my PDF font handling code :)
That’s probably due to a known issue in pdf engines improperly handling fonts. Even Adobe’s own InDesign 4’s built-in pdf engine was plagued with that bug. [1]
In the sample pdf, only the regular font gets messed up by affected pdf renderers (pdf.js in FF, …). That font is TeXGyreTermes-Regular, and it's the only CID (composite) font. In the source, that font was an OpenType font (.otf) which, during pdf creation, got converted into a CID double-byte font, resulting in messed-up Identity-H encoding. (CID double-byte fonts are really meant only for huge Asian-charset fonts.) Likely the pdf engine used wrongly assumes that all Unicode-encoded fonts (.otf) contain huge Asian charsets, and thus converts them into CID.
I guess most newer pdf renderers know how to handle the issue, while the renderer used in FF et al. doesn't. The quick fix, at pdf creation time, is to avoid fonts being converted into CID at all.
LaTeX does a lot of stuff - it'd take a long time to replicate all that functionality.
However, on my todo list is to write some sort of "TeX-down" wrapper for LaTeX - I want to write my dissertation in a modified form of Markdown, with inline LaTeX equations and sectioning of "theorem" and "proof"-type environments, and then run a make to generate the TeX and PDF based off of a type of TeX template. I'm familiar with Pandoc but haven't delved into it far enough to know if it's robust enough to do what I want.
But to expand a little, orgmode has two modes of LaTeX integration. With the first, inline LaTeX is passed through from the text as-is, so you can just use e.g. \emph{} in your orgmode source file. The other is a dedicated LaTeX environment (#+BEGIN_LaTeX ... #+END_LaTeX) that puts the literal LaTeX in the appropriate place within the document. I use its beamer mode and do any TikZ graphics as literal LaTeX code within the org file.
Though I do my normal writing simply in LaTeX, because that just deals more neatly with edge-cases, really.
Great project! Congratulations! I hope you will see this through and make it into a usable typesetting solution in real-world production environments.
Likely a clean, general-public-friendly GUI (web app) could leverage RinohType's back-end power, and thus boost its adoption. Maybe monetize that as a hosted service.
A set of professionally designed stylesheets/document templates would give wider adoption a head start.
From your roadmap, I’d be glad to help you achieve some goals (if you would want me to be of any assistance):
- Provide a number of standard document/page/font styles
- Include font definitions for freely available fonts
I’d be happily designing/creating those stylesheets. Selecting high-quality fonts, and extend them with missing glyphs and OpenType features, for high quality typography. However, speaking as a typographer, don’t do this, ever:
Fake small capitals for fonts that do not provide any
- Advanced typesetting features such as Knuth-Plass … and microtypography
Very interesting, and a great effort you've put into it! I know exactly what you mean by the cleanup taking time. I am at the same stage (making it look good) of a document processing toolchain I am working on, which supports HTML, (La)TeX and plain text output backends. I thought I could be done this year, but it will probably take longer. I started working and thinking on this problem about three years ago and am very careful about its design (its focus is on developer friendliness and extensibility, while keeping it dead simple and modular). It's not even that I expect others to jump on it; after all, people seem to be happy with Org mode and Markdown. But I want to solve my own documentation toolchain needs once and for all.
I am looking forward to seeing how your project turns out. Choosing TeX as the print backend wasn't an easy choice, and I would definitely consider another print media backend if there was one. :)
I am doing it in Common Lisp. I haven't released it yet, but if you drop me an email to max on mr.gy I can let you know when I make progress and also tell you how I designed my system, if you like.

To summarize: come up with a model for documents (e.g. a datastructure: what is it made up of, how is it represented). Then think about how to input a document (this makes a big difference; I made a Markdown-like language) and how you can render documents (html, paper, audiobook, video...). I concentrated on the idea that my documents should represent the structure of content while the type of content (mixed?) is irrelevant.

I then started writing backends that support rendering of sets of content types. E.g. HTML can do a lot, including video, but on paper it's hard to play back audio or video data, so we fall back to a simple url (which sucks on paper too, but we could render QR codes on paper, for instance). So there is a lot of room for specialization. I am trying to keep the core concepts as generic as possible so I can extend the system later on, and maybe some day use my document format to write a song, after I implement musical content types in some backend. You get the idea, I guess.
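If it helps, here's the same idea as a toy sketch (in Python rather than my Lisp, purely for illustration; every name below is made up):

class Node:
    # A document node knows its kind, children and attributes --
    # nothing about output formats.
    def __init__(self, kind, *children, **attrs):
        self.kind, self.children, self.attrs = kind, children, attrs

def render_html(node):
    if node.kind == "text":
        return node.attrs["value"]
    inner = "".join(render_html(c) for c in node.children)
    if node.kind == "paragraph":
        return "<p>" + inner + "</p>"
    if node.kind == "video":
        return '<video src="' + node.attrs["url"] + '"></video>'
    return inner

def render_paper(node):
    if node.kind == "text":
        return node.attrs["value"]
    inner = "".join(render_paper(c) for c in node.children)
    if node.kind == "paragraph":
        return inner + "\n"
    if node.kind == "video":
        # paper can't play video: fall back to printing the url
        return "[video: " + node.attrs["url"] + "]"
    return inner

doc = Node("paragraph",
           Node("text", value="See the demo: "),
           Node("video", url="http://example.com/demo.webm"))
print(render_html(doc))   # HTML backend renders a real video element
print(render_paper(doc))  # paper backend degrades to a plain url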
Regarding hiccup: I wrote an HTML DSL myself, which I also use in my document pipeline, it is released and can be found here:
http://mr.gy/software/macro-html/
By the way, I work professionally with Clojure at the moment, and I'd rather advise against using it. Imho it is not ready. I like some of its ideas, some I like less, but it's still a mess, and the (only!) implementation is not on par with a real lisp yet.
> One aspect that I’m not so enthusiastic about is RinohType’s performance. On my modest Celeron T3000 1.8 GHz laptop, the average rendering time for a page in the RFIC example is a disappointing 0.8 seconds.
Such a project should not have been started in Python, as all current implementations have performance issues unless you are just gluing C wrapper invocations together.
Since I'm working right now on a report generator using LaTeX, I was all excited until I hit this: "Because of the Unicode requirement, I opted to skip Python 2 and go with Python 3 which uses Unicode for all text strings."
Really? Python 2 is still extremely common; it's too bad I can't use this now if it's not backwards compatible.
So is Python 3. They are not mutually exclusive. Python 3 is in Debian oldstable (released in 2011), Ubuntu 10.04 LTS, Fedora since version 13 (from 2010), and openSUSE since 11.2 (from 2009).
Ah, fair enough. I used Lout briefly and had a pretty good experience (went back to LaTeX due to better equation support); from what I recall, one of its design goals was being easy to machine-generate, so it was at least plausible that "compiling to Lout" would be a better option than doing it from scratch. You're right about the lack of momentum and community for the project, though.
I respect LaTeX users, but I believe it should die. However, I see no actual replacement now or in the near future :/
I wish one of the big guys would eliminate it once and for all. But there is little value in developing a replacement for academic paper publishing.
The value of (La)TeX has very little to do with academia. There is no other (end-user) accessible system that produces typesetting anywhere CLOSE to the quality of TeX.
RinohType is interesting to me because I too am frustrated by TeX/LaTeX. I agree with others that RinohType isn't (yet) a "modern LaTeX". But we need more efforts like this (see for example the excellent Lout by Jeffrey H. Kingston).
First, let me make it clear that I'm a big big fan of Knuth and Lamport and I respect the amazing impact that TeX/LaTeX have had in the Math/Science/Engineering communities. They are clearly a lot smarter than I am, and I owe them a debt for their contributions to CS. I've read Knuth's original papers on Alpha-Beta search, LR-parsing, etc., I own almost all of his books, and I've studied the Art of Computer Programming in depth since the first editions.
TeX is an amazing feat of programming. I had a friend that was in the same fraternity as Knuth as a student. This friend was a great programmer and even he was in awe of Knuth's abilities as a programmer. Look at the history of TeX, the decisions and approaches that Knuth considered and undertook. It's absolutely amazing, and then he essentially gave it away, one of the first and most important Open Source projects!
But, please, can't we come up with a new replacement for LaTeX? By now, it should be obvious that there are only about 7 people in the world that really understand LaTeX/TeX (and I'm not one of them). Hang out on http://tex.stackexchange.com and see what I mean. I've used LaTeX for almost 30 years; it's the first tool I reach for when preparing a document. Thirty years of it is enough.
Now, I write my own styles (.sty files) and complex macros, but the experience isn't pleasant. I think it's here that something needs to be done. I've never been able to convince more than ten percent of the software engineers that have worked for me to use LaTeX. It's not that they don't like markup or programming. It's the mess caused by TeX's macro based extension language. This leads to less than helpful error messages, cryptic behavior, and is a barrier to anyone wishing to take advantage of the underlying power of TeX.
I don't think anyone will seriously argue that the TeX macro system is all that elegant and powerful, or that I simply haven't spent enough time with it, but let me say this: I've programmed for 45 years and I've used macros plenty. I've written packages of macros running on IBM (big machine) assemblers, did assembly-language real-time programming for years, implemented macro programming systems like Calvin Mooers' Reactive Typewriter [1] and concatenative programming languages like FORTH. Nothing, to me, is more frustrating than trying to write a sophisticated LaTeX package. I've patched boot loaders from the front-panel switches of mainframes that wouldn't boot, debugged real-time programs containing thousands of lines of assembler, used C++ templates and Haskell types. All of this is easier than writing a fancy new TeX macro. I don't like writing as much as programming; why do the tools have to make it even harder?
Fundamentally, I think that using macros is the wrong way to write extensions and customizations in LaTeX/TeX: the abstractions used to construct anything complex leak too many implementation details. For some of my projects I've found it easier to write custom filters in Python that preprocess input files into plain LaTeX. This observation isn't unique to LaTeX [2].
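A trivial (and hypothetical) example of what such a filter looks like; the real ones do far more, but the shape is the same. Run it as: python filter.py < notes.txt > notes.tex

# Hypothetical, heavily simplified preprocessing filter: turn a tiny
# custom notation into plain LaTeX, so the "extension" lives in
# Python instead of in TeX macros. Reads stdin, writes stdout.
import re
import sys

RULES = [
    (re.compile(r"^@section (.+)$"), r"\\section{\1}"),
    (re.compile(r"\*\*(.+?)\*\*"), r"\\textbf{\1}"),
    (re.compile(r"//(.+?)//"), r"\\emph{\1}"),
]

for line in sys.stdin:
    line = line.rstrip("\n")
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    print(line)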
LaTeX3 and LuaTeX kept me interested in TeX; I hoped they would make programming TeX packages and customization easier. Unfortunately, I don't have enough time left to wait for them; they move at a glacial pace and are essentially indistinguishable from abandonware. As those 7 guys who understand TeX move on to get real jobs, these projects falter and don't go anywhere. The LuaTeX website has been stuck on version 0.6 for thirty months.
Please, can't the community come up with something? Or point me towards a project with a license that I could contribute to.
[1] TRAC (the reactive typewriter system) is a macro-based programming system, described in Computer Lib/Dream Machines, a 1974 book by Ted Nelson, and by its inventor Mooers in CACM, Volume 9, Issue 3, March 1966, pages 215-219.