The spate of rewrites of JS tools in compiled languages continues. Here are my problems with them:
1. The need for a 50-100x perf bump is indicative of average projects reaching a level of complexity and abstraction that's statistically likely to be tech debt. This community needs complexity analysis tools (and performant alternative libraries) more than it needs accelerated parsers that sweep the complexity problem under a rug.
2. (more oft-cited) The most commonly and deeply understood language in any language community is that language itself. By extension, any tools written in that language are going to be considerably more accessible to a broader range of would-be contributors. Learning new languages is cool, but diverging on language choices for core language tooling is a recipe for maintainer burnout.
> The need for a 50-100x perf bump is indicative of average projects reaching a level of complexity and abstraction that's statistically likely to be tech debt.
I don’t think this is the right way to look at it. The issue is that JavaScript developers have been writing servers, build tools, DevOps tools, etc., in JavaScript because that’s the language they are expert in, but JavaScript was never the right choice of language for those types of programs. The whole industry is caught in a giant case of “If all you have is a hammer…”.
I do web development in JavaScript because JavaScript is the language of the browser. But I write all of my own build and DevOps tools in Java, including Sass compilation, bundling, whatever you want. There’s no contest between the Java runtime and the JavaScript runtime for that kind of work.
I think it’s backwards to see this as a 50-100x performance boost because Rust was used. That same performance increase could be had in a number of languages. The real issue is a 50-100x performance hit was taken at the outset simply by using JavaScript to write tooling.
Edit: just to put it in perspective, a 50-100x speed-up in build time means that what would currently take a minute and a half using JS tooling could be accomplished in a second or two using a fast runtime. A minute and a half of webpack in the blink of an eye.
As I almost always think to myself whenever I see some program braying about its 25x speed improvement in some task: the reason you can have a 25x speed improvement is that you left that much on the table in the first place.
I don't want to be too hard on such projects; nobody writes perfect code the first time, and stuff happens. But in my mind this does tend to tune down my amazement at such announcements.
And your last edit is really the important point. That level of performance improvement means that you are virtually certain to move up in the UI latency numbers: https://slhenty.medium.com/ui-response-times-acec744f3157 Unless everything you were doing is already in the highest tier, this kind of move is significant.
> There’s no contest between the Java runtime and the JavaScript runtime for that kind of work.
I don't mean to be facetious here, but... citation needed.
There are a lot of assumptions about language performance being made throughout the comment threads on this page that seem based more on age-old mythology than on reality.
JavaScript is ~8x slower and Python ~30x slower on average than Java / Go / C++, which are all quite close to one another.
A funny aside: I always believed that Java is slow because I heard it repeated so many times. I internalized that bit of age-old mythology. But lately as I’ve gotten more focused on performance, I’ve come across a lot of hints in various talks and articles that Java has become one of the go-to languages for high-performance programming (e.g. high frequency trading). So, I hear you about the mythology point.
How often does an average X developer delve down to compiler details and contribute to static analysis tooling?
Metaprogramming and compiler/language-analysis tooling are a jump above your run-of-the-mill frontend code or CRUD backends.
Sort of elitist, but IMO devs capable of tackling that complexity level won't be hindered by a different language much.
And Rust is really tame compared to, say, C/C++. The borrow checker is a PITA, but it's also really good at providing guardrails in manual-memory-management land, and the build tooling is really good. I don't know enough about Zig, but I get the impression that Rust's guardrails would help developers without a C/C++ background contribute safe code.
You could argue Go (and similar languages) is an alternative for this use case, but it brings its own runtime/GC, which complicates things significantly when you're dealing with multi-language projects. There's real value in having a simple C FFI and minimal dependencies.
> Sort of elitist, but IMO devs capable of tackling that complexity level won't be hindered by a different language much.
Not elitism, just an honest appraisal, though I think a flawed one, as competency isn't linear, it's heterogeneous - you'll find the most surprising limitations accompanying the most monumental talent. Language fixation is a common enough one, but even beyond that, the beginner-to-expert curve on each language shouldn't be underestimated, regardless of talent or experience.
In particular, when it comes to JavaScript there's a tendency to believe the above by virtue of the community being very large & accessible - bringing in a lot of inexpert contributors, especially from the web design field. This isn't fully representative of the whole, though: there are significant, solid minorities of hard JS experts in most areas.
> How often does an average X developer delve down to compiler details and contribute to static analysis tooling?
I've done this a few times for Go. One of the nice things about Go is that this is actually pretty easy. I've written some pretty useful things with this and gotten good mileage out of it. Any competent Go programmer could do this in an afternoon.
I don't really know what the state of JS tooling on this is, but my impression is that it's a lot harder, partly because JS is just so much more complex of a language, even just on the syntax/AST level. And TypeScript is even more complex.
To be fair, the AST structure could also be implemented more efficiently even without better control over memory layout. The JS ecosystem standardized on polymorphic ASTs, which in retrospect seems dumb, but that is not the result of any fundamental limitation in JS.
E.g. in ESTree, evaluating an expression as common as `node.type` is actually really expensive -- it incurs the cost of a hashmap lookup (more or less) where you'd expect it to be implementable with plain pointer arithmetic.
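To make this concrete, here's a minimal sketch of the two representations (TypeScript; all names are hypothetical, not taken from any real parser):

```ts
// ESTree-style: every node is a plain object whose shape varies by kind.
// With many different shapes flowing through the same traversal code,
// property reads like `node.type` tend to degrade toward dictionary
// (hashmap-like) lookups in the engine.
const estreeNode = {
  type: "BinaryExpression",
  operator: "+",
  left: { type: "Identifier", name: "a" },
  right: { type: "Literal", value: 1 },
};

// Flat alternative: numeric kind tags and child links in typed arrays.
// Reading a node's kind is an integer index into a Uint8Array -- roughly
// the pointer arithmetic you'd write in a systems language.
enum NodeKind { Identifier, Literal, BinaryExpression }

const kinds = new Uint8Array(1024); // kind tag, indexed by node id
const lhs = new Int32Array(1024);   // left child node id (if any)
const rhs = new Int32Array(1024);   // right child node id (if any)

function kindOf(nodeId: number): NodeKind {
  return kinds[nodeId] as NodeKind; // plain array load, no hashmap
}
```

The flat encoding is far less ergonomic to build and traverse by hand, which is presumably part of why the ecosystem settled on polymorphic objects in the first place.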
I get what you're saying, but you've missed my point.
You're optimising your execution, but there are trade-offs: you need to think holistically about optimising your software development model. There's little point in having the most efficient abandonware.
A JS tool may be technically suboptimal but that's not a problem unless AST size is a bottleneck.
> AST data structures can be implemented much more efficiently with better control over memory layout
I assume you're right but I'm not sure I fully understand why this is the case - can you give examples of how a data structure can be implemented in ways that aren't possible in JS?
Disagree with 1. Most large JS projects I’ve worked on have been relatively high in necessary complexity (as opposed to accidental complexity); probably because many JS projects are relatively simple applications and relatively new (by the standards of enterprise software).
There is also abundant complexity analysis tooling for JS too. When I worked as an architect at a large telco we had this tooling in CI. It revealed some code smells and areas needing refactoring but didn’t really signal anything especially terrible.
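For a concrete example of what's available (not necessarily what we ran at the telco): ESLint ships a built-in `complexity` rule that flags functions above a cyclomatic-complexity threshold. A minimal flat-config sketch, with the threshold of 10 being an arbitrary choice:

```ts
// eslint.config.js (ESLint flat config; plain JS, also valid TypeScript)
// Warn on any function whose cyclomatic complexity exceeds the max.
export default [
  {
    rules: {
      complexity: ["warn", { max: 10 }], // 10 is a tunable threshold
    },
  },
];
```

Rules like this surface hot spots, but they can't tell you whether the complexity is necessary or accidental - that part is still a judgment call.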
Software tooling makes developers more productive than ever, and product requirements have grown to use that capacity. It’s definitely not a load of tech debt.
Not sure where you've worked or what you've worked with, but everything you've described is the opposite of the JS projects I've encountered (multiple companies, hundreds of JS projects).
> There is also abundant complexity analysis tooling for JS too.
I would highly appreciate recommendations here. I also wonder: does your review indicate that the projects being analysed had little wrong, or that the tools were not very good at identifying problems?