
> I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.

Prisma is currently being rewritten from Rust to TypeScript: https://www.prisma.io/blog/rust-to-typescript-update-boostin...

> Yet projects inevitably get to the stage where a more native representation wins out.

I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg refers to it) an "embarrassingly parallel task" (because each source file can be parsed independently) and a task that requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s

We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.

[1] https://github.com/tc39/proposal-structs?tab=readme-ov-file
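For context, here's a minimal sketch of what's already possible today: worker threads sharing raw bytes through a SharedArrayBuffer. It shares integers, not structured objects like AST nodes, which is exactly the gap the Structs proposal targets.

  const { Worker, isMainThread, workerData } = require('node:worker_threads')
  
  if (isMainThread) {
    const shared = new SharedArrayBuffer(4) // raw bytes, visible to both threads
    const counter = new Int32Array(shared)
    new Worker(__filename, { workerData: shared })
    Atomics.wait(counter, 0, 0) // block until the worker writes
    console.log('worker wrote:', counter[0])
  } else {
    const counter = new Int32Array(workerData)
    Atomics.store(counter, 0, 42) // visible to the main thread immediately
    Atomics.notify(counter, 0)
  }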


I think the Prisma case is a bit of a red herring. First, they are using WASM, which is itself a low-level representation. Second, the performance gains appear primarily in avoiding the marshalling of data from JavaScript into Rust (and back again, I presume). Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.

As for the "compilers are special" reasoning, I don't subscribe to it. I suppose because it implies the opposite: something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the latter in reality, and so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects, so it is wise to stay in JavaScript. The old reasons I would choose a scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.


> First, they are using WASM, which is itself a low-level representation.

WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.

> Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.

My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman

> As for the "compilers are special" reasoning, I don't subscribe to it.

Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.

> The old reasons I would choose a scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.

I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]

[1] https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...


I'm not denying the facts of the matter; I am denying the conclusion. The circumstances of the situation are relevant. Marshalling costs across IPC boundaries come into play in every single possible situation regardless of language. It is why shared memory architectures exist. It doesn't matter what language is on the other side of the IPC: if the performance gained by using a separate process is not greater than the cost of the communication, then you should avoid the IPC. One way to avoid that cost is to share the memory. In the case of code already running in a JavaScript VM, the easiest way to share that memory is to do the processing in JavaScript.

That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.

And it in no way applies to the point I am making, where I explicitly question "starting a new project", for example "my default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.

The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from an OS socket/file-descriptor and writes data to an OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.


Rather than fixating on this single Prisma example, I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages.

First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem domain.

Low level languages tend to have a higher barrier to entry, which as a result means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but oftentimes when determining which data structures and techniques to use within a specific language. I've very seldom found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code, not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.
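To make the ergonomics gap concrete, a quick sketch (contrived names, purely for illustration):

  // A plain reference: you just use it.
  const cache = { hits: 0 }
  cache.hits++
  
  // A WeakRef: every access goes through deref(), which can return
  // undefined once the target has been garbage collected.
  const weakCache = new WeakRef({ hits: 0 })
  const target = weakCache.deref()
  if (target !== undefined) target.hits++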

This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s


> your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages

That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.

So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.

> Low level languages tend to have a higher barrier to entry,

I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.


> That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements.

The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.

> I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.

And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.

I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.


Ah, now we're at the dictionary definition level. So let's check Google:

    Inevitable:
          as is certain to happen; unavoidably.
          (informal) as one would expect; predictably.
          "inevitably, the phone started to ring just as we sat down"
Which interpretation of the word is "good faith" considering the rest of my post? If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"?

It is Hacker News policy and just good internet etiquette to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.

edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?


I wrote 362 words on why language rewrites are a faulty indicator of language quality with multiple examples and anecdotes, and you hyper-fixated on the very first sentence of my comment, instead of addressing the substance of my claim. In what alternate universe is that a good faith argument? If you were truly arguing in good faith you'd restate your position in whichever way you'd like your argument represented, and then proceed to respond to something besides the first sentence. Regardless of how strongly or weakly you believe that "native representations win out", my argument about misusing language rewrite anecdata still stands, and it would have been far more productive to respond to that point.

> If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?

If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.


You said: "Inevitable means impossible to evade. That's about as close to a black and white statement as possible."

I used Google to point out that your argument, which hinged on your definition of what the word "inevitable" means, is the narrowest possible interpretation of my statement. An interpretation so narrow that it indicates you are arguing in bad faith, which I believe to be the case. You are accusing me of making an argument that I did not make by accusing me of not understanding what a word means. You are wrong on both counts, as demonstrated.

The only person thinking in black and white is the figment of me in your imagination. I've re-read the argument chain and I'm happy leaving my point where it is. I don't think your points, starting with your attempt at a counterexample with Prisma, nor your exceptional compiler argument, nor any of the other points you have tried, support your case.


> which hinged on your definition of what the word "inevitable" means, is the narrowest possible interpretation of my statement.

My argument does not hinge upon the definition of the word inevitable. You originally said "I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language."

I gave a relatively thorough accounting of why you've observed this, and why it doesn't indicate what you believe it to indicate here: https://news.ycombinator.com/item?id=43339297

Instead of addressing the substance of the argument you focused on this introductory sentence: "I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages."

Regardless of how narrowly or widely you want me to interpret your stance, my point is that the data you're using to form your opinion (rewrites from higher to lower level languages) does not support any variation of your argument. You "can't think of a time a high profile project written in a lower level representation got ported to a higher level language" because developers tend to be more hesitant about reaching for lower level languages (due to the higher barrier to entry), and therefore are less likely to misuse them in the wrong problem domain.


> The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions.

Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.

> And in the case of "reads data from an OS socket/file-descriptor and writes data to an OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.

Arguably Go is a scripting language designed for exactly that purpose.


I wouldn't think choosing a native language over a scripting language is a "gamble" but I suppose that all depends on ability and risk tolerance. I think it would be relatively easy to develop using Rust, Go, Zig, etc.

I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language) I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc. which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripted languages tend to have performance below static binaries (like Go, Rust, C/C++) and usually below byte code interpreted languages (like C# and Java).


The fact that many software products are moving to lower-level languages is not a general point in favour of lower-level languages being somehow better—rather, it simply aligns with general directions of software evolution.

1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.

2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.

Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience.

Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, other things equal the second instance still wins over the first—it still takes less time to reach the usefulness threshold and start gaining adoption. (And if you are making a religious argument about omniscient entities for which there is no meaningful difference between those two cases, which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever the level of abstraction required, coming any year, then you should double-check whether if they do arrive anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, better have that genie conjure you some money lest your family goes hungry.)


I'm not here to predict the future, rather to reconsider old assumptions based on new evidence.

Of course, LLMs may stay as "autocomplete" forever. Or for decades. But my intuition is telling me that in the next 2-3 years they are going to increase in capability, especially for coding, at a pace greater than the last 2 years. The evidence that I have (by actually using them) seems to point in that direction.

I'm perfectly capable of writing programs in Perl, Python, JavaScript, C++, PHP, Java. Each of those languages (and more actually) I have used professionally in the past. I am confident I could write a perfectly good app in Go, Rust, Elixir, C, Ruby, Swift, Scala, etc.

If you asked me 6 months ago "what would you choose to write a basic CRUD web app" I probably would have said TypeScript. What I am questioning now is: why? What would lead me to choose TypeScript? Do the reasons I would have chosen TypeScript continue to make sense today?

There are no genies here, only questioning of assumptions. And my new assumptions include the assumption that any coding I would do will involve a code assisting LLM. That opens up new possibilities for me. Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?

Your assumptions about the present and near future will guide your own decisions. If you don't share the same intuitions you will come to different conclusions.


> Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?

Same reasons as with no LLM assistance. You would be choosing higher maintenance burden and slower development speed compared to your competitors, though. They will get it out faster, they will have fewer issues, and will be able to find people to support it more easily. Your product may run faster, but theirs will work and be out faster.


Let's imagine we are assembly programmers. You have a particular style of assembly that you believe gives you some advantage over your competitors. The way you structure your assembly gives you a lower maintenance burden and faster development speed compared to your competitors.

I show up and say "I have a C compiler". Does it matter at that point how good your assembly is? All of a sudden I can generate 10x the amount of assembly that you generate. And you are probably aghast at what crappy assembly my C compiler generates.

Now ask yourself: how often do you look at generated assembly?

Compilers don't care about writing maintainable assembly. They are a tool to generate assembly in high volumes. History has shown that people who used C compilers were able to get products to market faster than people who wrote in assembly.

So let's assume, for the sake of understanding my position, that LLMs will be like the compiler. I give it some high-level English description of the code I want it to run and it generates a high volume of [programming language] as its output. My argument is that the programming language it outputs is important, and it would be better for it to output a language that compiles to low-level native binaries. In the same way I don't care about "maintainable assembly" coming out of a C compiler, I don't care about maintainable Python coming out of my LLM.


> In the same way I don't care about "maintainable assembly" coming out of a C compiler, I don't care about maintainable Python coming out of my LLM.

A well tested compiler is far more deterministic than an LLM, and can be largely treated as a black box because it won't randomly hallucinate output.


Humans aren't deterministic. I've trusted junior engineers to ship code. I fail to see a significant difference here in the long term.

We have engineering practices that guard against humans making mistakes that break builds or production environments. It isn't like we are going to discard those practices. In fact, we'll double down on them. I would subject an LLM to a level of strict validation that any human engineer would find suffocating.

The reason we trust compilers as a black box is because we have created systems that allow us to do so. There is no reason I can see currently that we will be unable to do so for LLM output.

I might be wrong, time will tell. We're going to find out because some will try. And if it turns out to be as effective as C was compared to assembly then I want to be on that side of history as early as possible.


> Humans aren't deterministic.

Exactly, which is why I would want humans and LLMs to write maintainable code, so that I can review and maintain it, which brings us back to the original question of which programming languages are the easiest to maintain...


Well, we're in a loop then because my response was "you don't care about maintainable assembly".

I want maintainable systems you want maintainable code. We can just accept that difference. I believe maintainable systems can be achieved without focusing on code that humans find maintainable. In the future, I believe we will build systems on top of code primarily written by LLMs and the rubric of what constitutes good code will change accordingly.

edit: I would also add that your position is exactly the position of assembly programmers when C came around. They lamented the assembly the C compiler generated. "I want assembly I can read, understand and maintain" they demanded. They didn't get it.


We're stuck in a loop because you're flip flopping between two positions.

You started off by comparing LLM output to compiler output, which I pointed out is a false equivalence because LLMs aren't as deterministic as compilers.

Then you switched to comparing LLMs to humans, which I'm fine with, but then LLMs must be expected to produce maintainable code just like humans.

Now you're going back to the original premise that LLM output is comparable to compiler output, thus completing the loop.


There are more elements to a compiler than determinism. That is, determinism isn't their sole defining property. I can compare other properties of compilers to LLMs. No "flip flop" there IMO, but your judgment may vary.

Perhaps it is impossible for you to imagine that LLMs can share some properties with compilers and other properties with humans? And that this specific blend of properties makes them unique? And that uniqueness means we have to take a nuanced approach to understanding their impact on designing and building systems?

So let's lay it out. LLMs are like compilers in that they take high-level instructions (in the form of English) and translate them into programming languages. Maybe "transpiler" would be a word you prefer? LLMs are like humans in that this translation of high-level instructions to programming languages is non-deterministic, and so it requires system-level controls to handle this imprecision.

I do not detect any conflict in these two ideas but perhaps you see things differently.


> There are more elements to a compiler than determinism.

Yes, but determinism is the factor that allows me to treat compilers as a black box without verifying their output. LLMs do not share this specific property, which is why I have to verify their output, and easily verifiable software is what I call "maintainable".


An interesting question you might want to ask yourself, related to this idea: what would you do if your compiler wasn't deterministic?

Would you go back to writing assembly? Would you diligently work to make the compiler "more" deterministic. Would you engineer your systems around potential failures?

How do industries like the medical or aviation deal with imperfect humans? Are there lessons we can learn from those domains that may apply to writing code with non-deterministic LLMs?

I also just want to point out an irony here. I'm arguing in favor of languages like Go, Rust and Zig over the more traditional dynamic scripting languages like Python, PHP, Ruby and JavaScript. I almost can't believe I'm fighting the "unmaintainable" angle here. Do people really think a web server written in Go or Rust is unmaintainable? I'm defending my position as if they are, but come on. This is all a bit ridiculous.


> How do industries like the medical or aviation deal with imperfect humans?

We have a system in science for verifying shoddy human output, it's called peer review. And it's easier for your peers to review your code when it's maintainable. We're back in the loop.


That is one system. Are there zero others?

Funny thing about this thread and black and white thinking. I feel a different kind of loop.


> Do people really think a web server written in Go or Rust is unmaintainable?

Things are not black and white. It will be less maintainable relatively speaking, proper tool for the job and all that. That’s why you will be left in the dust.


Again, your competitor will get there faster and with fewer bugs. LLMs are trained on human input, and humans do not do great at low level languages. They churn out better Python than C, especially when it comes to refactoring it (I have observed that personally).


> The main driver behind this project is that while Rust is very quick, the cost of serializing data between Rust and TypeScript is very high.

This sounds more like a "we're kinda stuck with Javascript here" situation. The team is making a compromise, can't have your cake and eat it too I guess.


I don't think this speaks to the general reasons someone would rewrite a mid- or low-level project in a high-level language, so much as to the special treatment JS/TS get. Yes, your data model being the default supported one, and everything else in the world having to serialize/deserialize to accommodate that, slows performance. In other words, this is just a reason to use the natively-supported JS/TS, still very much the favorite children of browser engines, over the still sort of hacked-in Rust.


They're going to add it once it stabilizes in Node: https://github.com/denoland/deno/issues/24828#issuecomment-2...


Preact is definitely a good choice if you're looking for something lightweight. React-dom was already relatively hefty, and seems to have gotten even larger in version 19. Upgrading the React TypeScript Vite starter template from 18 to 19 increases the bundle size from 144kB to 186kB on my machine [1][2]. They've also packaged it in a way that's hard to analyze with sites like bundlephobia.com and pkg-size.dev.

[1] https://github.com/facebook/react/issues/27824

[2] https://github.com/facebook/react/issues/29913


> Enums is going to make your TypeScript code not work in a future where TypeScript code can be run with Node.js

Apparently they're planning on adding a tsconfig option to disallow these Node-incompatible features as well [1].

Using this limited subset of TS also allows your code to compile with Bloomberg's ts-blank-space, which literally just replaces type declarations with whitespace [2].

[1] https://github.com/microsoft/TypeScript/issues/59601

[2] https://bloomberg.github.io/ts-blank-space/


Those flags have already started to show up in today's typescript: verbatimModuleSyntax [1] and isolatedModules [2], for instance.

[1] https://www.typescriptlang.org/tsconfig/#verbatimModuleSynta...

[2] https://www.typescriptlang.org/tsconfig/#isolatedModules


Those definitely help, but the proposed erasableSyntaxOnly flag would disallow all features that can't be erased. So it would prevent you from using features like parameter properties, enums, namespaces, and experimental decorators.

It would essentially help you produce TypeScript that's compatible with the --experimental-strip-types flag (and ts-blank-space), rather than the --experimental-transform-types flag, which is nice because (as someone else in this thread pointed out), Node 23 enables the --experimental-strip-types flag by default: https://nodejs.org/en/blog/release/v23.6.0#unflagging---expe...
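To illustrate, this is roughly the kind of rewrite the flag would nudge you toward (a sketch with made-up names):

  // Not erasable: an enum emits runtime code, so type stripping rejects it.
  // enum Level { Debug, Info }
  
  // Erasable alternative: a const object carries the runtime values, and the
  // type is rebuilt with purely type-level syntax that can be stripped away.
  const Level = { Debug: 0, Info: 1 } as const
  type Level = (typeof Level)[keyof typeof Level]
  
  const lvl: Level = Level.Info // runs under --experimental-strip-types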


Also worth noting that eslint rules for what `erasableSyntaxOnly` might disallow are already well established, for those looking to do it today.


Agreed.

Frontend frameworks often do spend a lot of time thinking about the accessibility concerns associated with client side routing, so it's not absurd to consider this question in scope for a frontend library that handles DOM updates.

See for instance this 2019 study by Gatsby: https://www.gatsbyjs.com/blog/2019-07-11-user-testing-access...

Or even the modern Next docs on route announcements: https://nextjs.org/docs/architecture/accessibility#route-ann...

Some of this will have to be bespoke work done on a per-site basis, but I'm not sure I'm comfortable with the idea of completely punting this responsibility to developers using htmx, even if it does make philosophical sense to say "this is scope creep", because ultimately users with disabilities will end up being the ones whose experience on the web is sacrificed on this altar of ideological purity.


> They don't work offline

This isn't true. Offline functionality is the raison d'être for Service Workers. You can run an entire HTTP request router on the client to respond to requests while offline: https://hono.dev/docs/getting-started/service-worker


Are you guys okay? Don't get me wrong, it's clever, but it's also insane.

If I pitched the idea of having SMB shares work offline by shipping a driver that could intercept low level SMB calls and reroute them to a mock SMB server that holds the cache, they would have assumed I'd lost it.

Surely the browser could help you a bit more to implement offline sites in a more integrated fashion.


It's ultimately just a little event listener function that accepts a Request object and returns a Response object. I bundled the service worker by running a quick `npx esbuild --minify --bundle --outfile=sw.js sw.ts` command, and it produced an 18.6kb JS file in 10 milliseconds. That's not even half the size of libraries like HTMX, Alpine, and jQuery.

You can of course use the CacheStorage API directly as well (you're not obligated to use a mock server): https://developer.mozilla.org/en-US/docs/Web/API/CacheStorag...
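For reference, a minimal cache-first handler is only a handful of lines (a sketch, not the Hono setup above, and it assumes the cache was primed during the worker's install phase):

  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.match(event.request).then(
        cached => cached ?? fetch(event.request) // fall back to the network
      )
    )
  })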

I've certainly seen crazier things though. People routinely include entire copies of Ubuntu LTS in their Docker images to ship tiny HTTP servers.


> I'm not sure that's even that correct. If you write code using browser APIs that are currently standardised, your code will work indefinitely, whether or not you use Web Components, React, or jQuery.

The most charitable interpretation of this argument is that framework specific component libraries assume for the most part that you're using that specific framework. The documentation for popular React component libraries like shadcn are largely incomprehensible if you're not using React. Libraries like Shoelace (now being renamed to Web Awesome) make no such assumptions. You can just drop a script tag into your page's markup and get started without having to care about (or even be aware of) the fact that Shoelace uses Lit internally.

> But without Javascript, Web Components are essentially empty holes that can't do anything. They don't progressively enhance anything.

This is not true if you're using the new Declarative Shadow DOM API [1]. You literally just add a template tag with a shadowroot mode attribute inside your custom element, and then the component works without JavaScript. When (or if) the JavaScript loads, you simply check for the existence of a server-rendered shadow root using `internals.shadowRoot`. If a shadow root already exists then you don't have to replace anything, and you can attach your event listeners to the pre-existing shadow root (i.e. component hydration).
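A rough sketch of that hydration flow (the element and markup names are hypothetical):

  class MyCard extends HTMLElement {
    constructor() {
      super()
      const internals = this.attachInternals()
      // Reuse the server-rendered shadow root from
      // <template shadowrootmode="open"> if the browser already parsed one.
      this.root = internals.shadowRoot ?? this.attachShadow({ mode: 'open' })
    }
    connectedCallback() {
      // Hydration: attach behavior to the pre-existing markup.
      this.root.querySelector('button')?.addEventListener('click', () => {
        console.log('hydrated')
      })
    }
  }
  customElements.define('my-card', MyCard)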

[1] https://web.dev/articles/declarative-shadow-dom#component_hy...


At this point, I think it's better to point to the MDN docs.

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/te...

Your link uses some deprecated functionality that was only ever implemented in Chrome.

Anyway, it's good to know that browsers actually did implement that. It was a major complaint when web components were created.


The only reason I avoid linking to that MDN page is because I think it does a bad job of actually assembling all of the pieces together into a working example. That's why I linked specifically to the component hydration section, which assembles it all into a functioning component complete with an actual class and constructor. The code example in that specific section doesn't appear to use any deprecated or non-standard features. Otherwise, I normally do prefer MDN as an authoritative source of documentation.


The MDN page has the deprecation notice for the `shadowRoot` property.

Yes, the examples on your page are way more comprehensive. Even with compatibility support for old Chrome versions. Unfortunately, that support example will also break your code on other browsers.

People may want to read it anyway, and just fix the problem. It being outdated doesn't make the explanation bad.


> The MDN page has the deprecation notice for the `shadowRoot` property.

The `shadowRoot` property of the ElementInternals object is not deprecated: https://developer.mozilla.org/en-US/docs/Web/API/ElementInte...

What's deprecated is the `shadowroot` attribute on the template element, which the example I linked to does not use. The second line of code in that example is `<template shadowrootmode="open">`.

All of this is mentioned at the very top of the article where it says: "Note that the specification for this feature changed in 2023 (including a rename of shadowroot to shadowrootmode), and the most up to date standardized versions of all parts of the feature landed in Chrome version 124."

> Unfortunately, that support example will also break your code on other browsers.

No, it will not.


> You literally just add a template tag with a shadowroot mode attribute inside your custom element, and then the component works without JavaScript.

Wtf is a shadowroot?

I'm of increasing confidence that the entire project to unify document display and application runtime has utterly failed and there's no way (and no benefit) to resuscitate it. We need two different internets: one for interactive applications and one for document browsing.


> Wtf is a shadowroot?

In the Document Object Model (DOM), every document has a root node, which you can retrieve by calling `getRootNode()` on any node in the document (e.g. `document.getRootNode()` or `document.body.getRootNode()`).

Custom Elements can have their own "shadow" document that's semi-isolated from the parent document, and the root node of that shadow document is called the shadow root.

The idea is to be able to create your own HTML elements (which also have their own hidden DOM). If you enable a setting in Chrome's Devtools (Show user agent shadow DOM) [1], you can actually see the hidden shadow structure of built in HTML Elements: https://www.youtube.com/watch?v=Vzj3jSUbMtI&t=291s
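A tiny sketch of the distinction (assuming the page contains no other `<p>`):

  const host = document.createElement('div')
  document.body.append(host)
  
  const shadow = host.attachShadow({ mode: 'open' })
  shadow.innerHTML = '<p>inside the shadow tree</p>'
  
  host.getRootNode() === document                    // true: the regular root
  shadow.querySelector('p').getRootNode() === shadow // true: the shadow root
  document.querySelector('p')                        // null: encapsulated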

[1] https://developer.chrome.com/docs/devtools/settings/preferen...


Promisify converts a callback based function into a promise returning function [1]. If the function has a `promisify.custom` method, `promisify` will simply return the `promisify.custom` method instead of wrapping the original function. Calling `promisify` on `setTimeout` in Node is redundant because Node already ships a built in promisified version of `setTimeout`. So the following is true:

  setTimeout[promisify.custom] === require('node:timers/promises').setTimeout
You could of course manually wrap `setTimeout` yourself as well:

  const sleep = n => new Promise(resolve => setTimeout(resolve, n))

[1] https://nodejs.org/docs/latest-v22.x/api/util.html#utilpromi...


TypeScript has the equivalent of what you're describing via the `Parameters` and `ReturnType` utility types [1][2], and I've found these types indispensable. So you can do the following:

  declare function someFunction(a: string, b: number): boolean
  
  type R = ReturnType<typeof someFunction> // boolean
  type P = Parameters<typeof someFunction> // [a: string, b: number]
[1] https://www.typescriptlang.org/docs/handbook/utility-types.h...

[2] https://www.typescriptlang.org/docs/handbook/utility-types.h...


Yeah, now that you mention it, I remember using it a lot when I worked more in that language.


> In C#, awaiting for things which never complete is not that bad, the standard library has Task.WhenAny() method for that.

It's not that bad in JS either. JS has both Promise.any and Promise.race that can trivially set a timeout to prevent a function from waiting infinitely for a non-resolving promise. And as someone pointed out in the Lobsters thread, runtimes that rely on multi-threading for concurrency are also often prone to deadlocks and infinite loops [1].

  import { setTimeout } from 'node:timers/promises'
  
  const neverResolves = new Promise(() => {})
  
  await Promise.any([neverResolves, setTimeout(0)])
  await Promise.race([neverResolves, setTimeout(0)])
  
  console.trace()

[1] https://lobste.rs/s/hlz4kt/threads_beat_async_await#c_cf4wa1


> Promise.race

Ding! You now have a memory leak! Collect your $200 and advance two steps.

Promise.race will waste memory until _all_ of its promises are resolved. So if a promise never gets resolved, it will stick around forever.

It's braindead, but it's the spec: https://github.com/nodejs/node/issues/17469


This doesn't even really appear to be a flaw in the Promise.race implementation [1], but rather a natural result of the fact that native promises don't have any notion of manual unsubscription. Every time you call the then method on a promise and pass in a callback, the JS engine appends the callback to the list of "reactions" [2]. This isn't too dissimilar to registering a ton of event listeners and never calling `removeEventListener`. Unfortunately, unlike events, promises don't have any manual unsubscription primitive (e.g. a hypothetical `removePromiseListener`), and instead rely on automatic unsubscription when the underlying promise resolves or rejects. You can of course polyfill this missing behavior if you're in the habit of consistently waiting on infinitely non-settling promises, but I would definitely like to see TC39 standardize this [3].

[1] https://issues.chromium.org/issues/42213031#comment5

[2] https://github.com/nodejs/node/issues/17469#issuecomment-349...

[3] https://github.com/cefn/watchable/tree/main/packages/unpromi...


This isn't actually about removing the promise (completion) listener, but the fact that promises are not cancelable in JS.

Promises in JS always run to completion, whether there's a listener or not registered for it. The event loop will always make any existing promise progress as long as it can. Note that "existing" here does not mean it has a listener, nor even whether you're holding a reference to it.

You can create a promise, store its reference somewhere (not await/then-ing it), and it will still progress on its own. You can await/then it later and you might get its result instantly if it had already progressed on its own to completion. Or even not await/then it at all -- it will still progress to completion. You can even not store it anywhere -- it will still run to completion!

Note that this means that promises will be held until completion even if userspace code does not have any reference to it. The event loop is the actual owner of the promise -- it just hands a reference to its completion handle to userspace. User code never "owns" a promise.
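A tiny illustration of that last point:

  // No variable, no .then(), nobody awaits it: the executor runs,
  // the timer fires, and the promise settles entirely on its own.
  new Promise(resolve => {
    setTimeout(() => {
      console.log('settled with no listeners and no references')
      resolve()
    }, 100)
  })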

This is in contrast to e.g. Rust futures, which do not run to completion unless someone is actively polling them.

In Rust, if you `select!` on a bunch of futures (similar to JS's `Promise.race`), then as soon as any of them completes the rest stop being polled and are dropped (which runs their destructors), and are thus cancelled. JS can't do this because (1) promises are not poll based and (2) it has no destructors, so there would be no way for you to specify how cancellation-on-drop happens.

Note that this is a design choice. A tradeoff. Cancellation introduces a bunch of problems with promise cancellation safety even under a GC'd language (think e.g. race conditions and inconsistent internal state/IO).

You can kinda sorta simulate cancellation in JS by manually introducing some `isCancelled` variable but you still cannot act on it except if you manually check its value between yield (i.e. await) points. But this is just fake cancellation -- you're still running the promise to completion (you're just manually completing early). It's also cumbersome because it forces you to check the cancellation flag between each and every yield point, and you cannot even cancel the inner promises (so the inner promises will still run to completion until it reaches your code) unless you somehow also ensure all inner promises are cancelable and create some infra to cancel them when your outer promise is cancelled (and ensure all inner promises do this recursively until then inner-est promise).

There are also cancellation tokens for some promise-enabled APIs (e.g. `AbortController` in `fetch`'s `signal`) but even those are just a special case of the above -- their promise will just reject early with an `AbortError` but will still run to (rejected) completion.

This has some huge implications. E.g. if you do this in JS...

  Promise.race([
    deletePost(),
    timeout(3000),
  ]);
...`deletePost` can still (invisibly) succeed in 4000 msecs. You have to manually make sure to cancel `deletePost` if `timeout` completes first. This is somewhat easy to do if `deletePost` can be aborted (via e.g. `AbortController`) even if cumbersome... but more often than not you cannot really cancel inner promises unless they're explicitly abortable, so there's no way to do true userspace promise timeouts in JS.

Wow, what a wall of text I just wrote. Hopefully this helps someone's mental model.


> This isn't actually about removing the promise (completion) listener, but the fact that promises are not cancelable in JS.

You've made an interesting point about promise cancellation but it's ultimately orthogonal to the Github issue I was responding to. The case in question was one in which a memory leak was triggered specifically by racing a long lived promise with another promise — not simply the existence of the promise — but specifically racing that promise against another promise with a shorter lifetime. You shouldn't have to cancel that long lived promise in order to resolve the memory leak. The user who created the issue was creating a promise that resolved whenever the SIGINT signal was received. Why should you have to cancel this promise early in order to tame the memory usage (and only while racing it against another promise)?

As the Node contributor discovered the reason is because semantically `Promise.race` operates similarly to this [1]:

  function race<X, Y>(x: PromiseLike<X>, y: PromiseLike<Y>) {
    return new Promise((resolve, reject) => {
      x.then(resolve, reject)
      y.then(resolve, reject)
    })
  }
Assuming `x` is our non-settling promise, he was able to resolve the memory leak by monkey patching `x` and replacing its then method with a no-op which ignores the resolve and reject listeners: `x.then = () => {};`. Now of course, ignoring the listeners is obviously not ideal, and if there was a native mechanism for removing the resolve and reject listeners `Promise.race` would've used it (perhaps using `y.finally()`) which would have solved the memory leak.

[1] https://github.com/nodejs/node/issues/17469#issuecomment-349...


> Why should you have to cancel this promise early in order to tame the memory usage (and only while racing it against another promise)?

In the particular case you linked to, the issue is (partially) solved because the promise is short-lived, and the `then` was what kept it alive longer, exacerbating the issue. By not then-ing it, the GC kicks in earlier, since nothing else holds a reference to its stack frame.

But the underlying issue is lack of cancellation, so if you race a long-lived resource-intensive promise against a short-lived promise, the issue would still be there regardless of listener registration (which admittedly makes the problem worse).

Note that this is still relevant because the problem can kick in in the "middle" of an async function (if any of the inner promises is long-lived). The "middle of the promise" case is really a special case of the "multiple thens" problem, since each await point is isomorphic to calling `then` with the rest of the function.

Without proper cancellation you only solve the particular case where your issue is in the last body of the `then` chain.

(Apologies for the unclear explanation, I'm on mobile and on the vet's waiting room, I'm trying my best.)


I don't want to get mired in a theoretical discussion about what promise cancellation would hypothetically look like, and would rather instead look at some concrete code. If you reproduce the memory leak from that original Node Github issue while setting the --max-old-space-size to an extremely low number (to set a hard limit on memory usage) you can empirically observe that the Node process crashes almost instantly with a heap out of memory error:

  #! /usr/bin/env node --max-old-space-size=5
  
  const interruptPromise = new Promise(resolve =>
    process.once('SIGINT', () => resolve('interrupted'))
  )
  
  async function run() {
    while (true) {
      const taskPromise = new Promise(resolve => setImmediate(resolve))
      const result = await Promise.race([taskPromise, interruptPromise])
      if (result === 'interrupted') break
    }
    console.log(`SIGINT`)
  }
  
  run()
If you run that exact same code but replace `Promise.race` with a call to `Unpromise.race`, the program appears to run indefinitely and memory usage appears to plateau. And if you look at the definition of `Unpromise.race`, the author is saying almost exactly the same thing that I've been saying: "Equivalent to Promise.race but eliminates memory leaks from long-lived promises accumulating .then() and .catch() subscribers" [1], which is exactly the same thing that the Node contributor from the original issue was saying, which is also exactly the same thing the Chromium contributor was saying in the Chromium bug report where he writes "This will also grow the reactions list of `x` to 10e5" [2].

[1] https://github.com/cefn/watchable/blob/6a2cd66537c664121671e...

[2] https://issues.chromium.org/issues/42213031#comment5


Just to clarify because the message might have been lost: I'm not saying you're wrong! I'm saying you're right, and...

Quoting a comment from the issue you linked:

> This is not specific to Promise.race, but for any callback attached a promise that will never be resolved like this:

  x = new Promise(() => {});
  for (let i = 0; i < 10e5 ; i++) {
    x.then(() => {});
  }
My point is if you do something like this (see below) instead, the same issue is still there and cannot be resolved just by using `Unpromise.race` because the underlying issue is promise cancellation:

  // Use this in the `race` instead
  // Will also leak memory even with `Unpromise.race`
  const interruptPromiseAndLog = () =>
    interruptPromise()
      .then(() => console.log('SIGINT'))
`Unpromise.race` only helps with its internal `then` so it will only help if the promise you're using has no inner `then` or `await` after the non-progressing point.

This is not a theoretical issue. This code happens all the time naturally, including in library code that you have no control over.

So you have to proxy this promise too... but again this only partially solves the issue, because you'd have to proxy every single promise that might ever be created, including those you have no control over (in library code) and therefore cannot proxy yourself.

And the ergonomics are terrible. If you do this, you have to proxy and propagate unsubscription to both `then`s:

  const interruptPromiseAndLog = () =>
    interruptPromise()
      // How do you unsubscribe this one
      .then(() => console.log('SIGINT'))
      // ...even if you can easily proxy this one?
      .then(() => console.log('REALLY SIGINT'))
Which can easily happen in await points too:

  const interruptPromiseAndLog = async () => {
    console.log('Waiting for SIGINT')

    // You have to proxy and somehow propagate unsubscription to this one too... how!?
    await interruptPromise()
    
    console.log('SIGINT')
  }
Since this is just sugar for:

  const interruptPromiseAndLog = () => {
    console.log('Waiting for SIGINT')

    return interruptPromise()
      // Needs unsubscription forwarded here
      .then(() => console.log('SIGINT'))
  }
Which can quickly get out of hand with multiple await points (i.e. many `then`s).

Hence why I say the underlying issue is overall promise cancellation and how you actually have no ownership of promises in JS userspace, only of their completion handles (the event loop is the actual promise owner) which do nothing when going out of scope (only the handle is GC'd but the promise stays alive in the event loop).


For that matter, C# has Task.WaitAsync: the awaited task continues into the waiter task, and your code subscribes to the waiter task, which unregisters your listener after firing it. So the memory leak is limited to the small waiter task, which doesn't reference anything after the timeout.


But if you really truly need cancel-able promises, it's just not that difficult to write one. This seems like A Good Thing, especially since there are several different interpretations of what "cancel-able" might mean (release the completion listeners into the gc, reject based on polling a cancellation token, or both). The javascript promise provides the minimum language implementation upon which more elaborate Promise implementations can be constructed.


Why this isn't possible is implicitly (well, somewhat explicitly) addressed in my comment.

  const foo = async () => {
    ... // sync stuff A
    await someLibrary.expensiveComputation()
    ... // sync stuff B
  }
No matter what you do it's impossible to cancel this promise unless `someLibrary` exposes some way to cancel `expensiveComputation`, and you somehow expose a way to cancel it (and any other await points), and any other promises it uses internally also expose cancellation and they're all plumbed to have the cancellation propagated inward across all their await points.

Unsubscribing to the completion listener is never enough. Implementing cancellation in your outer promise is never enough.

> The javascript promise provides the minimum language implementation upon which more elaborate Promise implementations can be constructed.

I'll reiterate: there is no way to write promise cancellation in JS userspace. It's just not possible (for all the reasons outlined in my long-ass comment above). No matter how elaborate your implementation is, you need collaboration from every single promise that might get called in the call stack.

The proposed `unpromise` implementation would not help either. JS would need all promises to expose a sort of `AbortController` that is explicitly connected across all cancellable await points inwards which would introduce cancel-safety issues.

So you'd need something like this to make promises actually cancelable:

  const cancelableFoo = async (signal) => {
    if (signal.aborted) {
      throw new AbortError()
    }

    ... // sync stuff A

    if (signal.aborted) {
      // possibly cleanup for sync stuff A
      throw new AbortError()
    }

    await someLibrary.expensiveComputation(signal)

    if (signal.aborted) {
      // possibly cleanup for sync stuff A
      throw new AbortError()
    }

    ... // sync stuff B

    if (signal.aborted) {
      // possibly cleanup for sync stuff A
      // possibly cleanup for sync stuff B
      throw new AbortError()
    }
  }

  const controller = new AbortController()
  const signal = controller.signal

  Promise.cancelableRace(
    controller, // cancelableRace will call controller.abort() if any promise completes
    [
      cancelableFoo(signal),
      deletePost(signal),
      timeout(3000, signal),
    ]
  )
And you need all promises to get their `signal` properly propagated (and properly handled) across the whole call stack.

