Analysis of the overhead of a minimal Zig program (zig.news)
186 points by matu3ba on Jan 2, 2022 | 107 comments



No comment on content. However:

> What's up with the and rsp,0xfffffffffffffff0? That's because the function manually aligns the stack to the next 16-byte boundary. I'm not sure why the stdlib does this. The SystemV ABI (§2.3.1) guarantees an initial alignment of 16 already, so it should be superfluous.

Linux does not make any such guarantee, however; so in order for _start to call functions using the sysv ABI in a compliant fashion, it must provide them with an aligned stack.


Do you know offhand if the guarantees made by the Linux ELF loader are documented anywhere? I did search for it myself but didn't find anything authoritative.


No, but I can tell you from practical experience that when I forgot to align the stack, clang-generated code faulted under Linux.


There’s a walkthrough of the relevant kernel code in the superb “How programs get run” series[1], and it seems to try for 16-byte alignment on all platforms.

Normally this kind of thing would be in the relevant System V ABI Processor Supplement (officially, by way of the SVID and gABI, but everybody just refers to the psABI directly) [2], in the “Process Initialization” section (the above-mentioned §2.3), but note e.g. the story of how the GCC developers accidentally defaulted to a stricter alignment than the spec said and then decided to just run with it and rewrite the spec after the fact[3].

[1] https://lwn.net/Articles/631631/

[2] E.g. https://gitlab.com/x86-psABIs for x86

[3] https://stackoverflow.com/a/49397524


Interestingly, I can't seem to get an empty nim programme below 32k.

   #> touch foo.nim
   #> nim c -d:release --opt:size -d:strip -d:danger foo.nim
   #> ls -lhs foo | cut -d\  -f1
   32K


With --os:standalone and panicoverride.nim it goes down to 15k

And down to *150 bytes* with some hacks https://hookrace.net/blog/nim-binary-size/


Very handy! Thank-you.


Same with Swift (on Mac)

    % touch foo.swift
    % swiftc -Osize -Xlinker -x foo.swift
    % ls -lh foo | cut -d' ' -f10
    33K
    % strip foo
    % ls -lh foo | cut -d' ' -f10
    33K


Related: Just how small can you make an executable on 32-bit Linux? http://www.muppetlabs.com/~breadbox/software/tiny/

(You'll notice that, despite the change from 32-bit to 64-bit, the figure of 297 bytes mentioned in the OP as how much you'd get from doing everything in assembly and aggressively stripping is the same figure that Brian Raiter comes up with in part III of the Teensy Files, where he attempts to do everything in a totally above-board manner.)


"Aggressively stripping out all the unnecessary trash that the linker puts into it makes it 297 bytes"

Reminds me of how impressive 256 byte intros are, like this one:

https://www.youtube.com/watch?v=w72MXbAIJVg


You cannot compare a DOS COM program to an ELF binary, though.

The ELF header alone is 64 bytes while COM files have no header and can fit more than a Hello World program into 64 bytes (uncompressed):

  mov ah,9
  mov dx,108
  int 21
  ret
  db "Hello world!$"
Using just debug.com (on DOS) this results in a 21 byte COM executable that prints "Hello world!" to the console. This simply isn't possible with formats like ELF, no matter what the linker does.

Since DOS also allows direct access to video memory, just switching to VGA mode (320x200, 8-bit colours, e.g. using mov ax,13h; int 10h) will allow you to create fancy graphics with just a few bytes. In fact, I can get a colour pattern on the screen, wait for a key press, and return to text mode in just 45 bytes (for use with debug.com):

  a 100
  mov ax,13
  int 10
  mov ax,a000
  mov es,ax
  xor di,di
  xor ax,ax
  mov cx,fa00
  mov es:[di],al
  inc ax
  inc di
  dec cx
  jnz 111
  mov ax,0600
  mov dx,ff
  int 21
  jz  119
  mov ax,3
  int 10
  mov ax,4c00
  int 21

  n vgademo.com
  rbx
  0
  rcx
  2D
  w
  q

This should work on any DOS machine and in DOSBox (I used vDos to create the COM file since I don't have a DOS assembler installed, and debug.com is included in vDos) and is smaller than just an ELF header :D


Those are impressive, but it's not really a fair comparison: 256-byte intros are typically DOS executables, which only require a handful of bytes of boilerplate before you're putting pixels on the screen. The OP is working with Linux executables, which have a much higher fixed overhead just to get running, never mind displaying any graphics, and that's reflected in the smallest category for Linux and Windows intros being 4096 bytes.


> I've watched friends try Go and immediately uninstall the compiler when they see that the resulting no-op demo program is larger than 2 MiB.

That seems a bit extreme

> Overhead breeds complacency — if your program is already several megabytes in size, what's a few extra bytes wasted? Such thinking leads to atrocities like writing desktop text editors bundled on top of an entire web browser

I'm still on board


I'm imagining that the Go people would have a hard time making the compiled program smaller, because they're bundling in an M:N threading system, inter-thread channels, a garbage collector, a stack resizer, a syscall parking and reactivation system, and all of the above are by necessity mutually interdependent.


What irks me most about people who complain about large binaries for empty programs is that every program past Hello World will actually use all these features.

So why waste the effort putting in a code path to disable these things except for some philosophical ideal?

Code in Asm or C if you need a hello world that can fit on a floppy disk; you're not really doing anything substantial anyway...

There's plenty of acceptable middle ground between Go binaries using 2mb minimum and full electron apps eating your ram.


> every program past Hello World will actually use all these features.

Not on embedded.


This is a bad argument for the same reason that "my yacht can't be used for cross-country road trips" is a bad argument. You're 100% not going to use the normal Go compiler for embedded development. It's not a use case that was considered or engineered for.

TinyGo supports it, but they built a new compiler to support that use-case.


Not paying for what you don't use is not an unreasonable goal of a compiler, linker etc.


It is an unreasonable goal if (1) it's wildly outside the expected use cases and (2) it complicates things. Both of which would apply.


Sure. But as stated previously, even a basic hello world go program does use the runtime.


Pretty much all programs use the GC. Threading isn't used in everything, but in Go especially, probably at least 50% of programs over 1000 lines use threads (especially considering the standard library).


If I remember right, the GC lives on a thread, even if the program runs serially.


I use tinygo on microcontrollers and use channels. It's simpler than an RTOS.


Link for those interested: https://tinygo.org/.

> TinyGo brings the Go programming language to embedded systems and to the modern web by creating a new compiler based on LLVM.

> You can compile and run TinyGo programs on over 60 different microcontroller boards such as the BBC micro:bit and the Arduino Uno.

> TinyGo can also produce WebAssembly (WASM) code which is very compact in size. You can compile programs for web browsers, as well as for server and edge computing environments that support the WebAssembly System Interface (WASI) family of interfaces.


> What irks me most about people that complain about large binaries for empty programs is that every program past Hello World will actually use all these features.

I'm not so sure about that. A lot of embedded systems software works well without allocating any memory at runtime at all. Particularly microcontroller programs. The first version of Virgil targeted AVR (via codegen to C) and did exactly this: allocate all data structures up front, at compile time, and then bake them into the binary. This is still common in realtime systems.

No dynamic memory allocation = no garbage collector, no non-deterministic allocation/deallocation, no write barriers, no out-of-memory possibilities, no fragmentation. For a surprisingly large class of programs, this is a great situation!
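As a concrete illustration of the allocate-everything-up-front style, here's a hypothetical sketch in Go (the thread's main subject language; TinyGo compiles this kind of code for microcontrollers) where all state is statically allocated and nothing is allocated at runtime:

    package main

    // All state lives in package-level, statically allocated buffers, so the
    // program never allocates at runtime: no GC pressure, no fragmentation,
    // no out-of-memory surprises. Sizes are illustrative.
    var (
        samples [256]uint16 // fixed-size ring buffer for sensor readings
        head    int
    )

    func record(v uint16) {
        samples[head] = v
        head = (head + 1) % len(samples)
    }

    func main() {
        for i := uint16(0); i < 10; i++ {
            record(i * 3)
        }
        println("last sample:", samples[9]) // builtin println, no fmt import
    }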

> So why waste the effort putting in a code path to disable these things except for some philosophical ideal?

Virgil, in particular, does the opposite; it only includes what the program uses. The analysis starts with nothing and then brings in things called and used from main, tracing through those new things, etc., until the transitive closure is included. Unreachable code isn't even seen, except by the parser and semantic checker, so it doesn't get included. Although, admittedly, the GC is a little special in that a single allocation in the program will drag in most of it. And most programs accept command-line arguments, which the default runtime boxes up into an array of strings for "main()", so most programs do end up with the GC included. You can turn that off with a flag or by modifying the runtime startup code (e.g. squirrel away the arguments array pointer and count, pass null to "main()", and parse the arguments as raw pointers without allocating, but yuk).
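The analysis described above amounts to a worklist computation of the transitive closure over the call graph; here's a toy sketch in Go (the call graph map and function names are made up purely for illustration, this is not Virgil's actual code):

    package main

    import "fmt"

    // reachable starts from root and repeatedly pulls in whatever the
    // already-included functions call, until a fixed point is reached.
    func reachable(callGraph map[string][]string, root string) map[string]bool {
        included := map[string]bool{root: true}
        worklist := []string{root}
        for len(worklist) > 0 {
            fn := worklist[len(worklist)-1]
            worklist = worklist[:len(worklist)-1]
            for _, callee := range callGraph[fn] {
                if !included[callee] {
                    included[callee] = true
                    worklist = append(worklist, callee)
                }
            }
        }
        return included
    }

    func main() {
        graph := map[string][]string{
            "main":  {"parseArgs", "run"},
            "run":   {"alloc"},       // a single allocation drags in the GC
            "alloc": {"gcCollect"},
            "dead":  {"neverCalled"}, // unreachable, never included
        }
        fmt.Println(reachable(graph, "main"))
    }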


> No dynamic memory allocation = no garbage collector, no non-deterministic allocation/deallocation, no write barriers, no out-of-memory possibilities, no fragmentation. For a surprisingly large class of programs, this is a great situation!

I know you know this already, but your statement is a little too broad. Those problems all still exist, but are greatly reduced. Data structures still need to be compacted, caches evicted, scratch space cleared, etc. It is just that one class of intractable issues gets removed when dynamic memory allocation goes away.

On a side note, have you seen this? https://github.com/udem-dlteam/ribbit https://www.youtube.com/watch?v=A3r0cYRwrSs an extremely compact VM for a version of R4RS.


I wrote a Gigatron emulator in Zig for Win32. Compiled with ReleaseFast (not even ReleaseSmall!) it is only 446KB. The full-fat debug version is only just over a MB. 2MB is a lot for a binary.


Is it really though? 2 MB is practically nothing on modern machines. And a full-fat debug build should have as much info as it can; why optimize it unless you're spitting out 400+ MB binaries or something crazy?

It's like, yes, if you turn off the A/C in your muscle car, you'll technically get a few extra horsepower.

But past some philosophical ideal of squeezing out every bit of horsepower you can, it has a negligible effect on the operation of the car.

Now if you're working on a weed whacker engine or something, yeah, that would matter. But you wouldn't pick a giant beefy engine and expect it to work as well as a tiny lightweight one.


> Is it really though? 2mb is practically nothing on modern machines.

In general, I agree actually. I think hemming and hawing over duplicated shared libraries in particular makes no sense for most modern systems. Still, 2MB is a pretty damn big overhead to put on every binary the way Go does. Not that it isn't a considered trade off or anything, I'm just saying.

> And full-fat debug should have as much info as it can, why optimize it unless you're spitting out 400+mb binaries or something crazy.

That's my point: there's no optimization in the full-fat debug build, it contains everything it can, and it's still a little more than half the size of a completely empty Go binary. 2MB is huge. For the ReleaseFast version all I did was tell the compiler to optimize for speed and otherwise put no effort into making anything size efficient.


I'm still not really getting why "huge" matters though, unless you're doing very small embedded work.

Which I'd argue Go isn't a great choice for, in the same way that running full Linux on a limited microcontroller is not a good choice. Every systems language shouldn't have to overcomplicate its runtime to squeeze out kilobytes if 99% of use cases are running on modern personal computers. (C or TinyGo, or a variety of other languages, are better suited for embedded.)

Like, yeah, 5 cents is huge compared to 1 cent, but we're dealing in hundred dollar bills here.


> 2mb is practically nothing on modern machines.

There are plenty of embedded systems that have <= 1 MB of memory. E.g. a popular one is Teensy (https://www.pjrc.com/store/teensy40.html).


The main issue is none of these, though; it's that Go's dead-code elimination (DCE) is generally quite bad, so any package you import will lead to significant increases in binary size.

`fmt` is (or was) a big one in every sense of the word, which is why using the `print` and `println` builtin functions was sometimes a workaround (probably still is): a few years back, converting a `fmt.Print` call to `print` would save you a cool megabyte (if that was the only reason for importing fmt, obviously).
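For anyone who hasn't seen the workaround, it's literally just leaning on the predeclared builtin instead of importing fmt; a minimal sketch (actual savings vary by Go version):

    package main

    func main() {
        // fmt version: import "fmt"; fmt.Println("hello")
        // That import drags fmt's formatting machinery (and its dependencies)
        // into the binary. The builtin below needs no import; it writes to
        // stderr, which is fine for debug output.
        println("hello")
    }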

I've always suspected that was a major reason for the Go compiler being so anal about unused imports: it's essentially making the user perform DCE.


Exactly, if it’s not enforced, people often neglect it. As projects grow, so do unnecessary dependencies.


Perhaps functional chunks could be excluded from the final emitted binary when they're unused / unreachable from main?

How hard would it be to check the AST for a Go program's usage of channels, goroutines, etc. at compile time? As these are features not used by the Go language itself, it seems nearly trivial (no JIT or other super-dynamism).
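Detecting the syntactic uses really is nearly trivial with the standard go/ast package; here's a rough sketch (note it only sees the one file it parses, not uses buried in imported packages or in the runtime itself, which is where the difficulty actually lies):

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "os"
    )

    func main() {
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, os.Args[1], nil, 0)
        if err != nil {
            panic(err)
        }
        uses := false
        ast.Inspect(f, func(n ast.Node) bool {
            switch n.(type) {
            // go f(), chan T, ch <- v (not exhaustive; receives etc. need more cases)
            case *ast.GoStmt, *ast.ChanType, *ast.SendStmt:
                uses = true
            }
            return true
        })
        fmt.Println("uses goroutines/channels:", uses)
    }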


The GC needs to mess with and start and stop and throttle threads. The threads resize their stacks and that needs to talk to the GC. System calls park and restart threads. And so forth.

If you use any of it you'd need all of it.


Yes, the idea is if you want to use X (e.g. the GC), it comes with the binary. Otherwise exclude it.

I'm proposing splitting the functional blocks into distinct pieces and including only what is actually needed in the final binary.

For the case of the GC, if your Go program somehow avoids all allocations (wtf??? Empty main perhaps - a seemingly stupid case), perhaps it could be excluded.


Yes, I understand the idea, the point I'm making is that the runtime is a monolith, it can't be split because each piece has necessary and irreducible connections to the other pieces.


Ah, gotcha. Thanks for patiently clarifying :)


Here’s the classic story of when code can live without GC: https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98...

A very basic “Hello world” could also be GC-less and just let the OS reclaim all memory when it terminates.


>That seems a bit extreme

Not to me, I would have done the same.

And I tend to look at the binary size of simple programs to get a first taste of the style of tech I am looking at.

Bloated binaries and slow compilation times are red flags in my book; I use them as simple heuristics to avoid losing too much time.

I know it is not perfect, but it is fast and simple, like all good heuristics.


This was pretty much it. (I am the author)

People do low-commitment exploration of new tech all the time, perhaps offhandedly download some new language compiler to see what all the fuss is about. Even a modestly negative experience can easily tip the scales for them to lose interest and go do something else.


You have to consider that Go users can come from worse places (i.e. Java or interpreted dynamic languages), so just getting a statically compiled binary to deploy is already a big improvement.


It's fast and simple but not a good heuristic. It's like judging a car by how expensive it is. It has nothing to do with how the thing performs.

Optimising for size is such a waste of time in this day and age. I know I would rather choose a 5 MB statically linked binary than a 5 KB dynamically linked one, because they take pretty much the same amount of perceived time to transfer over the internet, yet the smaller file might take more of my time because I need to find and install missing dependencies.

On the list of priorities for language developers, size optimisation is towards the bottom, unless we're talking about embedded but we aren't.


You are right to some extent.

Except in very specific contexts, shaving bytes is pointless, and a few kilobytes are barely relevant.

But in the real world we're often talking about several orders of magnitude differences.

I would still accept a Hello World that is around 100Kbytes, but not an order of magnitude more.

And I am talking about statically linked apps; this is another subject, but I tend to think that dynamic linking is way overused and rarely justified.


But that assumes that file size grows linearly with code length, which is not the case. A hello world with twice the code will be barely any larger than the hello world itself.


Linking dynamically is not really optimizing for size, since you have to include the size of said dependencies.

As far as whether size matters - it really depends. Some parts of the world are still mostly on dialup (or very slow mobile).


I remember one of my first experiences with Facebook's HPHP, compiling a "hello world" PHP program. I can't remember anymore whether the binary was over 20 MiB or over 200 MiB (both seem plausible in retrospect!), but I do remember the existential dread that ensued. In my experience, massively bloated binaries should instill only slightly less fear than brittle ocean-boiling build systems. Tears will surely result from either.


> I'll allow it. ABI compliance is a very good reason for "wasting" 3 bytes of code, and should arguably be added to our original assembly program

I guess we all have some set of scope creep that we take as OK. ABI compliance for some, GC or other niceties for others, and then there is anything node/electron based.


Which Go compiler? gccgo produces smaller executables.


This is gold:

""" Overhead breeds complacency — if your program is already several megabytes in size, what's a few extra bytes wasted? Such thinking leads to atrocities like writing desktop text editors bundled on top of an entire web browser, and I think it would be nice to have a language that pushes people to be a bit more mindful of the amount of resources they're using. """


The reason I stopped using Skype was that they started to use Electron. That was the last straw, a decade ago.


Not sure why the article links to some runtimejs copy of musl [1]. It's quite old (2014 vintage) to the point where the referenced code is no longer present upstream [2].

[1] https://github.com/runtimejs/musl-libc/blob/master/crt/x86_6...

[2] https://git.musl-libc.org/cgit/musl/commit/?id=6fef8cafbd0f6...


Thank you. I've fixed it by linking to the proper upstream repository instead.


It's an interesting analysis, but I'm a little skeptical that there is any practical scenario where Linux would be runnable and a 1.2 kB program will significantly underperform a 600 B program. The program would have to be small indeed for this difference to be material.


Between a 1.2 KiB and 600 byte program, probably not, no. I wrote that section to confront the general idea that program startup performance is irrelevant and that optimizing it doesn't matter; a language that assumes so will keep piling up features until it actually does matter, like it does for C in the example.


I've been seeing a lot of posts about Zig recently. Out of curiosity is there a specific reason these are more common (like a recent big project or release)? Not hating, just wondering.

EDIT: Just saw there was a new release 10 days ago, although it wasn't 1.0


The people who like zig really like zig so they upvote it to the top while the people who don't like zig are at worst indifferent to it. So no one really complains.

But we're starting to reach the point where zig is becoming popular and people will hate it for that alone very soon. Because we're all really just cavemen typing on keyboards and we can't even discuss how to encode text strings without descending into tribal war.


Ironically, the V programming language has 2-3x the GitHub stars (a decent measure of popularity) of Zig/Nim, yet has borne the brunt of general hate here on HN from day one.

(Mostly, as far as I can tell, due to a single individual's blog, Christine something?, that everyone has taken as gospel.)



Never heard of V, and this blog was the introduction I didn't know I needed. Beautiful but brutal :D ... thank you


Don't forget the post is 2.5 years old... Lots has evolved.


That blog post put into words a sentiment that had been around for quite a while.


The V website did in fact contain impossibly wild claims that were taken down when called out. Add to that the fact that it was closed source for a long time but had an ongoing Patreon, and you could easily see the possibility of a scam in all that.

And I personally still haven't seen evidence to the contrary.


Not only is the blog post from two and a half years ago, but it was bashing the very first alpha release of the V programming language (vlang), which had just hit GitHub at that time.

Could you imagine doing a brutal bashing of Zig or Nim, the first week it came out? Then claiming they're vaporware? Make it make sense. An alpha of a programming language, much less a program, isn't feature complete on day one. This should be common sense.

Trying to label a brand new alpha as vaporware is inappropriate at best. A fair and unbiased critic gives a project a reasonable period of time to prove itself, not attempt to prematurely stomp all over it. Looks wrong to do so, at the least.

Even crazier is then using that very old blog post as the be-all and end-all for the state of the V language versus actually looking at GitHub and the documentation. V has been consistently coming out with weekly releases (https://github.com/vlang/v/releases) and updates for years now.


Thankfully much of the controversy around V has escaped me. The clashes between proponents and detractors have mostly occurred on my periphery, and it comes across as rather silly to me. And I realize that will make me unpopular with both sides. It reminds me more of high schoolers arguing than of engineers actually trying to solve something, in that no one is fully wrong and everyone is a little bit right.

One side should realize they're deriding a project that's still in its infancy and it's bound to be unfinished, buggy, off-kilter, and missing features that are still more or less just rough ideas in the heads of the creators. The same can be said for any early project.

And one side needs to accept the creators were a bit overzealous and advertised features that were really just rough ideas in their heads. And that fighting criticism with anger just makes those criticizing double down in their energy and aggressiveness.

I'm sure V will be a fine language some day. I actually like the simple style of Go's syntax. It's a breath of fresh air when you're up to your neck in "Value<double, SIUnits::m> distance;". Though I question whether it actually brings anything new beyond some nicer syntax. I might wait until it's closer to a v1 and make something in it for my blog.


Thanks for saying all this, probably one of the more mature insights. Personally I find V to already be a fine language - it improves on everything I feel Go just slightly missed the mark on.

Regarding the hostility towards the performance claims: even if they aren't reaching those claims, the language is nonetheless extremely fast, so fast that missing the claims is a moot point from a user perspective. It uses TCC on the backend, which everyone knows, and nobody disputes, is a very small and fast compiler.

I understand people are maybe upset that the author is getting Patreon/GitHub donations based (in their opinion) on these 'false' claims. I don't think that's the case at all - people are thinking it's causation, when it's mere correlation.

It's a still-obscure language, and it's not like they are selling Norwex cleaning supplies at a convention of everyday people walking by. The landing page isn't attracting your family relative and compelling them to donate money on Patreon. It's an extremely small group of people that are going to pay even the slightest attention to the project. People who have an interest in boutique languages and in providing monetary support are more than likely doing so out of interest in the project's goals (attainable or otherwise), appealing syntax, ambitions of the author, GitHub activity, and general goodness; not solely off some bullets on a landing page - especially in a world where we know 99% of that is always shilled bullshit from VCs, big-box products, etc. etc.

Undoubtedly, the author could maybe have done better PR - who doesn't stumble at first though? Like, really. Everything I see referenced is from articles 2.5-3 years old... from a single blog to boot. Notwithstanding, the donations (which seem to be the target of most people's comments) are around $1,000/month; this isn't really a large amount of money, but people are acting like this is $20,000/month.


To some degree, part of what places V in the spotlight for such drama is it being brought up in quite a few Zig (and of course other) posts with a note on how the community is in some way treating V unfairly, despite this being from around 2-3 years ago, along with excuses [1] for what took place. It would be better if there were more V posts instead which focus on its merits or projects, rather than holding on to and bringing up something most would have forgotten/walked away from by now.

[1] not the best word but I can't think of a better one at the moment.


V was also called out by Andrew Kelley (Zig's originator) and Ginger Bill (of Odinlang fame). A lot of people who knew stuff about compilers and languages had things to say about it; it wasn't just one blog.


GitHub stars are a pretty crappy measure of anything. Maybe of "people who've seen it and thought it looked vaguely interesting" (which something like V, making a big splash with gigantic promises, easily hits), but even that is vague, given how differently people use them. (People making a big deal out of them is a pretty good negative signal, though.)


It is like Newton's third law or something: anything deeply loved must be hated by an equal number of people with equal intensity. Otherwise there will not be balance in the force… or something.


Hahaha, can this please become an internet proverb?


Just FYI, Zig 1.0 is likely years out.


Somehow Zig has become the "new C" and Rust has become the "new C++" in the community, I guess. I don't think there is anything strange about the recent days. Sometimes there are small waves like that, and there have been in the past.


It has many of the virtues of Rust while lacking some of the sins, so HN probably likes it for the same reasons as Rust. Rust also took a while to make it onto the radar, but then it was popping up more frequently.


>I've been seeing a lot of posts about Zig recently.

About 4 reached the HN front page in the past 30 days. Not really that much. And that is already considered an outlier in the history of Zig.


The bit about people uninstalling when they see a 2MB no-op demo program is so dumb. That is a fixed cost; it's just because Go packages its runtime in the EXE so you can distribute it and expect it to actually work rather than depending on a runtime being available on the system. Sure there are niche cases where that isn't appropriate, and you don't use Go for those. But that initial 2MB doesn't tell you anything about how program size grows for actual projects.


> Overhead breeds complacency — if your program is already several megabytes in size, what's a few extra bytes wasted? Such thinking leads to atrocities like writing desktop text editors bundled on top of an entire web browser, and I think it would be nice to have a language that pushes people to be a bit more mindful of the amount of resources they're using.

You know, reading this kind of hurts. On one hand, I fully agree. If we take something like Visual Studio Code as an example, the folks at Microsoft have seemingly done much of what's possible to make it feel snappy and responsive, sometimes with surprisingly good results, yet I also remember how sluggish Brackets or Atom were as editors, both also based on browser technology.

On the other hand, I feel that it's not the file size that developers care about when choosing browser tech, but rather that their hands are forced because of a lack of other good options out there. Show me a widely supported cross-platform GUI framework that's as easy to pick up and use as Electron's stack for the developers of our current year - people who in many cases don't have years to master many different technologies, but just want/need to apply much of what they already know to deliver software in short amounts of time. For better or worse, browser technologies have become the one common platform, in more ways than one.

Of course, I'll actually try to answer my own question above. In my eyes, the way Lazarus/Pascal approached this issue was great: their LCL (https://en.wikipedia.org/wiki/Lazarus_Component_Library) provided a set of common components and then allowed them to target Win32, GTK, Qt, Carbon, ... based on the compile target, while on the tooling side they still had a really great RAD environment, with all of the drag & drop functionality of yesteryear. Now, Lazarus/Pascal are basically dead due to lack of hype/interest in those technologies, but they came pretty close to being the best option out there in my eyes!

Apart from that, I'm not quite sure. Over the years there have been many attempts at creating cross-platform GUI frameworks for .NET, but there's still nothing truly stable yet. In Java, I really liked Swing because it was just so darn functional and simple to implement (with IDEs also having GUI builders); however, many rather disliked how it didn't really look or feel native in many cases, which may or may not be relevant for particular types of software. In recent years, there also was JavaFX/OpenJFX (https://openjfx.io/), which I think is a good step in the right direction, but last I checked the tooling wasn't quite there yet - an integrated workflow that just lets you create and drag around windows/components without bothering too much to read the docs, whilst also letting you edit the autogenerated code after a rough wireframing/prototyping pass, should the need arise. In most other tech stacks, like Python, packaging and other concerns can be a mess, as can the loosely coupled tools be cumbersome to work with - sometimes you just want one executable (or close to it) at the end of your build steps, which can run on any supported platform.

The fact of the matter is that if there were something easy to use, many would switch away from Electron on a whim. Not everyone is smart enough or has the time to grok lots of different technologies, or learn something a bit lower-level like Qt or GTK, which, in the absence of bindings for higher-level languages, will typically be used in a language with plenty of footguns, avoiding which takes discipline and knowledge. Regardless, not everyone needs the high customizability of the web platform. Sometimes just having native UI components created from some abstracted common set of components would be fully enough.

But until technologies that make that easy surface, we'll just keep complaining about Electron every year.


This sort of thing makes me think fondly of using Fortran many years ago (I forget which version).

Then every subroutine was separately compiled into a separate object file. The linker linked only those that were actually used (if I remember correctly).

Of course there was no dynamic dispatch so it wasn't difficult to prove what was and wasn't needed.


Native-code libraries that are not hostile to static linking usually do this kind of thing even now, and linkers will, indeed, figure out which parts are needed and pull out only those. Even Glibc does it when statically linked, though its sprawling dependency graph makes the result less than helpful.


I know it's easy to troll on JS, but Electron is a very successful project that led to software used by hundreds of millions of people; can't say the same about Zig or its future.


Can't we admire both Electron and Zig? Certainly I do ...

> ... used by hundreds of millions of people; can't say the same about Zig or its future.

I'd be completely unsurprised if in 10 years Zig was the toolchain of choice for low-level systems programming and there were Linux forks containing Zig code used by hundreds of millions of people.


Yes, I was thinking about it recently; there's an initiative to integrate Rust into the Linux kernel, but it's going to be tough: Rust isn't a simple language to learn. But I expect lots of "grassroots" experimentation with Zig inside Linux once Zig reaches 1.0, and in theory there will be much less resistance from kernel devs due to the simplicity of learning Zig. Of course Zig brings less than Rust from a memory safety POV.


There are a lot of popular things that are horrible.


I'm tired of the slagging on Electron without providing something better.

Cross-platform development is known to suck. Someone from the Linux world could have created a cross-platform GUI toolkit system that worked on Windows, Linux, OS X, Android, etc. and "didn't suck".

People have tried. Linux folks bitched about them all. So GitHub created something. And it appears to suck least.

Want to help fix the "problem"? Here's how you break the logjam: create a modern text layout/shaping/rendering toolkit based around vectors/paths that works natively on modern graphics cards (aka: use Vulkan). Text and vector rendering are the bottlenecks that cause everybody to use suboptimal solutions like "use the Chromium renderer" (Skia underneath) because it's such a bottomless swamp of complexity.


> I'm tired of the slagging on Electron without providing something better.

As a user I think it is fine for me to call out Electron crap when I see it as crap. Companies/developers using Electron claim that their product wouldn't even exist were it not for Electron.

I do not have to praise them for saving the world from going to the dark ages by making another sucky, half-assed Electron app. Because they are doing it for their benefit and not for my welfare.


> Cross-platform development is known to suck.

I'm getting tired of people saying this when it just isn't very true. There are multiple GUI toolkits that are cross-platform, WYSIWYG cross-platform GUI designers like Lazarus, Java, and just plain old writing GUI code for multiple targets. Christ, people, we used to ship computer games on 5+ different platforms with wildly different graphics and sound hardware and different instruction sets, in an era where you often didn't get the luxury of writing things in a compiled language if you wanted them to perform!

Also, there's an argument to be made that most programs have no need to be cross platform in the first place.

I'm not saying cross-platform is trivial, but it isn't some impossibly herculean task either.


> There are multiple GUI toolkits that are cross platform, WYSIWYG cross-platform GUI designers like Lazarus, Java, and just plain old writing GUI code for multiple targets.

And they all kinda look and work like crap?

Take a look at Kicad (no offense to the Kicad folks--it's just something cross-platform that I'm intimately familiar with and use every day), for example, and tell me anybody would hold it up as an exemplar of cross-platform GUI development. Tk is cross-platform and cross-language, and all everybody does is complain about how it looks like crap. Blender thought GUI toolkits were so useless that they grew their own. No Digital Audio Workstation uses any mainstream GUI toolkit. I can go on and on and on.

As for computer games, they are a completely different beast. Computer games take over the whole screen, prescribe your input interface, and generally limit your ability to interact with text or graphical elements. GUI is a whole lot easier if you don't have to deal with text.


That is a consequence of not having proper UI/UX people on the team.


And? What's the best way to get proper UI/UX people?

Write in some obscure language and GUI toolkit and pray that 1 of the 5 in the world will join your project? Or change the language/toolkit to the one where the greatest concentration of UI/UX people already exist?

Programming, especially open source, is a social as well as a technical problem.


To pay them would be a good starting point.


While Google thanks those developers who help push the ChromeOS agenda.


... but Chrome is just using FreeType and HarfBuzz for font rendering, exactly like Qt. It's not a differentiating factor.


We should thank those who push software further. PHP, MySQL, JS: they allow other projects to improve.


Oh yeah, I love all that ad tracking by websites which I do not even visit. They are definitely pushing further.


It is absolutely impossible to know if in a parallel world where they hadn't existed, everything wouldn't be unambiguously better for everyone


Indeed, the ChromeOS takeover of the Web appreciates the wide deployment of Electron-packaged apps as yet another means to deploy Chrome on users' machines.


Was there even a mention of JS or Electron in the article?


"...atrocities like writing desktop text editors bundled on top of an entire web browser"


Launching entire processes to do small operations is an antipattern. Whether a program is 1k or 10k can’t make that much difference if the minimum process spawn time is counted in hundreds of milliseconds? Yes, I know, it’s about the philosophy of unnecessary waste and not actually about the first kb, a philosophy I can certainly support.

But still, launching a 1MB program and then launching threads from it takes a fraction of the time per thread. The 1MB process start time is quickly amortized.

I know, as he mentions, there are situations where small processes are launched hundreds of times, but these days I think of it as a disappearing antipattern. Consider how in the past it was common for compilers to use one process to compile one source file and exit. Or how a web server would serve one request and then exit. There is a reason that went out of fashion.

So the only time I'd reach for actually making a tiny executable out of performance concerns would be if I couldn't use a fatter binary and I knew the tool (for whatever reason) had to be invoked a lot, and it wasn't under my control.


> if the minimum process spawn time is counted in hundreds of milliseconds

It’s not. Here’s an extremely rudimentary bash/zsh measurement, spawning a thousand processes:

  time (i=0; while [ $i -lt 1000 ]; do i=$(($i + 1)); /usr/bin/true; done)
This is taking my Linux-running laptop 450–530ms under zsh and 300–320ms under bash, suggesting a process spawn time of under half a millisecond, not hundreds.

Another variant:

  time (i=0; while /usr/bin/test $i -lt 1000; do i=$(($i + 1)); done)
Around 540ms in zsh and 440ms in bash.

I know that Windows has historically had very slow process spawn by comparison (no idea of the current situation, I just remember it being a real drag on some process-heavy makefiles I was using at work in WSL1 days), but my recollection says that even it wasn’t into hundreds of milliseconds.
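For comparison, here's a rough Go version of the same measurement (a sketch; it assumes /usr/bin/true exists, and the numbers will of course vary by machine):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const n = 1000
        start := time.Now()
        for i := 0; i < n; i++ {
            // fork+exec a trivial external program and wait for it to exit
            if err := exec.Command("/usr/bin/true").Run(); err != nil {
                panic(err)
            }
        }
        elapsed := time.Since(start)
        fmt.Printf("total %v, per process %v\n", elapsed, elapsed/n)
    }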


1000 processes in half a second is very slow when you take into consideration how much work a modern CPU can do in a second. You could have run through hundreds of millions of loop iterations doing some non-trivial task in the same amount of time, even on very slow machines.

I'm not disagreeing that process spawn times aren't extremely slow; I would have guessed a few milliseconds depending on the program, but even a millisecond is a long time for a computer.


The bare idea of running a separate process as the condition of a loop is just disgusting though. I really don't get why we are stuck with bash.


It’s not disgusting, it’s a superb feature of shell scripting and a manifestation of feature composition. That you can write things like `while git diff --exit-code` is delightful.


Yes, it is good for that occasion. It is terrible for things like `test`, `true`, and the like as separate processes.


Slightly OT: It seems to me the industry trend of runtime-like bundling happened largely because dynamic linking and platform APIs didn't provide developers with adequate tools to build apps, for various sub-reasons. Here I include Go, Python and Electron, so I'm using a broad definition that covers everything from syscalls to the HTTP stack to GUIs, on desktop, mobile and servers alike.

In either case, we are now in a world where there is less and less overlap between native development stacks of different vendors. It sucks, but the tactic of shaming developers into learning and building for 5 native stacks isn't gonna fix the problem.

If many apps share the same runtime, can't we deduplicate binaries and blobs at the file system level? Use copy-on-write more cleverly in RAM? Anyway, I wish the solution space were geared more in the direction actual developers are already going: changes that can be scaled worldwide without evangelizing some new paradigm.


While I get where you're going with this, if you use small binaries that are built for efficiency then your process start time isn't 100s of ms; it's at least an order of magnitude faster.

Using a single process also guarantees a higher degree of isolation and resistance to memory leaks which can be important on certain systems.


Processes are the natural unit for user composability and control. It's more or less necessary for any multiple language scenario. That really is worth a lot. And it makes lowering the costs for that case quite worthwhile.



