> Garbage collection. In addition to expanding the capabilities of raw linear memories, Wasm also adds support for a new (and separate) form of storage that is automatically managed by the Wasm runtime via a garbage collector. Staying true to the spirit of Wasm as a low-level language, Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it.
It's refreshing to see WASM embracing GC in addition to its non-GC support. This approach is similar to the D language, where both GC and non-GC code are supported, with fast compilation and execution.
By the way, you can now generate WASM via the D compiler LDC [1].
No, I don't think it will. Pointers to managed objects are opaque, and aren't actually backed by the wasm memory buffer. The managed heap is offloaded.
Shrinking the memory object shouldn't require any special support from GC, just an appropriate API hook. It would, as always, be up to the application code running inside the module to ensure that if a shrink is done, the program doesn't refer to memory addresses past the new endpoint.
If this hasn't been implemented yet, it's not because it's been waiting on GC, but more that it's not been prioritized.
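For context, here's a minimal sketch of the JS-side API as it exists today (page counts are illustrative): growing is supported, shrinking is not, so the shrink mentioned at the end is purely hypothetical.

```ts
// Sketch only: WebAssembly.Memory can grow, but the spec has no shrink today.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10 });

console.log(memory.buffer.byteLength); // 65536 (1 page = 64 KiB)
memory.grow(2);                        // adds 2 pages, detaches the old buffer
console.log(memory.buffer.byteLength); // 196608

// A hypothetical hook like memory.shrink(1) would only be safe if the module
// guarantees it no longer touches addresses past the new end of memory.
```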
I'm not familiar with WASM. Can someone explain why this is a good thing? How does this work with languages that do not have a garbage collector, like Rust?
The answer was more or less known beforehand: it was to enable GCed languages like Python or Ruby to create WASM applications. Meanwhile, non-GCed languages like Rust, C and C++ were supposed to continue to work on WASM as before, without breaking compatibility. This is what they seem to have finally achieved, but I wanted to make sure of it. So, here are the relevant points from the WASM GC proposal [1]:
* Motivation
- Efficient support for high-level languages
- faster execution
- smaller modules
- the vast majority of modern languages need it
* Approach
- Pay as you go; in particular, no effect on code not using GC, no runtime type information unless requested
- Don't introduce dependencies on GC for other features (e.g., using resources through tables)
Note that the high-level language needs a sufficient abstraction in its own runtime to allow substituting the Wasm GC for the runtime's own GC. Work has been done for Java and Kotlin, but Python, C#, Ruby, and Go can't yet use the Wasm GC.
Agreed. That's what I guessed too. WASM GC is probably a low level component which high level languages can wrap to get their native/idiomatic GC behavior.
> Work has been done for Java and Kotlin
I'm unaware of this development. What did they do? Did they create an interface to the GC specification in the draft proposal?
Non-GCed languages will continue to manage memory themselves. Previously, GCed languages that wanted to run on WASM had to have an implementation of their runtime, including the GC, compiled to WASM. The idea or hope here is that those languages can use the built-in GC instead and slim down the amount of WASM that needs to be delivered to run the application to only a minimal runtime. The current scenario is as if a web app or Node app built with JavaScript had to ship a significant portion of V8 with it to function.
1. Different languages have totally different allocation requirements, and only the compiler knows what type of allocator works best (e.g. a generational bump allocator for functional languages, a classic malloc-style allocator for C-style languages; see the sketch after this comment).
2. This perhaps makes wasm less suitable for usage on embedded targets.
The best argument I can make for this is that they're trying to emulate the way that libc is usually available and provides a default malloc() impl, but honestly that feels quite weak.
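To make point 1 concrete, here's a rough TypeScript sketch of what a bump allocator looks like (names and sizes are illustrative, not taken from any actual runtime); a malloc-style allocator has to do far more bookkeeping per allocation and per free.

```ts
// Illustrative only: the kind of bump allocator a functional-language runtime
// might want, written against a plain ArrayBuffer the way a Wasm module
// manages its linear memory.
class BumpAllocator {
  private offset = 0;
  constructor(private heap: ArrayBuffer) {}

  // Allocation is just a pointer bump: extremely cheap, but there is no
  // per-object free; reclamation happens by resetting the space (or copying
  // survivors elsewhere, as a generational collector would).
  alloc(size: number): number {
    const aligned = (this.offset + 7) & ~7; // 8-byte alignment
    if (aligned + size > this.heap.byteLength) {
      throw new Error("out of memory: collect or grow first");
    }
    this.offset = aligned + size;
    return aligned; // the "pointer" is just a byte offset into the heap
  }

  reset(): void {
    this.offset = 0; // wholesale reclamation
  }
}

// Usage: a 64 KiB heap, the size of one Wasm memory page.
const allocator = new BumpAllocator(new ArrayBuffer(64 * 1024));
const ptr = allocator.alloc(16);
```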
I don't see this as a problem in the JVM, where independently of what programming language you are using, you will use the GC configured on the JVM at launch.
What do you mean by the Java direction? It's a virtual machine with GC support, so I guess in that regard it's similar to the JVM, CLR, BEAM, et al. If anything, those VMs show performance improvement and better GC over time and a strong track record of giving legacy software longevity. The place where things seem to fall apart over the long term is when you get to the GUI, which is arguably a problem with all software.
Java approach: create the JVM to support one language, so it has rich high-level concepts that are unfortunately skewed toward certain assumptions about language design, and it can be reused only for other languages that are similar enough.
WASM approach: start very low-level so C is definitely supported. Thus everything is supported, although every language has to roll its own high-level constructs. But over time more patterns can be standardised so languages can be interoperable within a polyglot WASM app.
If you have an Android device, emacs is now officially supported on it (https://f-droid.org/packages/org.gnu.emacs). Along with https://github.com/Julow/Unexpected-Keyboard, it turns out to be a pretty usable setup (assuming you are the type that is okay working with emacs in general). I am now in search of a simple way to sync notes between my phone and computer (without using Big Tech solutions).
This is exciting! Especially glad that Litestream is still maintained. Is there a use-case for Litestream for more than backup? I am a fan of offline-first but it would be cool to have a way to synchronize on-device SQLite instances to a single central instance.
Can this be done with only Litestream, or is LiteVFS still in development? I looked into this last year but was put off by LiteFS's stated write performance penalty due to FUSE [1]; it's still marked as WIP [2] and hasn't seen updates for over a year.
From my experience having attempted to migrate away from VSCodium (in an attempt to de-VSCode) and build atop Theia as a platform, there are a few things to consider:
- The build system is finicky and can easily take hours to figure/fix.
- The error reporting is severely lacking. You can be lost as to why something internal isn't working and go down a rabbit trail with your favorite AI copilot, etc.
- Documentation is lacking. You have to dive into the platform code to actually figure things out.
- This can be seen positively, but quite a few new things are being introduced regularly (especially AI-related features), which, for a platform, isn't always ideal.
One thing I find amusing as a Malayali (aka Keralite) myself is how we tend to get excited seeing other Malayalis. One of the first questions usually is "Where are you from in Kerala?" (or in Romanized Malayalam: naatil evideya?)
The camaraderie is amusing! When they banned beef in Bangalore, the Malayalis there collectively continued to call beef something else, and that system worked wonderfully!
As someone who has recently tried to refactor our app atop VSCode (treating it like a platform), we got burned by UI design decisions that are not straightforward to overcome, let alone maintain. The closed-source MS marketplace did not help with our OSS goals either.
However, I found Theia (https://theia-ide.org) on HN (like a bunch of other cool things; this is one way I justify the time I spend/sink on this site) and find it a much better fit for our OSS goals (foundation-owned, open-source marketplace), with full moddability while being compatible with the VSCode extensions API (in theory). I recommend you look into it for your app.
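For anyone wondering what "compatible with the VSCode extensions API" means in practice: a plain extension entry point like the sketch below (standard vscode API, made-up command id) is what Theia aims to load unchanged; whether a given extension actually works end to end is another matter.

```ts
// Standard VS Code-style extension entry point; Theia's stated goal is to run
// extensions written against this same 'vscode' API. The command id is made up.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  const disposable = vscode.commands.registerCommand("demo.hello", () => {
    vscode.window.showInformationMessage("Hello from a portable extension!");
  });
  context.subscriptions.push(disposable);
}

export function deactivate() {}
```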
You can publish your extensions on OpenVSX fyi. A lot of projects have started doing that now. Not all, but a good amount. Glad you found Theia though.
Ah, interesting! I'm building https://double.bot (an AI assistant VSCode extension) and someone asked about VSCodium, but I didn't realize there's an open marketplace specifically for that.
It's made by the Eclipse Foundation, which was created to host projects around Eclipse the IDE. Kind of like the relationship between the Apache brand, the Apache Foundation, and Apache the HTTP server.
Thanks for sharing! If Theia is built on top of monaco, I wonder if a form of one-click switch might work. The monaco editor is theoretically part of the vscode repo, while the "workbench" with settings/configs lives one layer up.
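Monaco is indeed usable on its own, which is what makes that layering plausible. A minimal standalone embedding (assuming the monaco-editor npm package, a #container element, and the usual bundler/worker setup) looks roughly like this:

```ts
// Minimal standalone Monaco embedding: just the editor, no VS Code workbench.
import * as monaco from "monaco-editor";

const container = document.getElementById("container");
if (container) {
  monaco.editor.create(container, {
    value: 'console.log("hello from monaco");',
    language: "typescript",
  });
}
```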
I think there may be more than the five qualities that make up persistence. But those five make a lot of sense, and I like how he shows their interplay. Good read!
> The Apple Security Bounty will reward research findings in the entire Private Cloud Compute software stack — with especially significant payouts for any issues that undermine our privacy claims.
Wow!