> I’m not on Twitter anymore - my goal is to instead channel that micro-blogging energy into regular blogging energy on my personal website as well as posting Zig project news here on ziglang.org.
Cool, I like this!
I realized last year that I was investing too much time into sharing things on Twitter that I'd subsequently forget about or be unable to find.
Instead, I created a separate section on my personal blog for "notes" where the idea is to house content that I'd otherwise post to Twitter. It's been working well, and I like owning my own content rather than contributing it to another platform.
I'm especially glad for this change after seeing what's happened in the past few months with Twitter and Reddit. It's been unfortunate to see those platforms become so much more possessive of the content that their users generated. If you publish to your own platform, you're safe from that. At least until LLMs bury your platform in noise.
I'm also trying to do my part in fixing my own and many others' overreliance on reddit comments for everyday problem solving and advice. I decided that every time I would write a decently sized explanation of something on Discord/Telegram, I would instead write a post and link it.
To make it as convenient as possible (any amount of friction would eat at my motivation) I made it using Obsidian Digital Garden [1]. It's a bit of a hassle to set up and the UI is annoyingly bad, but afterwards it's pretty convenient (it boils down to click-to-publish).
> At least until LLMs bury your platform in noise.
Curious, but how exactly would they do that? Are you talking about scraping your website and adding garbage filler content from LLMs and republishing to divert traffic?
It’s gotten really bad over the past few months, but even before GPT was a thing, scummy operators just used Mechanical Turk or Fiverr.
I don’t understand why Google is doing so poorly at filtering it out. For a concrete example: searching for terms relating to common, semi-trivial problems when programming mainstream platforms like the web, Java, or C# will now consistently put low-quality, non-authoritative sites like GeeksForGeeks or even W3Schools first, with authoritative sources like MDN, the W3C/WHATWG, and even StackOverflow sometimes ending up below the fold.
The site: filter is your friend! Google should be smart enough that you don't need to use it but it is what it is.
You can even create your own custom Google search that only returns results for domains that you list. So you could put Stack Overflow, Reddit, and a bunch of reference sites (like MDN) in it and have super high quality search results.
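For example (hypothetical search terms), restricting results to MDN looks like:

```
fetch api cors site:developer.mozilla.org
```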
I already do that when I can - but I'm more concerned about the legions of beginner-to-intermediate-level "coders" out there who will be learning outdated (if not woefully insecure) programming habits from those low-quality sources, which comes back to bite us all because, like it or not, companies hire people on the basis of how cheap they are, not how competent and socially responsible they are.
An engineer being overly optimistic with a time estimate? Say it ain't so!
"A delayed game is eventually good, but a rushed game is forever bad." - Shigeru Miyamoto
The same goes for features. I'd rather Zig have a delayed/good async next year and forever after, instead of a rushed/bad async right this moment and forever after.
I really appreciate the honesty. It is not the end of the world to delay a highly anticipated feature, especially when the delay was a consequence of prioritizing the long-term stability of the codebase over the promised delivery date.
There are costs to not shipping a feature you meant to ship. So we first get a bit of sunk cost fallacy, trying to stretch to meet the goal; then, if we pull back, we have to make sure the feature is either completely toggled off or reverted. And then you have to fix the docs, which is never a fast process. The grunt tasks always take more wall-clock time than the steps strictly require.
I have yet to try Zig, but I approve of the idea of taking extra time and reducing scope to make sure that what you do ship is solid, especially in something like a programming language that may end up being a foundational piece of many other projects.
I hope Andrew does a stream of preparing the 0.11.0 migration documentation like he did for 0.10.0. It gives good insight into the new features in a conversational manner.
Waiting for IO is never cool. Also, Zig's version of async is pretty low level and still obeys the no-hidden-control-flow mantra (at least as it was implemented until 0.9.0).
It's similar to manual control over the current stack: you pause execution, dump your current stack onto the heap, and restore it later. Control over concurrency, similar to out-of-order execution. I'm describing it conceptually, but I assume the implementation can be/was smarter/faster.
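Roughly, in the pre-0.11 syntax (a minimal sketch from memory, so treat it as illustrative):

```zig
const std = @import("std");

fn ticker() void {
    std.debug.print("before suspend\n", .{});
    suspend {} // pause here; the function's state lives on in its frame
    std.debug.print("after resume\n", .{});
}

pub fn main() void {
    var frame = async ticker(); // runs until the first suspend point
    resume frame;               // restores the paused state and continues
}
```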
I don't think closures have anything to do with it.
There was also some debate about how cancelation of an awaitable function built with those primitives would work. I don't think there was a great answer for that yet.
There aren’t implicitly called destructors because the language’s mantra is “no hidden control flow”. RAII and its semantics are almost entirely hidden control flow that you just have to “know”.
Lots of people hate when they need to “just know” things to fully parse the code they’re reading.
You’re free to dislike those decisions, of course. Personally, I like the target of no hidden flow.
The big problem with defer/errdefer is that I have no way to mark a function as "This function always needs a defer after it and the compiler needs to yell at me if it doesn't exist."
It's also sometimes really hard to scope your defer/errdefer correctly. You may have to twist your code inside out, because defer/errdefer ends at a block scope while your variable's lifetime may not (it can escape via `break :blk varname;`).
This all bites so hard for things like reference counting. The bugs ... the bugs ... <mumbles huddled up in corner>
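A minimal sketch of that scoping trap, with hypothetical names (illustrative only, not canonical):

```zig
const std = @import("std");

// The value outlives the block it was created in, but defer/errdefer cannot.
fn makeBuffer(allocator: std.mem.Allocator, fail_later: bool) ![]u8 {
    const buf = blk: {
        const tmp = try allocator.alloc(u8, 16);
        // `defer allocator.free(tmp);` here would free tmp when this BLOCK
        // ends, even though tmp escapes via `break :blk`. An errdefer here
        // stops covering tmp as soon as the block exits successfully.
        break :blk tmp;
    };
    errdefer allocator.free(buf); // so the cleanup has to be restated out here
    if (fail_later) return error.Oops; // later failure; the errdefer above frees buf
    return buf;
}

test "cleanup still runs on a later failure" {
    const gpa = std.testing.allocator;
    try std.testing.expectError(error.Oops, makeBuffer(gpa, true));
    const buf = try makeBuffer(gpa, false);
    defer gpa.free(buf);
}
```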
That wasn't a request for help (I have debugged my reference counts already), but I thank you nonetheless.
You can do as you say, but that winds up being a lot of non-enforced boilerplate. And you can get subtle errors if you miss the defer at the wrong scope (ask me how I know this ... actually, better yet, please don't as it will give me flashbacks :) ). And neither the language nor compiler can help you. This contrasts with, say, Rust where reference counts or locks can be enforced by the compiler and simply never go wrong.
defer/errdefer is obviously vastly better than C. My codebase was painful in C. Zig made it tractable, but it was hardly pleasant.
Unfortunately, I really don't have a good suggestion as to what Zig should do instead. defer/errdefer is a minimum, but it's not clear what a single, better step further actually would be. Most solutions in other languages wind up with RAII and that invokes a nightmare of cascading design decisions through a programming language that Zig very much does not want to follow.
It will be interesting to see where async/await finally lands. I think that will have some component of a "slightly better" solution.
Yup. The lack of destructors means that at the time there wasn't an agreed upon way to cancel an awaitable function. Async/await in zig was cool, but there's a reason why it's described as an experimental feature in the article.
Destructors don't seem to be the best place to implement cancellation for async functions: Rust async is going through a phase where the community is realizing it would be nice to have async destructors or non-cancellable async functions to avoid introducing unnecessary overhead like concurrent reclamation, reference counting, and dynamically allocated memory for things that would otherwise be done statically using structured concurrency.
Go through Zig’s async. It’s a bit like coroutines, but low level. You are required to keep track of memory allocations and pointers to function “frames”. For someone coming from a high-level-language POV, frames were a new concept. But Zig does it superbly.
Zig just needs some runtime event loop, like Tokio from Rust or asyncio from Python, to get up and running with its fantastic async model.
Zig's async implementation is the best of any language's, because you can run async functions without an event loop, in which case they simply run synchronously. This completely eliminates the function coloring problem.
Zig async doesn't change semantics of how things are called. If something is called sequentially, it will execute sequentially with respect to itself, with or without async.
"Async functions can be called the same as normal functions." Only when you have an event loop, the order async functions are running might be different because of the scheduling, but everything else is the same.
No, async functions still return an async Frame (the semantic equivalent of a Rust Future) which evaluates to the result, rather than the result itself. Zig's "colorless async" description comes from the implicit infectiousness by default rather than colors not "being there":
A function which contains a suspend point becomes implicitly async. An await is implicitly a suspend until the Frame completes, so it also makes the caller's function async. Calling an async function without the `async` keyword is implicitly `await (async f(args))`, so same deal.
You end up not needing to know if `f()` is async or not when you call it ("effectively colorless"). If it is, it just makes you async and bubbles up until the sync boundary (an `async f()` call, `main()`, or an `export fn` C API).
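A hedged sketch of that bubbling-up in the pre-0.11 syntax (the frame bookkeeping here is illustrative):

```zig
const std = @import("std");

var pending: anyframe = undefined;

fn readValue() u32 {
    suspend { pending = @frame(); } // the suspend point makes this implicitly async
    return 42;
}

fn caller() u32 {
    // Looks like a plain call, but desugars to roughly
    // `await (async readValue())`, so caller becomes implicitly async too.
    return readValue();
}

pub fn main() void {
    var f = async caller();      // the sync boundary starts the frame explicitly
    resume pending;              // an "event loop" of one: drive the suspended frame
    const v = nosuspend await f; // the frame is complete, so this cannot suspend
    std.debug.print("{}\n", .{v});
}
```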
Zig's stdlib could handle the main() case and spin up an event loop if you requested via `io_mode = .evented`. Some stdlib stuff like net/fs would then use the event loop while others like os/Thread would stay the same if you still wanted to do sync stuff.
This was the common case, but async is a lang construct, not a stdlib construct, so it works for all targets. For example, you could use it in wasm, the kernel, etc. You would just need to write your own starting/scheduling of the frames, known generically as an "event loop".
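The opt-in was just a declaration in the root source file that std.start looked for; a minimal sketch (pre-0.11):

```zig
// main.zig, pre-0.11
pub const io_mode = .evented; // std.start spins up the std event loop

pub fn main() !void {
    // std.net / std.fs calls in here would suspend into the event loop
    // instead of blocking the thread
}
```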
No, there still are sync and async versions of functions. Zig just implicitly chooses which version to run and implicitly inserts await points. There's still a runtime and event loop if you use async. It's async like in any other language, but it relies on compiler magic instead of locally explicit syntax.
One can check this via static analysis, as the complete code tree can be inferred with relative ease (only package paths and usingnamespace depend on comptime), plus third-party code will be able to query compiler info, so I do not expect this to be a big problem once static analysis symbols can be enforced as unique (see https://github.com/ziglang/zig/issues/14656#issuecomment-143... and follow-up for the use cases of static analysis tooling).
Also, since there is currently no problem: YAGNI and solve it once a use case comes up.
Zig's async, like Lua's, lets people write libraries that are generic over async-ness by default. "dump my registers and jump to another stack" is also a legitimate move in assembly (though that's not how this implementation actually works).
I understand the technical reasons (caused by cultural reasons) why Python ended up with colored async but man oh man was it a bad choice for a high-overhead language to do that…
What alternative approach would you have preferred Python take? While I’m not a fan of the coloured-function approach either, I can’t think of a better alternative that checks off the most boxes (Java’s Loom looks interesting but in a way it seems too good to be true to me, while low-cost approaches like Zig’s would introduce too much complexity for novice users and people wanting something that just works).
Having some form of coroutines or green threads seems like a good idea to me for something low-level.
Otherwise, countless projects that need something like it will re-implement their own incompatible version with longjmp or similar means.
Is there a reason async/await is being implemented specifically, rather than some more generally-useful primitive (like delimited continuations, algebraic effect handling, functor/applicative/monad, etc.[0])?
When it comes to e.g. memory management, Zig tries to be unopinionated and allow different implementations to be implemented as desired; so it seems odd to bake-in something like async/await (even if the execution strategy of those computations is up to the user).
I've seen this happen in many high-level languages (JS, Python, PHP, etc.), which I mostly attribute to (a) ignorance of those generalisations, and (b) a band-wagon effect. The unfortunate result in those languages is a bloated mess of try/catch, async/await, for/yield, apply/return, etc. and all of their O(n!) possible interactions; which could have instead been implemented as libraries on top of a single primitive (e.g. shift/reset, or whatever)
[0]: AFAIK these are all equivalently expressive, and given one it's easy enough to write the others as libraries.
PS: I recall asking this question when PHP added generators; I can't seem to find a bug report or mailing list post though...
Zig's async manages the coroutines intrusively: It generates the state machine type, you provide the memory for where an instance of one runs, and you manage resuming it until completion. Similar to Rust's Futures, it's pretty unopinionated in how you manage them so it works everywhere (i.e. freestanding). Could you clarify (or provide more reading on) how the other systems like delimited continuations, algebraic effects, and monads differ from Zig async + how they could be adapted in a similarly unopinionated/low-level way?
Incidentally, monad transformers are an attempt to work around a deficiency of monads: that they don't compose. Algebraic effects have become popular precisely because they do compose.
> Could you clarify (or provide more reading on) how the other systems like delimited continuations, algebraic effects, and monads differ from Zig async + how they could be adapted in a similarly unopinionated/low-level way?
The key requirement for all these is an ability/API to defer and resume execution (indeed, delimited continuations are sometimes described as "resumable exceptions"). In higher-level languages we'd just assume the presence of first-class functions/closures, and use those to describe these features. I'm less familiar with how that looks in a very low-level language like Zig, however these "async frames" appear (to my naïve eyes) to be analogous; hence why I'm interested whether one of those more-general primitives could be provided instead (the answer might be no!).
As for clarification on those features, here's a quick attempt. Firstly, note that all of these approaches are basically APIs to construct, consume, and combine (deferred) computations: they are unopinionated on how those computations get run (e.g. the user could supply a "main loop", or whatever).
Delimited continuations are like exceptions, except the stack (AKA continuation) is passed to the handler, which may choose to resume it. We can implement coroutines/async/yield/etc. by having handlers which put their continuation in a queue and pop off some other one to resume instead; we can get data parallelism by resuming a continuation many times; we can get backtracking by remembering old continuations and trying them again; we can get parsers, probabilistic programming, nondeterminism, etc. https://en.wikipedia.org/wiki/Delimited_continuation
Algebraic effects are similar, but defunctionalised: i.e. they represent control flow with datatypes, and are "interpreted" by a user-specified function.
I think you can implement delimited continuations in terms of async frames and memcpy (like in Lua you can implement call/cc in terms of the built-in coroutine library plus coroutine.clone). The language doesn't guarantee that this works though if you try to resume an async frame somewhere other than its original address.
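A toy version of the "park the continuation in a queue, pop another one off to resume" idea from upthread, written with pre-0.11 frames (hypothetical helper names, illustrative only):

```zig
const std = @import("std");

var queue: [2]anyframe = undefined;
var count: usize = 0;

// The "handler": park the current continuation instead of finishing.
fn yield() void {
    suspend {
        queue[count] = @frame();
        count += 1;
    }
}

fn worker(name: []const u8) void {
    std.debug.print("{s}: start\n", .{name});
    yield();
    std.debug.print("{s}: end\n", .{name});
}

pub fn main() void {
    var a = async worker("a");
    var b = async worker("b");
    // A one-shot "event loop": pop parked continuations and resume them.
    while (count > 0) {
        count -= 1;
        resume queue[count];
    }
    nosuspend await a;
    nosuspend await b;
}
```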