Zig 0.11 (ziglang.org)
253 points by tauoverpi on Aug 4, 2023 | 185 comments



> Backed by the Zig Software Foundation, the project is financially sustainable. These core team members are paid for their time: (lists five people)

That's quite impressive. For comparison, Python had 2 full time paid devs in 2019 (not sure about now).


I suspect some of this comes down to how larger donations are made. For Python, they have often had people working at companies donating time to its development on their employers' books.

GvR, for example, works for Microsoft and has worked at Dropbox and other places, but spends much of his time on Python. So he isn't listed as an employee of the foundation, but at times he is effectively full time.

If a company donates employee time, it has more influence on the project than if it donated cash to pay for development.

The fact that Zig had the cash donation to do it this way is brilliant, and probably a better model.


We're also pretty efficient, more than 90% of the donated money is used to pay developers (the rest goes to administrative costs, CI infrastructure, etc).


More important than the number is that the foundation actually pays the core developers. Python suffers from having almost exclusively volunteer work, and those volunteers are often not interested in solving the problems of the foundation or community (e.g. packaging).

The most the PSF could do is “bolt on” a developer to solve packaging in yet another way that isn't embraced or supported.


Their biggest release in terms of issues closed: https://github.com/ziglang/zig/milestones?state=closed 1012 in less than 8 months. What an amazing feat.


As a contributor to 0.11 it was very impressive how quickly my (small) PR was triaged, reviewed and merged.

The Zig team are doing great work and they care about the contributor experience.


Async/await didn't make it into this release [0], but hopefully it will be in the next one!

[0]: https://ziglang.org/news/0.11.0-postponed-again/


Yes. Ideally, the release notes should have mentioned this in the roadmap section at the end :)


What are the use cases for Zig? The website says general purpose and toolchain, but for those who have used it: what does it excel at?


For WebAssembly, Zig really excels.

It's very memory efficient, everything compiles out of the box to WebAssembly (freestanding or wasi), the resulting code is compact and fast, and it can take advantage of the latest WebAssembly extensions (threads, SIMD, ...) without any code changes. If you are using a WebAssembly-powered cloud service and not using Zig to write your functions, you are wasting money. Seriously.
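
For illustration, here's roughly what that looks like (a minimal sketch; the exact compiler flags have shifted between Zig versions, so treat them as approximate): a plain exported function becomes a tiny .wasm module with no runtime attached.

  // add.zig -- compile with something like:
  //   zig build-lib add.zig -target wasm32-freestanding -dynamic -rdynamic -O ReleaseSmall
  export fn add(a: i32, b: i32) i32 {
      return a + b;
  }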

Unsurprisingly, this is the most popular language to write games running on WebAssembly: https://wasm4.org/blog/jam-2-results/

Beyond the language, Zig is also a toolchain to compile C code to WebAssembly. When targeting WASI, it will seamlessly optimize the C library for size, performance or runtime features. I used it to port OpenSSL, BoringSSL and ffmpeg to WebAssembly. Works well.

Also, Zig can generate WebAssembly libraries that can then be included in other languages that support WebAssembly. Most of my Rust crates for WebAssembly are now actually written in Zig.

It's also supported by Extism, so can be used to easily write safe plugins for applications written in other languages.


If you don't mind, since you have experience targeting WASM with both Rust & Zig, what advantages does Zig have over Rust in this particular use case?

Are the memory safety guarantees that Rust offers over Zig not as important or critical when targeting WASM?

I've been interested in checking out Zig for a while now.


Your link is really interesting, but it shows that Zig was the most popular language for writing games for a WebAssembly-based fantasy console with constrained resources, not that it's the most popular language for WebAssembly-based games overall.


My understanding is that Zig has all of the power and modernity of Rust, without the strictness and borrow checker. Unlike Rust, it also has powerful compile-time evaluation and custom allocators, and probably more improvements I'm not familiar with (in Rust you can effectively emulate custom allocators, but you have to rewrite every allocating structure to use them; or you can use nightly, but most third-party and even some standard-library types don't support them).

I also heard someone say "Zig is to C what Rust is to C++". Which I interpret as: it's another maximum-performance modern language, but smaller than Rust; "smaller" meaning that it has less safety and also less abstraction (no encapsulation [1]), but also fewer requirements and less complexity.

Particularly with games, many devs want to build a working prototype really fast and then iterate fast. They don't want to deal with the borrow checker, especially if their code has a lot of complex lifetime rules (and the borrow checker is a real issue; it caused evanw to switch esbuild to Go [2]). In small scripts with niche uses, safety and architecture are a waste of effort; the script just has to be done and work (and the latter is only partly necessary, because the script may not even be fed enough inputs to cover edge cases). Plus, there are plenty of projects where custom allocation is especially important, and having every type support custom allocation is a big help vs. having to rewrite every type yourself or use `no_std` variants.

[1] https://github.com/ziglang/zig/issues/2974

[2] https://github.com/evanw/esbuild/issues/189#issuecomment-647...


>My understanding is that Zig has all of the power and modernity of Rust, without the strictness and borrow checker.

This is an oxymoron :) The strictness and borrow checker are part of the power and modernity of Rust.

But even apart from that, Rust has automatic value-based destructors (the destructor follows the value as it's moved across scopes and is only called in the final scope), whereas Zig only has scope-based destructors (defer) and you need to remember to write them and ensure they're called exactly once per value. Rust has properly composable Option/Result monads, whereas Zig has special-cased ! and ? which don't compose (no Option of Option or Result of Result) but do benefit from nice built-in syntax due to their special-cased-ness. Rust has typed errors whereas Zig only has integers, though again that allows Zig to have much simpler syntax for defining arbitrary error sets, which would require defining a combinatorial explosion of enums in Rust.
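
To make the defer point concrete, a minimal sketch (the file name and function are made up): cleanup is tied to the enclosing scope, and it's entirely on you to write it.

  const std = @import("std");

  fn countLines(allocator: std.mem.Allocator, path: []const u8) !usize {
      var file = try std.fs.cwd().openFile(path, .{});
      defer file.close(); // runs when this scope exits, success or error

      const contents = try file.readToEndAlloc(allocator, 1024 * 1024);
      defer allocator.free(contents); // nothing reminds you to add this line

      return std.mem.count(u8, contents, "\n");
  }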

Of course from Zig's point of view these are features, not deficiencies, which is completely understandable given what kind of language it wants to be. And the other advantages you listed, like comptime (Rust's const-eval is very constrained and has been WIP for ages) and custom allocator support from day 1 (the way Rust is bolting it on will make most existing code unusable with custom allocators, including parts of Rust's own standard library), are indeed good advantages. Zig also has very nice syntax unification - generic types are type constructor functions fn(type) -> type, modules are structs, etc.

I hope that one day we'll have a language that combines the best of Rust's strictness and Zig's comptime and syntax.


> Rust has properly composable Option/Result monads, whereas Zig has special-cased ! and ? which don't compose (no Option of Option or Result of Result)

?!!??u32 is a perfectly cromulent type in Zig.


How do you express the difference between None and Some(None) in Zig?


I've found that types like these normally come up in generic contexts, so the code I'm writing only usually deals with one layer of Option or Result until I get to the bit where I'm actually using the value and find out I have to write "try try foo();". That said, I think this sort of thing will do it:

  const std = @import("std");
  fn some(x: anytype) ?@TypeOf(x) {
    return x;
  }
  fn print_my_guy(maybe_maybe_x: ??u32) void {
    if (maybe_maybe_x) |maybe_x| {
      if (maybe_x) |x| {
        std.debug.print("that's Some(Some({d})) {any}\n", .{x, maybe_maybe_x});
      } else {
        std.debug.print("that's Some(None) {any}\n", .{maybe_maybe_x});
      }
    } else {
      std.debug.print("that's None {any}\n", .{maybe_maybe_x});
    }
  }
  pub fn main() void {
    const a: ??u32 = 5;
    const b: ??u32 = null;
    const c: ??u32 = some(@as(?u32, null));
    print_my_guy(a);
    print_my_guy(b);
    print_my_guy(c);
  }


What about

  const std = @import("std");
  const print = std.debug.print;

  pub fn main() void {
    const simple_none: ?u8 = null;
    const double_optional_none: ??u8 = null;
    const double_optional_some_none: ??u8 = simple_none;

    print("none equals some none: {}", .{double_optional_none == double_optional_some_none});
    // prints none equals some none: false
  }


> Rust's const-eval is very constrained and has been WIP for ages

Having strong backwards compatibility does that to a language; the alternative is arguably worse (see Python 2 vs 3).


Yes, and I don't want a backward-incompatible Rust 2.0 either, but the slowness of stabilizing ! (named for when it's going to be stabilized), specialization, TAIT, const-eval, Vec::drain_filter, *Map::raw_entry, ... is annoying. Also the lack of TAIT currently causes actual inefficiency when it comes to async traits, because currently async trait methods have to return allocated virtualized futures instead of concrete types. Same for Map::raw_entry, without which you have to either do two lookups (`.get()` + `.entry()`) or always create an owned key even if the entry already exists (`.entry(key.to_owned())`).


If you think that's bad, look at the C/C++ standardization bodies, where stuff is eternally blocked because of ABI compatibility.

---

The problem is that lots of implementation things are vying for inclusion. And many things people want aren't safe, or block/are blocked by possible future changes.

For example, default/named arguments were blocked by an ambiguity in parsing when it comes to type ascription, IIRC. And not having default arguments makes adding an allocator parameter quite a bit more cumbersome.

Plus Rust maintainers are seeing some common patterns and are trying to abstract over them - like keyword generics / the effect system. If they don't hit the right abstraction now, things will be much harder later. If they over-abstract, it's extremely hard to remove.

---

The slowness of stabilizing the never type (!) and specialization has to do with the issues they cause, mainly unsoundness and orphan-rule issues, IIRC; I haven't checked them in a while.

But otherwise yeah, the Unstable Book keeps growing and growing: https://doc.rust-lang.org/unstable-book/index.html


Also, none of the common knowledge around traditional data structures and algorithms works with Rust anyway. One needs to dance like a ballerina with their hands and feet tied.


“Linked lists are hard” is not “none of the common knowledge around traditional data-structures and algorithms work”.


I retort that almost all of the concurrent data structures are obnoxious to represent in Rust.

A lock-free ConcurrentHashMap, for example, is by no means a straightforward data structure in a non-GC language. Even if you somehow dodge Rust's pedantry, you still have to figure out who owns what, who pays for what and when they pay for it--and there are multiple valid choices!

Non-GC allocation/deallocation in concurrent data structures probably still qualifies as a solid CS problem.

(And, before you point me to your favorite crate for ConcurrentHashMap, please check its guarantees when one process needs to iterate across keys while another process is simultaneously inserting/deleting elements. You will be shocked at how many of them need to pull a lock--so much for lock-free.)


This is partially why such structures leave allocation to the caller (concurrent data structures are often intrusive), have single consumers (made lock-free using separate synchronization), or only offer lockless guarantees, not obstruction freedom.

I think the obnoxious part in Rust is doing intrusive and shared mutability parts of data structures. Having to go between NonNulls, Options, Pin, and Cell/UnsafeCell is not a pleasant experience.


You should always know who owns some data, even if you aren't using Rust!


Garbage collection means the VM owns it! And that’s great. How it should be.


Welcome to the land of memory bloat, where a whole chain of objects is kept alive by a stray reference due to a lack of clear ownership.


I never understood this take. You shouldn't be heap allocating each node in your linked list anyway. It's trivial to convert the pointer fields to indexes and have each node live in a `Vec`, unless you need it to be intrusive. You'll get better performance anyway because you're not doing a pointer deref and blowing up your TLB every time you traverse the list.
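
A rough sketch of that index-based approach, written in Zig to match the rest of the thread (the node type and values are invented):

  const std = @import("std");

  const Node = struct {
      value: i32,
      next: ?u32, // index into `nodes` instead of a pointer; null = end of list
  };

  pub fn main() !void {
      var gpa = std.heap.GeneralPurposeAllocator(.{}){};
      var nodes = std.ArrayList(Node).init(gpa.allocator());
      defer nodes.deinit();

      // Build the list 3 -> 2 -> 1 by appending and linking via indices.
      try nodes.append(.{ .value = 1, .next = null }); // index 0
      try nodes.append(.{ .value = 2, .next = 0 });    // index 1
      try nodes.append(.{ .value = 3, .next = 1 });    // index 2

      var cur: ?u32 = 2; // head of the list
      while (cur) |i| : (cur = nodes.items[i].next) {
          std.debug.print("{d}\n", .{nodes.items[i].value});
      }
  }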


Ever tried to implement generic trees in Rust?


Trees are easy. It's the desire to have backreferences - making it a graph - that kills you.


I think Rust and Zig are very different in principles and shouldn’t really be compared. It’s almost like comparing C and Ada.


As someone who used C as my main language, I've switched to Zig. It's the only language that tries to be a "better C" and not another C++. Comptime being like Nim's, where it's not an entirely separate language, is also a plus. I'd say it excels at general-purpose systems programming, especially if you need to work with memory in a detailed way (Rust makes this very annoying and hard).


What advantages does Zig have over C?


- comptime, so writing code that is evaluated at compile time, without introducing a separate meta-language like macros or templates.

- very solid build system (the build config file(s) are written in Zig, so you don't have to learn another language for the build system (looking at you, Makefile)) that has cross-compilation built in (with one compiler flag)

- language-level errors, i.e. errors as first-class citizens. Forces you to handle errors, but without much mental or syntactic overhead (you can re-throw them with `try myfunction()`), and also results in a unified interface (see the sketch after this list)

- no implicit conversions

- looks high-level (modern syntax that is easy(ish) to parse) but is as low-level (or lower) than C, with no abstractions that hide details you need to know about when programming (similar to C)

- C interop, so you can just add Zig source files to a C project and compile it all with the Zig toolchain. Zig can also parse C headers and source files and convert them, so you can include C headers and just start calling functions from there. For example, using SDL is as simple as pointing the toolchain to the SDL headers and .so file, and the SDL headers will be translated to Zig on the fly so you can start with SDL.SDL_CreateWindow right away.
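
A minimal sketch of the error-handling point from the list above (the error names and functions are invented): error sets are ordinary values, the compiler won't let you silently drop them, and `try` propagates them to the caller.

  const std = @import("std");

  const ParseError = error{ Empty, NotANumber };

  fn parsePort(s: []const u8) ParseError!u16 {
      if (s.len == 0) return error.Empty;
      return std.fmt.parseInt(u16, s, 10) catch return error.NotANumber;
  }

  pub fn main() !void {
      const port = try parsePort("8080"); // `try` re-throws any error to the caller
      std.debug.print("listening on {d}\n", .{port});
  }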


Just to name one: compile-time code execution. It eliminates the need for a separate macro language and gives Zig zero-cost generic types.
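
For a rough idea of what that looks like (a toy sketch, not code from the release notes): a generic type is just a function that runs at compile time and returns a type.

  const std = @import("std");

  // Evaluated at compile time; returns a brand new struct type for each T.
  fn Pair(comptime T: type) type {
      return struct {
          first: T,
          second: T,

          pub fn swapped(self: @This()) @This() {
              return .{ .first = self.second, .second = self.first };
          }
      };
  }

  pub fn main() void {
      const p = Pair(u32){ .first = 1, .second = 2 };
      const q = p.swapped();
      std.debug.print("{d} {d}\n", .{ q.first, q.second });
  }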

Not to mention memory leak detection, crazy fast compilation times, slices, and a built-in C compiler so your code can seamlessly integrate with an existing C code base (seriously, no wrapper needed).

Zig is really really awesome. The only thing holding it back from mass adoption right now is that it's not finished yet, and that shows in the minimal documentation. That said, if you're comfortable tinkering, it's completely capable of production uses (and is already used in production in various companies).


> crazy fast compilation times

This is just not true, and it's the #1 reason I am not using Zig. To give you some numbers, ZLS is a reasonably sized project (around 62k LOC of Zig) and on my very beefy machine it takes 14 seconds to build in debug mode and 78 seconds to build in release mode.

Because of the "single compilation unit" approach that Zig uses, you are paying that very same time regardless of where you modify your program, so basically clean rebuild time is equal to simple modification time.

As a comparison my >100k LOC game in Rust takes less than 10s to build in release mode for modifications that happen not too far down the crate hierarchy.

So yeah, be it for whatever reason you want (LLVM, no incremental builds, and so on), as of today Zig is not even close to having "crazy fast compilation times".


True, current compilation times with Zig are not yet optimal. We're getting there though. As our own custom backends become complete enough we will be able to enable incremental compilation and the aim is for instant rebuilds of arbitrarily large projects.

Here people can read more: https://kristoff.it/blog/zig-new-relationship-llvm/

Since that post was written we completed the transition to self-hosted and the x86_64 backend is getting close to passing all behavior tests.


Namespaces.

Real enums, modules, etc.

A compile-time execution system (effectively replaces the preprocessor/macro system).

Syntax that is easier to `grep -r` for without complex regex.

Tests are a first-class citizen.

Safer type system.

Really, just a bunch of things that should've been added to C 20 years ago.


Zig is very pedantic about how you're using your pointers and arrays, which C desperately needed.


It’s difficult and even probably impossible to just “add” those things to C, while keeping everything else as it is.


Yeah, I know...



I thought Go tried to be a better C


It kind of is, for anyone without bias against languages that have a GC.

However, they could at least support some kind of enumerations; that's one of the ways in which it definitely isn't a better C.


Go is about as much like C as Java is.


Why should there only be one "better C"? Go has a very different philosophy than Zig, but both can be considered a "better C".


Possibly the thing that makes C be C is that there is only one C. It is the single lingua franca of "one notch up from assembly". I would argue any language that wants to be a better C has to accept the challenge of being available on any platform, existing or future, any architecture, suitable for any bare metal use case, and it has to want to be the single obvious go-to choice. That's what it means to step into the ring of "better C" candidates. A lot of languages might offer pointers, manual memory allocation, no runtime, etc. That's cool and that gets you close to the space. But if you want to be a better C, then the bar is much higher: ubiquity.


In general I agree, but a couple of IMHOs:

> that there is only one C

IMHO one of C's strong points is the ubiquity of non-standard compiler extensions. C with Clang extensions is much more "powerful" than standard C; personally I see C as a collection of closely related languages with a common standardized core, and you can basically pick and choose how portable (between compilers) you want to be versus using compiler-specific extensions for low-level optimization work (and in a twisted way, Zig is also a very remote member of that language family because it integrates so well with C).

> "one notch up from assembly"

C is actually a high-level language, it's only low-level when compared to other high-level languages, but much closer to those than to assembly. Arguably Zig is in some places even lower-level than C because it's stricter where C has more "wiggle room" (for instance when it comes to implicit type conversions where Zig is extremely strict).

> being available on any platform, existing or future, any architecture, suitable for any bare metal use case

Zig has a pretty good shot at that requirement, it has a good bootstrapping story, and for the worst case it has (or will have) a C and LLVM (bitcode) backend.

For me personally, Zig will most likely never completely replace C, but instead will augment it. I expect that most of my projects will be a mix of C, C++, ObjC and Zig (and Zig makes it easy to integrate all those languages into a single project).


parent replied to grandparent's statement that Zig is the only language trying to be a better C


1. It's typically at least as fast as C, unlike C++/Rust

2. You can do type introspection (and switching) during compile-time, and it's not just some stupid TokenStream transformer, you really have type information available, you can do if/else on the presence of methods, etc.

3. There are no generics, but your functions can accept anytype, which is still type-safe. See https://github.com/ziglang/zig/blob/9c05810be60756e07bd7fee0... and note the return type is "computed" from the type of the input.

4. Types are first-class values (during comptime), so any function can take or return a new type, this is how you get generic types, without (syntax/checking) support for generics.

5. You can easily call anything which does not allocate in these comptime blocks.

6. There's @compileError which you can use for custom type assertions -> therefore, you have a programmable type system (see the sketch after this list).

7. It's super-easy to call C from Zig.

8. This is subjective: You don't feel bad about using pointers. Linked data structures are fine.
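
To illustrate points 2 and 6, a small made-up sketch: the function inspects the type it was given at compile time and turns a misuse into a compile error.

  // Accepts any value, but only compiles if its type is a struct with a `len` method.
  fn lengthOf(x: anytype) usize {
      const T = @TypeOf(x);
      if (@typeInfo(T) != .Struct) {
          @compileError("expected a struct, got " ++ @typeName(T));
      }
      if (!@hasDecl(T, "len")) {
          @compileError("expected a `len` method on " ++ @typeName(T));
      }
      return x.len();
  }
  // lengthOf(@as(u32, 5)) would fail at compile time with the message above.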


> 1. It's typically at least as fast as C, unlike C++/Rust

The typical urban myth that never comes with profiler proofs.


TBF, Zig with the LLVM backend may be faster than C or C++ compiled with Clang out of the box just because Zig uses more fine-tuned default compilation options. That's true even for C or C++ code compiled with 'zig cc'.

But apart from that the performance should be pretty much identical, after all it's LLVM that does the heavy lifting.


That is a big "may", given that it depends pretty much on what the code is doing, and on compile-time execution (which C++ and Rust also have), and many of the same compiler knobs are also available for C++ code, if not more.

And as you point out, at the end of the day it is the same LLVM IR.


That was the point I was trying to make: Zig isn't inherently faster than the other three. It just uses different default compilation options than clang, so any gains can be achieved in clang-compiled C or C++ too by tweaking compile options (and maybe using a handful of non-standard language extensions offered by clang). Other than that it's just LLVM's "zero cost abstraction via optimization passes", and this works equally well across all 4 languages.

Or: the "optimization wall" in those languages is the same, only the effort to reach that wall might be slightly different (and this is actually where Zig and its stdlib might be better, it's more straightforward to reach the optimization wall because the Zig design philosophy prefers explicitness over magic)


I don't need to prove you anything. Go and give it a try yourself.


https://programming-language-benchmarks.vercel.app/c-vs-cpp

https://programming-language-benchmarks.vercel.app/c-vs-rust

I understand that anytime someone brings benchmarks out, the next response points out that benchmarks are not real world use cases. Nonetheless, they are data points, and your claims are against the commonly accepted view of C being roughly as fast as C++ and Rust. If you have absolutely no data to back it up, you shouldn't expect anyone to believe you.


I don't need any data to say that Zig is typically faster than Rust because I know that Vec<T> will do a shitload of useless work when it leaves the scope, even if the T does not implement Drop. You can do vec.set_len() but that's unsafe, so... Typical Rust is slower than typical Zig.

Zig does not do any work because it does not have smart pointers and it does not try to be safe. It tries to be explicit and predictable, the rest is my job.

BTW: This is very real, I've been profiling this and I couldn't believe my eyes. And even if they fixed it already there would be many similar footguns. You can't write performant software if you can't tell what work will be done.


You are comparing different memory management strategies, not languages features. All of those languages (C, Zig, C++ and Rust) give you enough choices when it comes to memory management.

You can decide to not use frequent heap allocations in Rust or C++ just as you can in Zig (it means not using large areas of the C++ or Rust standard libraries, while Zig definitely has the better approach when it comes to allocations in the standard library, but these are stdlib features, not language features).


That "just" seems to mean "you'll need to take 10x the time to write the equivalent Rust/C++ program". What a language makes easy to accomplish matters.


Yes, it's true that you can do arenas in Rust, but they are harder to use there, due to borrow checking. For example, you can't store an arena in a struct without a lifetime parameter.

So in a language which actively makes your life miserable in the name of safety, you will likely just use Vec<T> because it's easy. Hence, back to the previous point - Vec<T> is slow -> your code is slow and it's not even obvious why because everybody does that so it looks like you're doing it right.
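
For contrast, the arena pattern is pretty low-ceremony in Zig; a rough sketch (everything here is illustrative, not from any particular project):

  const std = @import("std");

  pub fn main() !void {
      var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
      defer arena.deinit(); // frees everything allocated from the arena at once

      const allocator = arena.allocator();

      // Allocate freely during the "frame"; no individual frees needed.
      var names = std.ArrayList([]const u8).init(allocator);
      try names.append(try allocator.dupe(u8, "hello"));
      try names.append(try allocator.dupe(u8, "world"));
      std.debug.print("{d} names\n", .{names.items.len});
  }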


I'm a Zig fan and all, but you should probably check if your C/C++/Rust code uses the equivalent of "-march=native", that might already be responsible for most of the performance difference between those and Zig - they all use the same LLVM optimization passes after all, so excluding obvious implementation differences (like doing excessive heap allocations) there isn't any reason why one would be faster than the other.


The point was "typical": typical Rust code uses Vec<>, typical Zig code uses arenas. Typical Rust code uses smart pointers; typical Zig/C code prefers plain, copyable structs.

It is 100% possible to use such a style in Rust/C++ (and then the performance will be the same, or maybe even in favor of Rust), but people usually do not do that.


It shouldn't be hard to make Zig take the top place then, which is a great opportunity to shine, given that these are still missing Zig entries:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://www.techempower.com/benchmarks/


Zig can and does win plenty of those benchmarks, but ultimately it boils down to who is it that gets nerdsniped into working on a specific challenge.

For example in this case Zig won big time over what C/C++/Rust people submitted: https://youtu.be/pSvSXBorw4A?t=1075

Zig is in the same ballpark as the ones mentioned above, so flohofwoe is right. But nevertheless I do think that cztomsik's point also still stands: how hard it is to make something fast will end up impacting the performance characteristics of the average program and library out there, and Zig does make it easier to write performant code than some other languages.

Which is basically what happened here: https://zackoverflow.dev/writing/unsafe-rust-vs-zig/

In truth the same applies to correctness, which is also a point that the blog post above touches upon.



> Zig can and does win plenty of those benchmarks, but ultimately it boils down to who is it that gets nerdsniped into working on a specific challenge.

> For example in this case Zig won big time over what C/C++/Rust people submitted: https://youtu.be/pSvSXBorw4A?t=1075

That was a faulty benchmark with a simple, unnatural footgun that the Rust people overlooked. I don't think it supports your claim that benchmarks typically boil down to who gets nerdsniped into them. Sure, it can happen, it happened at least once, but in general?


It goes both ways.


Being Modula-2-like (in safety), with a syntax that is more appealing to C-minded folks, and with nice compile-time execution support.


I think this is one of the best short descriptions of Zig.


A fancy new database is written in Zig: https://tigerbeetle.com/


Bun, a JavaScript runtime (uses WebKit as an engine I believe) is written in Zig. They seem to be doing alright:

https://bun.sh/


It uses JavaScriptCore, which was written for Safari. It's supposedly faster than V8, but harder to work with.


A game engine, https://machengine.org, is being written in Zig; there's also https://microzig.tech, as Zig is well suited to embedded development.


Zig has a decent chance of being an actual embedded-device (i.e. no operating system) programming language.

In my opinion, Zig seems likely to grow the necessary bits to be good at embedded while Rust is unlikely to figure out how to shrug off the clanky bits that make it a poor fit for embedded devices.

However, I'm a personal believer that the future is polyglot. We're just at the beginning of shrugging off the C ABI that has been preventing useful interoperability for decades.

Once that happens, I can use Rust for the parts that Rust is good for and Zig for the parts that Zig is good for.


> clanky bits that make it a poor fit for embedded devices.

What do you see as "clanky bits that make it a poor fit" for such a broad range of stuff as "embedded devices" ?

Embedded goes at least as far as from "It's actually just a Linux box" which is obviously well suited to Rust, to devices too small to justify any "high level language" where even C doesn't really fit comfortably. Rust specifically has no ambitions for hardware too small to need 16-bit addresses, but much of that range seems very do-able to me.


Example clanky bit: memory ownership rules.

There are lots of systems where memory changes hands. Game programming, for example. You allocate a big chunk of memory, you put some stuff in it, you hand it off to the graphics card, and some amount of time later the graphics card hands it back to you.

Rust gets VERY cranky about this. You wind up writing a lot of unsafe code that is very difficult to make jibe with the expectations of the Rust compiler. It's, in fact, MUCH harder than writing straight C.

Example clanky bit: slab allocation

You often don't really care about deallocation of objects in slabs in video games because everything gets wiped on every frame. You'd rather just keep pushing forward since you know the whole block gets wiped 16 milliseconds from now. Avoiding drop semantics takes (generally unsafe) code and work.


I see, so by "embedded systems" you meant a video game console ?


Yes, although I could have just as easily said "low-level systems programming" as embedded. I try to keep in mind that when people say "embedded" they may mean anything from ARM Cortex M series to Raspberry Pi 4s.

Rust is really good when it is in control of everything. I love Rust for communication stacks (USB, CANBUS, etc.). Bytes in/state in to bytes out/state out is right in its wheelhouse. Apparently Google agrees--their new Bluetooth stack (Gabeldorsche) is written in Rust. When Rust has this kind of clearly delineated interface, it's really wonderful.

However, Rust does not play well with others as my examples point out. To be fair, neither do many other languages (Hell, C++ doesn't play well with itself). However, that's going to be a disadvantage going forward as the future is polyglot.


Fast and compact WASM builds are builtin to Zig's toolchain:

https://github.com/mattdesl/wasm-bench


“Whatever you would use C for but better” roughly

I’m not really convinced because in Andrew’s livestreams he’s actively uncovered significant stdlib bugs that he is aware of and tables for later.

Hopefully those will all be gone by 1.0, but I doubt it. For now, I cannot consider it a viable alternative to anything for production software. I do hope it will be some day, because it’s a nice language, even if it has a few syntactic warts :)

that said... I feel safer choosing C89 or C99 for certain things due to its extremely wide availability and longevity.

It’s great for it to have competitors, but C and C++ are more like standards and less like one tool with a handful of people working on it.


I've been thoroughly enjoying implementing a compiler and bytecode interpreter in Zig for a little scripting language I've been designing.


The new multi-object for loop syntax is such an improvement. Hoping for many more small QoL features like this until Zig hits 1.0.


Wow, those are really nice release notes.


I wrote a good chunk of these notes and can tell you that these are probably the least detailed/useful we've ever done, because they were only about 40% completed when release day came, so we spent a few hours doing quick & sloppy improvements :P you can hopefully expect them to improve a bit over the next few days

(although I think we still managed to write up all of the most important bits)


Interesting. Maybe these are just the first release notes I've followed with interest, and it's the general quality of presentation / fitting level of detail / "tone". Maybe I'm just complimenting your template, but feel free to feel addressed!


I enjoy seeing an update or discussion of things like D, Zig, Nim (and a few others I probably forgot) but I honestly can't keep track of where they are in relation to C/C++, C#/Objective-C, and Rust.

Is there a chart or an "are we xxxx yet" page one can reference?


IMHO this is still the most important page to get an idea what the current state of Zig is and how it changed over time:

https://ziglang.org/documentation/0.11.0/


Anybody here using Zig on a daily basis?


I use Zig for all my hobby projects:

- A pixel art editor https://github.com/fabioarnold/MiniPixel

- A Mega Man clone https://github.com/fabioarnold/zeroman

- Zig Gorillas https://github.com/fabioarnold/zig-gorillas

And most recently I had the opportunity to build a visualization for TigerBeetle's database simulator: https://sim.tigerbeetle.com

Before I was using C++ and Zig has been an improvement in every way.


That mega man clone is great. The movement feels exactly how I remember mega man 3 feeling. You absolutely nailed it. I also appreciate how you've set up the code, with the build script being written in zig. I checked the code out, compiled it, ran the web server and bam! It just worked. It doesn't seem like that should be a big deal, but competence is such an exotic bird in these woods that I appreciate it whenever I see it.


How did you not name that zigorrilas?


I don't know about "daily" right now (I've had to take a break due to obligations), but I'm working on a modern implementation of the Self programming language with actor capabilities: https://github.com/sin-ack/zigself

It's nowhere near usable yet, but Zig has been a joy to work with for over a year, and I can definitely see myself using it for a big piece of software.


Most days. I just switched to Linux to access the latest version and features, and I'm especially loving the new 'packed struct(u32)' for some low-level SoC work (register representation). I get compile errors if I miscount the bits, I managed to get some type safety, and there's not a single 'shift', 'and' or 'or' sign in sight!
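
For anyone curious, a rough sketch of what that looks like (the register layout and field names are made up, not from any real SoC):

  // Hypothetical 32-bit control register; the compiler rejects the type
  // if the field widths don't add up to exactly 32 bits.
  const CtrlReg = packed struct(u32) {
      enable: bool, // bit 0
      mode: u3, // bits 1-3
      prescaler: u12, // bits 4-15
      _reserved: u16 = 0, // bits 16-31
  };

  fn writeCtrl(reg: *volatile u32, value: CtrlReg) void {
      reg.* = @as(u32, @bitCast(value)); // no shifts, ands, or ors in sight
  }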

Looks like I'll be porting to 0.11.1 as soon as the documentation is in place... I hope they slow down soon, already feels complete. The WASM support is amazing, much smaller outputs than the other options I tried (Java & Go). Great work team!


I took a year off, and one of the things I did was learn Zig. I've built a number of libraries, including one of the currently more popular HTTP server libraries (https://github.com/karlseguin/http.zig).

A number of my libraries are used for https://www.aolium.com/ which I decided to write for myself.

I try to write a bit every day with the benefit that I can "waste" time digging into things or exploring likely-to-fail paths.


I am building an on-premise annotation platform for medical image data (MVA.ai) where Zig is used for both the backend and the frontend (WASM). I really enjoy the language, with the key aspects being the low-level control and performance, the build system and cross-compilation, comptime for generics, the easy integration of existing C libraries, and the support for WASM. Manual memory management is sometimes a bit tedious, but you get used to it quite quickly. On the other hand, being able to use different allocators can even give you something like 'lifetimes'.


I'm slowly writing a game with it in my own time, and previously worked full-time on an in-memory cache for smart metering. It's been nice to play with and my go-to language for prototyping since around 0.5.0.


Everyone where I work does. :)


We trade ten million dollars a day of shitcoin derivatives using zig. Should be more soon :). We're probably stuck on 0.10.1 until async makes it back in though.


Zig is not memory safe and therefore at risk, just like C/C++, of future government legislation that outlaws the use of memory unsafe languages for some or all projects. The risk of such legislation is not insignificant: https://www.itpro.com/development/programming-languages/3694...

Personally I do not see the point of building an entirely new language and ecosystem that does not fully address this issue.


Well, technically, Rust is unsafe, unless they remove “unsafe”.

We’re really talking about safety on a continuum, not as a binary switch. Zig has some strong safety features, and some gaps. Well, one notable big gap: UAF. (Perhaps they’ll figure out a way to plug this in the future? Perhaps by 1.0?)

Actually, safety has multiple axes as well.

> Personally I do not see the point of building an entirely new language and ecosystem that does not fully address this issue

The more safe languages make significant tradeoffs to achieve their level of safety. The promise of zig (I don’t know if it will ultimately achieve this, but it’s plausible, IMO), is “a better C”, including much more safety. For one thing, it has a great C interop/incremental adoption story, which increases the chance it will actually be used to improve existing codebases. The “RIIR” meme is a joke because, of course, there is no feasible way to do so for so many of the useful and widely used vulnerable codebases.


>Well, technically, Rust is unsafe, unless they remove “unsafe”.

I am somewhat surprised this is being mentioned. And the whole thread is without the usual people complaining about unsafe.

Interesting changes happening on HN.


I'm sure the federal government's advocacy will aid Rust adoption massively. I mean, look at how Ada's adoption skyrocketed when it received DoD's stamp of approval.


the united states government is not going to outlaw the use of memory unsafe languages. that is an absurd idea. nothing in your links suggests they would even consider it. "moving the culture of software development" to memory safe language does not mean "we want to put you in jail for writing C".


Where did you get the idea that jails are involved? Governments are clearly forming a position, if they fund new projects, they are quite likely to enforce that position. That's a significant market already.


they can enforce that position by funding projects that are written in languages that they believe are memory safe. they do not need, or want, to legislate that.


Funny that you mention that, EU does sponsor Rust development.

"Logical Foundations for the Future of Safe Systems Programming"

https://cordis.europa.eu/project/id/683289

As for US,

https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI...

"NSA advises organizations to consider making a strategic shift from programming languages that provide little or no inherent memory protection, such as C/C++, to a memory safe language when possible. Some examples of memory safe languages are C#, Go, Java, Ruby™, and Swift®. Memory safe languages provide differing degrees of memory usage protections, so available code hardening defenses, such as compiler options, tool analysis, and operating system configurations, should be used for their protections as well. By using memory safe languages and available code hardening defenses, many memory vulnerabilities can be prevented, mitigated, or made very difficult for cyber actors to exploit."


No; however, it may require that, like with other kinds of dangerous chemicals or hazardous goods, their use must follow strict requirements, as is already the case for high-integrity computing.


Agreed, it's absurd. Jail time for writing javascript otoh...


I guffawed, and I'm not afraid to admit it.


And this attitude is why it's a serious issue in our industry. You clearly don't take security seriously, to the point that it's a laughing matter for you.


If you can't see why jailing people for writing JS is a comically absurd concept…


Why is JS even relevant to this discussion?


At some level you need languages that are not “memory safe”.

Memory safety comes with a cost. Either you pay for a GC runtime (Java) or for reference counting (Swift) or by not being able to express a correct program (Rust).

There are plenty of use cases where none of these tradeoffs are feasible.

To add, Zig comes with its own story around memory safety. Not at the static type system level and it’s not as comprehensive as other languages.


"Express a correct program" that might end up being incorrect due to programmer's fault.

The difference is, you can use unsafe blocks/fns in Rust, in which case it becomes equivalent to C expressiveness-wise; but you can also do the opposite and forbid(unsafe_code) altogether.


All programs can be incorrect. You cannot forbid unsafe code. Its use is necessary to implement basic functionality.


You absolutely can forbid unsafe code (within a scope of a particular codebases) and some projects happily do it.

"The use is necessary" = absolutely not. Lots (most) of projects wouldn't ever need to dip into unsafe code. And then there's another category where authors think they do because "that's how they would do it in C++", but in reality they don't.


> You absolutely can forbid unsafe code (within a scope of a particular codebases) and some projects happily do it.

No, you can't. The standard library contains 'unsafe', all over the place. Even if you discount that, large swathes of the library 'ecosystem' contain 'unsafe' and those that don't? They depend on libraries that do.

>"The use is necessary" = absolutely not.

Relying on some unsafe code that someone else has written is still using unsafe. The code in your dependencies is code that you use, that you depend on, and that affects you. It is meaningless to say you 'don't use unsafe code' if your code is just plumbing together a bunch of libraries that contain 'unsafe' code.

>And then there's another category where authors think they do because "that's how they would do it in C++", but in reality they don't.

This common sentiment from Rust developers is, just like all their other claims, totally unfounded. 'Oh you don't really need unsafe'. Then you explain why actually, yes, you do need unsafe and you're told that you're describing a 'special case'. Well guess what, every project is a special case in some respect. Every nontrivial project is going to have something that requires unsafe code, whether it's interfacing with hardware or using a system call that hasn't been wrapped or using a third party library or implementing a complex data structure or one of many many other various things that require 'unsafe'.

And even if you don't use 'unsafe' anywhere in your code and you somehow magically know that all of your dependencies are perfectly written and all the soundness bugs in the Rust compiler are fixed (at which point you might well ask: if you can assume all your Rust dependencies are perfect, why can't you assume all your C dependencies are perfect? And also, how can you be confident in any 'unsafe' code being safe if the rules for what is 'safe' to do in unsafe aren't even written down anywhere?), then what do you get? Almost nothing. 'Memory safety' is a very narrow category. As I said above, it means 'things Rust protects against'. It wasn't a term that was popularly used before Rust became known in the way Rust means it. Even to most Rust developers and contributors, right up until release it was widely assumed that it included leak freedom. And then a couple of weeks before release that was quietly dropped and memory-holed when it was shown that you could introduce memory leaks trivially. Large amounts of supposedly 'safe' unsafe code had to be rewritten and redesigned.

Rust just doesn't give you any the guarantees that Rust evangelists love to claim. Even in memory safety. In fact, almost every language out there is MORE safe than Rust, in the sense in which Rust claims to be memory-safe! The only ones that aren't are those without garbage collection.


> At some level you need languages that are not “memory safe”.

Perhaps as an escape hatch (unsafe Rust) or a compiler target, but ideally not as a "general purpose language" as Zig is marketed as.


...that's like, completely ignoring the escape hatch in Rust of unsafe {}.

You're not limited by anything there, period.


Rust is optimized for minimal _unsafe_ usage. That's the reason you choose it over C or Zig.

https://zackoverflow.dev/writing/unsafe-rust-vs-zig/


Why would the legislature permit the use of unsafe Rust?


That totally misses the point of quality software. What good is memory safety if your medical device crashes because of an out-of-memory error?

What I'm trying to say: There are use cases where areas of safety are required other than memory safety.


It sounds like you'd be worrying about n-1 types of safety errors instead of n, which is arguably better.


There are use cases where safety beyond memory safety is required. But there are no use cases where memory unsafety is desirable, let alone required.


>there are no use cases where memory unsafety is desirable, let alone required.

There are plenty of 'use cases' where Rust's guarantees (some vague but unenforceable promises around memory) are not worth the cost of using Rust (a very high cost). This is doubly true if you want to, say, not use any third-party libraries. If you use third-party libraries, you get essentially zero guarantees. And if you don't, you have to reinvent the world - and writing new data structures in Rust is a series of research projects, whereas doing so in C is trivial.

There are many situations where guaranteed 'memory safety' (a Rust propaganda term for 'having the guarantees we can provide but not the ones we can't provide') is not very important.


That is absolutely true. But you can write memory-bug-free code in Zig, whereas you cannot prevent heap allocations in most of the languages listed in the article, making it outright impossible to write certain software in them.


Sure one can write memory-bug-free code in x86 assembly too. But how can you prove it? ATS is an example of a low-level systems language where you can prove it.


As Zig promises "no hidden allocations", I assume you can build your allocator in Rust and then use it for all memory allocation in Zig.


A function not allocating doesn't mean it's safe. (Indeed, if anything the opposite is more likely - copying is safe, if slow; writing to something that was passed in tends to be what breaks.)


You can't prove it in Rust either.


Formalized subsets of x86 assembly exist. Coq can be used as a macro assembler. Tools to work with LLVM IR exist; x86 can be raised up to LLVM IR and proved, though that's kind of a bad way to do it.


While that is true, there might be other requirements that prevent memory safe languages from being used. For example not having a heap available instantly disqualifies most of them. Or when you have simulations running where having constant OOB and other checks would be a massive slowdown. Now obviously your code should still be memory safe (because otherwise it's not correct anyway and you should fix the code), but not at the cost of runtime checks.


> But there are no use cases where memory unsafety is desirable, yet alone required.

Operating system bootstraps?

DMA management/volatile driver access?

Doubly linked lists?


Memory safety is one aspect of quality yes, but is there any evidence Zig is a good fit for other quality aspects, e.g. static analysis tooling and correctness proofs? ATS is a low-level language that allows embedded proofs of correctness in the type system.


That article does not even mention Ada/SPARK... So much for safety. :P Yup, there is static analysis with Ada/SPARK and it is great. It is much more general-purpose than ATS, and there are other things in Ada/SPARK that increase safety in general, not only memory safety.

For what it is worth, Ada/SPARK has a strong presence in safety-critical domains like aerospace and medical devices, while Rust is gaining popularity in system programming and web development. ^^ I'm surprised that it is not as widespread. That, or lots of misconceptions.


Yes Ada/SPARK is a great example.


Not to be snarky, but your argument works both ways :). What good is a medical device if it leaks sensitive data because it was exploited through a use-after-free?


Out-of-memory errors are much, much less likely to occur once you add more memory; use-after-free bugs don't go away like that.


Zig enforces much more correctness than C or C++, which also results in much more memory safety, it's just not as extremist as Rust.


What would happen with existing codebases, sometimes built upon two decades of C or C++? Will we "rewrite everything in Rust"? lol


No, we will Rewrite in ChatGPT (or similar), and only architect jobs and the few AI druids that write the tooling will be safe.


Maybe doing it for new projects is better than doing nothing?


AI will probably find all bugs or re-write all the software in minutes soon enough.


I ditched Rust a year ago in favor of Zig and have not regretted it since

Number of memory bugs in several fairly huge projects: 0

Zig is way more maintainable, leads to less code which translates to fewer bugs

How about that?


Someone will eventually build a static safety checker for zig. No reason memory safety has to be in the compiler.


If a government subcontracts a company to design and implement a system, they as the customer (if you like) have the right to request specifics; maintenance & integration with the wider ecosystem is a massive concern in this case. That's not "legislation" though.


seriously?

"The National Security Agency (NSA) has recommended only using 'memory safe' languages, like C#, Go, Java, Ruby, Rust, and Swift, in order to avoid exploitable memory-based vulnerabilities."


Yes seriously. The west is getting hacked and owned on a daily basis. The NSA recommendation shows that governments are starting to identify where the problem is.


Ah yes, the East! The lovers of memory safety. The West is getting hacked daily because every country is getting hacked daily.


log4shell enters the chat.

https://en.wikipedia.org/wiki/Log4Shell


Yes, it belongs to the remaining 30% of exploits, after we remove the 70% caused by memory corruption.


And they have not even mentioned Ada/SPARK... right.


[flagged]


You might think that, if you live in a small world. I want memory safety, but I am otherwise not a big fan of Rust. Rust tries to be too high-level like C++, making it opaque where allocations are happening. For a low-level systems language, with embedded proofs, I quite like ATS, but Vale is also promising and more like Zig.


The way Zig is designed, I think it will be fairly easy to embed an Ada/SPARK-like proof system in it.


Why do you think that?


Where did he/she even mention Rust?


Well, you do not see people mentioning Ada/SPARK whenever memory safety is the topic of discussion, do you? They mention Rust! Well, I do mention Ada/SPARK as much as I can because it is still a much safer language in general than Rust.


I wanted to try learning Zig, but found the resources to be incomplete and lacking in examples. Rust (or Go) in comparison has a plethora of online resources with great examples.

I realize Zig is just 0.11, but wondering what resources people relied on to pick it up?


I’ve been using the language reference [0] and also just browsing the standard library source code [1] for examples.

[0]:https://ziglang.org/documentation/0.11.0/

[1]:https://github.com/ziglang/zig/tree/master/lib/std


That's important: don't be afraid to look into the stdlib source code when questions arise. It's easy to read and stuff is easy to find, and it's also a great teacher.


Since Go, this has been a really great way for me to judge a language: how readable is the stdlib?

If it's layer upon layer of inscrutable abstractions, then you know there's a high chance your codebase will end up looking the same after a few years.


The main available resources are listed here

https://ziglang.org/learn/


Are people supposed to realise this version number is related to Zig?


Somebody fixed the title from 0.11.0 to Zig 0.11, so it's ok now.


@WalterBright In case any of the sh*t got to you: ignore all the trolls about you appearing in a Zig thread. (Although I think some of them are just joking.)


It's a shame macOS Arm is deprecated :(


this comment is misleading. aarch64, the architecture for apple silicon, is still fully supported. did you see "arm" with a skull next to it and assume that meant all ARM architectures became recently deprecated?


I don't believe there ever was support for macOS on 32-bit ARM.


I believe iPhone 5c was the last of the 32-bit ARM Apple devices. I guess technically that's iOS rather than macOS but as far as Zig is concerned, it's all Darwin.



Wait what? I use Zig just fine on my M1 Mac. Did macOS ever run on non-Apple-ARM chips?


Whoa, thanks for the heads-up. I've been reading news about Zig from time to time, and planned giving it an honest try at 1.0 (whenever that may be) but it seems I'm out of luck.

Do you, by chance, know the reasoning behind this step?


Aarch64 is supported (M series chips) if that’s what you’re worried about.


No, I actually have a bunch of older Macs I keep running for certain experiments.


Older Macs are Intel-based, not ARM. M1 is the first ARM-based chip.


Those would be Intel not Arm, no?


does it support tabs yet?


It actually does.


> If the Operating System is proprietary then the target is not marked deprecated by the vendor. The icon means the OS is officially deprecated, such as macos/x86.

Not supporting proprietary OSes is a bummer, especially when one is marked as deprecated, since that is unlikely to change. People choosing Zig for anything are choosing to have their users throw away their devices.

.. the number of hardware and software platforms just keeps growing endlessly, and the number of their combinations grows even faster. We probably need some more virtual machines as targets to cover them all.


> People choosing Zig for anything are choosing to have their users throw away their devices.

Apple is asking you to throw away your devices, not Zig. Getting proper CI coverage for supported versions of the OS is already pretty expensive and doing the same for unsupported systems is entirely unfeasible at this moment.

Don't buy Apple if you don't want to throw away functioning hardware.


Yeah, sure. It is still a bummer when people who actively try to underpin the whole world tell you 'fuck you'.

Having 'we support proprietary systems only as long as their makers support them' as a hard rule is unnecessary and harsh. It is a simple rule, but I don't think it is a good one. Why not choose the supported systems on their individual merits, e.g. on usage statistics? I think the Linux kernel, for example, does that.

edit: I just continued the previous thought. If your reply means that you are open or likely to target deprecated systems in the future, that's much better!


Hi, kubkon here, the author of Zig's MachO linker. I just wanted to explain our reasoning here a little bit. As you have hopefully noticed in the release notes, there are 5 core team members that are paid either full- or part-time for their work on Zig. That's not an awful lot, and as you can see in our latest roadmap for the 0.12 release, we have quite a few challenging components to develop, including functioning linkers for the most common platforms, and not only macOS/Apple. This makes it really tricky to justify spending a considerable amount of time trying to support an OS that was officially dropped by the vendor, such as macOS/x86 or watchOS/arm32 (FYI, as a fun fact, arm32 is considered private as opposed to public by Apple, according to comments and conditional includes in Apple's libc headers). That said, after the 1.0/stable release of Zig, I would love to spend more time adding backwards support for old Apple platforms in the MachO linker and Zig as a whole.


> I just continued the previous thought. If your reply means that you are open or likely to target deprecated systems in the future, that's much better!

It would be nice to eventually provide support for older systems, but first we need to get to a point where doing so doesn't mean taking away resources from more worthy endeavors.


I think the “not” in there is a typo? It reads logically if you remove it.

So it’s not really zig deprecating 32-bit macos’s, but Apple, and there’s not a lot zig can do about that.



