Why Zig when there is already C++, D, and Rust? (ziglang.org)
276 points by mardiyah on Jan 15, 2021 | 259 comments



IMHO those "language designer trivia" are interesting to read and think about but in the end irrelevant for popularity (and I think Zig will be popular). My personal favourites so far are:

- The language is really simple and "enjoyable". Writing C++ often feels like puzzle solving to me even after 25 years. From the little Rust I've written so far, I got the same impression. Writing Zig is like writing C or Go but even more straightforward. The language completely "disappears" while coding.

- The simple integration with C, Objective-C and C++ doesn't get enough credit (disclaimer: I haven't tried C++ yet). Even if Zig aims to replace C, I think the reality will be that most cross-platform / "cross-language" libraries will continue to be written in C, and most "serious" Zig projects will actually be mixed-language projects. Most other languages are too trapped in their own ecosystems and aim to "rewrite the world" (which won't happen, of course).

- Integrating the build system into the compiler and standard library is the right way to do it (not a feature unique to Zig, but if the C/C++ world is looking for the right way to solve the "build system problem", this is it).

...lots of other cool stuff in Zig to write about, such as the incredibly simple module system and the comptime, reflection, and generics features, but IMHO the only feature that really counts in the end is: ergonomics.

This is the one thing that matters and where Zig beats the more "powerful" alternatives.


To add to what you're saying, Zig (and only Zig) is a plausible contender for replacing C, exactly because consuming a Zig .o file from C is roughly as simple as consuming a C .o from Zig.

So if you have a C program, and you want to dip your toes in Zig-land by writing the next library in Zig, you just can. If you chose Rust instead, you're going to have a bad time.


> If you chose Rust instead, you're going to have a bad time.

Naw.

    /* main.c */
    #include <stdio.h>
    #include <stdint.h>
    uint8_t answer(void);
    int main(int argc, char** argv) {
        printf("%d\n", answer());
        return 0;
    }

    // example.rs
    #[no_mangle] extern "C" fn answer() -> u8 {
        println!("The answer is:");
        42
    }

    # test.sh
    rustc --crate-type staticlib -o example.a example.rs
    gcc main.c example.a -lpthread -ldl
    ./a.out

    $ ./test.sh
    The answer is:
    42
Workflows using `cargo` instead of `rustc` are also sane and easy.


    #[no_mangle] extern "C"
All due respect to Rust, which is a fantastic language, what this line says is "here is a C function written in Rust".

You can of course do the same thing with C++ and with D, allowing all three to check the box for interoperability. And with Rust you get everything Rust offers: you can write in the `extern "C"` dialect while getting memory-safety guarantees within the library; that's not nothing.

What Zig offers is the ability to write ordinary Zig and call that from C. That's part of what it's designed to do.

This is interoperability by design, rather than by configuration. It's simply a different thing, and considered along this singular dimension, I would assert that it's better.

I didn't, and wouldn't, say that Zig has the potential to replace Rust. I said that about C, and explained why.


> What Zig offers is the ability to write ordinary Zig and call that from C.

You say that, but Zig appears to suffer from the exact same problem you point out in my Rust code:

    export
https://ziglang.org/documentation/master/#Exporting-a-C-Libr...
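
For reference, a minimal sketch of what that annotation looks like on the Zig side (per the linked docs), the analogue of the `#[no_mangle] extern "C"` line above:

    // exported symbols default to the C calling convention,
    // so a C caller sees this like any other C function
    export fn answer() u8 {
        return 42;
    }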

Perhaps I'm mistaken. How do I directly call Zig's `std.debug.print` from C? Or Timestamp.unixEpoch as defined here? https://ziglang.org/documentation/0.7.1/#Doc-comments

There are a bunch of Rust crates containing macros, binaries, and build scripts that cut down on Rust's boilerplate as well. While there are some rough edges and room for further improvement, I am far happier with Rust's C interop than I am with the C interop of most other languages. Zig might have a touch more polish, but the bit I originally quoted made Rust sound like it was a good order of magnitude worse than Zig at C interop, which I just can't agree with.

One of the very first things I did when learning Rust was write a test library and drop it into an existing C++ codebase. I've also experimented heavily with cross compiling it to pretty much anything I can get my hands on. It was a good time, not a bad one.


I found it very difficult to deal with C macros, though. That said, I have no idea how Zig deals with that problem either.


Zig can handle C macros. You can see this if you use the `zig translate-c` option on the CLI, which will convert any C code to Zig. This makes it the most usable C FFI I have used, as it means it really works even with difficult C code.
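
For illustration, a minimal sketch of consuming a C header (and a simple constant-like macro) directly from Zig via @cImport; this assumes a libc is available to the compiler:

    const std = @import("std");

    // translate-c runs under the hood: the header's declarations, including
    // simple #define constants like INT_MAX, become ordinary Zig declarations
    const c = @cImport({
        @cInclude("limits.h");
    });

    pub fn main() void {
        std.debug.print("{}\n", .{c.INT_MAX});
    }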


This seems like an unsolvable problem: if you want to allow calling C macros from your new language, it would have to be a C superset. So no memory/thread/... safety, weird syntax inherited from C, ...


This was one of the most redeeming things about Objective-C. It is a superset of C, and when I was working with it, I found myself writing the high-level stuff in ObjC, but with a lot of pure-C functions sprinkled in. It was really nice for working with C libraries like OpenGL, because you could find an example online and directly apply it without going through any kind of translation step. It was a really cool feature to have.


I share a similar experience with C++.

However that compatibility is also what makes Objective-C and C++ impossible to ever be fully safe, without breaking compatibility with the C subset.

Static analysis helps, but only when all code is available.


It isn’t so black and white.

Nim, if you let it use C as a backend, will let you use C macros, despite being generally safe and not remotely a C superset.

Yes, the use of C macros (and any kind of FFI) is unsafe, that cannot be avoided. But it doesn’t need to be a C superset (and indeed, when you use JS as a backend, or the native LLVM backend, you get no access to C macros).


It's probably impossible in general, but you could deal with a bunch of common cases fairly easily (like constants).


Learning how to write memory safe code in Rust is a challenge at first, but it buys you access to an ecosystem of transitive dependencies that are (mostly) efficiently implemented and interconnected through memory safe APIs. That advantage remains even if you have to manually define an unsafe C API for the Rust code you added to your codebase.


The same is true for C++, Rust, etc.

Obviously, you can’t expect people who didn’t manage to learn these languages to know that.

But if that’s your killer Zig feature, lots of languages have it.


Maybe C "disappears" while coding, but I'd much rather it stuck around to help me out. When writing C, I have to carefully make sure that each resource has been freed/closed (but not too soon!), that I'm not mutating or holding onto anything I'm not meant to, that I've accounted for every exception that I can result from a given call, and so on. Can Zig handle some of that bookkeeping for me?


"I have to carefully make sure that each resource has been freed/closed (but not too soon!)"

You can use the defer keyword to schedule the free right next to the allocation, and it won't run until the enclosing scope exits. You can use debug or testing allocators to ensure no memory is left unfreed or double-freed. The std lib has a large selection of custom allocators that you can use depending on your needs. One great one is the "Arena Allocator", which is just a linked list of allocations that can all be freed at once, versus tracking all those individual allocations. And if none of these allocators fit your needs, the interface to define one is pretty easy to implement. https://ziglang.org/documentation/0.7.1/#Choosing-an-Allocat...
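
For example, a minimal sketch of the defer + testing-allocator pattern described above:

    const std = @import("std");

    test "no leaks" {
        // std.testing.allocator fails the test on leaked or double-freed memory
        const allocator = std.testing.allocator;

        const buf = try allocator.alloc(u8, 64);
        // scheduled right next to the allocation, but only runs at scope exit,
        // so it can't fire "too soon"
        defer allocator.free(buf);

        // ... use buf ...
        buf[0] = 42;
    }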

"I'm not mutating or holding onto anything I'm not meant to"

Not really sure Zig can help you with this one. With great power comes great responsibility, or something like that.

"that I've accounted for every exception that I can result from a given call"

Zig does not have exceptions, but it does have error types which functions can return. From the function definition you can usually determine all the errors that can possibly be returned, unless the function's author used the catch-all `!` inferred error set. Regardless, it is still much better than regular exceptions because

- you know from the function definition that an error could be returned, and the compiler forces the caller to account for a possible error by "unwrapping" the return value

- an error just passes a different type back to the caller, and in combination with the above, you can never "miss" an error https://ziglang.org/documentation/0.7.1/#Errors

Also, Zig switch statements without an else ensure you handle every possible value, so if you forget to handle one possible error (based on the type), it will throw a compile error. https://ziglang.org/documentation/0.7.1/#toc-Exhaustive-Swit...
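
A rough sketch of what that looks like (all names here are made up):

    const ConfigError = error{ NotFound, AccessDenied };

    fn openConfig(path: []const u8) ConfigError!u32 {
        if (path.len == 0) return error.NotFound;
        return 42; // pretend this is a handle
    }

    pub fn main() void {
        // the caller must unwrap the error union, and the switch must be
        // exhaustive: forgetting a case is a compile error
        const handle = openConfig("app.conf") catch |err| switch (err) {
            error.NotFound => return,
            error.AccessDenied => return,
        };
        _ = handle;
    }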


The compiler infers the errors in an error set even if you don't declare it explicitly (e.g. when returning `!void`). This means that you can use a switch statement to have the compiler tell you what cases it contains.
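
For example (a sketch): given a `fn mightFail() !void` with an inferred error set, you can write

    mightFail() catch |err| switch (err) {};

and the compile error on the empty, non-exhaustive switch will list every case the inferred error set actually contains.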

There's also ZLS, a language server implementation for Zig that I believe can help with that (or if not today, it will in the future).


Cool trick, I did not know this, but it makes sense.


A couple of weeks (days?) back there was a post here which described a proposed defer mechanism for C (possible source of reference: https://gustedt.wordpress.com/2020/12/14/a-defer-mechanism-f...)

this can do that quite easily afaik.


Rust is a lot easier than C++, although the initial learning curve is slightly steeper.


It's hard coming from C, because C is a small language with not many keywords. As a hobby programmer I did not know about generics and such, or the differences between pointers and references.

Learning Rust kind of feels like brute-forcing my brain until it sticks. But I can't say it isn't fun and exciting!


Rust has a steeper learning curve initially but when you're over the hump you don't see mountains in the distance like you do in C++ :)


And I feel like, while Rust also has you puzzle-solve a little and jump through hoops initially, you quickly start to appreciate all the safety that gains you. And I value that a LOT. Though I have to say, Zig looks awesome. Both Rust and Zig get me really excited to code more after getting frustrated with god-awful python codebases.


I'm about to start a little fun project where it would be nice to be able to interact with C++ object files.

I was planning on using Nim but might give Zig a try also.

It looks really cool. I feel like Nim and Zig both fill a similar niche, maybe Zig aiming more towards the lower level / even embedded.

That's just my impression, not having used either in anger.

I agree on the C++ puzzle solving thing, though in a way it's almost part of the appeal!


As a Nim user, I think Nim and Zig fill quite different niches. Zig is made specifically for low-level programming with some higher-level constructs, whereas Nim is very general-purpose.


What are the flaws/weak points of Zig?


while zig's error handling is almost perfect, it's missing a way to return context w/ the error (https://github.com/ziglang/zig/issues/2647)

while the debug allocator can tell you at runtime about use-after-free or memory leaks, that's a less strong guarantee than rust or a GCed language

there are no closures; in general the code you write is lower-level than code in python, c#, rust

the language is very new

--opinions of a zig dilettante

bonus observations:

the std library code is very readable

the language overall has an excellent philosophy & design approach, and i predict it will become extremely popular

(as a single point of comparison: even after puzzling over it for a while, i still don't understand how modules work in rust; the equivalent feature in zig is immediately obvious, and in fact is the same concept as structs. how cool is that? (very cool, imo.))
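
for illustration, a minimal sketch (math.zig is a hypothetical file in your project):

    const std = @import("std");       // the std library is itself a struct value
    const math = @import("math.zig"); // so is every file you @import

    pub fn main() void {
        std.debug.print("{}\n", .{math.add(1, 2)});
    }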


> the std library code is very readable

I think that's a very important feature to have for some people like me. I love the Java standard library, because it's really easy to read. I regularly read its source code when I need to understand something, and it's amazing how much time that can save. The only nitpick when it comes to Java is that some low-level stuff is implemented in C, and it's not as straightforward to jump into the C implementation.

And that's what turns me off from Scala and from C++. The Scala collections library is horrible to read. It's immensely complicated. I get that they're solving a hard problem, but I don't like it. And I hate C++ STL code (at least what I saw); it uses obscure __identifiers__ and so on (maybe there are better implementations; I saw the GCC and MSVC ones).

The standard library shows how a language is supposed to be used. Those learning the language can just read its sources to better understand the language's idioms and replicate those in their code. And there's a huge difference between languages with a readable standard library and languages with an unreadable standard library (or even without sources at all, Apple sucks).


They use those "obscure __identifiers__" because they are reserved by the standard and thus a conforming program will not define macros with those values and break everything.


IMO a better approach would be to define some way for the compiler to undo all defines before a standard header is included and redo those defines afterwards. Or just to stipulate that standard headers must be included before other user code.


With modules, "hiding" code like that will be possible in C++.

If only we could skip forward a decade and get mature implementations of it right now...


> I get that they're solving hard problem, but I don't like it.

this is why the world is such a sad place


Currently: initialization of multi-dimensional arrays, and partial initialization of arrays; this whole area isn't as convenient as in C (yet, I hope).

Also, it's currently possible to return a pointer/slice to a stack-allocated array from a function without a warning or error, resulting in a dangling pointer. But this too will hopefully be fixed (because even C compilers warn about this these days).
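
For illustration, a sketch of the kind of code that currently compiles without complaint:

    // no warning today, but the returned slice points at dead stack memory
    fn dangling() []const u8 {
        var buf = [_]u8{ 1, 2, 3, 4 }; // lives in this function's stack frame
        return buf[0..]; // escapes; using it after return is undefined behavior
    }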

But I think all those points can be considered bugs / todos.


I love Zig. Occasionally I wish it had interfaces because that’s how my brain designs software components. You currently achieve the same runtime characteristics of interfaces by using a function property on a struct.
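
Roughly like this, the 0.7.x-era pattern (similar in spirit to how std.mem.Allocator is defined; all names here are illustrative):

    const Shape = struct {
        areaFn: fn (self: *const Shape) f64, // the "function property"

        fn area(self: *const Shape) f64 {
            return self.areaFn(self); // runtime dispatch through the field
        }
    };

    const Square = struct {
        shape: Shape = Shape{ .areaFn = area },
        side: f64,

        fn area(shape: *const Shape) f64 {
            const self = @fieldParentPtr(Square, "shape", shape);
            return self.side * self.side;
        }
    };

    // usage: var sq = Square{ .side = 3.0 }; const a = sq.shape.area();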

It’s a great language though and a ton of fun. It’s so easy to dive into a codebase quickly which I value as a very distracted dad. The standard lib is a joy to read too.


Can you explain what you mean when you use the term interfaces? I’m just curious as to what you’re describing. Also, do you mean using a function property on a struct in Zig specifically to capture the operation of such an interface?


I assume interfaces refers to runtime polymorphism.


The ones I've seen are not Zig-specific, but rather attendant to any language in early stages: a small development team, very limited tooling. (To be fair though, the dev team has done an amazing job banging out good code.)


search for "design flaw" on the issue tracker


I was half expecting to find some kind of rickroll-esque joke, but there is actually a lot of really interesting discussion tucked away in that search¹. I find the open and frank discussion of problematic choices to be very refreshing.

¹ https://github.com/ziglang/zig/search?q=design+flaw&type=iss...


Unlike Rust, Zig does not provide memory safety.

They have runtime checks (unlike Rust's compile-time checks), which are at the very least slower, and likely still don't guarantee memory safety (I did not dig deep).


If this is a language for people who didn’t manage to learn C++, D or Rust... lots of questions quite unfavorable to Zig pop up. I’d just remove that sentence.

Even if the language is great, if the whole ecosystem is written by people who didn’t manage to learn C++ in 25 years, I can’t really trust any library there.


Isn’t it a running (almost) joke that nobody actually knows C++? I have not met anybody who works with the whole language and not only a subset.


Just like I bet most will fail a pub quiz using C17, C# 9, Java 15, F# 5, VB 16, Python 3.9 as base input for the question pool.


Sorry to pull the rug out from under y'all like this, but that article was a couple years old, so I gave it a once-over pass to update it. Mainly the "No hidden allocations" section is rewritten. You can see the full commit diff here: https://github.com/ziglang/www.ziglang.org/commit/bede1e57b6...


It'd be great to include some information about the awesome cross-compilation support!

Thanks for all of your work, it's great.


There are many things you do that I like, but Rust's bad choices about allocation are also almost all library mistakes, not language mistakes.


You only get one chance to omit something from the standard library. After that, you're stuck with it forever.


Rust has “Editions”, which are points where they can make breaking changes, since programs and crates (libraries) have to opt in to the new edition, and you can mix crates from different editions. So if they need to make backwards-incompatible changes in order to add allocator support, they can do this in an edition.


While I am on the Rust side of this, I am not sold on mixing crates with different editions, because I believe it won't work at scale in typical enterprise contexts.

Meaning, having binary crates compiled with different epochs, and then passing data and closures across crate public functions, especially passing data from modern ones into the older ones.

As the language evolves, it won't be possible to guarantee that semantic changes keep working as expected, or, if the runtime requirements have changed, that linking everything together will still work as expected.

To me epochs seem to only work when using a single compiler, compiling all crates from scratch, which is nice, but not what enterprise C, C++, Java, .NET code looks like, and any new language being adopted is expected to play the same game.

Hence why Apple put such a big effort into making the Swift ABI able to deal with language updates.


Editions only support syntactic changes, so what you are afraid of cannot happen.

We mix editions in >1000 crate projects every day just fine, and we are sure it will work with all future editions.


Which just reinforces my opinion about them.

How are the editions going to help when semantic changes do eventually come up in future Rust versions?


If future Rust versions do make semantic changes to existing language constructs, that will presumably only apply to crates built with the new edition, and crates built with the old will get the old semantics.

If we’re talking ABI-level stuff, that doesn’t require an edition because you build all your crates with the same compiler, so it can use a consistent ABI.

Swift’s ABI stability has nothing to do with semantic changes and everything to do with they wanted binary compatibility, so they could start using Swift in the system frameworks that your app links against.


Yes we are talking about ABI level stuff.

Rust will never be an option in certain scenarios if the only option is to be like a scripting language and require its users to compile everything from source code.

Swift's ABI also takes into account ways to keep the ABI stable while evolving the language.


Rust will never do semantic changes.

That’s a feature.

If you want languages that introduce semantic changes and break your code, you have many options to pick from, C++ being one such language in the same space as Rust.


To the language, but not to the standard library. The standard library is eternal.


One can still deprecate things


I love both Zig and Rust - I believe they both serve different types of systems programming - but for many developers, I'm not sure how important these points are.

Zig's reasons for being better include:

- No operator overloading

- Rust panics when allocation fails

- Rust uses a std allocator (which is configurable)

- No metaprogramming (i.e., macros)

- Simpler to grok

Essentially, Zig is the "Go" of C/C++/Rust/D. But I think that dances around the problem that systems programming is just complex. Just because some functions and control flow are simpler doesn't mean the program itself is.

If trying to pull Rust programmers away, I'm not sure the merits in this article are strong enough. However, for newcomers to systems programming, Zig might be easier to learn and solve problems with.


> - No operator overloading

Ehh??? It's the first point there.

And I don't like all this hate for operator overloading. I prefer to write "vec1 * (vec2 + vec3)" instead of "vec1.mul(vec2.plus(vec3))".


I know a lot of people don't like overloading in general because they've been bitten by people abusing it, but I don't think that's where Zig is coming from. Rather, the project places a high value on being able to understand a snippet of code in isolation, particularly the performance implications of that snippet.

Especially in math-heavy domains I don't think anyone is arguing that "vec1.mul(vec2.plus(vec3))" is cleaner code in the abstract (and I know I've personally written pre-processors _entirely_ to avoid having to write that kind of garbage when doing math-heavy code in an environment unfriendly to such syntax), but the function calls make it crystal clear that something non-trivial is happening under the hood.

Do I want to give up operator overloading in Python? Absolutely not, and to be frank I wish that portion of the language were even more dynamic. Do I care about Zig not having operator overloading? Not in the slightest. It sits at a different point in the language design space, and I'm super excited about it.


> the function calls make it crystal clear that something non-trivial is happening under the hood

Alright well I'd also rather write "vec1 [*] (vec2 [+] vec3)" too. Or equivalent.


Zig is still in development, and they're not necessarily opposed to an idea like that. I agree that could be a nice addition.

https://github.com/ziglang/zig/issues/427


This is a nice idea


> the function calls make it crystal clear that something non-trivial is happening under the hood.

Well, it makes it crystal clear that something non-trivial may be happening under the hood. If I have a vector type which is implemented using SIMD intrinsics, I'd still call its addition operation "trivial", even if it hasn't been blessed by the language as such.


The point that we only know something non-trivial _may_ be happening is definitely fair :) I'll leave the original comment as-is since you've already responded.

While we're nit-picking, a wide vector type implemented with SIMD intrinsics would still have a non-trivial addition as far as Zig is concerned; your specific example really only holds for sufficiently primitive vectors.


My beef with operator overloading is having to google symbols, and not being able to inspect them with a language server / IDE. In PyCharm/CLion you can jump to the dunder definition of the operator. Haskell lets you get :info on symbols.

Google has gotten much better at handling symbols.

Basically, custom infix operators are super convenient as long as they are auditable and aren't abused.


I for one have literally never been bitten by it anywhere. I get that you could launch missiles with it but ultimately that's true of literally any structured programming.


Allow me to introduce you to C++’s new std::filesystem::path class, which overloads the division operators "/" and "/=" to concatenate two paths. (Not to be confused with the “+=“ operator, which also concatenates paths, but with subtly different behavior.)

Why did the division symbol get chosen for a concatenation behavior, of all things? Well, I suppose because it looks kind of like how we write paths with slashes, e.g. “dir/subdir/etc”. While I understand how this might be fun aesthetically once you already know the types involved, I find this behavior, so opposite to its numeric meaning, to be quite ambiguous and unintuitive.


Actually, the slash in filesystem::path is one of the operator overloading uses I really like in the C++ standard library (it’s time-saving, it’s useful, it’s easy to read). What I really don’t like is the IO operators << and >>, which I think are an incredibly horrid language design mistake (slow in performance, hard to add formatting options, hard to read).


Python's pathlib does that too. It doesn't sound like it's bitten you though; it's just an aesthetic disagreement.


I suspect it might have bitten me if I wasn’t so lucky to encounter “/=“ first, which lead me to read the documentation and discover the different kinds of concatenation (“+=“ and “/=“) and their behavior.

To someone just skimming the code though, it might not at all be obvious that “+=“ and “/=“ both exist as concatenation operators, and their behaviors are different.

But yes it’s an aesthetic preference or opinion; I tend to prefer operators only when their only possible behavior is nearly so obvious that no documentation should be necessary. When that’s not the case, I prefer named functions/methods because they permit being explicit about subtle differences.


There are actually named member functions for those; they're append and concat. Can you guess which is which? I can't. append is actually "append with directory separator", which is weird, because I'd expect append to be a string-like append to match the append function on std::string. Instead concat is a string-like append despite the fact std::string doesn't have a concat member.

OTOH / is an instant mnemonic cue for "append with separator", and + matches the + on std::string.


Yes, I 100% agree with you that “append()” vs “concat()” are certainly no less confusing in their differences than “/=“ vs “+=“ (and the latter are arguably less confusing due to the mnemonic effect you mention).

In fact, to generalize, I think we can fairly say that operator overloads are akin to extremely short function names; in some cases they may work out great when there is not much ambiguity implicit in the underlying problem, but in other cases a longer and more explicitly descriptive name is required to disambiguate. In this case, “append” and “concat” seem quite poor names for different functions, given that they are virtually synonymous and therefore do nothing to describe or distinguish their differences of behavior.

So my claim is that we can do significantly better at resolving ambiguity with carefully named functions (or other approaches) than operators, or tersely/ambiguously named functions (like “append” and “concat”). Of course, this does come at the cost of code verbosity. Just where we should draw the line between too ambiguous vs too verbose, is of course a difficult subjective matter.


To be honest, I don't like the << operator either. I doubt I'll like the (optional) use of | for ranges too.


<< was necessary at the time


Why?


C++ didn't have variadic templates, so passing along a stream with an operator was ergonomic enough and typesafe.


I suppose you're comparing against std::format, but the question is why overloading an operator (cout << "foo") would have been necessary vs a regular member function (cout.put("foo")).


Because it had to replace people using commas inside printf. The member function could only accept one argument cleanly, so you would have to chain them like cout.put(a).put(b).put(c).


I was once trying to diagnose a performance issue in an algorithm written in Ocaml. Someone had overloaded ** to be (IIRC) 64-bit multiply. I had a momentary “gotcha in 2 seconds!” moment before realising what was happening with that one.


Funny you should mention OCaml, which doesn't have operator overloading! You can only have one function with a given name in scope at any point, and that includes operators. Of course you can redefine with `let (+) a b = ...`, but then you have to explicitly open that module where you want to use that redefined operator. That makes it even more clear what's going on.


Operator overloading is hated among programmers who never use math; in that community all uses of "+" are to do something other than addition, usually something that's not even remotely related to addition. Operators end up as one-character infix method names.


I don’t think I’ve ever seen a library overload math operators when they didn’t do what you’d expect. And just because a language feature is abused doesn’t mean it’s bad. Typescript’s generics are Turing Complete, but they’re still great.


Look at it this way - let's say you're a programmer who never uses math. As a result, 100% of your exposure to operator overloading will be, by definition, to cases that have nothing to do with math. While a programmer that uses math will see 1,000,000 good uses and 10 bad uses, a programmer who never uses math will see 0 good uses and 10 bad uses. I saw a similar effect in the Go community where they were debating whether or not the complex type should be removed. It's absolutely crucial for signal processing, but to the rank and file REST-ist, it's just another annoying case to type out in a generic-less language.


What programmer never uses math?

I mean, most of us aren't going use linear algebra, but still...


I'm using math in a colloquial sense to refer to "mathy math," not "computer science math." So linear algebra would be a leading example, along with stuff like vectors and complex numbers. Accounting math would be another example, because it requires a bignum implementation that doesn't drop decimals like float can.


Ah! Might be helpful to specify that up front. Even so: Interval math could be quite helpful for lots of things that ... aren't using it.


> Might be helpful to specify that up front

I thought his post was pretty clear tbh. For example he mentions complex types so that should be an indicator that he's not talking about the kind of high school maths that the average developer can coast along with.


Do you have an example of someone solving a real-world problem with interval math? I've only ever seen toy examples and theoretical justifications.


1) Most of (micro-)benchmarking in computing. Of course you want a tiny bit more analysis in that case.

2) Slightly more real-world: Measuring things when you only have approximate rulers, have difficult things to measure (odd surfaces, etc.), or have to calculate from 2nd hand measurements (pictures with rulers, etc.)

Not sure if that qualifies as Real World, but...?


By real world I mean code running in production and making someone money. Or a popular open source package. And I'd be interested in which company or project that is and if they have a writeup.


So for those use cases where you will be using a library with custom types anyway, choose one that doesn't overload the operators for those types? This very much seems like a problem on the library- and not the language-level.


Also, in Rust if you don't want to use the overloading, you can still write your code to directly use function calls `a.add(b)`, where `a` is a type that implements the `Add` trait.


I get the truth in that statement. But as a game programmer, I find the contrast interesting: linear algebra is precisely what is commonly used.


Hah! Touche!

Weird to think that game programming is kind of a niche thing at this point. Compared to programming boring CRUD apps, that is. :)


I encountered something similar when I was trying out Nim. In Nim, ^ is the exponentiation operator, so x^2 means x squared. The way operator precedence is implemented in Nim, the unary negate (-) takes precedence over ^, so -2^2 = 4. It threw me off quite a bit (coming from a physical science background I assumed it should always be -4; it took a while in my debugging to figure this out). I went to their discord (?) channel and asked about this, and some of the people there could not understand how it could be any other way, which just shows that someone who largely translates math into code (like myself) thinks about code very differently than someone who never does this.


Have you ever used Boost?


First thing that came to mind was Boost Spirit: https://www.boost.org/doc/libs/1_48_0/libs/spirit/example/qi...

Lovely.


Ah, the memories: I once cut the compile time of a multi-million line C++ codebase by over 20%, just by removing one single-line use of Boost Spirit and replacing it with atoi.


I'd rather solve this with code standards around function naming conventions (including symbols as names for functions) than remove a language feature.

I don't overload operators often, but when I do, it's useful because the operation I'm describing closely parallels other uses and properties of that operation. As an example, it's why I dislike + for list append: + is normally commutative (which list appending definitely isn't), and subtraction is pretty bonkers as a reflected operator on lists.


FWIW, I advocate ~ as array append (a la perl6/raku strings) (and also as wedge product on vectors, * being dot product). It works well with x^y as exponent/repeat, since the rhs is qualitatively different anyway[0], and also gives you (with +) a sum-of-products/dioid/semiring structure for concatenation and alternation on regular expressions. Admittedly, you still don't have a sensible semantics for division, but that's true of plenty of other product operators (even int is iffy - either not closed (div-as-float/rational/etc) or not a multiplicative inverse (divmod)).

0: In particular, exponentiation by natural numbers (or positive integers for things like nonempty lists that deliberately exclude the multiplicative identity) is almost always well defined[1], even if the thing you're exponentiating bears no resemblance to a natural number.

1: Although note than with vectors and dimensional quantities (eg meters), x^2/x^3/etc are all different types: area/volume, bivector, etc.


I get your point. But even in non-mathy business software, data structure access is so much more convenient in languages with an overloadable index operator:

C#: foo[index] = bar[index];

Java: foo.put(index, bar.get(index));


That is a very good point that I hadn't considered, because I was thinking about Go. In Go, the builtin data structures are compiler special-cases, so you get the index behavior on the map structure. Of course, it starts looking like Java for anything more advanced, but the ethos of Go is to not use anything more advanced.


There are more operators than the infix mathematical ones, like increment/decrement (cursors, iterators, custom counters), dereferencing (used for some clever pointer types), etc.

Sometimes there are good reasons to mask complexity.


I think this is the main reason. Outside of math libraries (and maybe strings for concatenation?) operator overloading rarely makes sense. If you're working in a math heavy domain with custom types it's a godsend though. I wouldn't want to work in a language without operator overloading (I've done enough work with Processing and a Quake derived game engine to know that it sucks), but others might have seen it horribly abused


And I would go even further!

I have been thinking recently that everything should (or rather could, in a special language) be overloadable. And every operator should be treated as a non-first-class citizen of a language, '=', '+', and all. The inelegance is granting those operators any privileged status. So the parsing can in every case depend on the arguments being parsed: if you're adding numbers, number addition; if you're adding vectors, vector addition; even more complex behavior could be added for special parsing cases (which I don't even know what those could be), say for creating certain algebraic operators with special conditions, maybe something like Knuth's up-arrow.

Of course, thought must be put into the scoping of these parsing changes, but since this behavior would be conditional on the argument properties, it is very difficult to see any problems. For example, while adding vectors overloads '+', it is not going to cause problems in other cases (when the arguments aren't vectors), and when dealing with vectors 'vec1+vec2' the programmer will essentially always be thinking about the overloaded operation anyway; it seems absurd that a confusion would occur. I should be able to write 'x=5!+3' to mean 'x=factorial(5)+3'.

Allowing contextual meaning (evaluation) of operators, functions, even syntax, allows more compact, expressive language, because we can reuse words, associate their slightly different applications and adapt syntactical behavior to the problem at hand.

Take the usages of the word 'slow' in natural language: it can be a description of current velocity (variable) of an object (context-dependent) "the car is slow", it can be a description of a property of an object (low typical/maximum velocity) "slugs are slow", it can be a verb "please slow down", and so on. Creating new words for each use case is inefficient and disregards the natural close association of their (contextual and algorithmic) meaning.


This is what happens in Lisp and Smalltalk for example.


I don't understand this benefit. So in Zig we are trusted to manage memory, but we can't be trusted to overload operators in a sensible way?


The benefit is that you don't have to be concerned about hidden function calls. You don't get unexpected function calls or performance hits this way.

If you want your code to run fast, you need to ensure that you know what your program is doing and you're in control, at all times. Manual memory management enables that.


... but operator overloading is not really different from function overloading; it's just that the function name is infix rather than prefix.


I believe the rationale against operator overloading is not a question of whether the programmer should or shouldn't be trusted, but rather simply about optimizing for code readability.


Then better integrate some static analysis with natural grammar knowledge to ensure all functions are properly named for their actual purpose.


> Essentially, Zig being the "Go" of C/C++/Rust/D.

I'm not sure that's a fair comparison, as Zig features compile-time evaluation ("comptime") that supports first-class types. Generics falls out of that by letting you create types imperatively at compile time. It's hard to imagine Go ever going in this direction.

As a long-time C programmer, I am really excited to try out Zig once it stabilizes a bit more. Zig's take on generics seems like a compelling alternative to C++ templates.


I'm much more interested in Zig than Rust, but I'm also the kind of guy that is attracted to C over C++, and I also really like Go. I actually don't care about arguing the merits of one over the other, I'm just really happy and excited that there are cool new languages out there and it seems like here's really something for everyone.


Newcomers to systems coding need a really good intro guide as there are so many new things to learn (Ex: pointers, freeing memory). With high level languages, the syntax and garbage collector do everything for me. I build complex and nested data structures in Python daily and have no idea how to do the equivalent in C. Zig seems to assume you already know C, so I never see it winning the onboarding crowd. This is also why Clojure, Scala, F#, Rust...etc will struggle to get big.


There's a lot to do, sure, but while we don't yet have swaths of learning materials for complete newcomers, I consider Zig to be today the cheapest way of getting into "systems" programming for people who have never done it before.

I run Zig SHOWTIME [1] and we have quite a few talks that introduce basic concepts to people that aren't used to systems programming. Again, not yet enough to consider the problem solved, but it's a step in the right direction and we'll build a bigger library as more people join the community.

[1] https://zig.show


I'd say my (basic) understanding of Zig has actually helped me better understand C, since its semantics around pointer operations and allocation are a lot more explicit and intuitive; it's really helped make things "click" for me. Plus, the build system is infinitely better than I've seen from any native-compiled language, period.

Those two factors make me think it's close to the ideal language for newcomers to lower-level programming. Even the syntax ain't bad, as far as C-like syntax goes; it'd be interesting to see some alternative languages that put alternate syntaxes over Zig's semantics, though (and one of my New Year's Resolutions is to put together something to that effect, whether by transpiling to Zig or by (ab)using Zig's compile-time programming).


Actually, something that I appreciate about Zig is how approachable it is for people coming from TypeScript/JavaScript.

Zig is insanely easy to read coming from high level languages like these. It's only when you start to write Zig that you need to take more time to understand, for example, the difference between the lifetime of a pointer to stack allocated memory vs a pointer to heap allocated memory.

But by then, I think Zig will have already hooked you.

Zig is also a great way to learn systems programming, because the design of the language and the std lib code itself is such a great teacher.


This problem was exactly why I wrote Rust in Action. It teaches you systems programming alongside teaching Rust.

That said, you can get quite far without needing to understand pointers or memory management in any of the example languages you mentioned - "Clojure, Scala, F#, Rust". In the first 3, the garbage collector still does most of the work. In the last, there are no calls to malloc() or free(); Rust's compile-time lifetime concept allows the compiler to do the work for you.


I should've been more specific. It's a different problem with Clojure, Scala, & F# where you're expected to understand the JVM & Java or the CLR & C#. It seems similar to Zig to me. There are numerous sources on learning C and I can barely figure out how to do a lot of things in Zig. Maybe a learn X in Y would help me.


If you already know any JVM or .NET language, the first step would be to understand the full stack, you don't need C for that.

Many of us were doing systems programming with other languages before C went mainstream.

What you need to learn is computer architecture.

Getting back to JVM or .NET, you can get hold of JIT Watch, VS debug mode or play online in SharpLab.

Get to understand how some code gets translated into MSIL/JVM bytecode, and how those bytecodes end up being converted into machine code.

https://github.com/AdoptOpenJDK/jitwatch/wiki/Screenshots

https://sharplab.io/

Languages like F# and C# allow you to leave the high level comfort and also do most of the stuff you would be doing in C.

Or just pick D, which provides the same comfort and goes even further in low level capabilities.

Use them to write a toy compiler, a userspace driver, talking to GPIO pins on a Pi, manipulating B-Tree data structures directly from inodes, a TCP/IP userspace driver.

I'm not advocating against learning Zig; do it still, the more languages one learns the better.

Only advocating what might be an easier transition path into learning about systems programming concepts.


No.

With Python, Perl, R, Bash... whatever, I never had to know any C to be very productive.

With F#, good luck not knowing C#. With Clojure, not knowing Java is a pain. I've tried both several times and was frustrated by constantly being referred to something Java related that I didn't understand. Because of that, I feel like those languages will never pick up anyone outside of regular users of C# or Java that want to try something different. Of course there are exceptions, but they're just that.

I agree with you that perhaps one should go through all of what you're referring to Ex: learn assembly, write a Forth...etc.

I'm just saying a lot of these supposedly amazing languages stagnate with regard to their userbase for a reason. APL uses an entirely different way of doing things than Clojure, but I'd bet I could get up to speed a lot faster with APL than with Clojure, where I constantly have to fight with learning the daunting ecosystem.


I enjoyed reading through https://ziglearn.org/


Wow, that looks like a great (& concise!) resource


Why is no metaprogramming a good thing? I'm mostly a web dev but I've played with Crystal quite a bit. Crystal embraces macros & metaprogramming. It's almost a language within the language.

Furthermore, isn't Go a statically compiled language? Why isn't Go comparable to C and the others?


As a Ruby developer who went through a stage of being enamored with metaprogramming, I'd argue that it's a siren song: enticing, but dangerous and destructive. It's especially dangerous because it's fun to write -- it scratches the "I'm so clever" itch -- but is a disaster to read, and a disaster to maintain.

Aside from just wanting to be creative, the desire to do it usually indicates either a missing core language feature, or a programmer's missing knowledge of how to use core features effectively and idiomatically. Otherwise you'd use the simple core language feature (the one every developer is guaranteed to know how to use already) to achieve your goal.

The other reason it's sometimes used is prioritizing terseness over simplicity. That is, you introduce some meta-programming abstraction that saves you some LOC, but at the cost of abstraction overhead. Almost always, you're better off with the reverse priorities.


Because what Zig does is much more intuitive: it has a comptime keyword that allows you to execute almost any function at compile time. So you use the same language for programming and metaprogramming; there’s no weird second syntax to learn. Even better, they also use that approach for generics, where types are just comptime parameters. Again, this makes the syntax more consistent and requires fewer different ways of thinking.
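
A small sketch of what that looks like (the type and field names are made up):

    // a "generic" is just a function that takes a type and returns a type
    fn Pair(comptime T: type) type {
        return struct {
            first: T,
            second: T,
        };
    }

    const IntPair = Pair(i32); // instantiated at compile time
    const p = IntPair{ .first = 1, .second = 2 };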


If anything, I'd argue that Rust is the "Go" of systems programming.


Really surprised to hear you say this, actually - in my mind Rust is the can of worms that allows almost anything and everything in terms of language features and will likely end up as bloated as (if not more than) C++ in the next 20 years.

What makes you argue that Rust is the "Go" of systems programming?


For me personally, the problem with C++ isn't that it has too many features, but that they interact poorly and require complicated rules to make them work together. To list a few pairs: destructors and exceptions, templates and inheritance, templates and headers, encapsulation and headers, namespaces and operator overloading.

At least at the moment, Rust's many features are mostly orthogonal and work well in combination. But in some sense, that makes the learning curve even harder, because you can't focus on an "accepted subset" as you would in an existing C++ codebase.


I wouldn't use this analogy as an everyday tool, but in reply to the comment above, if we really have to give the "Go" role to either Zig or Rust, I'd choose Rust for the main reason that, while you can certainly come up with a lot of abstruse abstractions, the language ultimately wants to keep you away from certain tools that would otherwise be considered core to a systems programming language. I'm of course referring to the restrictions that ensure safety at the expense of being able to implement some kinds of algorithms (in a "safe" way).

Go in many ways has the same approach: stop developers from doing potentially damaging things (according to Go's definition of what's "potentially damaging"), which I guess is one of the reasons why the creators decided to remove pointer arithmetic and add a garbage collector.

Zig wants to keep the language small, but the programmer is expected to have access to all kinds of advanced tools, in a way that neither Go nor Rust seems to consider acceptable.

In Zig you can use `undefined` to avoid initializing memory, do all kinds of pointer arithmetic, and while you can't overload operators, you can use comptime to concoct all kinds of static/dynamic dispatch schemes; the only limit is that they can't masquerade as basic operations.

Of course you can do all these things in Rust unsafe, and same in Go with cgo, but the point is not what you can or can not do, it's what the language guides you towards, and the only potentially damaging thing that Zig tries to prevent you from doing, is making code hard to understand for readers.


50 years of C, and a few less of Objective-C and C++, have proven that C.A.R. Hoare was right regarding language defaults in computer security, in the approach taken by ALGOL-based systems programming languages, starting with ESPOL/NEWP.

If I want unsafe languages, C and C++ already fulfil the purpose with 50 years of ecosystem.

Being safer than UNIX for certain kinds of customers is one of the selling points of Unisys ClearPath MCP, thanks to NEWP.


Rust is the Haskell of systems programming


Great in theory but so convoluted and theoretical that it will always be niche?


I mean mostly in the sense that it takes mathematically modelling programming and the machine being programmed very seriously and it allows you to build tall towers of abstraction on top. Compare this to C++ for example where many of the mechanisms for building abstractions are accidental or C where abstractions are not really possible.


Ah right, that makes sense, I read your comment in more of a snarky manner before.


Actually I think that in the long term most languages with automatic memory management will just adopt some form of affine types for high performance code sections, and that will be it.

We will keep the productivity of the existing eco-systems and tooling, with an additional tool for the 1% of use cases that it actually matters.


If even that. I think in a lot of cases, it doesn't really matter, and most already allow linking to a library, which works well enough for Python at least, and should be fine for most application development purposes.

Languages like Rust and Zig are just on a different level, for cases where performance or hardware restrictions are the number one concern.


The largest difference between Zig and Rust is that Zig does not have memory safety. This argument is more important than all the other differences combined, because when writing very large, long-living projects, memory safety is the most important issue.


Zig does however go a long way towards memory safety, to be fair, and if you wanted to, you could also argue similarly that Rust does not guarantee 100% OOM safety whereas Zig does: https://www.youtube.com/watch?v=Z4oYSByyRak

I would also say that for "very large long lived projects", memory safety is not actually the most important issue, but rather correctness, followed by (in no particular order) safety, performance, explicitness, readability and exhaustive fine-grained error handling, the latter also not found in too many languages besides Zig.


Isn't memory safety a subset (and I would argue a necessary requisite) of correctness?


Good question.

Assuming we both mean "memory safety" as in a guarantee given by the language (e.g. Rust), then no, logically speaking, it can't be a requisite for correctness, and it's not even a subset of correctness.

Here's why:

If you can write a correct program in a language which does not guarantee memory safety (which we certainly could, for example, simply by not allocating memory at all, or not using pointers etc, or by using runtime checks e.g. to ensure there are no double frees or out of bounds reads/writes), then memory safety is neither a subset of, nor a requisite for correctness.

Memory safety is a double-edged sword. It can make correctness easier to achieve. But that also depends on how the language implements memory safety. If this is done at the expense of a steeper learning curve, then that could in itself be an argument that the language is less likely to lead towards correctness, as opposed to, say, an almost-memory-safe language that implements 80% of this guarantee while optimizing for readability, and with a weekend learning curve.

Historically, the lack of memory safety has obviously been the cause of too many CVEs. But even CVEs in themselves are more a measure of security than correctness. I would say that exhaustive fine-grained error handling checked by the compiler is probably right up there for writing correct programs.


Thank you for your answer. You can certainly write a memory-safe program in a language that doesn't guarantee memory safety. However, I still maintain that correctness implies memory safety, at least as a characteristic of the program if not of the language. If your language doesn't help there, you have to expend more time and effort to achieve that result, and accept a higher risk of screwing it up. But I see why you argue that readability correlates with correctness. It's also true that most memory safety issues are really hard to spot, and that may be even more true for an 80%-safe language. I really like when the computer does work for me, since as a human I'm way more sloppy.


Yes, I fully agree that a correct program can't contain memory bugs, and that we want the compiler to help us.

For a systems programming language though, I think Zig hits the sweet spot, and not only with regards to memory safety.

Correctness, in this realm, is as much memory safety as:

* error safety (making sure that your program correctly handles all system call errors that could possibly occur, without forgetting any, and there are many! The Zig compiler can actually inspect and check this for you, something not many languages do), see https://www.eecg.utoronto.ca/~yuan/papers/failure_analysis_o... for how critical error handling is in distributed systems,

* OOM safety (making sure your program can actually handle resource allocation failures without crashing; see the sketch after this list),

* explicitness (clear control flow with a minimum of abstractions to make it easy to reason about the code with no hidden surprises, no weird undefined behavior),

* and especially as much runtime safety as you can possibly get from the language when you need to write unsafe code (which you will still need to do when writing systems code, even if your language offers memory safety guarantees, see https://andrewkelley.me/post/unsafe-zig-safer-than-unsafe-ru...). Here, Zig helps you not only at compile time, but also at runtime (and with varying degrees of granularity as you see fit), something not all systems languages will do.
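
To make the OOM-safety point concrete, a hedged sketch (the function names are made up): in Zig, allocation failure is part of the error set, so the compiler forces the caller to decide what to do with it:

    const std = @import("std");

    fn makeBuffer(allocator: *std.mem.Allocator, n: usize) ![]u8 {
        return allocator.alloc(u8, n); // may fail with error.OutOfMemory
    }

    fn example(allocator: *std.mem.Allocator) void {
        // propagate, retry with a smaller size, degrade gracefully...
        // but silently ignoring the failure is not an option
        const buf = makeBuffer(allocator, 1 << 20) catch |err| switch (err) {
            error.OutOfMemory => return,
        };
        defer allocator.free(buf);
        // ... use buf ...
    }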

On all these axes, Zig is at least an order of magnitude more likely to lead to a correct program than C, while optimizing for more readable code (even for someone coming from TypeScript) and thus code review, also essential for improving the odds that your code is correct with respect to its requirements.


That's really quite cool! Thank you again for your analysis and links


Not really sure why the original HN poster left "D" out of the title; is there any conspiracy going on to silence D :-)

Actually, unlike C++ and Rust, you can have a lean D, and most probably it can perform as well as Zig, if not better. D has excellent CTFE support similar to Zig's, and D has no macros as well. If you want extra memory safety with ownership support, D has that feature covered.


> unlike C++ and Rust you can have a lean D and most probably it can perform as good as Zig if not better

I doubt this is really meaningful. The performance-ceiling on all four languages is presumably very high, as they all allow you to write hand-optimised low-level code if you want to.


What is probably not obvious to most people is that D supports a "D as a better C" mode, so it can have seamless interaction with C libraries and vice versa [1]. With betterC, D's support for C interfacing and integration is second to none. Heck, you can even compile the D compiler as a library if you fancy a built-in compiler for your C and D programs [2]. This seamless integration capability can be very productive where most of the high-performance libraries are written in C.

[1] https://dlang.org/blog/2017/08/23/d-as-a-better-c/

[2] https://dlang.org/blog/2017/08/01/a-dub-case-study-compiling...


Zig has similar good integration with C. The Zig compiler is also a C compiler (calls into Clang), C headers can be imported directly from Zig code and used without writing any bindings, and you can also autogenerate C headers for Zig code so that it can be used directly from C (although this is currently broken).
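For a feel of how direct that is, here is a minimal sketch (assuming a libc is linked, e.g. `zig build-exe hello.zig -lc`); no separate binding-generation step is involved:

    const c = @cImport({
        @cInclude("stdio.h");
    });

    pub fn main() void {
        // calls C's printf directly; Zig makes you explicitly
        // discard the return value rather than silently drop it
        _ = c.printf("hello from C's printf\n");
    }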


Pretty much, so you might as well use the one you find most productive. If the productivity isn't there, fast programs don't materialize.


You could go for productivity, but that's not the only dimension. You could emphasise safety: C++ has no equivalent to Rust's safe subset. I believe D has a safe subset too, but I don't think Zig does. You could emphasise portability: C++ seems like the clear winner in terms of having the most supported platforms.


Microsoft and Google are trying to retrofit a safe subset into C++, via the Core Guidelines and lifetimes, enforced via static analysers.

Since C++ is more relevant to what I do, I naturally keep validating how well they fare, and so far it's not good.

The lifetime checker, for example, is only able to deal with basic workflows, and when it complains the error messages aren't that good, leaving you wondering why the analyser considered the lifetime incorrect.

The main reason is that if the call sites aren't annotated, it can only guess the intended lifetimes, and it mostly guesses wrong.

No doubt if they keep throwing money at it they might eventually solve it; however, this is already a three-year effort and this is the best so far.


It's also possible to formally verify C++ [0][1], but this isn't commonly done. I imagine it's very labour-intensive.

[0] https://www.eschertech.com/products/ecv.php

[1] https://trust-in-soft.com/


I look forward to seeing Zig, but this is a pretty bad explanation and is wrong in fundamental ways:

>This means you can be sure that the following code calls only foo() and then bar().

No you can't be sure of this. Zig has different rules about error handling depending on what mode you build your software in, and it's not clear or rigorously specified how different modules built using different modes interact with one another when it comes to error handling.

So that initial line:

    var a = b + c.d;
That may end up causing a panic depending on the types of b and c.d, or it might result in undefined behavior, in which case nothing is guaranteed. You also need to know something about the type of c, since if c is a pointer type then it could be a dangling reference.

As for hidden allocations, the only hidden allocation I know of in C was the variable-length array, which C11 made an optional feature. C++ doesn't have VLAs, and I don't know of any core language features in C++ that allocate dynamic memory any longer. Some older compilers used dynamic memory for exception handling, but modern C++ compilers preallocate memory for all of the standard exception types. I want to say that Rust also doesn't dynamically allocate memory in the core language, but I'm not 100% sure about that. D does a boatload of dynamic memory allocation in the core language, which is rather unfortunate.

For bare-metal support, Rust has first-class support for building without the standard library. C++ does not, but in practice Clang, GCC, and MSVC all support compiling without it.

As for being a portable language, that's a nice goal that Zig can establish for itself, but that's all it is right now.

As for the build system... well, C++ has an atrocious build system; there is no justification for it, nor any defending it. Good on Zig for wanting to make the entire developer experience pleasant and making package management a core priority.

And finally, for simplicity: that word really doesn't mean anything anymore. Everyone calls their project "simple", and everyone means something different by it. Zig can claim it's simple; maybe it's true, maybe it isn't. I think that's something that can only be judged if and after it gets some degree of adoption.


> I don't know of any core language features in C++ that allocate dynamic memory any longer.

The article mentions coroutines. There may also be an allocation when one throws an exception.

> Everyone calls their project "simple" and everyone has a different meaning for it.

So true. For example, I'd say Brainfuck is simple. And much simpler than Zig. But that does not mean I'd want to use it.


Coroutines in C++ are a good example, definitely. Rust managed to use the borrow checker to avoid dynamic allocation with coroutines. I wonder how Zig handles this, and I hope it's not through undefined behavior.

From reading it over, it looks like Zig does use dynamic memory for coroutines, but it requires the memory to be passed in at the point of construction. That has some pros and cons and is consistent with Zig's overall approach, so I can respect that choice.


Not sure what you mean by dynamic memory. Much like a struct can be stored at any memory address - global, heap, stack - a coroutine ("async function") in zig can be stored at any memory address. Just like `pointer.* = Foo{ .a = 12, .b = 34}`, the pointer decides where the struct memory goes, `pointer.* = async foo();` the pointer decides where the async function's stack frame goes.
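A rough sketch of what that looks like (0.7-era syntax; `tick` and the variable names are made up). The frame lives wherever the assigned-to variable lives, a global in this case:

    var global_frame: @Frame(tick) = undefined;

    fn tick() void {
        suspend; // hand control back to the caller/resumer
    }

    fn start() void {
        // the async call's stack frame is stored in global_frame;
        // no allocator is involved anywhere
        global_frame = async tick();
    }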


> As for hidden allocations, the only hidden allocation I know of was C's now deprecated runtime length array.

I believe there are plenty of functions in C's stdlib that just call malloc as they please; localtime, for example.


localtime does not do any dynamic allocation (it uses static storage), and very few C functions do: malloc itself and a handful of string-handling functions. Furthermore, I am referring to the core language, not the standard library. Zig also uses dynamic memory in its standard library, and any non-trivial standard library will do memory allocation.


Thanks for the correction on localtime, but the point is not avoiding allocation completely, it's being explicit about it. All functions in Zig's stdlib that allocate memory take an allocator as a parameter.

That's pretty much the whole point: allowing the programmer to know if a function might allocate or not, and allow them to customize which allocation scheme to use by passing in whatever allocator they prefer.
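A minimal sketch of that convention (the function is made up, but the std.mem.Allocator parameter is the standard pattern; in the 0.7-era API allocators are passed as pointers):

    const std = @import("std");

    fn duplicate(allocator: *std.mem.Allocator, s: []const u8) ![]u8 {
        // the only allocation, visible at the call site via the parameter
        const copy = try allocator.alloc(u8, s.len);
        std.mem.copy(u8, copy, s);
        return copy;
    }

    test "caller picks the allocator" {
        const copy = try duplicate(std.testing.allocator, "hello");
        defer std.testing.allocator.free(copy);
        std.debug.assert(std.mem.eql(u8, copy, "hello"));
    }

Swap in an arena or fixed-buffer allocator and the function body doesn't change.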


> well C++ has an atrocious build system, there is no justification for it nor is there any defending it

C++ has no build system. It has a compiler (and a preprocessor). That's it.

You can choose from a large number of possible build systems, some of which are atrocious (not always for the same reasons) and some of which are very nice.

C++ also has no package management system. For a lot of us, that's a feature, not a defect.


With any compiled language you can use the compiler and vendor your dependencies instead of using the language's conventional package manager. For example, nothing prevents skipping Cargo and building Rust directly with rustc the way Bazel does.

https://github.com/bazelbuild/rules_rust


D dynamically allocates for stuff that you never had in the first place (in C or C++), so it's not like Java or anything like that.


Can you explain what you mean by never had in the first place?

The core D language allocates closures and exceptions which is like Java. Rust and C++ don't do any kind of dynamic allocation for these features.


That's not entirely true; please check D as a Better C [1]. You can have your cake and eat it too with D.

[1]https://dlang.org/blog/2017/08/23/d-as-a-better-c/


Exceptions are the big sticking point at the moment, true, but you don't have to allocate them with the GC.

Rust and C++ don't have associative arrays in the language to begin with, so if you write D the way you'd write C++, you won't allocate.


I never mentioned garbage collector or associative arrays, I mentioned dynamic memory, closures, and exceptions.


Well, I hoped dynamic memory was obvious, given that the D GC is written entirely in D and has to get its memory from somewhere.


> D has an optional garbage collector, and it is common for code to use it, so without a full and recursive audit of all your dependency tree, it’s likely that you are accidentally using the garbage collector.

No audit needed since 2014, when the @nogc attribute was introduced. Under @nogc there are zero hidden allocations.


Thank you - I removed the incorrect claim from the article


Thanks!


Caches are everywhere, and systems now have five layers of different memories, from L1 to network-attached storage, plus accelerator memories such as a GPU's.

I would like a language designed to target a hierarchical-memory system. I would like a language that forces me to write single-threaded batched (or "blocked") algorithms with low cache misses.

How many pieces of code are out there where an easy 4x speedup could be achieved today if they had been written with batched operations from the start? (It also shows how limited compilers are at autovectorizing.)

Rust gives me guarantees regarding memory safety; I would like a language that gives me guarantees that SIMD instructions and two cache levels are used correctly, without having to read the source code and compiler output.


Having worked on hash table implementations in C, and having done everything to minimize cache misses, e.g. using tiny 8-bit bloom filters within a cache line to avoid further cache line probes, I now prefer Zig to C, because I believe it makes memory alignment far more explicit in the type system.

You can even align the stack memory for a function, and this is all upfront in the documentation. You don't need arcane compiler-specific pragmas; Zig just makes it easy. Zig's alignment options are so much more powerful, neat, and accessible than C's, right down to allocations with custom alignments, all first class at the language level. Compare that with C's malloc() and posix_memalign(). Implementing a direct I/O system in Zig recently was also a breeze.
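A small sketch of what that explicitness looks like (assuming the std allocator's alignedAlloc helper; the sizes are illustrative):

    const std = @import("std");

    // page-aligned global buffer; the alignment is part of the declaration
    var buffer: [4096]u8 align(4096) = undefined;

    // the alignment is carried in the slice type itself
    fn cacheLineChunk(allocator: *std.mem.Allocator) ![]align(64) u8 {
        return allocator.alignedAlloc(u8, 64, 256);
    }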

I also appreciate Zig's approach to memory management, where even the choice of allocator is considered important, and for things like async/await, Zig's explicitness around memory requirements is brilliant. Zig's @frameSize builtin (https://ziglang.org/documentation/0.7.1/#frameSize) will tell you exactly how much memory a whole chain of functions will need to run async, including their stack variables. You can even choose where you want their async frames to be stored: global, heap or stack.

Again and again, Zig's design decisions have just been spot on. Huge kudos to Andy Kelley and the Zig communities.


  > How many pieces of code are out there where an easy 4x speedup could be achieved
  > today if they had been written with batched operations from the start? (It also
  > shows how limited compilers are at autovectorizing.)
Even before auto-vectorization, I'd love a functional automated "loop unrolling with interleave" that works on large functions. There is a pragma for this in Clang, but when I checked it in clang-9 it didn't work. I'll have to try again, as v9 is a bit old now. When this is well supported, it will make it easy to "fill the pipe" on multiple-issue cores when doing batched operations, without having to manually unroll loops as is done in VPP, for example: https://gerrit.fd.io/r/gitweb?p=vpp.git;a=blob;f=src/vnet/ip...

Manual unrolling works, but getting the same effect with a simple pragma on top of the loop looks so much more attractive ;)


<rust evangelism strike force>

It sounds like the complaints here are operator overloading, a global allocator by default, and metaprogramming. I think the value of these features is certainly not objective, but it's not a very compelling argument to avoid a language because of them.

As for optional standard library support, this is actually really common in Rust and is a breeze to support. Just annotating `#![no_std]` basically gets you there, libc-wise. There are even additional restrictions like `#![no_core]` to remove even more. I believe this should be noted.

I think the author missed an opportunity to really drill into actual pain points like the steep learning curves of C++ and Rust, and where/if Zig excels in specific examples.

</rust evangelism strike force>


> but it's not a very compelling argument to avoid a language because of them.

You're looking at it from the wrong direction. Both Zig and Rust target low-level programming, i.e. domains where C and C++ are very established. You don't need a reason to avoid a language -- you need a reason to invest a great amount of effort to switch away from an established incumbent.

Now, my biggest issues with C++ are, in this order: 1. language complexity, which makes understanding and changing codebases harder, 2. long compilation times which lengthen cycles and reduce software quality, 3. lack of memory safety. Rust improves on 1 a tiny bit, and solves 3. Zig solves 1, 2, and almost completely 3. This means that I have little reason to even consider switching to Rust; I will only if it ever becomes dominant. I wouldn't switch to Zig right now -- it still needs to prove itself, but at least it's a contender, because it offers something quite radical. Rather than an improved C++, it is a whole new way to think of low-level programming, it seems to address precisely the things that bother me about C++, plus it focuses on other aspects that are very important to low-level programming, like cross-compilation.


On the other hand, what does Zig lose that C++ has? I've seen that it doesn't have destructors. Does it have some other means of ensuring that I'm closing my resources appropriately, or am I burdened with ensuring that myself? After all, memory safety is just one kind of resource safety.


It has "defer" and "errdefer" constructs for cleaning up resources, comparable to Go, but obviously applying to more things since there's no GC.

https://ziglang.org/documentation/master/#defer
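A minimal sketch of the pattern (file name and buffer size made up; std API names as of recent versions):

    const std = @import("std");

    fn readConfig(allocator: *std.mem.Allocator) ![]u8 {
        const file = try std.fs.cwd().openFile("config.txt", .{});
        defer file.close(); // runs on every exit path, success or error

        const buf = try allocator.alloc(u8, 4096);
        errdefer allocator.free(buf); // runs only if we return an error below

        const n = try file.read(buf);
        return buf[0..n];
    }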


AFAICT, it loses absolutely nothing. It does reject C++'s implicitness, so it replaces destructors with an explicit defer, but the mechanism is pretty much the same. That it does all this in a language you can fully learn in a day or two is remarkable.


> 2. long compilation times which lengthen cycles and reduce software quality

Seems like something that will be improved over time in Rust.

Do you see any advantages of Rust over Zig, which wouldn't fall under point 3?


No, although language preference is subjective. Rust is a language in the C++ tradition and, like C++, is one of the most complicated programming languages in the history of software, while Zig is probably among the simplest. Some people like elaborate languages and find the design, which mixes C++ and ML, appealing. Zig is unlike anything else.

I would say that the biggest philosophical difference is this. While Zig and C++/Rust are all low-level languages and so suffer from low abstraction (it comes with the territory), i.e. implementation details are hard to hide and changing them requires changes in the APIs' clients, C++/Rust invest a lot of complexity in trying to make the code, once written, look as if the language has good abstraction. The code is still as hard to change, but it looks like a high-level language on the page. Some people may like that, but Zig completely rejects it.


Compile times will likely improve in the future, but they're still likely to be much longer than Zig's. Especially if you also consider the body of software surrounding Zig/Rust. I believe Zig developers are more likely to take the stance that libraries should be more specific and smaller, while Rust libraries tend to be more all-encompassing. Even if the resulting code is small and efficient, the compiler has to do more work.

I think Bryan Cantrill had a pretty spot-on description of this kind of thing in his "values" talk, even if Zig is not on his list.


I have always been using Pascal.

It used to solve 1 (although it gets worse every year as new language features are introduced), it solves 2, and most of 3 (with automatic reference counting).


Reference counting is a GC (and often not a very good one). In low-level domains you often don't want a GC for various reasons.


That is why Pascal also has raw pointers and inline assembly

So it is safe by default, but for those hotspots where it actually matters, you can write the most performant, unsafe code


You really have to go out of your way to get memory safety wrong in modern C++.


Not really. It's still extremely easy to trip over things like iterator invalidation or using moved objects.


Rust is also getting configurable allocators soon. Most standard collections will take an additional, optional allocator.

There are also a few people who want to rethink the design of allocators to allow for inline storage. It definitely sounds interesting, but may end up being too complicated.


Do you have a link to the RFC for this? This sounds awesome.

EDIT: thanks to the links below I found this [1] and this [2]

[1] https://github.com/rust-lang/rust/issues/32838

[2] https://github.com/rust-lang/rfcs/pull/1398


While I don't have an RFC for you (I'm not the same person), here's an interesting thread on maybe tweaking the interface: https://internals.rust-lang.org/t/is-custom-allocators-the-r...


And here's the working group's repo: https://github.com/rust-lang/wg-allocators


I'm not sure there's an up-to-date RFC, but here's what it looks like: https://doc.rust-lang.org/nightly/std/boxed/struct.Box.html#...


Will they avoid the Transparent Comparator fudge that C++ has because the allocator is part of the type?

https://stackoverflow.com/questions/20317413/what-are-transp...


Could you elaborate on what this is, exactly?


In short, it allows a std::map<K, V> to be queried with a type that isn't K, provided the map uses a transparent comparator such as std::less<>. For example,

    std::map<std::string, int, std::less<>> m; // std::less<> opts into transparent lookup
    std::string_view sv = "mykey";
    auto&& iter = m.find(sv); // no temporary std::string is constructed

(With the default std::less<std::string>, the find(sv) call wouldn't even compile.)


Oh, Rust already has that.


I guess I would be in the D evangelism strike force, but I concur.

D also has a really rather nice composable allocator system in the standard library.


The “no metaprogramming” claim is a bit of a misnomer. The thing is that Zig uses the exact same syntax for metaprogramming as for regular programming: you just add the comptime keyword and use types like variables. This is a much more ergonomic approach than having a completely separate metaprogramming syntax, which is really common, and IMHO annoying, in other languages.
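For instance, here is the classic sketch of a generic function: the type is just another (comptime) parameter, with no separate template syntax:

    const std = @import("std");

    fn max(comptime T: type, a: T, b: T) T {
        return if (a > b) a else b;
    }

    test "one syntax for compile time and runtime" {
        std.debug.assert(max(i32, 3, 5) == 5);
        std.debug.assert(max(f64, 1.5, 0.5) == 1.5);
    }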


IMHO, the list seems to confirm why I would choose C++ or Rust over Zig.

I might still consider choosing Zig over C, but since Zig has nowhere near the embedded coverage of C, with all the obscure compilers, it seems to me that Zig will remain a niche contender.


This is a recognized problem, which is why a C compile target is on the todo list.


I've been working my way through the Blog OS resources in Rust (https://os.phil-opp.com/) - is there a similar tutorial for Zig? I think it would be a great comparison. The closest thing I can find is this hobby x86 kernel in Zig: https://news.ycombinator.com/item?id=21967668


If you can't find one you should consider making your own and contributing a 'bare bones' style tutorial to https://wiki.osdev.org/. If you're familiar with the subject matter, getting to the point of working serial output in a new language shouldn't take too long.


I was intrigued by this statement in the article: "The entire concept of the heap is strictly in userspace"

Is that just an analogy to kernel/userspace from operating systems? Or is it an actual term used for programming languages, in the abstract, apart from an operating system?

If I were running Zig on a bare-metal embedded device and wrote my own heap allocator, would it be appropriate to call that a "userspace" heap allocator?


The concept of the heap is in the userspace of the language. This terminology is confusing to someone thinking of usermode/kernelmode; it instead contrasts userspace with language-space. As in, users implement code to reason about heap memory; the language does not know or care what the heap is.

At a language level, Zig does not have a "new" or a "malloc" or anything of the sort.


It means that it's a feature that can be implemented without special language-level support. As an example, Go 1 has maps and slices which are "magic" generic types that a 3rd party library author would not be able to recreate.


Flagging Out of Memory panics is pretty disingenuous.

Unless Zig only does static allocation of memory (fixed stack and heap), it is equally at the mercy of Linux's optimistic memory allocator.

With Linux, you can request your memory, get no error code, and then when you go to use it--BOOM.


Linux is only one of many, many different environments in which the same Zig code can be compiled to target. That's the point, you write your code in a portable, reusable manner, and it behaves correctly everywhere.


That can be disabled, and it is disabled in many situations where one would use C or C++, such as embedded devices. It can be disabled system-wide, or an application can disable it for itself.


Actual article title is “...C++, D, and Rust.”

D sometimes feels like Rodney Dangerfield =)


As a community I really think we're not flashy enough about what we have.


Yes, one of D’s major failings is in fact not technical, but marketing.


At least it is mentioned

I used to believe Object Pascal would replace C++, but now it is not even mentioned


we get no respect, I tell ya, no respect


Is it possible to mix Evented and Blocking modes in a single Zig app? At work we write libraries that mix them. For example you might want to do CPU bound work in threads, while you'd want a single thread to manage IO (this thread would run in Evented mode).

At a glance, this seems not to be possible in Zig because it's a compile-time flag?


It's possible. The global flag is just a convention used currently by the standard library. There still are some rough spots, but I've written a TUI chat client that uses blocking I/O when interacting with the terminal, while all other operations are evented and use async/await.

We still need to nail down the interface, but there is no technical blocker.
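For reference, a sketch of the convention being discussed (0.7-era; how per-operation overrides get exposed is exactly the interface still being nailed down):

    // in the root source file:
    pub const io_mode = .evented;

    pub fn main() !void {
        // with io_mode == .evented, standard library I/O may
        // suspend (async) instead of blocking the thread
    }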


I'm still constantly evaluating Nim, D, and Rust. And tooling-wise, I think they are production-ready.

Not sure what kind of thing I'd write in Zig. Will try it anyway :)


> If Zig code doesn’t look like it’s jumping away to call a function, then it isn’t.

I'll admit, I don't know zig. But how would it implement 1.0*2.0 for an architecture that doesn't support float multiplication? I assume that it would do something like compiler-rt and replace that operation with a function call. See https://github.com/llvm/llvm-project/tree/main/compiler-rt/l... for examples of "code that doesn't look like a function call" that actually is.


The compiler can inline the necessary instructions in place for such an architecture. I think "no jumping away" is meant more in terms of no hidden side effects.


I have to hand it to Kelley. I didn’t believe in this project when I first read one of his initial “I want to make a proglang” articles. Just one guy? And he doesn’t have a programming language background? And he wants to make a language which can be used instead of C, the hardest-to-replace programming language (apparently)?

I still haven’t tried it myself. But I see now that there are plenty of people who have made an informed choice and chosen Zig over the other systems programming languages. So obviously they must be doing something right. Obviously Zig has something that system programmers like.

So, props!


Has anyone tried to combine Zig with Cosmopolitan libc yet?

https://justine.lol/cosmopolitan/index.html


So for someone who's not actively into the systems programming ecosystem: how widely used is Zig (niche, I guess?), and how different is it from Nim? I don't see any mentions of it here.


Tbh this sounds more like “we don't need all these newfangled tools, in my day we just...”

Writing software is complex; you do actually need sophisticated tools. Saying the language is simple just moves the complexity somewhere else. I say having many of the hard problems solved by the language itself and its much larger development community is much better than heaping them on individual programmers (who might then get it wrong).


That's the wrong impression, C already fills that "in my days" niche nicely (and it does it well) ;)

Zig enforces correctness in a lot of places, it doesn't have C's sloppy implicit type conversions, it's impossible to accidentally use uninitialized data, it has "proper" arrays and slices with range checking, it enforces to handle return values, it has a proper error-handling system, etc etc...

But it's quite hard to balance "correctness" with "convenience", Zig has placed itself somewhere between "hippie C" and "extremist Rust". When it comes to enforcing correctness, it's much closer to Rust than C though.


> writing software is complex, you do actually need sophisticated tools.

Eh, in my experience people mostly use 'sophisticated tools' because either their language doesn't do enough to help them out or they just really like making things more complicated than they need to be.

Like, watch Casey Muratori develop a game in front of a live audience with C++ using a text editor, compiler, and a debugger. Or Jonathan Blow developing a compiler in C++ and a game in his own language using a text editor, a compiler, and a debugger.


Actually I think this kind of example is exactly the problem: people think that if Jonathan Blow can do it with a plain text editor, anyone can. But Mr Blow is one of the very best in his field, the vast majority of programmers are not at that level. So they need languages and tools to help them out.


The first example doesn't use type annotations. Curious why? I have a negative knee-jerk reaction to code without type annotations.


the caption above the example says:

> ...without needing to know the types of anything:


It's a bit unfortunate that an article titled "Why Zig when there is C++, D and Rust" mentions a lot of features that are shared by Zig, C++, D or Rust, not always giving credit or explaining how Zig solves these problems better.

Point by point:

- No hidden control flow: Linus Torvalds used to argue for C over C++ for similar reasons. I think it's a valid point for some kinds of systems programming. IMO, the argument is a lot stronger for features like destructors (C++) or Drop (in Rust) because those are truly hidden. At least the + sign is visible, and experienced C++ devs read it as a function call anyway.

- No hidden allocations: This is just a library design choice in C++ and Rust, not baked into the language. The standard libraries of those languages usually favor convenience over the last bit of flexibility and performance. At least in the case of C++, there's an ecosystem of libraries like the EASTL that make different choices.

- No standard library: That's fully supported in common C++ compilers and Rust. If Zig goes beyond what those languages offer, which it might, then the article should probably explain how.

- Error handling: "Zig is designed such that the laziest thing a programmer can do is [...] properly bubble errors up." That's also true for exceptions. Presumably, Zig doesn't have exceptions. This could be explained, and contrasted with Rust's monadic approach to error handling.

- Compatibility with C: That doesn't tell me much, because C++, D and Rust all have excellent C compatibility.

- Package manager: Great! Both Rust and D also have one, so this might mainly be a point in contrast to C++?

- No metaprogramming: I looked up how Zig treats format strings. It looks like it's possible for functions to have "compile time arguments", and there are some rules around compile-time evaluation of if statements and inlineable loops [1]; see the sketch after this list. At first glance, there are some parallels to D's metaprogramming capabilities, which also has static ifs and loops, but I don't know if Zig's mechanism is as powerful. It certainly looks elegant, but to me it clearly is a kind of metaprogramming, not "no metaprogramming".
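To illustrate [1], a small sketch of that compile-time control flow: an inline for over a type's fields, loosely analogous to D's static foreach (the function is made up):

    const std = @import("std");

    fn sumFields(v: anytype) i64 {
        var total: i64 = 0;
        // unrolled at compile time, one iteration per struct field
        inline for (std.meta.fields(@TypeOf(v))) |f| {
            total += @field(v, f.name);
        }
        return total;
    }

    test "comptime reflection, ordinary syntax" {
        const point = .{ .x = @as(i64, 1), .y = @as(i64, 2) };
        std.debug.assert(sumFields(point) == 3);
    }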

Having said all this, I'm very happy that there are more languages in this space now.

Maybe at this point, the best strategy for a project like Zig is to address Rust head-on in comparisons like this. It's very unlikely that someone reads this article in 2021, is looking for a new systems language and isn't also considering Rust.

[1] https://ziglang.org/documentation/master/#Compile-Time-Expre...


> - Compatibility with C: That doesn't tell me much, because C++, D and Rust all have excellent C compatibility.

Zig's @cImport/translate-c is significantly easier to use than Rust's bindgen (though it's been a few years since I used bindgen), plus it can translate actual function bodies to Zig and not just declarations (e.g. inline functions).


I can totally believe that. My point is that a “why not X” article should explain why it’s better, not just name a feature that X nominally also has.

(BTW, bindgen hasn’t changed that much. The expectation seems to be that people write a safe wrapper around the generated bindings anyway, so not a lot of effort goes into making them maximally convenient. I also don’t find that ideal.)


dpp* can't do function definitions yet but for declarations it's as easy as #including a C header file natively.

* https://github.com/atilaneves/dpp


Serious question: is D still a thing? It has been years since I last heard anything about it. Is it still fractured between multiple stdlibs? Has it seen any take-up, or have sizable projects adopted it?


Can't wait to build a world of memory management tools in Haskell with the help of -XLinearTypes.

What this article mentions about non-global allocators will fit perfectly.


Sorry, but you cannot write malloc in linear Haskell. I wish you could too.


I could mmap some virtual memory and implement malloc's API on top to allocate chunks, kept safe by linear Haskell.

I can write a variety of memory allocators, kept safe by linear Haskell.

More generally, anything you can do in C, you can do in Haskell. So malloc is fair game (although I'd just call the C one from Haskell).


No, I mean that having linear arrows rather than linear kinds means you can't write malloc without CPSing.


Yeah, I don't especially expect that to matter in practice. Monads used to require CPSing all over the place too.


I’m wondering if there is a way to specify an allocator in some context without always having to pass the allocator to every function. A dynamic variable binding maybe, but I’m not sure how the function would handle an optional allocator parameter. Perhaps this would go against the Zig philosophy of avoiding unexpected behavior, but such a feature done right would seem better to me.


It's almost always a mistake to take the convenience of not having to pass an allocator over the flexibility of passing one. Testing, optimization via different allocators, etc. all become much easier to work with if you just make it a hard rule that anything that allocates takes an allocator.

It's impossible to misuse a function if it takes all its dependencies as arguments, etc.

With that said, I create structures that embed their allocator(s) and use them for their whole lifetime, with the rule that all their allocations have to come from the embedded ones or ones passed into methods (usually I don't mix argument allocators & stored ones, but this isn't a hard rule).


You could surely come up with a scheme of that kind in your application. To make it as simple as possible, you could simply have a global variable with the allocator you want to use, and pass a reference to it to every stdlib function you use; nothing stops you from doing that.

The explicitness is supposed to be respected by libraries, so that then the writer of the final program can enjoy maximum flexibility.


OK, I'll bite. What does this statement:

    var a = b + c.d;
actually do?


In Zig? It would allocate stack space for a new variable a, add b and the d field of c together, and store that value in the memory location reserved for a. In D or Python or anything that allows properties, it would allocate a wherever the language allocates stuff, then call an infix function '+' (which could be anything depending on the language, up to and including starting a JVM in the background for god knows what reason) with the arguments b and the result of calling the property function d of the object c.


But what are the types of those fields it is "adding"? What are the semantics of "adding" for those types? These things are not obvious from reading that statement.


They are obvious to the programmer who knows the types of b and c.d, and thus the semantics of adding them.

The operation itself implies that both b and c.d must be primitive types and so the semantics of the operation are defined by Zig's language rules.

edit: To clarify - this is in contrast to, say, C++, where nothing can be inferred about the types of the variables involved or the semantics of the operation, since '+' can be overloaded.


The point of the example is that in Zig, that statement is exactly as simple as it looks.

What looks like a field reference (`c.d`) is just a field reference; there are no getters or @property functions that are doing more complicated things.

And similarly with the addition operator, that plus sign is just addition and doesn't call a function somewhere else.

These abstractions are considered useful by the designers of other languages, but they are specifically excluded in Zig. The benefit of not having them is that it's easier to follow the execution flow of the program.


I think the article is trying to say that the plus and the dot access are guaranteed to be language built-ins and therefore very cheap to call. Furthermore, they can't fail by throwing exceptions or such. (Note that in C the plus operator may result in UB, such as signed overflow, or forming a pointer beyond one past the allocated size.)
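A tiny sketch of that predictability (hypothetical values): plain `+` on integers is overflow-checked in Zig's safe build modes, and wrapping must be spelled out:

    const std = @import("std");

    test "plus is just plus" {
        var a: u8 = 255;
        // `a + 1` would trip a checked integer-overflow panic in
        // Debug/ReleaseSafe builds; wrapping is opted into with +%
        const wrapped = a +% 1;
        std.debug.assert(wrapped == 0);
    }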


No way to know without knowing the types and the mode used to compile it. If c is a pointer then it could be a panic or it could be undefined behavior (dangling reference), similarly if b is a numeric type then depending on how the program is compiled you will get different results.


> C++, D, and Go have throw/catch exceptions

Pretty sure Go doesn't have throw/catch.


Not only is panic/recover identical to throw/catch with non-standard names, it is used in the Go standard library.


throw/catch is more readable, because panic/recover doesn’t switch on type and isn’t block scoped (the defer stack is set up at runtime, which also can’t be good for the optimizer).


That's sort of a feature though, since it makes it less likely to be used ;-)

Rust, I believe, can also recover from panics, but it's even less ergonomic, which makes it even more of a feature.

Zig goes to the extreme and makes panics completely unrecoverable, though it's unclear how practical that is for e.g. long-running servers with many clients.

interesting discussion here:

https://github.com/ziglang/zig/issues/3516


> Pretty sure Go doesn't have throw/catch.

Go has panic/recover, which are basically exceptions with some different scoping rules.


'panic' is pretty much that.


Does Zig plan to support closures?


There's a proposal for that https://github.com/ziglang/zig/issues/6965


I was just reading through the language specs, and I think that's the only real dealbreaker for me. Not having closures would really make programming difficult.


"hidden" destructors are precisely what makes C++ safer than C at being correct.


Only manual memory management? Come on... and on top of that, no destructors?? Pre-C++98 already beats this language...


I'm still miffed about Andrew's response to DoS vulnerabilities in the standard library. I know that Zig isn't at v1 yet (or wasn't at the time, at least), but that's no excuse to shut down conversation about security.

I also don't like that the Discord is run like a cult. If you say anything bad about Zig, you're berated until you ultimately have to leave. It's the Rust community all over again.


> If Zig code doesn’t look like it’s jumping away to call a function, then it isn’t. This means you can be sure that the following code calls only foo() and then bar(), and this is guaranteed without needing to know the types of anything:

Okay, but what's the advantage, when, due to optimizations such as inlining and tail-call elimination, this isn't reliable in the other direction to begin with?

The reason compilers can remove function calls as an optimization is that doing so doesn't alter the semantics.

Rust will certainly inline most implementations of `std::ops::Add` to begin with, as they tend to be small enough; does it really matter whether or not they are inlined?



