The Zig build system is now able to run tasks in parallel. To avoid overloading the system (i.e. to prevent OOM) it asks you to define a MaxRSS for each task, resulting in pretty sweet resource consumption: https://ibb.co/FW9kpxT
On an M1 Ultra Studio (the same one from my screenshot above) it takes me 6 minutes to run the entire compiler test suite for arm64-linux (I do development in a Linux VM), which is pretty sweet.
Note that this is one stepping stone for getting good performance from Zig, but it's not yet incremental compilation with in-place binary patching [1]. That's still a work in progress.
Is it possible to use zig without the zig build system, in order to slowly integrate a new language in an existing program?
For example, can I use the C backend to compile zig to C, and then use the system compiler as I would normally do with a meson cross file or CMake toolchain file?
> For example, can I use the C backend to compile zig to C, and then use the system compiler as I would normally do with a meson cross file or CMake toolchain file?
It's possible, but:
1. The C backend isn't 100% there yet. You won't be able to use all features and might run into bugs.
2. The generated code won't be very readable, it's arguably not too different from just using Zig-compiled object files directly in terms of "opaqueness" and legibility.
If neither of these are a big problem for you (both points are likely to improve with time), then yes, you could do that.
To supply a data point: As of Zig 0.11.0-dev.2615+0733c8c5c, on an x86_64-linux host, the C backend is passing 1568 behavior tests compared to 1587 behavior tests passed by the LLVM backend on the same host. So, yes, it is not 100% there yet - it is 99% there :-)
Ah, good to know! My main sources were the release notes and occasionally your tweets (the account seems to have gotten suspended?), so my information was understandably a bit out of date. Glad to see progress being made though.
I feel like Zig can dominate the shellcode space, especially with pluggable allocators and minimalism. Is there any work towards outputting a naked payload, for lack of a better word? Given just the ISA, calling convention, etc., producing something that starts from main() and does nothing but what is contained within main?
Totally, just use `zig cc` as a drop-in replacement for clang and move forward from there as you feel comfortable. I wrote a blog post about this approach: https://kristoff.it/blog/maintain-it-with-zig/
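Roughly, the drop-in approach looks like this (a minimal sketch; the configure-based project is hypothetical, only the CC/CXX override matters):

    # Build an existing C/C++ project with Zig's bundled clang instead of the system compiler.
    CC="zig cc" CXX="zig c++" ./configure
    make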
Particularly interesting is the use of nasm as a package dependency, which is executed to compile many assembly files into object files, which are then linked into the ffmpeg static library.
I'm using this package in a work-in-progress reboot of Groove Basin (a music player server) in Zig:
Point being that if you want to collaborate on the music player project, you don't need to screw around with a million system dependencies, it's just `zig build` and you're off to the races - no matter whether you are using Windows, macOS, or Linux.
The zig build system is under heavy construction during this release cycle of Zig. I recommend checking it out at the end of May, when Zig 0.11.0 is released and a few more issues will have been smoothed over. Of course, if you want to get your hands dirty and help work on a bleeding-edge build system & package manager, come on over and give master branch a try.
First time I have seen the dotty .zon file; it looks like Zig anonymous struct syntax? If so, does that mean the structs/information in the .zon file can be merged/included directly into the build.zig file where the dependencies are mentioned again, i.e. avoiding the .zon file altogether? Maybe it is documented, but you are all working so fast I can't keep up :) I saw an HTTP client/server push (btw, nice!) that seemed to also include some new syntax that I wasn't familiar with: for (n..n2), etc. Anyway, exciting times and great to see solid progress, well done.
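For reference, the new ranged for syntax mentioned there looks roughly like this (a minimal sketch):

    const std = @import("std");

    pub fn main() void {
        // Ranged for loop: i goes 0, 1, ..., 9 (i is a usize).
        for (0..10) |i| {
            std.debug.print("{d}\n", .{i});
        }
    }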
libsodium is written in C and Assembly, but uses Zig as an alternative to autoconf/make/etc.
Builds are much faster than with make, and it makes it very easy to cross-compile to other platforms, including Windows, Linux with specific glibc versions, and WebAssembly.
In fact, it was the easiest way to build Linux binaries for .NET that have to support glibc back to version 2.17, but on a recent OS with a recent compiler toolchain.
I'm not who you are replying to, but it's almost certainly due to autoconf &c. For many libraries on a machine with lots of cores, autoconf can take longer than the build, since autoconf isn't parallelized.
Also, IIRC, many smaller projects using autoconf have copied in a bunch of boilerplate feature tests that they don't need, like "is strncpy available" "is snprintf available" "does realloc NULL work". As you mention, each one of these tests generates and compiles a minimal C file, and not in parallel.
(And each test generates/defines a HAVE_SNPRINTF etc macro that your code can use to adapt based on available features. But if the project isn't as big as curl or git, it probably doesn't really adapt to all possible old and obscure systems anyway, so there's no point to 100s of such tests.)
I would guess that the author of the library has full control both over optimal parallelization of a build and minimal autoconf, but he can still observe a huge speedup, so I'd still like to read his answer.
I've been confused by the statement that "Zig can compile C Code" for quite some time and reading a couple of blog posts hasn't made it much clearer.
Does the Zig Project include a full blown C Compiler? Is it the Zig Compiler with some sort of adaptation to compile C code? Or does it use something like Clang behind the curtains? (In this case it would be responsible for some other parts of the compilation process)
Zig embeds clang to compile C code. This doesn't add a new dependency since Zig already depends on LLVM. If there is a future where the self-hosted Zig backend is good enough to not depend on LLVM anymore, there might be a reason to use a C compiler written in Zig (possibly https://github.com/Vexu/arocc)
Also worth noting that Zig embeds C stdlib source code (musl if I'm not mistaken). That means it is easier to cross compile C projects using zig since you don't need to install a cross toolchain. This is why some golang/rust projects use Zig when they need to cross compile.
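A quick sketch of what that looks like in practice (hello.c is a stand-in source file; the target triples, including the optional glibc version suffix, are what zig cc accepts):

    # Cross-compile against the bundled musl, no cross toolchain needed:
    zig cc -target aarch64-linux-musl hello.c -o hello-arm64

    # Or target a specific minimum glibc version:
    zig cc -target x86_64-linux-gnu.2.17 hello.c -o hello-glibc217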
This isn't quite correct. Yes, it can output C code; however the result is not very readable at all, and fails the DFSG on generated code. It _is_ useful for compiling Zig code to targets which aren't supported by LLVM, however.
Do you have a reference for this? Does it use a C backend that is part of LLVM, or is it something Zig-specific? What are its limitations? Can it compile libraries to C, or only entire applications?
I have wanted Rust and Zig to support compile-to-C for a while, so this is exciting news for me.
One thing that would particularly interest me is if functions intended to be inlined could be emitted into .h files.
1. It uses LLVM as the default backend, in which case that handles C as well
2. Optionally, and in the future by default, the Zig backend (a different one from the LLVM one) includes a C compiler (?)
Something like that. It's very powerful, and it has the best C integration of any language in my experience (better than C++, since it effectively namespaces C headers).
Thanks for the answer but some things are still not entirely clear.
Who would be doing the parsing of C code, for instance? Clang is also based on LLVM, but it is responsible for a load of C-specific stuff like parsing the language and feeding it into LLVM.
I've got little experience with this stuff so I'm not sure if my questions even make that much sense.
(Edit: I believe ptato has answered my question above.)
You can invoke `zig cc` with the same flags that you would pass to `clang`. Zig cc takes your arguments, inspects them and applies some transformations and then Zig invokes its internal copy of clang with the resulting flags. One example of transformation is enabling ubsan when building in debug mode, another is `-target` which makes Zig add some sysroot flags to enable cross-compilation.
In this case all the file operations are handled by clang, Zig basically just sets up all the advanced flags for you when it makes sense to do so. One last example of CLI rewriting is related to the caching system: Zig cc knows how to cache a build by passing to clang the same kind of flags that cmake (or other build systems) would.
There's another C-related feature of Zig that works differently: `@cImport()`, which is a builtin that allows you to import C header files directly into a Zig script, in order to use from Zig all the types exposed by the header file. This one translates the C syntax in equivalent Zig syntax. I believe it uses clang's code to parse the C code into an AST, but then it's all Zig logic from there.
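A minimal sketch of what that looks like from the Zig side (just pulling in a standard C header, nothing project-specific):

    // Build with libc linked in, e.g.: zig build-exe main.zig -lc
    const c = @cImport({
        @cInclude("stdio.h");
    });

    pub fn main() void {
        // Everything from the header ends up namespaced under `c`.
        _ = c.printf("hello from C's printf\n");
    }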
Lastly, we have a C frontend project going on (arocc, linked by ptato) that we plan to eventually upstream into Zig. This would be a replacement for clang and would work by parsing the C code and translating it into Zig IR, similarly to how the D programming language does it. The only limitation of this approach is that it would only support C, not C++, so we would either have to still keep clang around for C++, or ditch clang but then lose C++ compilation support. That said, even in the case where we keep clang around for C++ support, it would be worth having a custom C frontend for the Zig compiler in order to have more fine-grained control over the compilation process than what we can get from clang, plus the fact that it would make debug builds faster, since we could avoid invoking LLVM completely in that case (i.e. debug builds) even if you depend on C code.
How does this work on the assembly support side? I can see inline assembly and something called global assembly, but is the Zig build system also able to build standalone main.asm-type assembly files?
I'd like to add a link to usage examples demonstrating Zig's features for C and C++ compilation that come with the default Zig installation and aren't directly available after installing clang:
Sorry, I wasn’t specific. I was talking about how cImport works in Zig. I didn’t know `zig cc` worked differently.
“ Zig’s @cImport builtin is unique in that it takes in an expression, which can only take in @cInclude, @cDefine, and @cUndef. This works similarly to translate-c, translating C code to Zig under the hood.”
https://ziglearn.org/chapter-4/
Glad to see a Zig Build for Raylib.
I am using Zig to compile most of my C stuff now as I learn Zig. This makes Zig much more attractive to anyone considering learning it.
It's easier than Make and CMake for me.
Can you debug Zig in any MS/JetBrains IDEs? I type in nvim but debug in whatever has the best experience. I think I asked this question like 2 years ago and was told you can write tests, use the LSP server, or look at assembly... has the situation improved?
I use VS Code on Linux to debug Zig. Haven't tried the others you mentioned, but it just emits standard DWARF symbols, so I'm guessing if you can debug C/C++ you could probably also do Zig with minimal changes? I just use the lldb VS code plugin[0], which works out of the box for me with no issues.
I've been able to debug Zig in Windows by simply opening the .exe file with Visual Studio. I didn't explore much what can be done in it but it is possible.
At least DWARF is supported (e.g. any gdb or lldb frontend works, e.g. what various VSCode extensions like CodeLLDB offer). Not sure about PDB support on Windows actually.
This also means you can transparently debug-step from Zig into C code and back, which is kinda expected but it never gets old for me :)
Not so sure about any real IDEs; lldb has worked fine for the (fairly small) Zig programs I've worked on, and the "CodeLLDB" VS Code extension worked. Of course, with the move away from LLVM I assume lldb will stop working, and VS Code may not be a good enough debugging experience.
The best debugging experience imo is using gdb and rr within nvim. Works for zig, c, rust, etc. with minimal configuration in nvim. The less I leave vim the more productive I can be. Same thing probably goes for emacs although I will never admit it.
I’d love if you could elaborate on your setup. Are you using something like nvim-dap from within neovim or something else? I’m still trying to improve my debug experience in neovim.
Would also love to hear more. I have nvim-dap set up for Go and for C and it is an OK experience but I would not call it great. This is something on my Neovim todo list.. improving my debugging experience.
I take it the Zig build system is Turing complete, isn't it?
There is a reason why, for example, meson build system DSL is made to be non-Turing complete. It makes reasoning much simpler.
IME you really need a programming language to describe a build, even when it is desired that the result looks 'mostly declarative' in the end.
E.g. not sure how Meson handles this, but when I have a project with dozens of similar build targets and platform specific compile options, I really want to do the build description in a loop instead of a data tree.
PS: apparently Meson build scripts can also have variables, conditions and loops, which I guess makes the difference from an actually Turing-complete build system rather esoteric?
A proper build system is so much more than just describing build targets and their dependencies, you also want to generate source code, copy and process data files, communicate with REST services etc... The more this happens in a 'real' programming language the better.
I fell in love with the ninja build system recently. It's machine language for build systems, and I can write my own scripts to generate the ninja build file, rather than introducing any new language from someone else just for build descriptions.
Meson has the ability to generate code and process data files. Why do you need to communicate with REST? Meson supports that, however, by allowing you to break out using the `run_command()` function.
Uploading build output somewhere for instance. However this may overlap with CI tasks (but there, usually YAML is used to run shell commands, which is also a bit of a crutch).
That's just your opinion though ;) Why should the process of producing a build artifact be different from putting the artifact into the right place? E.g. Makefiles usually have a 'make install' step which isn't all that different from uploading the build result somewhere, and make definitely counts as a build system.
I'm creating a different build system (not Zig's), and I'm taking a different approach. Instead of a non-Turing-complete language, I've made one that is as powerful as possible. However, it will allow users to restrict the language so that they will only use subsets, and those subsets will not necessarily be Turing-complete.
In this way, it has the power to do anything, but the ability to restrict that power for ease-of-use.
When it comes to reasoning ability, Turing-completeness is a red herring. Turing-completeness only falls beyond the reasoning ability of something that has unbounded computational power and unbounded patience, but because people only have access to bounded computational power and have bounded patience, their limit of feasible reasoning is well below Turing-completeness.
A language with nothing but boolean variables and functions with no recursion, or a language with nothing but boolean variables and loops of up to a depth of 2 can already encode TQBF [1], which makes reasoning about it intractable (it's PSPACE-complete). Because most build systems fall within that category they might as well be Turing complete.
I'm holding my breath. The first day out I ran into a link problem: Zig linked statically, not dynamically as instructed, and while the binary was produced, it didn't work anyway. That's not going to be fixed, I believe, until 0.11.0. Now, to give credit where due, the head guy (Andrew, if I recall) had already found this issue, or a good part of it.
Zig looks like a 'better C'. Is there a list of things it does better than C (e.g. integer promotion, UB, etc.), so that I can embrace it quickly and start using it in my embedded projects? I would like to see a comparison table between Zig and C (or even C++).
Zig things that I miss when I have to go back to C:
- All integer operations trap on overflow in safe build modes; with explicit operators for saturating or wrapping arithmetic
- No implicit integer promotion unless the destination type can represent all values of the source type (so no implicit signed/unsigned conversions unless they're statically guaranteed to be safe - e.g. a u8 can coerce to an i32 but not to an i8)
- Arbitrary bit-size integers (C23 will have this)
- Enums that are actually useful and fun, vs the complete waste of time that C's enums are (Enum values are namespaced, you can't directly use their values as integers, you can control the underlying representation if you want, etc)
- Built-in support for tagged unions, also known as sum types (bare union + a tag indicating which field is active)
- Safe unions in safe build modes (compiler inserts a hidden tag to track which field is active)
- A standard library that's actually useful (it's small compared to some other languages, and not well-documented yet, but it's not littered with landmines the way C's is)
- A modern import system instead of preprocessor-style copy/pasting text
- Compile-time programming in Zig, instead of preprocessor macros
- Arrays are an actual type, instead of decaying to pointers
- Much better support for pointers (pointer + length AKA slices are the primary way to deal with multiple items whose length can vary at runtime; also single-item pointers and multi-item pointers are different types, so you can't accidentally index into a single-item pointer or attempt to dereference a multi-item pointer without providing an index)
- Errors must be handled, with convenient syntax for passing the error up the stack + inferred error sets so that you don't have to explicitly annotate the set of possible errors for most functions
- Nulls encoded in the type system so that they must be explicitly handled.
- test blocks for writing tests inline and running them with `zig test` (a short sketch illustrating a few of these points follows below)
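A short sketch touching a few of the points above (tagged unions, error handling with `try`, and an inline test block; the Shape example itself is made up):

    const std = @import("std");

    // A tagged union: an enum tag tracks which field is active.
    const Shape = union(enum) {
        circle: f64, // radius
        square: f64, // side length
    };

    fn area(s: Shape) error{NegativeSize}!f64 {
        switch (s) {
            .circle => |r| {
                if (r < 0) return error.NegativeSize;
                return std.math.pi * r * r;
            },
            .square => |side| {
                if (side < 0) return error.NegativeSize;
                return side * side;
            },
        }
    }

    test "area of a unit square" {
        // `try` propagates the error if one occurs; run with `zig test`.
        const a = try area(Shape{ .square = 1.0 });
        try std.testing.expectEqual(@as(f64, 1.0), a);
    }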
I did a bit of Zig exploration a few months ago, here's a few things that caught my attention:
- You don't have implicit allocations when using the Zig stdlib. For example, when you instantiate an ArrayList or HashMap, you need to pass in an allocator, so you have full control over how memory is managed. So, even though you have higher-level data structures, you still have a lot of low-level control (see the sketch after this list).
- Very good error handling. IMO better than Rust and Golang, while still being very explicit about what is happening
- "Uncolored" async functions, meaning there's no special syntax for declaring functions that can be paused/resumed. If I understood correctly (didn't try it a lot), you can turn any program into "async" by changing how I/O is handled globally. More details here: https://kristoff.it/blog/zig-colorblind-async-await/
As far as language features go: extensive comptime support, error handling and optionals integrated into the language, a new (to me at least) twist on generics, type reflection, and a couple of smaller 'convenience features' like type inference, if and switch being expressions, etc...
Reading this from top to bottom gives a pretty good overview:
The word for German in many Slavic languages is something like Njemacki (and similar). Loosely translated, it means "those who cannot speak".
Of course, Germans can speak, and languages of Germanic descent are just as rich, precise and expressive as any other. But the term Njemacki probably stuck around out of an initial ignorance about a foreign culture in earlier times and a lack of general education.
>initial ignorance about a foreign culture in earlier times and lack of general education
I upvoted your comment because I agree with it in the context of the parent, but this ending explanation is frankly ridiculous. It sounds like early Slavs had no idea that Germanic tribes had their own languages, which is just plain impossible. Proto-Slavic němъ also meant unintelligible/hard to understand. So, contrary to popular opinion, those Slavic words for Germans don't (and didn't) mean mute (or "cannot speak"); it's just that in modern Slavic languages the words stemming from němъ evolved to indicate mostly muteness.
Thank you for the clarification. I tried to exaggerate my point, but what you say is obviously true and much more nuanced.
In fact it's so easy to forget or ignore that we humans were just as smart and creative thousands of years ago as we are today.
But it's interesting and funny to think that our ancestors called each other mute, or rather unintelligible, because they didn't understand what the other one was saying. I find it endearing how we often stumble over our own limitations and quirks, so much that it is often ingrained in language and culture.
For the example you cited (anonymous struct literals), there are two parts to it:
1) It omits the constructor name. My uneducated guess is that "modern" languages try to avoid the Java-style pattern of repeating a type/constructor many times in a single line (e.g., "Point pt = new Point(0, 0)") when it can infer things to help the developer.
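In Zig that looks roughly like this (Point is a made-up type; the literal stays anonymous because the declaration provides the type):

    const Point = struct { x: i32, y: i32 };

    // No repeated constructor name; the type is inferred from the left-hand side.
    const pt: Point = .{ .x = 0, .y = 0 };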
Someone needs to make a competing build system called Zag, just so I can eventually make the joke "we Zigged where we should have Zagged" after Zig has some major issue in our build env.
Once Zig is done, maybe Zag could be a superset of Zig that focuses on application and slightly higher-level development. Think C++, Swift, and Java.
As "ridiculous" as it might be, we're a small non-profit and have to prioritize where to allocate our resources as we develop Zig. When it comes to macOS, we follow the same policy as Apple: support the latest 3 versions of it.
Maybe once Zig is fully developed we'll focus our effort on more backwards compatibility.
If you want to help us get there faster, consider donating to the Zig software foundation, as we're looking to hire more developers to work full time on Zig (we're 4 full-time people right now).
No, I will not donate, because I think that my money will be spent on the unnecessary effort of removing already-supported targets instead of on meaningful tasks, like writing a simple check in CMake to indicate that a given OS is not supported instead of failing with a vague error message at the link step.
You can't even run the current Xcode version on the previous macOS version, which is much more ridiculous, but that's the Apple software development ecosystem for you ;)
andrewrk commented on Feb 17, 2023:
macOS 10 is no longer supported by Apple, and therefore also no longer supported by Zig. You have to use one of the latest 3 versions or else your system is not being patched for security vulnerabilities, and likewise zig does not provide support for cross compiling to anything less than the latest 3 versions.
https://ziglang.org/download/0.10.0/release-notes.html#macOS
I also had this issue with a non-updatable MBP, but Linux support is good, and I am looking forward to trying Zig out with RISC-V when I can get my hands on some hardware that hopefully won't expire as fast.
Yes, and dropping Catalina means Zig disregards a lot of not-really-that-old and still powerful Apple hardware, except what's on this list:
iMac (Mid 2014 or later)
iMac Pro
MacBook (Early 2015 or later)
MacBook Air (Mid 2013 or later)
MacBook Pro (Late 2013 or later)
Mac Mini (Late 2014 or later)
Mac Pro (Late 2013 or later)
[1] https://kristoff.it/blog/zig-new-relationship-llvm/