I think it’s great people keep experimenting with new languages. Sure many will die off and be forgotten, but the people involved will learn and create. If we keep waiting for someone else to make the perfect language we’ll never get anywhere.
I like how error handling is structured, though I’m undecided on whether I prefer a specific mechanism like this or a more extensible system. Pretty sure it’s right for 80% of cases, though.
I don't mean to troll here, but aren't generics officially excluded forever from the Go roadmap? I'm not following Go much, but that's what I recall from a few months ago.
I think they are still open to it, in a very sceptical way. Basically, they are asking [1] the public/users for papers, documents, articles, and blog posts describing the use cases for generics in Go. How would they use them? Why are they required (as in, can you not do it as easily or better without generics)? Stuff like that. They basically aren't convinced that generics are something they need/want in Go, but if the users can convince them, they will still go for it in a future version of Go.
"I'm focusing today on possible major changes, such as
additional support for error handling, or introducing
immutable or read-only values,
or adding some form of generics,
or other important topics not yet suggested.
We can do only a few of those major changes.
We will have to choose carefully."
I find it interesting that they're asking the community what the use cases for generics are, when internally Go does support generics for its built-in collection types...hmmm I wonder why?!
Maybe because it provides compile-time type safety and reduces the amount of boilerplate that would otherwise be required. Yet they can't see the usefulness outside of this single case?
I'd like to see Go adopt Ada-style generics. They are a bit more cumbersome to use than templates in C++, but they allow for reuse of code across instantiations of the same generic that have the same actuals, thus shortening compilation times.
This is the ugliest thing about zig. Can you come up with a better proposal to capture the semantics? I'd love to see it. I've tried and this is the best I can do.
// original:
%%io.stdout.printf("Hello, world!\n");
// my favorite ideas:
try io.stdout.printf("Hello, world!\n");
dare io.stdout.printf("Hello, world!\n");
// others:
err io.stdout.printf("Hello, world!\n");
ex io.stdout.printf("Hello, world!\n");
bet io.stdout.printf("Hello, world!\n");
bid io.stdout.printf("Hello, world!\n");
For the binary operator:
// my favorite ideas:
a yet b
a odd b
a but b
a ex b // extra or except
a err b
a bail b
// others:
a else b // Would overloading the meaning make sense?
a esc b // escape
a alt b // alternate
a aux b // auxiliary
a sub b // substitute
a prox b // proxy
a then b
a eor b // error or
a repl b // replacement
a supp b // supplemental
a %or b
Agreed. A short keyword is generally better than a symbol. Yes, you do get used to symbols, but words come naturally. "a and b" is easy to read, easy to explain and read by newbies and overall much, much nicer in my opinion than the 1-letter-shorter "a && b".
Keywords can also benefit from the fact that 99% of programmers know at least the basic English words, while special characters are not necessarily international: they might not be available on every keyboard, and they might not carry the same meaning. I'm speaking more about "&" here, but it applies to other characters as well.
"try" could mean "return an error in case of an error" aka "%return". Then "dare"
could mean "panic in case of an error" aka "%%foo();".
Or use "ex" or "but" everywhere:
ex foo();
const num = foo() ex 42;
const num = ex foo();
ex defer bar(baz);
but foo();
const num = foo() but 42;
const num = but foo();
but defer bar(baz);
Am I right in assuming that the single ampersand is taken (as in C, for bitwise ops)?
I think things like "a|b" and "a&b" for or/and are fine - but when you need to stack them two and three deep, keywords start to look more attractive... I think this holds true even for the relatively benign triple-equal (===) used as a kind of stand-in for "a is b".
BTW, did I miss an obvious short intro to what these things are supposed to mean in zig? I feel I've overlooked an obvious quick-start document?
[ed: never mind - the post is just a bit hard to read on a cellphone - but nothing reader mode in Firefox can't fix]
From the docs, the prefix %return is a shortcut for:
const number = parseU64(str, 10) %% |err| return err;
Keeping the %% syntax, how about:
const number = parseU64(str, 10) %% throw;
(Or "raise" maybe)
I think that ordering is more readable than having the %return tucked away in the middle of the line (even more so if it becomes a plain keyword, no % sign).
Edit to add: a "throw" keyword could also be used to shorten normal error returns.
Limit the audience for end user applications, possibly.
If this were generally true then C wouldn't be used in numerical applications which is clearly not the case.
I think this is the wrong way to look at it. Where C still has advantages over many languages is where the developer's primary mental model is the machine the program is running on. That's why it's used for device drivers and low level library work.
Over the last couple of decades, C has become less and less good at this task as the C model and the machine have steadily diverged. At the same time, its limitations have been exposed more and more.
In my view, Zig is aiming at this space i.e. where the developer is primarily interested in what the machine is doing. What they have done is kept the C model (largely) but mitigated or removed some of the egregious failings of C.
Zig may not be unique in doing this but it's definitely quite rare. Rust, Go, Swift et al aren't attempting to do this. This isn't a criticism just a perfectly legitimate difference of priority.
As I failed my soothsaying exam, I'm in no position to predict which, if any, will succeed. I hope (without much justification) that having language diversity will encourage natural selection.
Rust is a pretty complex language. It's more of a C++ replacement.
Zig is a C replacement: a dead simple language for low-level development, partly disregarding memory safety (there seem to be some improvements, but only where they don't make the language more complex).
Could you ever imagine writing something like the Tiny C Compiler for Rust? C is like fancy assembly. Zig seems to fit into that niche quite well, while doing it in a more disciplined and modern way.
Every non-memory-safe language ought to come with a formal semantics. If the type checker can't semi-decide whether the program has a well-defined meaning, at least you ought to have the tools to do it on your own. (This isn't a Zig-specific complaint. I've made exactly the same remark about unsafe Rust in the past.) I'm afraid that the illusion of simplicity would be lost in the formalization process.
For example: "The motivation for this design philosophy is to enable users to write any manner of custom allocator strategy they find necessary, instead of forcing them or even encouraging them into a particular strategy that may not be suitable for their needs. For example, Rust seems to encourage a single global allocator strategy, which is not suitable for many usecases such as OS development and high-performance game development."
NB: that is sorta true right now (as the sentence says, "seems to"), but it's also an area of active work; we don't expect Rust to end up that way.
To elaborate a bit:
Right now, the standard library relies on an allocator. The language itself does not. If you want to write your own types and back them up by your own allocator, you can 100% do that today. But that doesn't help you with any of the stuff in the stdlib.
3. Allowing you to swap out the allocator of stdlib types on a per-instance basis; this almost had an RFC for the needed machinery, but was postponed. Will probably come up again in the coming year after #1 & #2 get resolved.
If you see quick compilation as an advantage Go has over Rust, but want Rust’s robust error checking and you don’t like Go’s reliance on garbage collection, Zig might be interesting.
Congratulations Andy, I've seen you working on this for a long time and am very impressed by the rigor it takes to release a language this large on your own. I look forward to seeing cool projects built in Zig!
Congrats on the release! Zig is not my cup of tea, but I'm glad it exists. I still owe you a full review sometime, Andy. Never enough hours in the day.
The “defer” statement makes it easy to ensure important cleanup code gets run exactly once even if there are multiple return paths (along with %defer for error cases). That covers at least 95% of fiddly manual allocation problems -- the kind of thing RAII helps you avoid in C++, or “with” in Python and “using” in C#.
Deallocating/RAII is not memory safety. Leaking all memory is, in fact, a tried-and-true way to get memory safety (GC'd languages are semantically doing this: GC is "just" an optimization), since it means problems like use-after-free are impossible. Memory leaks are great to eliminate, but they can't lead to the corruption that memory unsafety can cause. http://huonw.github.io/blog/2016/04/memory-leaks-are-memory-...
Rephrasing the parent's question: does Zig try to ensure memory safety (e.g. no use-after-free, dangling pointers, iterator invalidation, data races)?
Maybe look at the language that inspired Rust: Cyclone http://cyclone.thelanguage.org/ . It's just a research language (which the site claims is "done") but maybe worth a look.
The reference to a 5-bit number in the intro was intriguing; I don't think I've ever seen a language that would support that (maybe Pascal with type subsetting).
Common Lisp allows integer types of however many bits you want using SIGNED-BYTE and UNSIGNED-BYTE (don't let the names confuse you; a byte isn't necessarily 8 bits):
CL-USER> (the (unsigned-byte 5) 31)
31
CL-USER> (the (unsigned-byte 5) 32)
; Evaluation aborted on #<SIMPLE-TYPE-ERROR expected-type: (UNSIGNED-BYTE 5) datum: 32>.
Andy, is it a coincidence or have some features of Zig been inspired by Go? More specifically: defer, the syntax for the array type with [] as a prefix instead of a suffix, and the syntax for array literals.
Incremental improvements and cross-pollination are great! I don't need a language with manual memory management for my current projects, but I'll definitely keep an eye on Zig. It solves most things I dislike in C, and keeps the good stuff.
Nice to see Windows support. I think first class Windows support is an underrated factor in Rust's adoption as opposed to other similar languages (i.e. it had pretty good Windows support very early).
I am still trying to learn Rust here and there, but Zig seems to click more with me. I love Rust's documentation. Is there a Zig book in the works? I will continue to use the site for reference, and translating C snippets of mine. I like the libc independence of it for my toy programs and game dev jams. I was using TCC64 for them before.
Nearly every platform has a C compiler. Many embedded platforms have almost no language alternatives. No way Zig (or Rust, or Go) is ever going to target them all.
It seems as though there have been some attempts to revitalize it [2], but these target older versions and would probably require a lot more work to ever get back in-tree.
It opens up the language to embedded use cases like with PICs or other esoteric targets which don't fit the flat memory models assumed by modern languages including LLVM backends. When your memory model requires banking or any other form of non-linear or non-paged access, you're often in the proprietary C compiler with language extensions territory.
I think that it is sad that we keep on creating new programming languages that are not radically different. The Zig language ecosystem faces the same problems as C, and the compiler will perform many of the same tasks: lexing, parsing, optimizing, generating documentation, and supporting all sorts of weird edge cases and use cases. From what I see in the introduction, I don't see a convincing case for a new language here. This kind of incrementalism is not worth another person's years of effort to create a parallel path.
I think that if you are going to spend years creating a new language, you ought at least to try something new. Otherwise, you become just another amateur landscape artist. Maybe your paintings will look nice, but they'll never really be of any great value to society.
I disagree. We don't have enough incremental improvements, because it's difficult to improve a language and migrate the existing code base at the same time. I'd be happy with a better C (for example get rid of header files and #include), a better Python (for example better parallelism without a GIL), a better Go (for example with a solution for generics and marginally less verbose error handling), a better JavaScript, etc. I'm mentioning "mainstream" languages on purpose.
> This kind of incrementalism is not worth another persons years of effort to create a parallel path
Historically, incrementalism is pretty much the only way that languages can reach mainstream status e.g. C/ObjC/Javascript/C++/Java/C#/Kotlin.
Of course, having languages that break away from the mainstream is equally important to generate new avenues of research, but it's clear to me both approaches are a requirement for a healthy PLT field.
What languages were “radically” different when they were released? My take: COBOL, C, Smalltalk, Lisp, ML, Prolog, shell. I might be missing some, but I think the pattern is clear. Only C remains among the most popular languages.
Almost all widely used languages today (c++, php, ruby, java, python, javascript, etc...) are incremental improvements on old ideas.
I consider your opinion that the most used programming languages today are not “of any great value” to be bupkis.
What was radically different about C when it was released? It doesn't seem to stand out that much to me from the backdrop of BCPL, Algol, Bliss, Pascal, etc.
Haskell was an "incremental" improvement on Miranda and a consolidation of features from a zoo of lazy functional languages sprouting forth from Landin's 1966 article "The Next 700 Programming Languages."
The problem with incremental improvements is that you have to keep backwards compatibility. This is a big issue when it comes to keywords. For instance, this is valid C#:
using async = System.Threading.Tasks.Task;
class BadIdea {
    public async async async(async async) { }
}
I am assuming that using the return value of log was buggy, and so this tested that you could save it in a variable. I don't remember the exact semantics of log, but if it's like println!, it returns (), which is useless, so binding it to a variable is something you'd never write in real code, so it's "weird" in that sense.
The other language that does what Zig is trying to do is the Jai language that Jonathan Blow has been developing, but it's not open to the public yet.
We don't need a radically different language, really. We just need a language that combines some of the best ideas that have been developing recently in a clean way while keeping the language simple, not complex.
D was trying to do that but then it exploded and became a monster just like C++. It's kind of just a slightly cleaner C++, but it's still a monster.
Perhaps what we need is more tools for creating languages so that it doesn't take a massive engineering effort to create a new language and to enable language features to be implemented once rather than for each new language. That way, we could have a ton of languages and everyone can choose their own "Goldilocks" language.
LLVM is a great effort in this direction and now language designers can target multiple architectures and get a ton of optimizations essentially for free. And it's great to see things like coroutines get added at that level. But more could be built at that level and on top of it to give higher-level language features in that same essentially free manner. It'd be cool to see stuff like borrow checking and garbage collection get added in a similarly reusable way.
The end result of this would be that syntax, the stuff that's mostly easy and yet most prone to bike shedding and developer aesthetics, could be mostly decoupled from the serious work normally associated with creating a language and compiler. And developers with less expertise could build quality languages that appeal to an unmet syntactical aesthetic without having to reinvent as much of the wheel as is currently necessary. Instead of having one language that brings together the best ideas, we'd just make those best ideas reusable so that any language could easily bring them together using whatever syntax shakes out of the bike shedding process.
Jai's a good unique name tho, and it wouldn't be a good idea for him to change that branding since he's already done years of outreach using that name.
The thing is, as far as I can tell, it's only other users that have been using that name. I haven't seen Blow use "Jai" or "JAI" himself (aside from the file extensions), though he doesn't make a fuss about it when people ask him questions about his language and refer to it as "Jai" or "JAI".
I spent some time with Zig a while ago. Here are my conclusions:
- The string handling is braindead
- The compiler and stdlib are immature. This seems like the first programming language the guy has written. You should have a few failures to learn from before you make a Serious Programming Language, imo.
- Code run at compile time is a neat idea but has a weird syntax and really really complicates the internals of the compiler and related tooling.
Note: The set of people in the Zig community who are blocked is size 1 and it is Sir_Cmpwn. He is blocked for being rude, destructive, and making a large number of incorrect claims, which he ignores the rebuttals to.
Your feedback is not constructive. At least give examples of why its stdlib is immature or why the string handling is braindead, and what you think is missing. That way the community would know what to work on.
I should have mentioned that I approached Zig with my feedback at that time and engaged in discussions on the matter. My concerns were largely disregarded.
Maybe they were disregarded because the feedback was too generic and couldn't be acted on. My point is: if you're giving feedback, make sure the receiver can do something specific about it.
What? Which alternatives to C? Fortran and Pascal?
I don't know of a single modern alternative to C. I wouldn't count Rust, because a replacement for C should be a dead simple language. Rust is a C++ alternative.
C works pretty well, but it could definitely be improved in a meaningful way.
Personally, I think it's just about time for a good C replacement, and Zig's goals are perfect for that.
Can anyone even fix C at this point? You can't break backwards compatibility, and the last standard that is widely adopted is C89, from 28 years ago. C99 and C11 haven't been adopted by Microsoft, so as far as I know their adoption among developers is somewhat low.
If big companies can't move the needle for C, how can a single developer do it?
A single developer can apparently create a whole new language, complete with a module system, a proof system, a constraint system, and a parser/lexer, but can't extend C by adding optional compiler flags that don't break anything in old code and let you selectively upgrade your sources.
Most existing C compilers (e.g. clang or gcc) are pretty large and complex beasts. It's often easier to start from scratch than to extend an existing codebase that you have never worked in before. It depends on your goals: if you want to appeal to an existing user base and use an existing ecosystem, then modifying existing compilers (or compiling to an existing language) is probably a good idea, but if that isn't a goal for whatever reason, then it may not be worth it.
Not saying this is a good thing, just that it happens (at least, in my experience).
I assume you already know why but, in case you don't, let me put it in simple terms for you: it's easier to create something from scratch than to extend existing code base.
Have you ever tried to fix a bug in, or make an improvement to, GCC? I did; I tried many times and I failed. Maybe it's just that I am not hacker enough (I'd say that's exactly the reason), but...
The size of the code is enormous, the documentation scarce and of limited use to new people - sort of like with man(1) pages: if you already know what you're looking for they're great, but if you don't and need real help then I'm sorry, but tough luck.
You'd better know where to look for the parser, optimiser, code generator, etc., and how they interact with each other, and where to get information about the source code you're translating (keep in mind that there are "new" and "old" ways, and for some constructs only one is available). Unless you have tons of time on your hands (you don't have to work, or are lucky enough to go to a university which has some people working on the code), or work for one of the companies developing GCC, then you have pretty slim chances of actually getting through.
I don't know how it is with LLVM/Clang. Maybe there it's easier to contribute.
But HELL YES it's easier for a lone developer to "(...) create a whole new language, complete with a module system, a proof system, a constraint system, a parser/lexer (...)" and it is ridiculously hard to "(...) extend C by adding new flags to the compiler that are optional and don't necessarily break anything from old code and lets you selectively upgrade your sources.".
I know this from experience as well as, I'm sure, many others. I am building a parallel VM: I designed its instruction set, built a compiler and static analyser for it, went with it to a conference, etc. etc. It's not that hard and I bet you could do it too. But to take an existing language (read it as "an existing compiler") and "add new stuff without breaking anything" or just "add new stuff"? Man, that's orders of magnitude more difficult.
There's also the satisfaction factor at work: when working on a new project you get instant gratification - "Tests pass!", "New feature!", "A bug fixed!", and "I get to implement this cool idea I had and nobody's gonna stop me!". When you work on an existing project (especially one as big as GCC or Clang) you have to brace yourself for several hours of reading the code before you can begin to think about where to start. If you read the code and tinkered with it every day it would get easier, but you have to have the will power to burn through all that code, and not everyone does.
Well, I should question whether you are used to working in teams or developing projects others started, but I'm too nice for that.
I've had my own share of ICEs and patches back in the gcc3 days, when Debian mailing lists were being used as a gcc issue tracker :-). These days you just fork and work from there. And if people really need your patches, they will integrate them willingly. No need to "get through" to anyone.
If you need a confidence boost, start by reading semi-old patches from other contributors. That will immediately take you to the cogs of the machinery, so to speak. Also, a debugger is your friend. I hope you have one for your parallel VM.
I maintain 15+ year old Delphi code bases as a profession, so I guess your rant just flew out the window for me. The "tests passed, all green checks" insta-gratification is a lost memory. The freedom to implement whatever you want is kinda constrained, but still there.
Ok, let's assume that you're willing to go through 10 feet of snow for hours, uphill both ways.
To really change C, you need to at least get your changes in clang, not only in gcc. And let's not even talk about msvc. Or God forbid, actually creating a new standard version.
I definitely agree that your perceived goal of "changing C" is not attainable. And it is not something I proposed. You could add a flag to a fork of a compiler/toolchain and if it is useful, others will eventually catch on.