Even more unfriendly-yet-typical is the line where you create an allocator, and a few lines further you call the allocator() method on it, to get ...an allocator (but you had it already! Or maybe you didn't?). Same: you create a Writer, but then you call a writer() method on it.
Here is the code to illustrate:
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
defer arena.deinit();
var visited = std.BufSet.init(arena.allocator());
var bw = std.io.bufferedWriter(std.io.getStdOut().writer());
const stdout = bw.writer();
So... what are the entities we use, conceptually? "allocatorButNotReally" and "thisTimeReallyAnAllocatorIPromise"? Same for the writer?
Plus, the documentation doesn't do much to explain wtf this is and why.
The answer is probably buried somewhere in forum history, blogs and IRC logs, because there must have been a consensus established on why it's ok to write code like that. But the lack of a clear explanation doesn't help with casual contact with the language. It's rather all-or-nothing: either you spend a lot of time daily tracking all the media about the ecosystem, or you just don't get the basics. Not good IMO. (And yes, I like a lot about the language.)
std.mem.Allocator is the allocator interface. For that struct to be considered an interface, it must not directly contain any specific concrete implementation, since it needs to be "bound" to different implementations (GeneralPurposeAllocator, ArenaAllocator, ...), which is done via pointers. An allocator implementation holds state and implements alloc, free and resize for its specific internal mechanisms, and then pointers to all these things are set into an instance of std.mem.Allocator when you call the `.allocator()` function on an instance of an allocator.
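To make that concrete, here is a minimal sketch of the shape, not the real std.mem.Allocator definition (whose vtable also carries resize and alignment/return-address parameters):

const Allocator = struct {
    ptr: *anyopaque, // type-erased pointer to the implementation's state
    vtable: *const VTable, // function pointers into the implementation

    const VTable = struct {
        alloc: *const fn (ctx: *anyopaque, len: usize) ?[*]u8,
        free: *const fn (ctx: *anyopaque, buf: []u8) void,
    };

    pub fn alloc(self: Allocator, len: usize) ?[*]u8 {
        return self.vtable.alloc(self.ptr, len);
    }

    pub fn free(self: Allocator, buf: []u8) void {
        self.vtable.free(self.ptr, buf);
    }
};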
File and Socket both offer a `.writer()` function to create a writer interface bound to a specific concrete "writeable stream".
BufferedWriter has both extra state (the buffer) and extra functions (flush) that must be part of a concrete implementation separate from the writer interface.
> The answer is probably buried somewhere in forums history
That's just how computers work; languages that don't expose these details do the same exact thing, they just hide it from you.
Well, your explanation doesn't really tell why I call .deinit() on the structure from before the allocator() call, while calling all the rest of the important stuff on the structure from after such a call. I think you guys, while doing a great job by the way, are kind of stuck in thinking from the language creators' perspective. From the outside, certain things look really weird.
I also need to be picky about the "that's just how computers work" phrase; you know, uttering such a phrase always carries the danger of bumping into someone who wrote assembly before you were even born, and hearing this makes for a good laugh..
> Well, your explanation doesn't really tell why I call .deinit() on the structure from before the allocator() call, while calling all the rest of the important stuff on the structure from after such a call.
That's because the Allocator interface doesn't define that an allocator must be deinitable (see in the link above the fn pointers held by the vtable field). So just like you have to call flush() on a BufferedWriter implementation (because the Writer interface doesn't define that writers must be flushable), you have to call deinit on the implementation and not through the interface.
Fun fact: not all allocators are deinitable. For example std.heap.c_allocator is an interface to libc's malloc, and that allocator, while usable from Zig, doesn't have a concept of deiniting. Similarly, std.heap.page_allocator (mmap/VirtualAlloc) doesn't have any deinit because it's stateless (i.e. the kernel holds the state).
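To see the split in code, a small sketch (same pattern as the snippet above; deinit belongs to the implementation, not the interface):

const std = @import("std");

pub fn main() !void {
    // Implementation: holds state and has deinit.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // implementation-only operation

    // Interface: what generic code receives; it defines no deinit.
    const alloc = arena.allocator();
    const buf = try alloc.alloc(u8, 64);
    _ = buf; // no individual free needed; arena.deinit() releases everything
}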
I don't know about the deinit thing, but I think this allocator/writer stuff has nothing to do with "inside language creators' perspective". To me, even though it wouldn't be my first guess as someone who's never used Zig, it does make sense to me that it's done this way since apparently Zig does not really have interfaces or traits of any kind for structs to just have. In fact when Googling about Zig interfaces I found another post from the same blog:
which says that an interface is essentially just a struct that contains pointers to methods. In other words when you call the .thing() method on your SpecificThing, that method is producing a Thing that knows how to operate on the SpecificThing, because functions that accept Things don't know about SpecificThings. You can't manufacture that Thing without first having a SpecificThing, and a SpecificThing can't be directly used as that Thing because it's not. There's essentially no other way to do this in Zig.
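For illustration, a hypothetical sketch of that Thing/SpecificThing relationship (all names made up):

const Thing = struct {
    ptr: *anyopaque,
    doItFn: *const fn (ptr: *anyopaque) void,

    fn doIt(self: Thing) void {
        self.doItFn(self.ptr);
    }
};

const SpecificThing = struct {
    counter: u32 = 0,

    fn doItImpl(ptr: *anyopaque) void {
        const self: *SpecificThing = @ptrCast(@alignCast(ptr));
        self.counter += 1;
    }

    // The `.thing()` step: bind this instance into the interface.
    fn thing(self: *SpecificThing) Thing {
        return .{ .ptr = self, .doItFn = doItImpl };
    }
};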
> why do I call .deinit() on a structure before the allocator() call
This is explained right in the documentation for the arena allocator. An arena allocator deallocates everything at once when it goes out of scope (with defer deinit()). You need to call .allocator() to get an Allocator struct because it's a pattern in Zig to swap out the allocator. And with this, other code can call alloc and free without caring about the implementation.
This is just how the arena allocator works and not related to Zig's design. You may take issue with how Zig doesn't have built-in interfaces and having to resort to this implementation-struct-returning-the-interface-struct pattern, but I think the GP clearly explained the why.
Zig's allocators used to use this approach because it enabled allocator interfaces without type erasure, but it was found to have a minor but real performance penalty, as it is impossible for any compiler to optimize for this in the scenarios that are useful for allocators.
Other interfaces might actually have the opposite performance preference
If you control every implementation (ie you aren’t writing a library where others will implement your interfaces), then tagged unions are a simple way to accomplish this. See the bottom of this page: https://www.openmymind.net/Zig-Interfaces/
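For illustration, a minimal sketch of that tagged-union approach, with hypothetical writer types:

const std = @import("std");

const CountingWriter = struct {
    count: usize = 0,
    fn write(self: *CountingWriter, data: []const u8) void {
        self.count += data.len;
    }
};

const DebugWriter = struct {
    fn write(self: *DebugWriter, data: []const u8) void {
        _ = self;
        std.debug.print("{s}", .{data});
    }
};

// The union is the "interface"; dispatch is an exhaustive switch,
// which the compiler can often devirtualize.
const AnyWriter = union(enum) {
    counting: CountingWriter,
    debug: DebugWriter,

    fn write(self: *AnyWriter, data: []const u8) void {
        switch (self.*) {
            inline else => |*impl| impl.write(data),
        }
    }
};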
IMHO the Zig stdlib (including the build system) by far isn't as elegantly designed as the language. There's more trial-and-error and ad-hoc solutions going on in the stdlib, and there are also obvious gaps and inconsistencies where the stdlib still tries to find its "style".
I think that can be expected of a pre-1.0 language ecosystem though. Currently it's more important to get the language right first and then worry about cleaning up the stdlib APIs.
All languages have these problems. Even Go with its famously excellent std has many rough spots that either were not available (such as context) or were just a bit poorly designed.
The most important job of std is not (contrary to popular belief) to provide a “bag of useful high quality things” but rather providing interfaces and types that 3p packages can use without coordinating with each other. I’d argue that http.Handler, io.Reader/Writer/Closer are providing the most value and they are just single method signatures.
When there’s universal agreement of what shape different common “things” have, it unlocks interop which just turbo charges the whole ecosystem. Some of those are language, but a lot more is std and that’s why I always rant about people over focusing on languages.
This is a naming convention problem. In a certain other language, which Zig is trying hard not to become, one of those things would be called an AllocatorFactory.
Also note that on Zig master, initializing `GeneralPurposeAllocator` like this is deprecated -- it'll be removed after the next release. Instead, you should do this:
var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
I'm glad I read the last line, for those who may not have gotten that far: this is about to become a much less prevalent pattern in Zig code, replaced with declaration literals. The new syntax will look like this:
var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
Which finds the declaration literal `std.heap.GeneralPurposeAllocator(.{}).init`, a pre-declared instance of the GPA with the correct starting configuration.
I do not know Zig, but it looks like it just means "call default constructor for parameter/variable/type". I do not see how you could expect it to be elided unless every function auto-constructs any elided arguments or always has default arguments.
In other words, for a function f(x : T), f(.{}) is f(T()), not f(), where T() is the default constructor for type T.
If we had a function with two parameters g(x : T, y : T2) it would be g(.{}, .{}) which means g(T(), T2()), not g().
It looks like the feature exists to avoid things like:
x : really_long_type = really_long_type{}, which can be replaced with x : really_long_type = .{} to avoid unnecessary duplication.
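For example, with a hypothetical Config type:

const Config = struct {
    port: u16 = 8000,
    host: []const u8 = "127.0.0.1",
};

fn listen(config: Config) void {
    _ = config;
}

test "equivalent calls" {
    listen(Config{}); // type spelled out
    listen(.{}); // type inferred from the parameter: f(T{}), not f()
    listen(.{ .port = 9000 }); // defaults fill the remaining fields
}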
I do not know Zig either; I had assumed that it has default parameters, but it seems that it does not[0]. So, yes, it makes sense now why it cannot be elided.
They should add default parameters to avoid this sort of thing. Maybe they ought to consider named/labelled parameters, too, if they're so concerned about clarity.
Zig believes that all code needs to be explicit, to prevent surprises: you never want code that "just executes on its own" in places you may not expect it. Therefore, if you want default arguments, you have to perform some action to indicate this.
i don't get this argument. what is code that "just executes on its own"? how is it more difficult to differentiate what a function does with vs without arguments compared to one that takes arguments with values vs arguments without values?
However, in Python, if you routinely call foo([]), you'd specify that (or rather an empty tuple since it's immutable) as the default value for that argument.
Well yes, but if it's someone else's library, realistically you're not going to change it.
Zig is a static language without variadic parameters, so you can't make it optional in that sense. You could make the options a `?T` and pass `null` instead, but it isn't idiomatic, because passing `.{}` to a parameter expecting a `T` will fill in all the default values for you.
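A sketch of the difference, with a hypothetical Options type:

const Options = struct {
    verbose: bool = false,
};

// Possible but unidiomatic: optional parameter, caller passes null.
fn runNullable(opts: ?Options) void {
    const o = opts orelse Options{};
    _ = o;
}

// Idiomatic: plain parameter, caller passes .{} and defaults apply.
fn run(opts: Options) void {
    _ = opts;
}

test "call styles" {
    runNullable(null);
    run(.{});
}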
How does this create variadic functions? The arity is the same, since the function signature defines the exact amount of arguments. The compiler just passes the omitted ones for you.
i have never used zig before, but after reading the article i came to the same conclusion. the "problem" (if it is a problem at all, that is) really is that .{} is the syntax for a struct whose type is to be figured out by the compiler, which new users will be unfamiliar with.
i don't know if there are other uses for . and {} that would make this hard to read. if there are, then maybe that's an issue, but otherwise, i don't see that as a problem. it's something to learn.
ideally, each syntax element has only one obvious use. that's not always possible, but as long as the right meaning can easily be inferred from the context, then that's good enough for most cases.
It's really not much different than in nearly any other language with type inference except for the dot (which is just a placeholder for an inferred type name).
fugly syntax is one of the biggest reasons i will never touch rust. zig is not too far off, unfortunately. if i needed a non-gc language, i would go for odin. not perfect but closest to usable. it's just too hard to do anything but Go, once you get comfortable with it. they got too many things right to see grass being greener elsewhere.
Rust encodes far more information into source code than most languages, so it simply needs more syntax. I wouldn't say it's ugly (except macros, not sure what they were thinking there), there's just more of it.
Obviously if you remove lifetimes, types, references, etc. you're going to need less syntax.
> Rust encodes far more information into source code than most languages, so it simply needs more syntax.
I don't think this is the case. Firstly, all the necessary data can be encoded by keywords, spaces and newlines. Forth or TCL can encode everything Rust can (since their interpreters are 100% configurable), only with keywords, and spaces between. A language should have special syntax for only the important part, not for everything.
Secondly, even though Rust has special syntax for a lot of stuff, it could be nicer to the eye.
But if you need a bunch of stuff that may or may not apply to a function or type definition for example, then why not just use CSS/Rebol style syntax, and put all your keywords that apply in a row. No need for all the weird symbols, brackets, colons and all that. You could even use keyword=no, and be extra explicit.
Did you actually read that Rattlesnake post? It's making exactly the same point I was.
Also IMO the Rattlesnake example looks awful. The Haskell flavoured Rust is even worse. Do you seriously prefer those? If so I'm afraid your sense of taste is a bit suss.
just because there are more keywords/syntax, it does not necessarily mean it has to be ugly. they could have made better decisions when designing the language.
That “.” substitution of an inferred type is going to backfire. I really appreciate when code has one simple property: you search for a type by name and you get all of the places where it's constructed. That makes it easy to refactor the code and reason about it with local context. It's the case with Rust, but not C++ or Zig.
"No hidden control flow" is completely orthogonal to "no implicit typing". I think anyone looking at Zig would immediately recognize that it is firmly in the type inference camp by choice.
As far as simplicity, I think their pitch is "simpler than Rust", not in absolute terms. The whole comptime thing is hardly simple in general.
I am not a big Zig aficionado but I definitely contrast it in my mind more so with C and C++ rather than Rust. It definitely aims at being a "better C" sort of language more so than a "better C++", which Rust seems to be focusing on.
Better than that would be a language that doesn't require / almost compel users (by "almost compel", I mean the user community, obviously, not the language literally, since it is not sentient) to use an IDE in order to use the language, and using which (language) you can still do what you said above, by just using a text editor.
In the same vein as what you said here about orthogonality ( https://news.ycombinator.com/item?id=42097347 ), programming languages and IDEs should be orthogonal (and actually are, unless deliberately linked). People were using languages much before IDEs existed. And they got a hell of a lot done using the primitive surrounding tools that existed back then, including, you know, gems like Lisp and the concepts embodied in it, many of which have, much later, been adopted by many modern languages.
And I still meant "almost compel", even by the community, because of course they cannot really compel you. I meant it in the sense of, for example, so many people using VS Code for programming posts.
> Better than that would be a language that doesn't require / almost compel users (by "almost compel", I mean the user community, obviously, not the language literally, since it is not sentient) to use an IDE in order to use the language, and using which (language) you can still do what you said above, by just using a text editor.
It's ironic that you complain about this because Zig is probably the most "normal editor" friendly programming language for exactly the kind of thing mentioned in the article.
I don't need an IDE to figure out the 12 options to that function and fill them out with the correct defaults. I don't have to hunt through 23 layers of mysterious header files to find the declaration I need to figure everything out. etc.
Just try figuring out a foo(12).bar(14).baz("HELP!").fixme("ARRGH!") construction chain in C++ or Rust without an IDE. Oof.
1) Zig doesn't encourage those and 2) in Zig I can trace the @import() calls and actually run "grep" on things.
>It's ironic that you complain about this because Zig is probably the most "normal editor" friendly programming language for exactly the kind of thing mentioned in the article.
echo Who complained, $(echo bsder | sed 's/sd/ro/') ? ;)
Not me. Don't put words into my mouth.
(I don't care if I got the above shell syntax wrong), this was just a quickie, for fun ;)
you seem to have misunderstood my words, in the exact opposite way from what I meant. congrats. not!
>I don't need an IDE
who told you that I needed an IDE?
chill, willya?
and, wow:
>to figure out the 12 options to that function and fill them out with the correct defaults. I don't have to hunt through 23 layers of mysterious header files to find the declaration I need to figure everything out. etc.
12 and 23, exaggerating much? we are not talking about win32 API functions, podner.
>Just try figuring out a foo(12).bar(14).baz("HELP!").fixme("ARRGH!") construction chain in C++ or Rust without an IDE. Oof.
Don't resort to theatrics or histrionics to make your point (like HELP! and ARRGH!), (I am allowed to, tho, because i > u :)
>1) Zig doesn't encourage those and 2) in Zig I can trace the @import() calls and actually run "grep" on things.
faaakkk!
though a bsder, you find header files mysterious, and cannot grep through them, if they are in C++ or Rust, eh? are find and xargs your enemies? or even just ls -R | grep
?
stopped editing, even though there might be a few minor typos.
Zig does not have an IDE, but it does have a language server called zls[0] that I have found to be pretty decent after implementing Loris' suggestion from this post[1].
An easy way to find all places is to temporarily add a new struct member without defaults, run the compiler and let it complain of all the places where it is being instanced.
Similar to when you add a new enum member and it complains of all switch statements that are not using it (as long as you didn't add a default case).
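For instance, with a hypothetical Config struct:

const Config = struct {
    port: u16 = 8000,
    host: []const u8 = "127.0.0.1",
    // Temporarily added without a default: every `.{ ... }` instantiation
    // that doesn't set it now fails to compile, pointing at each site.
    probe: void,
};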
This is tedious in Rust when initializing a struct which has nested structs. A language which has type inference at all should at least be consistent about it and allow you to not mention the type when it can be inferred by the compiler.
What's meaningfully different in Rust's type inference? E.g.:

fn example() {
    let p = create_point(args);
}
Where create_point() is a function from a module (i.e. not even defined in that file) which returns the Point type automatically inferred for p? I mean sure, it's technically constructed in the called function... but is that often a useful distinction in the context of trying to find all of the places new instances of types are being assigned? In any case, this is something the IDE should be more than capable of making easier for you than manually finding them anyway.
GP is talking about how easy it is to find places where the type is instantiated. Seems to me that create_point() will have one such site. And then it’s trivial to find callsites of create_point() with the LSP/IDE. What’s the issue?
The IDE can find all places new variables are assigned to the type (regardless of whether it's direct instantiation, return value, inferred, or whatever way it comes about), so what's the special value of being able to manually find only the local instantiations with ctrl+f if you'd still need to manually track down the rest of the paths anyway?
In other languages, defining types in terms of themselves is unproblematic, because the type identifier is just a symbol and the whole thing amounts to a graph with a backreference.
However, here it's supposed to represent actual executable code, which is run by the compiler and "produces" a type in the end. But how would the compiler execute this function without getting stuck in a loop?
It's not actually about `*` -- for instance, declaring `const T = *T;` emits an error. The thing that makes this okay is that field types (for structs and unions) are evaluated in the "lazy" way you describe.
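In code (a minimal sketch):

// Fine: field types are evaluated lazily, so a struct can refer to
// itself through a pointer.
const Node = struct {
    value: u32,
    next: ?*Node,
};

// Not fine: this declaration is evaluated eagerly and loops.
// const T = *T; // compile error: dependency loop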
Using parens to pass type arguments was one of the things that turned me off on Zig. For a language that prioritizes "no hidden control flow," it sure did a lot to make various syntax conventions _masquerade_ as control flow instead.
> Using parens to pass type arguments was one of the things that turned me off on Zig.
It's just regular comptime function calls which happen to have parameters or return values which are comptime type values (types are comptime values in Zig). I find that a lot more elegant than inventing a separate syntax for generics, and it lets you do things trivially which are complex in other languages (like incrementally building complex types with regular Zig code).
It might be unusual when coming from C++, Rust or Typescript, but it feels 'natural' pretty much immediately after writing a few lines of generic code.
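A small example of that style (hypothetical Pair type):

// A generic type is just a comptime function from types to a type.
fn Pair(comptime A: type, comptime B: type) type {
    return struct {
        first: A,
        second: B,
    };
}

test "regular call syntax, comptime type arguments" {
    const p: Pair(u32, bool) = .{ .first = 1, .second = true };
    _ = p;
}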
Macros can have control flow, so compile-time control flow is definitely possible, but perhaps we trained ourselves to not think of control flow in this way because using complicated compile-time logic is generally frowned upon as a footgun.
Perhaps Zig is the language that on purpose blurs the line between what runs when (by basically having what looks like macros integrated into runtime code without any conspicuous hashtaggy syntax), and so a Ziggy would not see compile-time control flow as something weird.
Why did they keep the dot in struct initialisation? Why not just use the syntax without the dot:
const c1 = Config{
    port = 8000,
    host = "127.0.0.1",
};
Is there some other use for the dotless one?
Just `{}` means a code block; in Zig you could do something like
const c = blk: { const x = 5; break :blk x-3; }; // c = 2
just having an empty block `{}` is exactly that: an empty block of type `void`. having a dot or something else distinguishing it from a block is necessary in order for it to not be that.
I would be fine with it if it only threw an error about that when building in release mode or if there was a flag to silence it temporarily.
But while trying out some things and learning the language I find it annoying. And I don't know how it makes me more disciplined when I can just write `_ = unused;` to suppress the error. Even worse, if I forget that assignment in the code the compiler will never warn me about it again even when I want it to.
Or the compiler could just add a flag and not make assumptions about my setup or workflow. Also linters are optional and under my control. Meanwhile the Zig compiler is forcing that onto me, and for what benefit exactly?
> I wrote a 16kloc Zig project a couple of months ago, and not once the 'unused variables are errors' thing was annoying
That's great, but different people are different. I've tried learning Zig twice by now but this is just a dealbreaker, simple as that.
The way I deal with this in my language, which also bans unused variables, is simple: I delete the unused variable or I use it.
My workflow is probably very different from yours I'm guessing. I have my editor configured to save on every keystroke and I have a watch process that then recompiles my code. I pretty much never manually compile. My compiler is sufficiently fast that I almost never wait more than 1 second to build and run my code. I notice every error immediately and fix them as they arise. This is what I am talking about with discipline. I never allow my code to get into an unexpectedly broken state and don't need a linter since I just make the compiler as strict as I would make the linter. This ultimately simplifies both the compiler and the build pipeline.
These are all huge upsides for me. The cost of occasionally deleting a definition and then restoring it are for me minor compared to the cost of, say, writing a separate linter or adding feature flags to the compiler (the latter of which doesn't fit into my workflow anyway since I auto compile).
The problem is that in order to delete an unused variable, you may need to delete a huge chunk of code which would be useful later when you want to actually use the variable.
can you give an example please? i can't imagine how any section of code would be affected by removing an unused variable. if the code references the variable, it would be used. if it doesn't, then why would you have to delete it?
Wrong syntax is (and must be) an error. Totally different. The problem in Go and Zig is that they put theory over practice: no compiler warnings is a good idea in theory, but fails in practice for things like unused variables or unused imports. Defending that makes it even worse and raises the question of what other treasures they have buried in their language design. This thread is a testament to that.
You do. The Zig compiler and stdlib can iterate faster across a team of mostly volunteer developers with varying degrees of skill, across global timezones, because of the discipline that the compiler imposes.
This is a nonsense argument, because there are more pragmatic solutions: turn warnings into errors for release builds, or, if there is only one build type, have a policy that requires developers to remove all warnings before committing code.
i am absolutely serious. pike for example does not need imports. if a reference is not found in the local namespace the compiler will find it in the module path and resolve it by itself. there is no ambiguity because each module has a unique name.
we accept type inference but don't do the same for module references? why?
pike does have an import statement, but its effect is to include all members of a module into the namespace instead of just resolving the ones that are really used.
and instead of speeding things up, using import on modules with lots of members may actually slow things down because the namespace is loaded up with lots of unused references. sometimes import is used to help readability, but that's rarely needed because you can solve the same with a simple assignment to a variable.
if you can show me an example where import resolves an ambiguity, i'll try to show how pike solves the problem without import.
I don't know how it works in Zig. In JavaScript, you can have lots of things with the same name, so you need to explicitly import them and you can give them an alias at the same time if there's a clash. I believe Python is the same.
In C++, you have to #include the thing you want to use somewhere, not necessarily in the same file you use it, it just has to end up in the same compilation unit. If two files define the same name, you'll end up with a compilation error. In very large projects, sometimes you won't notice this until some transitive dependency includes the other thing.
I'm personally a fan of explicit imports. I imagine this helps IDEs resolve references without having to scan 1000s of files to resolve them, and it helps build tools pull in only the needed files. Regarding speed (of execution), in JS we have tree-shaking, so if you import a file but don't use all of its members, those excess/unused members will be dropped from the final bundle (saving on both bundle size and run-time parsing). Plus it means I don't have to spend much time thinking of a super/globally unique name for every thing I define.
If you use fully qualified statements everywhere, sure. That means writing `datetime.datetime.now()` everywhere instead of `from datetime import datetime` and then just doing `datetime.now()`. But then you'll tell me, just create an alias, `dt = datetime.datetime`. Sure, I guess, but now you've made `datetime` some kind of superglobal so you can't use that as a variable anywhere.
And how does this work in practice? In Python and JS you can also put executable code inside your modules that gets executed the first time it's imported. Are you telling me that that's going to run the first time it's implicitly imported instead? Is that a good idea?
The story in JS gets even crazier because you can do all kinds of whacky things with imports, like hooking up "loaders" so you can import things that aren't even JavaScript (images, CSS, you name it), or you can modify the resolution algorithm.
> but now you've made `datetime` some kind of superglobal so you can't use that as a variable anywhere
depends on the language, in pike, and as far as i know in python i still can use it as a variable if i want to, it would just cover the module and make the real datetime inaccessible. but why would i want to do that? if i see someone using a well known module name as a variable in python i would probably recommend changing it.
i don't see the benefit of not filling the global namespace over making import unneeded. add to that, by convention in pike module names start with an uppercase letter, and variables don't, so the overlap is going to be extremely small and never causes any issues.
> In Python and JS you can also put executable code inside your modules that gets executed the first time it's imported
pike doesn't have that feature. if there is something to be initialized you'd have to call it explicitly. i don't see a problem with that, because in python you call import explicitly too. so you just swap out one need to be explicit for another. i prefer the pike way because it actually gives me more control over if and when that initialization happens.
i think that also better fits the paradigm of explicit is better than implicit. in python i have to learn which module does initialize stuff or ignore it, in pike i can easily see it from the way it is used.
further in pike to get an initialization i would normally create an instance of a class in the module because modules are singletons, and you probably don't want to change something in them globally.
> going to run the first time it's implicitly imported instead
pike is compiled, that is, all these references are resolved first, before any code is run. so even if there were any implicit initialization it would be possible to run it first.
more specifically, pike modules are instantiated objects. i don't know the internals, but i believe they get first instantiated when they are resolved. again, that's before the rest of the code where the reference came from is running
I'm not sure I see the total difference between matching parens and matching defs and refs.
Sure, saying "an open paren must have a matching close" is quantitatively different from "a def must have at least one matching ref", but is it really qualitatively different?
Let's say I gather Diag data, which I conditionally print during testing. Are you saying that I cannot leave the diag code in place after I comment out the print function? That's unproductive and a major obstacle to using Zig. I'm still pissed at Andrew's stance on preventing tabs, operator overloading, polymorphism, and this just seals my "stay away" stance. I really do want to like Zig, but cannot.
You don't need to comment out the print function - it could gate its behavior on a comptime-known configuration variable. This would allow you to keep your debug variables in place.
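A sketch of that approach; the flag here is a hypothetical constant, though in a real project it would likely come from build options:

const std = @import("std");

const enable_diag = false; // hypothetical comptime-known switch

fn compute(x: u32) u32 {
    const diag = x * 2; // diagnostic value stays in the code
    if (enable_diag) {
        // Compiled out when enable_diag is false, but the reference
        // still counts as a use, so no unused-variable error.
        std.debug.print("diag = {}\n", .{diag});
    }
    return x + 1;
}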
anything a linter can do can be included in a compiler. or the linter can be part of the compilation process by default. iow, instead of being optional it should be required, maybe with a special opt-out, but opt-out should be frowned upon.
That's one of the great advantages of Zig. Other languages can't always enforce this rule (because of inheritance and such) and will generate strong warnings instead.
If you like copying dead memory around, you can always do `_ = unusedParam` to confirm to the compiler that you don't need that variable after all, despite going out of your way to declare it.
It seems like an interesting idea, but I wish Andrew spent more time fleshing it out with complete examples. I can't tell if the _ characters are eliding values or if that's literally what's in his code.
The underscores mean that it's a non-exhaustive enum. An exhaustive enum is where you list all the names of the possible enum values, and other values are illegal. A non-exhaustive enum means any value in the underlying storage is allowed. So at root this code is creating a bunch of new integer types which are all backed by u32 but which can't be directly assigned or compared to each other. That means you can't accidentally pass a SectionIndex into a function expecting an ObjectFunctionImportIndex, which would be impossible to do if those functions just took raw u32's.
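A minimal sketch of the pattern, with hypothetical index types:

// u32-backed, non-exhaustive: any u32 value is allowed, but the
// two types don't mix.
const SectionIndex = enum(u32) { _ };
const FunctionImportIndex = enum(u32) { _ };

fn getSection(i: SectionIndex) u32 {
    return @intFromEnum(i);
}

test "distinct index types" {
    const s: SectionIndex = @enumFromInt(42);
    _ = getSection(s); // ok
    // const f: FunctionImportIndex = @enumFromInt(42);
    // _ = getSection(f); // compile error: wrong index type
}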
It's an interesting pattern, but it's a shame there's no way to efficiently use one of those OptionalXIndex with zig's actual null syntax, `?` and `orelse` and etc. It would be smoother if you could constrain the non-exhaustive range, and let the compiler use an unused value as the niche for null. Maybe something like `enum(u32) { _ = 1...std.math.maxInt(u32) }`
The article spends a lot of time justifying a syntax that really just papers over Zig's lack of parameter pack support. The same pattern in Rust would just use variadic templates/generics.
It's amazing they couldn't figure out how to get f(.{}) down to just f({}). Like here is this brace enclosed thing being matched against the argument type.
Standard C has no such feature as blocks being expressions.
Compound literals are grotesque. The braces have to be preceded by cast syntax indicating the type. It could be subject to inference. Maybe the current draft has something.
I actually find compound literals quite pleasing, visually.
Parentheses, brackets and braces I find can help a lot in guiding your eyes through code. They make it more "regular", or lessen the "entropy" (physicists, please don't stone me).
For this reason I've come to dislike Rust's parenthesis-less if-syntax, after being a die-hard fan. With parentheses it just reads better.