That's right. C being more limited, it has a simpler ABI. But the parent comment was replying to the "need for a stable, standard ABI, which C provides". That is misleading, as both C++ and C have the same level of stability and standardisation of their ABI. (I.e. neither of them is standardized, but both of them are stable in practice.)
Moc just adds some more generated code next to your valid C++ code, generally in a new translation unit.
That generated code is just boilerplate which you could write by hand if you wanted. In a sense this is what verdigris does: it provides macros that make it easier to "write it by hand" (the W_OBJECT_IMPL macro essentially expands to the code generated by moc).
> Moc just adds some more generated code next to your valid C++ code
Code without which the output of your toolchain from your source is incomplete and therefore invalid - again, if this were not the case you would not need to run moc at all.
Qt Creator doesn't even get involved in the build once it has started qmake - a qmake .pro[ject] file is nothing like a Visual Studio project file. You are supposed to make edits, other than adding or removing source files, by hand. So you really don't need Qt Creator for moc.
Note that qmake is generally easiest for tiny programs such as bug reproducers. For anything nontrivial, it's better to use CMake. Most advanced functionality in qmake is undocumented and weird. Qt itself is moving to CMake with Qt 6.
I was trying to figure out how to do MOC from GNU make when I came across this; the only other option I saw was to run qmake and then run the generated makefile from my makefile. Of those two options verdigris seemed to be the better choice to me.
Technically the Sun is moving as part of the arms of the Milky Way galaxy, which itself is moving through space on a collision course with Andromeda, so where you point will be off.
This would be unnecessarily pedantic but not wrong* were it not that the Earth rotates on its axis, so you are pointing in an 8-minute-old direction.
*All these discussions make no sense at all, as neither synchronous remote events nor the concept of what rotates around what is meaningful. This is not to say that we should not have this conversation, but to say that you are being pedantic without bringing anything to the table.
> most of those languages (1) have a GC that makes boxing a much cheaper operation than in Rust
Why is that? I would intuitively think it is the other way around. (Is a malloc/free pair not cheaper than an allocation on the GC heap + collecting its garbage?)
As always, it depends on a lot of details. A generational garbage collector can be made to allocate extremely quickly; IIRC for the JVM it's like, seven instructions? For short lived allocations, it sort of acts like an arena, which is very high performance. malloc/free need to be quite general.
It's always about the details though. If a GC is faster than malloc/free but your language doesn't tend to allocate much to begin with, the whole system can still be faster even though each malloc call is slower. It always depends.
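To make the "acts like an arena" point concrete, here is a minimal toy bump allocator in Rust. It is not how any particular GC or the JVM is actually implemented, just a sketch of why handing out short-lived memory can be little more than an add and a compare:

```rust
/// Toy bump arena: allocation is a pointer bump plus a bounds check,
/// roughly what a generational GC's nursery does, and why it can beat a
/// general-purpose malloc for short-lived allocations.
struct Bump {
    buf: Box<[u8]>,
    next: usize,
}

impl Bump {
    fn with_capacity(cap: usize) -> Self {
        Bump { buf: vec![0u8; cap].into_boxed_slice(), next: 0 }
    }

    /// Hand out `size` bytes aligned to `align` (a power of two):
    /// just an add and a compare.
    fn alloc(&mut self, size: usize, align: usize) -> Option<&mut [u8]> {
        let start = (self.next + align - 1) & !(align - 1);
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // a real GC would trigger a minor collection here
        }
        self.next = end;
        Some(&mut self.buf[start..end])
    }

    /// "Collecting" the short-lived garbage is resetting one index.
    fn reset(&mut self) {
        self.next = 0;
    }
}

fn main() {
    let mut arena = Bump::with_capacity(1024);
    arena.alloc(64, 8).unwrap()[0] = 42;
    arena.alloc(128, 16).unwrap();
    arena.reset(); // everything allocated above is "freed" at once
}
```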
Doesn't something like jemalloc basically give you this, but without pauses? Thread-local freelists for quick recycling of small allocations without synchronization... funnily enough, jemalloc even uses some garbage-collection mechanisms internally.
I don't know a ton about jemalloc internals, but it is true that a lot of modern mallocs use mechanisms similar to GCs. There are some pretty major differences in constraints though.
`malloc` + `free` are unknown function calls, they can't be inlined, don't understand the semantics of your language, the strategy behind them is quite generic, etc.
A GC that's integrated with a programming language can do much much better (different heaps for short and long lived allocations, for example).
One can do even better by supporting custom "pluggable" allocators, and not just a single global allocator like Rust does at present. Some of these allocators could even implement GC-like logic internally.
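For concreteness, here is a minimal sketch of the hook stable Rust does offer today: replacing that single process-wide allocator via `#[global_allocator]`. The `CountingAlloc` wrapper is invented for this example; per-container "pluggable" allocators would need the still-unstable `allocator_api`.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Trivial wrapper around the system allocator that counts allocations.
struct CountingAlloc;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

// The one global hook Rust exposes on stable: every Box, Vec, String, ...
// in the program now goes through CountingAlloc.
#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let v = vec![1, 2, 3];
    println!("allocations so far: {}", ALLOCS.load(Ordering::Relaxed));
    drop(v);
}
```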
I disagree that the user of the macro needs to understand its output.
The output of a macro is an implementation detail, and the documentation of the macro should be enough to use the macro without even looking at its output.
For example, there's no need to understand the magic behind the `quote!` or `#[async_trait]` macros in order to use them.
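For instance, a sketch assuming the `async-trait` and `futures` crates as dependencies (the trait and URL here are made up): this compiles and runs without the author ever looking at the boxed-future plumbing the attribute generates.

```rust
use async_trait::async_trait;

// The attribute is used purely from its documentation; the boxing it
// performs on the returned futures stays an implementation detail.
#[async_trait]
trait Fetcher {
    async fn fetch(&self, url: &str) -> String;
}

struct Dummy;

#[async_trait]
impl Fetcher for Dummy {
    async fn fetch(&self, url: &str) -> String {
        format!("fetched {url}")
    }
}

fn main() {
    // Any executor works; futures::executor::block_on keeps the example small.
    let body = futures::executor::block_on(Dummy.fetch("https://example.org"));
    println!("{body}");
}
```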
Not every user has to understand every macro. But the output of a Rust macro is valid Rust code whereas the output of a C compiler is not valid C code.
As a consequence, criticising the complexity of whatever a C compiler generates cannot possibly be valid criticism of C's complexity on a _semantic_ level whereas criticising the output of a Rust macro can be valid criticism of Rust on a semantic level.
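To illustrate that point with a made-up example: the `getter!` macro below is invented for this comment, and the comment inside the impl shows roughly what `cargo expand` would print for its invocation, which is ordinary Rust you could have written yourself.

```rust
// A small declarative macro whose output is plain, valid Rust.
macro_rules! getter {
    ($name:ident : $ty:ty) => {
        fn $name(&self) -> &$ty {
            &self.$name
        }
    };
}

struct Config {
    path: String,
}

impl Config {
    getter!(path: String);
    // Expands to ordinary Rust you could have written by hand:
    //
    //     fn path(&self) -> &String {
    //         &self.path
    //     }
}

fn main() {
    let c = Config { path: "/tmp".into() };
    println!("{}", c.path());
}
```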
Nearly all C compilers allow inline assembly. Macros are similar to inline assembly in that they step outside the normal bounds/use case of the language and are a complicated but valid and useful tool.
Most C programmers won't have to write or understand inline assembly often, if ever. Of course you can encounter it while debugging a production problem or something, so you could make an argument that all C programmers need to understand "C with inline assembly", which is the argument you are making for Rust macros.
As long as you just use Rust macros and don't write your own, you are solidly in "C without inline assembly" territory.
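A concrete picture of that territory (arbitrary toy code): nothing here defines a macro, it only uses `derive`, `vec!` and `println!` the way their documentation describes.

```rust
// Plain macro *use*: derives and standard-library macros, with no
// macro_rules! or proc macro written anywhere in the project.
#[derive(Debug, Clone, PartialEq)]
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    let p = Point { x: 1.0, y: 2.0 };
    let points = vec![p.clone(), Point { x: 3.0, y: 4.0 }];
    println!("{points:?}");
    assert_eq!(points[0], p);
}
```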
>Nearly all C compilers allow inline assembly. Macros are similar to inline assembly in that they step outside the normal bounds/use case of the language and are a complicated but valid and useful tool.
I couldn't disagree more. Macros are not similar to inline assembly at all precisely because they do _not_ step outside the bounds of the language.
Whatever similarities you may find, it's simply not helpful to deny the fundamental distinction between language A generating code in language A and language A invoking/generating code in language B.
It's futile to debate the properties of a particular language if you can't make a distinction between that language and anything it can generate or embed in some opaque way.
What I fail to understand is why it's so important to you, for the complexity of language A, whether a pre-processing/compilation/de-sugaring step of language A results in a valid snippet of language A or of another language.
Take Objective-C automatic reference counting [1], implemented as a transformation of the original code to valid code of the same language (similar to a Lisp/Rust/Scala style macro) by automatically adding the appropriate statements.
If I understand your argument correctly, according to you this increases the complexity of "Objective-C with ARC", but it would not have done so if the compiler had implemented it as a direct transformation to its compilation target instead.
To me, that is an implementation detail which does not matter. "Objective-C with ARC" is exactly as complex in both cases. I'd argue it's even a little bit less complex with the "macro" implementation since you don't need to know assembly to know what ARC is doing.
Similar to ARC, Rust implements some things with macros, which first "compile" something to valid Rust. To me this is not more difficult for users than it would be if the compiler directly generated LLVM IR without this intermediate step.
The inclusion of macros in a language does make the language more complex, of course! And creating macros is notoriously difficult, since you're basically implementing a small compiler step! But for the user, using something is not suddenly more difficult because it's implemented with a macro.
>What I fail to understand is why it's so important to you, for the complexity of language A, whether a pre-processing/compilation/de-sugaring step of language A results in a valid snippet of language A or of another language.
It's important because any and all code in language A is fair game when it comes to criticising semantic properties of language A. Code in other languages isn't.
noncoml criticised Rust based on a piece of Rust code. My point is simply that this criticism is potentially legitimate in ways that criticising C based on a piece of assembly code could never be.
I think our disagreement arises because you are asking a completely different question. What you're saying is that for devs who invoke some code, it may not matter one bit whether that code was implemented in language A or language B, or in language A generated by language A or B. Those distinctions do not necessarily affect the semantic complexity for users of that code.
I completely agree with that. I also agree that the code snippet noncoml posted does not mean using async code in Rust has to be overly complicated.
But when I see a piece of Rust source code, I can criticise Rust based on it regardless of where that code came from or what purpose it serves.
Someone had to think in terms of Rust in order to write that code, and it's always worth asking whether it shouldn't be possible to express the same thing in a simpler way or whether that would have been possible in another language.
The fact that this code does not have to be understood by its users is completely irrelevant for this particular question.