D does compile time function evaluation every time there's a ConstExpression in the grammar. It did this back in 2007. It required no changes to the grammar or syntax; it just enhanced constant folding to be able to do function calls.
We aren't talking about just compile time function evaluation of ConstExpressions. C++ had that all the way back in C++11, and many compilers were doing CTFE long before that as an optimisation (but as a programmer you couldn't really control when that would happen).
Compilers are always allowed to do more CTFE than what the C++ standard specifies, and many do. The standard is just moving the lower bar for what all compilers must now evaluate at compile time, and what a C++ programmer may expect from all compilers.
Since C++11, they have been massively extending the subset of the language that can be evaluated at compile time. C++14 allowed compile-time evaluation of loops and conditionals, along with local variables. C++20 allowed CTFE functions to do memory allocation (as long as it was also freed), along with calling object constructors/destructors.
The main new thing in C++26 is the ability to do CTFE placement new, which is something I remember missing the last time I tried to do fancy C++ stuff. Along with marking large chunks of the standard library as constexpr.
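As a rough sketch of how those pieces combine (my own example, not from the comments above; the C++26 line assumes a compiler implementing P2747), the whole function below should be evaluable entirely at compile time:
#include <memory>
#include <new>

constexpr int sum_first(int n) {
    std::allocator<int> alloc;
    int* storage = alloc.allocate(n);        // C++20: allocation during constant evaluation
    for (int i = 0; i < n; ++i)              // C++14: loops and local variables
        new (storage + i) int{i + 1};        // C++26 (P2747): constexpr placement new; C++20 needed std::construct_at
    int total = 0;
    for (int i = 0; i < n; ++i) {
        total += storage[i];
        std::destroy_at(storage + i);        // C++20: destructor calls
    }
    alloc.deallocate(storage, n);            // the allocation must be freed before evaluation ends
    return total;
}
static_assert(sum_first(4) == 10);           // forces compile-time evaluation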
It has been a very exciting fifteen years. As a minor counter argument, other languages will let you run any code at compile time. Because it's code. And it can run whenever you want. C++ has that too I suppose, you just have to weave some of your control flow through cmake.
A big discussion item for constexpr in C++ was evaluation of floating-point at compile time.
Because depending on your current CPU flags, you can easily end up in a case where a function does not give the same result at run time as at compile time, confusing users. How does that work in D (and, for that matter, in any other language with compile-time evaluation)?
Here's the thing - that cannot be fixed. Consider the x87 FPU. It does all computations to 80 bits, regardless of how you set the flags for it. The only way to get it to round to double/float is to write the value out to memory, then read it back in.
Java tried this fix. The result was a catastrophe, because it was terribly slow. They backed out of it.
The problem has since faded away, as x86_64 has XMM registers for float/double math.
C still allows float to be done in double precision.
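A small aside for the curious (my own illustration, not from the comments above): this latitude is what FLT_EVAL_METHOD reports, and it is why strict compile-time evaluation in the declared type can legitimately disagree with x87-style run-time code.
#include <cfloat>
#include <cstdio>

int main() {
    // 0: expressions evaluated in the declared type
    // 1: float expressions promoted to double
    // 2: everything evaluated in long double (typical of x87 code generation)
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
}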
D evaluates CTFE in the precision specified by the code.
More to the point, constexpr does supply something D doesn't do. Whether D runs a calculation at compile time or run time is 100% up to the programmer. The trigger is a Constant-Expression in the grammar; a keyword is not required.
BTW, D's ImportC C compiler does the same thing - a constant expression in the grammar is always evaluated at compile time! (Whereas other C compilers just issue an error.)
int sum(int a, int b) { return a + b; }
_Static_assert(sum(1,2) == 3, "message"); // works in ImportC
_Static_assert(sum(1,2) == 3, "message"); // gcc error: expression in static assertion is not constant
Tbf, C and C++ have done constant folding for function calls since forever too (as has pretty much any compiled language, AFAIK), just not as a language feature but as an optimizer feature:
Because C++ is a completely overengineered boondoggle of a language where every little change destabilizes the Jenga tower that C++ had built on top of C even more ;)
int sum(int a, int b) { return a + b; }
_Static_assert(sum(1,2) == 3, "message");
gcc -c -O x.c
x.c:2:17: error: expression in static assertion is not constant
_Static_assert(sum(1,2) == 3, "message");
Constexpr as a keyword which slowly applies in more places over decades is technically indefensible. It has generated an astonishing quantity of work for the committee over the years, though, which in some sense is a win.
I think there's something in the psychology of developers that appreciates the byzantine web of stuff which looks like it will work but doesn't.
I always wondered why constexpr needs to be an explicit marker. We could define a set of core constexpr things (actually this already exists in the Standard) and then automatically make something constexpr if it only contains constexpr things. I don't want to have to write constexpr on every function to semantically "allow" it to be constexpr even though functionally it already could be done at compile time...
Same story with noexcept too.
One reason which matters with libraries is that slapping constexpr on a function is, in a way, a promise by the author that the function is indeed meant to be constexpr.
If the author then changes the internals of the function so that it can no longer be constexpr, then they've broken that promise. If constexpr was implicit, then a client could come to depend on it being constexpr and then a change to the internals could break the client code.
I think being explicit is a good thing in C++. Suppose there were no constexpr in C++ and the following worked:
inline int foo(int x) { return x + 42; }
int arr[foo(1)];
I think it would qualify as spooky action-at-a-distance if modifying foo caused arr to become ill-formed. And if they are in different libraries, it restricts the ways the original function can be changed, making backwards compatibility slightly harder.
This would make more sense if constexpr was actually constant like, say, Rust's const.
The Rust const fn foo, which gives back x + 42 for any x, is genuinely assured to be executed at compile time when given a constant parameter. If we modify the definition of foo so that it's not constant, the compiler rejects our code.
But C++ constexpr just says "Oh, this might be constant, or it might not, and, if it isn't don't worry about that, any constant uses will now magically fail to compile", exactly the spooky action at a distance you didn't want.
When originally conceived it served more or less the purpose you imagine, but of course people wanted to "generalize" it to cover cases which actually aren't constant and so we got to where we are today.
Slapping the constexpr keyword on a function is useless by itself, but it becomes useful when you combine it with a constexpr or constinit variable. Which is not all that different from Rust:
// C++
constexpr Foo bar() { /* ... */ }
constexpr Foo CONSTANT = bar(); // Guaranteed to be evaluated at compile time
constinit Foo VARIABLE = bar(); // Guaranteed to be evaluated at compile time
// Rust
const fn bar() -> Foo { /* ... */ }
const CONSTANT: Foo = bar(); // Guaranteed to be evaluated at compile time
static VARIABLE: Foo = bar(); // May or may not be evaluated at compile time
So Rust is actually less powerful than C++ when it comes to non-constant globals because AFAIK it doesn't have any equivalent to constinit.
That Rust constant named CONSTANT is an actual constant (like an old-school #define), whereas in C++ what you've named CONSTANT is an immutable variable with a constant initial value that the language promises you can refer to from other constant contexts. This is a subtle difference, but it's probably at the heart of how C++ programmers end up needing constfoo, constbar, const_yet_more_stuff because their nomenclature is so screwed up.
The VARIABLE in Rust is an immutable static variable, similar to the thing you named CONSTANT in C++, and likewise it is constant-evaluated, so it happens at compile time; there is no "may or may not" here, because the value is getting baked into the executable.
If we want to promise only that our mutable variable is constant initialized, like the C++ VARIABLE, in Rust we write that as:
let mut actually_varies = const { bar() }; // Guaranteed to be evaluated at compile time
And finally, yes if we write only:
let mut just_normal = bar(); // This may or may not be evaluated at compile time
So, I count more possibilities in the Rust, and IMNSHO their naming is clearer.
> static VARIABLE: Foo = bar(); // May or may not be evaluated at compile time
Under what circumstances would this not be evaluated at compile time? As far as I know the initializer must be const.
Unlike C++, Rust does not have pre-main (or pre-_start) runtime initializers, unless you use something like the ctor crate. And the contents of that static must be already initialized, otherwise reading it (especially via FFI) would produce unknown values.
To evaluate something at runtime you'd have to use a LazyCell (and LazyCell::new() is const), but the LazyCell's construction itself will still be evaluated at compile time.
(Back in C++20 days I had a terrible habit of finding bugs in MSVC's implementations of metaprogramming features, after they claimed to be feature complete on C++20. Probably because people use these features less. Even now I occasionally receive emails about those bugs they've finally fixed.)
The compiler writers have had a ton of issues implementing modules and aren't particularly excited about them (the committee seems to have forgotten the lessons learned from C++98 about standardizing crazy hard things without full implementations).
On the other hand, constexpr changes tend to be picked up pretty quickly, and a lot of them tend to be things that the compiler/STL authors themselves are asking for.
Since C++14, I have increasingly changed my mind: it should only be about existing practice, and no paper should be accepted without an implementation, regardless of how basic it may be, just like in other language ecosystems.
Yes, it might prevent lots of cool features, yet how good are they if they take years to be implemented across compilers, or turn out to be yet another export template?
Additionally, it would help to reduce the count of those 300+ people; not everyone is there for good reasons - some only want a "contributed to C++" line on their background, and then they are gone.
> should only be about existing practice, and no paper should be accepted without implementation
When C++11 was still C++0x, they made a big song and dance about how they wouldn't do another export template boondoggle and wanted an implementation available for any feature. Then they seem to have completely forgotten about it when doing modules (which, not that surprisingly, are running into similar issues to the ones export template did).
Many features of D have since found their way into C++, such as ranges, compile time function execution, thousands separators in numeric literals, conditional compilation blocks, etc.
There's not only the features, but how they are done. D users regularly tell me that it is just so much easier to program in D. Aesthetics do matter.
I've pointed out many times that thousands markers in D came from Ada. However, no other language adopted them until D did.
Yes, Lisp has compile time function execution - where it remained until D did it in a compiled language.
C++'s implementation of ranges is still based on pointers (an iterator pair), rather than based on arrays as in D. The language loses semantic information because there is no particular thing that connects the iterator pair as being the limits on an array. Basing ranges on arrays enables crucial things like array bounds checking.
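To put the point in code (a hedged sketch of my own; std::span stands in for a D slice, and the function names are made up): an iterator pair carries no record of the array it came from, while a length-carrying view keeps the size alongside the data, so checked access is at least possible.
#include <cassert>
#include <cstddef>
#include <span>

int sum_pair(const int* first, const int* last) {
    int total = 0;
    for (; first != last; ++first)
        total += *first;                     // nothing ties first and last to the same array
    return total;
}

int sum_span(std::span<const int> s) {
    int total = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        total += s[i];                       // the length travels with the view
    return total;
}

int main() {
    int a[4] = {1, 2, 3, 4};
    assert(sum_pair(a, a + 4) == 10);
    assert(sum_span(a) == 10);
}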
Yes, D's contracts are based on Eiffel. But they also went nowhere until D adopted them.
Yeah, maybe they should have; it would have been a lot simpler.
But C++ really wanted modules to be a more or less drop in replacement for #include (or at least a set of common use cases), which really pushed up the required level of complexity.
It was already there via Apple's and Google's work on header maps, but what we got was Microsoft's proposal, after some collaboration with Google.
Note that at WWDC 2024, the module improvements regarding build times were, as far as C++ is concerned, based on header maps as well. Apple is not even bothering with C++20 modules.
Modules implicate the entire toolchain, backward compatibility, and binary compatibility. constexpr is a compiler feature. Wild that they are being compared.
See how many years it takes, on average, for a compiler to be 100% compliant after a standard is ratified - not widely supported at all when one wants to write portable code, regardless.
In practice it's the same thing with C++ today as it is with web standards: you have to evaluate support on a feature-by-feature basis, and stick to the features that are supported on all the implementations you care about.
And constexpr in stdlib is much more likely to get quick adoption across all implementations than something like modules.
Love seeing all the constant updates, but sometimes I feel like the biggest pain points just stick around way too long - do you ever feel like these standards chase the wrong problems sometimes?
"complex" isn't going anywhere. Complexity in languages only rises with age, and C++ has it especially because it's meant to be very general-purpose rather than specialized. Everyone uses C++ in different ways for different purposes, leading to additional complexity to keep everyone happy.
"sanity" might refer to default behaviors. If it does, there's not a lot you can do there either. You can't go changing fundamentals of how the language works, because backwards compatibility is paramount. There would be riots if a new standard broke significant chunks of legacy code - even for the better.
One of my first tasks in my career was implementing stable sort for an incomplete STL implementation. In 3 years no one used it. It's so niche it should not be part of the C++ standard.
Amazing, now could I just get a way to do asynchronous network requests in two lines of code, like I have with other languages?
Honestly, it seems to me like the committee is constantly chasing the easy bits but is failing to address the bigger issues in the language and, more importantly, the standard library.
It’s very different to ask for that from a language like Go or Python vs a language like C++. C++ is for interacting with system APIs and resources, on ANY system. Standardizing networking would require that every targetable host have a common interface. Merely bridging Win32, macOS, BSD, Android, and Linux is hard enough, without regard to all the possible platforms a C++ user might be interested in targeting.
The more you add to the standard, the narrower the platform support gets. Should we also force C++ to only be able to target 64-bit hosts? How about requiring hardware vector support? See what I’m getting at?
If you just want to make an async network call, there are lots of languages you can already do that in. If you want to write an async network driver or library, then maybe you should use C++.
Boost.asio and Boost.cobalt both allow you to do this. You could even implement the coroutine traits yourself if you don't want to use these libraries.
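For reference, here is a rough, untested sketch of what that can look like with Boost.Asio and C++20 coroutines; the host, port, and raw request string are placeholders of mine, not anything prescribed by the library:
#include <boost/asio.hpp>
#include <iostream>
#include <string>

namespace asio = boost::asio;
using asio::ip::tcp;

asio::awaitable<std::string> fetch(std::string host, std::string port) {
    auto ex = co_await asio::this_coro::executor;

    tcp::resolver resolver{ex};
    auto endpoints = co_await resolver.async_resolve(host, port, asio::use_awaitable);

    tcp::socket socket{ex};
    co_await asio::async_connect(socket, endpoints, asio::use_awaitable);

    std::string request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n";
    co_await asio::async_write(socket, asio::buffer(request), asio::use_awaitable);

    std::string response;
    boost::system::error_code ec;
    // Read until the server closes the connection (EOF is reported through ec).
    co_await asio::async_read(socket, asio::dynamic_buffer(response),
                              asio::redirect_error(asio::use_awaitable, ec));
    co_return response;
}

int main() {
    asio::io_context ctx;
    asio::co_spawn(ctx, fetch("example.com", "80"),
                   [](std::exception_ptr, std::string body) {
                       std::cout << body.substr(0, 200) << "\n";
                   });
    ctx.run();
}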
I don't understand why it is so complex in C++.