So the f-string literal produces a basic_formatted_string, which is basically a reified argument list for std::format, instead of a basic_string. This allows, e.g., println to be overloaded to operate on basic_formatted_string without allocating an intermediate string:
std::println("Center is: {}", getCenter());
std::println(f"Center is: {getCenter()}"); // same thing, no basic_string allocated
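A rough sketch of the overload idea (illustrative only; these signatures are not the actual proposed API):

// today's overload: formats via a format string plus arguments
template <class... Args>
void println(std::format_string<Args...> fmt, Args&&... args);

// hypothetical extra overload: consumes the reified argument list directly
template <class CharT, class... Args>
void println(std::basic_formatted_string<CharT, Args...> fstr);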
In exchange we have the following problems:
// f-strings have unexpected type when using auto or type deduction.
// basic_string is expected here, but we get basic_formatted_string.
// This is especially bad because basic_formatted_string can contain
// dangling references.
auto s = f"Center is: {getCenter()}";
// f-strings won't work in places where providing a string currently
// works by using implicit conversion. For example, filesystem methods
// take paths. Providing a string is okay, since it will be implicitly
// converted to a path, but an f-string would require two implicit
// conversions, first to a string, then to path.
std::filesystem::exists(f"file{n}.dat"); // error, no matching overload
There are two other proposals to fix these problems.
This is becoming such a tiresome opinion. How are concepts fixing a problem created by previous features of the language? What about ranges? Auto? Move semantics? Coroutines? Constexpr? Consteval? It is time for this narrative to stop.
Move semantics is only needed because C++ introduced implicit copies (the copy constructor), and of course they fucked it up by making them non-destructive, so they aren't even 'zero cost'.
Constexpr and consteval are hacks that 1) should have just been the default, and 2) shouldn't even be on the function definition; it should instead have been a keyword at the usage site (and just use const):
int f() { ... } // any old regular function
const int x = f(); // this always gets evaluated at compile time (or, if it can't be, it fails to compile)
int y = f(); // this is evaluated at runtime
That would be the sane way to do compile time functions.
I agree that I would have preferred destructive moves, but move semantics makes C++ a much richer and better language. I kinda think pre-move semantics, C++ didn't quite make "sense" as a systems programming language. Move semantics really tied the room together.
const int x = f(); // this always gets evaluated at compile time (or, if it can't be, it fails to compile)
That's very silly. You're saying this should fail to compile?
void foo(int x) {
const int y = bar(x);
}
There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case; everything runtime would have to be mutable.
So you respond "well, I didn't mean THAT kind of const, you should have a different word for compile-time constants and run-time non-mutability!" Congratulations, you just invented constexpr.
There are many bad things about C++, but constexpr ain't one of them.
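And note C++ already lets you force compile-time evaluation at the usage site, as long as the function is declared constexpr:

constexpr int square(int n) { return n * n; }
constexpr int a = square(5);   // forced to compile time; fails to compile if it can't be
int b = square(runtime_input); // same function, evaluated at runtime (runtime_input is a stand-in)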
>There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case; everything runtime would have to be mutable.
Yeah, I see no problem with that.
Non-constant-expression usage of 'const' has always just seemed like a waste of time to me; I've never found it useful.
But I guess a lot of people really like typing const and "preventing themselves from accidentally mutating a variable" (when has that ever happened?), so as a compromise I guess you can have a new keyword to force constant expressions:
constexpr auto x = foo(); // always eval at compile time
const auto x = foo(); // old timey const, probably runtime but maybe got constant folded.
but it's not really a big deal what the keyword is, the main point was that "give me a constant value" should be at the usage site, not at the function definition.
> the main point was that "give me a constant value" should be at the usage site, not at the function definition.
The issue is, not everything can be done at compile time, and so “I can use this at compile time” becomes part of the signature because you want to ensure that it will continue to be able to be used that way. Without it, changes in your function could easily break your callers.
Exactly right. There's a huge benefit to encoding the ability for compile-time evaluation in the signature of the function itself. Much better than doing it "ad hoc", like template instantiation does: sometimes it works, sometimes it doesn't. constexpr functions always work.
I like const because I can look at a function signature and know that nothing downstream is mutating a parameter. I can also write a function that returns a const reference to a member and know that nobody in the future will ever break my invariants.
This isn't about "oops, I didn't mean to mutate that." This is about rapidly being able to reason about the correctness of some code that is leveraging const-qualified code.
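A minimal sketch of that second point (the class name is made up):

class Config {
public:
    const std::string& name() const { return name_; } // callers can read, never mutate
private:
    std::string name_; // the invariant stays under this class's control
};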
I kinda like it on occasion. Works like Python's defaultdict. Like, if you wanna count something:
for (const auto &thing: collection) {
counts[thing]++;
}
Works nicely, you don't have to check if it's already there before ++ing it. As long as you know that's what operator[] does, it comes in handy more than I would've expected.
Yeah. It has its uses. You could accomplish the same with the rather verbose `counts.try_emplace(thing).first->second++` but nobody wants to type that (even if it is more explicit about what it's doing).
Another popular use case is something along the lines of:
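For instance, the grouping idiom (a representative example; Item and name_of are stand-ins), where operator[] default-constructs the vector on first access:

std::map<std::string, std::vector<Item>> groups;
for (const auto& item : items) {
    groups[name_of(item)].push_back(item); // no existence check needed
}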
That said, I don't know what behavior I'd want if maps didn't automatically insert an element when the key was absent. UB (as with vector)? Throw an exception? Report the incident to Stroustrup? All these options feel differently bad.
Maybe it's not a concern in C-family languages, but rust's culture of defaulting to let and only using mut when it's specifically required does feel very pleasant and ergonomic when I'm in that headspace.
Eh, not really accurate, because C's const means immutable, not actually constant. So I get introducing constexpr to actually mean constant. But, yeah, constexpr x = f() should probably have worked as you described.
const is different in C++ from const in C. const variables in C++ are proper compile-time constants. In C they are not (the nearest equivalents are #define and enum values).
So in C++ "const x = EXPR" would make sense to request compile-time evaluation, but in C it wouldn't.
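For example:

const int N = 10;  // a constant expression in C++ (fine as an array bound), but not in C
int acc = 0;
for (int i = 0; i < 7; i++) acc += i;
const int M = acc; // legal in both: merely immutable; M is NOT a constant expression in either language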
Ouch, but thanks. I learned something today - something I'd long forgotten. I like your example, it shows the point well. (Though there are circumstances in which a compiler can unroll such a loop and infer a compile-time constant, that wouldn't qualify as a constant expression at the language level.)
It's been so long since I used C++ for serious work that, back then, we weren't using C++11, so neither auto nor range-for was available. It would be uncommon to see "const type = " with a non-reference type and a non-constant initialiser.
Even with your example, some styles avoid "const auto item", using either "auto item" or "const auto& item" instead, because the "const" matters when taking a reference, not so much with a copy.
But I appreciate your point applies to const variables with non-constant initialisers in general, in the language.
There was once a big deal in the literature about const in C++ being the "better" alternative to how #define is commonly used in C for constant values, and it seemed applicable to the thread as a key distinction between C and C++, which the parent commenter seemed to have conflated by mistake.
But I'd forgotten about const (non-reference) variables accepting non-constant initialisers, and as I hadn't used C++ seriously in a while, and the language is always changing, I checked in with a couple of C++ tutorials before writing. Unfortunately those tutorials were misleading or too simple, as both said nothing about "const type x = " (non-reference/pointer) being used in any other way than for defining compile-time constants.
It's a bit embarrassing, as I read other parts of the C++ standard quite often despite not using it much these days. (I'm into compiler guts, atomics, memory models, code analysis, portability issues, etc.) Yet I had forgotten this part of the language.
So, thanks for sending me down a learning & reminder rabbit-hole and correcting my error :-)
I thought the whole point of ranges was to solve problems created by iterators, move semantics to take care of scenarios where NRVO doesn't apply, and constexpr and auto because we were hacking around their absence with macros (if you can even call it that)?
To me, redoing things that are not orthogonal implies that the older version is being fixed. Being fixed implies that it was incorrect. And to clarify, sure, auto types and constexpr are entirely new things we didn't have (auto changed meaning but yeah), but we were trying to "get something like that" using macros.
> To me, redoing things that are not orthogonal implies that the older version is being fixed
The older version is being improved, especially for ergonomics. Regarding your examples, ranges do not obsolete iterators; they are just a convenient way to pass around iterator pairs, and actual ranges are better implemented in terms of iterators when they are not just a composition of ranges. Similarly, move semantics has little to do with NRVO (and in fact using std::move often is suboptimal, as it inhibits NRVO).
Again, I have no idea how constexpr and auto have anything to do with macros.
auto is fixing the problem of long-ass type names for intermediaries thanks to templates and iterators.
Move is fixing the problem of unnecessary mass-construction when you pass around containers.
std::ranges was introduced because dear fucking god the syntax for iterating over a partial container. (And the endless off-by-one errors)
concepts, among other things, fix (sorta) the utter shit show that templates brought to error messages, as well as debacles like SFINAE and std::enable_if.
You're right. They're not fixing problems created by previous features. They're all fixing problems created or made massively worse by templates.
Hah. What's interesting about this is that since it doesn't require everything to actually be converted to a string, one can implement things other than just printing. So you could also implement interpretation, e.g.:
pylist = python(f"[ y*{coef} for y in {pylist} if y > {threshold}]")
It also allows for things that will set off spidey senses in programmers everywhere, despite theoretically being completely safe assuming mydb::sql() handles escaping in the format string:
cursor = mydb::sql(f"UPDATE user SET password={password} WHERE user.id={userid}")
Yeah. You really want "mydb::sql" to not take a basic_string, only a basic_formatted_string, so it will not compile if the conversion actually happened somehow.
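Something like this (hypothetical API, assuming the proposal's basic_formatted_string):

template <class... Args>
cursor sql(std::basic_formatted_string<char, Args...> query); // can escape each {} argument
cursor sql(const std::string&) = delete; // refuse pre-concatenated strings outright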
Yes. The basic idea is that there's a specifier that allows a formatted string to transparently decay into an ordinary string (à la array-to-pointer decay) so that "auto" doesn't produce dangling references, and so that chains of more than one implicit conversion can take place.
This seems pretty similar to Rust's `format_args!` macro, which however avoids these issues by being much more verbose, and thus something people are less likely to use like in those examples. It does however have issues due to the abundant use of temporaries, which makes it hard to use when not immediately passed to a function. I wonder if C++'s f-strings have the same issue.
One of the two other proposals is user-defined type decay, which lets you choose what type auto will be deduced as. I.e., with "auto x = y", x might not have the type of y; instead it can be anything you choose…
This is like implicit type conversion on steroids. And all this because C++ lacks the basic safety features to avoid dangling pointers.
> lacks the basic safety features to avoid dangling pointers
It doesn't. Unfortunately, C++ programmers choose not to use basic safety features for performance reasons (or aesthetics, or disagreement with the idea that a language should take into account that a programmer might make a mistake, but at least performance is a good one), but C++ actually has quite a few tricks to prevent the memory management issues that cause C/C++ bugs.
Using modern C++ safety features won't completely prevent bugs and memory issues, just like using Rust won't, but the mess that causes the worst bugs is the result of a choice, not the language itself.
Tell that to the designers of the C++ standard library, and the new features being added. They're the ones that keep adding new features that depend on references and pointers instead of std::shared_ptr or std::unique_ptr.
I don't think this is the only reason. If it were, they could easily have added overloads that work with both std smart pointers and with plain pointers for compatibility. Or they could add pointer type template parameters, maybe with concepts for the right ownership semantics.
shared_ptr and unique_ptr aren’t useful for reasoning about the lifetimes of stack-based objects (unless you’re willing to require that such objects always be dynamically allocated, which is often not a reasonable requirement).
Has he? He at least used to be the biggest proponent of it, "just follow these standards and development practices that I had to meticulously develop for the US military, that no tool can automatically check, and you'll be fine!".
Smart pointers were added to the language 14 years ago. You're free to use old C++ with raw pointers and manual memory management, risking dangling pointers, or use modern C++, which provides smart pointers to avoid those issues.
And yet most if not all of the standard library keeps using pointer or reference arguments, not the new smart pointers that would actually document the ownership semantics.
Most arguments to standard library calls don't need to take ownership of memory, so using a raw pointer or (const) reference is correct. Generally: smart pointers to designate ownership, raw pointers to "borrow".
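E.g. (Widget is a stand-in type):

void draw(const Widget& w);            // borrows for the duration of the call
void inspect(const Widget* w);         // borrows, may be null
void adopt(std::unique_ptr<Widget> w); // takes ownership, visible in the signature
void share(std::shared_ptr<Widget> w); // shares ownership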
If a function takes a raw pointer, you need to check the docs to know if it is taking ownership or not. There is no general rule that applies to the whole of std that functions taking raw pointers assume that they are borrowing the value.
And even if you could assume that pointer parameters represent borrowing, they are definitely not guaranteed to represent scoped borrowing: the function could store them somewhere, and then you end up with other issues. So shared_ptr is the only solution if you care about safety to represent a borrowed pointer. And if that's too costly but the std designers did care about safety, they could have introduced a std::borrowed_ptr<T> that is just a wrapper around T*, used uniformly in all std functions that borrow a pointer and guarantee not to store it.
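A minimal sketch of that hypothetical borrowed_ptr (not in std, purely illustrative):

template <class T>
class borrowed_ptr {
public:
    explicit borrowed_ptr(T* p) : p_(p) {}
    T* get() const { return p_; }
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
private:
    T* p_; // never owned; by convention, never stored beyond the call
};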
Why? If all standard functions that take no ownership and keep no references use raw pointers, then it behaves the same way user code and C++ devs expect: if a function takes a pointer, it claims no ownership. You look at standard_function(T*), see a raw pointer, and can assume it is not taking ownership or keeping references.
I would not say stop using it. But just stick to the really needed features, and stop adding more features every 3 years. Nobody can keep up, not the developers, not the compilers... it's just insane.
C++ desperately needs a solution for wrapper types that should eagerly decay into the wrapped type unless you pass it somewhere that explicitly wants the wrapper type.
Tangent: this sort of thing can be implemented without any change to libc++ (the runtime). Updates to compiler versions are sometimes postponed by users with big codebases that treat a libc++ change as something major.
Why don't we see gcc or clang or msvc backporting stuff like this to an older version with a sort of future tag? It's normal to see __future__ in the Python ecosystem, for instance.
Thank you for the clarification. You are 100% right about the general difference. I didn't consider the level of "confidence" Python has in directing its own evolution that I don't detect in the C++ committee.
If a codebase is fragile enough that libc++ changes have to be assumed breaking until proven otherwise, why take the risk? Presumably the application already has a "standard" way of formatting strings. If it ain't broke yada yada
It's not about assumed breaking, it's that when you upgrade libc++ you can become incompatible at runtime with your distro or any other number of libraries outside your control in ways that are difficult to detect
It would be nice to take care to allow the use of GNU gettext() or any other convenient translation tool.
Recap: _("foo %s") macroexpands to gettext("foo %s"), then "foo %s" is extracted into a lexicon of strings by an external tool, which can be translated (.po files) and compiled (.mo files) to be loaded at runtime so gettext() can use a translated string based on $LC_MESSAGES. (And there is also _N(..) for correct plural handling.)
To do this with f-strings, _(f"foo {name()}") (which is a bit ugly...) needs to translate to make_formatted_string(_("foo {}"), name()) -- note that the _(...) needs to be called before calling make_formatted_string, to be able to return a translated string.
I would wish for a proposal for f-strings to consider translating strings, because we live in a world with many languages. And maybe cite gettext as a convenient method, and think about what could be done. Or point to a better tool. Or state: 'in that case, f-strings cannot be used'.
The C++ language itself shouldn't be tied to any one specific application or third-party tool, though. Just because they exist doesn't mean you are forced to use them; this is one of those cases where f-strings don't make a lot of sense. Things with localized labels or text ideally have an id that gets looked up, so you can't do English-based string composition. Every locale gets looked up, no "just pass through the key if locale X", and lookup failures don't "still work", they result in super obvious, user-reportable nonsense.
So, the f-string in Python is "spelled" that way because another leading character was the only ASCII syntax left for such a thing. It's odd that PRQL and now potentially C++ might copy it. In the PRQL case it was a new thing, so they could have chosen anything: double quotes (like shell interpolation) or even backticks, which seem to make more sense.
Also the f- prefix was supposed to be short for format and pronounced that way. But "eff" caught on and now devs the world over are calling them "eff strings" ... funny. :-D
That is a valid point and something I've also been thinking about lately. I can't speak for the others but in my case the Python string interpolation syntax was the one I was most familiar with, other than bash, so it was just the default. The big idea really is to have string interpolation and the syntax is somewhat secondary but we do aim for ergonomics with PRQL so it is a consideration.
Since then I've seen more alternatives like `Hello ${var}!` in JS/TS and $"Hello {var}!" in F#. Not sure that there's a clear way to prefer one approach over the others.
What would you consider to be factors that would make you prefer one over the others?
ease of typing: regular quotes are better than backticks (even with a prefix), and an F prefix is better than $, which requires Shift
ease of learning: here a letter mnemonic seems easiest: an I prefix for "interpolation", an E prefix for "expression", or maybe a V prefix for "variable". Or maybe F for "formatted" is also fine?
> [...] another leading character was the only ASCII syntax left for such a thing.
Not really? The original PEP [1] for example considered `i"asdf"` as an alternative syntax. Any ASCII Latin letter besides `b`, `r` and `u` would have been usable.
And observes that this additional feature is needed to avoid dangling references. And, as a long time C++ programmer, this illustrates one of the things I dislike most about C++. In most languages, if you make a little mistake involving mixing up something that references something else with something that contains a copy, you end up with potential overhead or maybe accidental mutation. In Rust, you get a compiler error. In C++, you get use-after-free, and the code often even seems to work!
So now we expect people to type:
auto s = f"{foo}";
And those people expect s to act like a string. But the designers (reasonably!) do not want f to unconditionally produce an actual std::string for efficiency reasons, so there’s a proposal to allow f to produce a reference-like type (that’s a class value, not actually a reference), but for s to actually be std::string.
But, of course, more advanced users might know what they’re doing and want to bypass this hack, so:
explicit auto s = f"{foo}";
Does what the programmer actually typed: s captures foo by reference.
What could possibly go wrong?
(Rust IMO gets this exactly right: shared-xor-mutable, plus disallowing code that would be undefined behavior, means that cases like this where the code might do the wrong thing don't compile. Critically, none of this actually strictly requires Rust's approach to memory management, although a GC'd version might end up with (deterministic) runtime errors instead unless some extra work is done to have stronger static checking. And I think other languages should learn from this.)
> But, of course, more advanced users might know what they’re doing and want to bypass this hack, so:
explicit auto s = f"{foo}";
> Does what the programmer actually typed, so s captures foo by reference.
Wouldn't this problem be best solved by... not declaring s to have a guess-what-I-mean type? If you want to be explicit about the type of s, why not just say what that type is? Wouldn't that be even more explicit than "explicit auto"?
A general issue with C++ (and many statically typed languages with generics) is hilariously long type names that may even be implementation details. Using auto can be a huge time saver and is even necessary for some generic code. And people get in the habit of using it.
There are many cases in C++ where the type is unspecified (std::bind for example), or even is unutterable (lambdas, or unnamed structures). I think F-strings would be an example of the latter.
You can always box of course (std::function, std::any at the limit), but it has a non-trivial cost.
IOW I believe it's the same thing as Rust's format_args! macro, but trying to get away without needing a separate format! macro by using implicit conversions.
std::format_args! gets you an Arguments<'a>, which, we'll note, means it has an associated lifetime.
Today I learned: Arguments<'a> has a single useful function, which appeared before I learned Rust but only very recently became usable in compile-time constants: as_str() -> Option<&'static str>
format_args!("Boo!").as_str() is Some("Boo!")
If you format a literal, this always works, if you format some non-literal the compiler might realise the answer is a compile time fixed string anyway and give you that string, but it might not even if you think it should and no promises are given.
But there is no Arguments::fmt? Are you thinking of the implementations of Debug::fmt and Display::fmt on Arguments? A trait implementation isn't the same kind of thing at all.
There is exactly one useful thing you can do with an `Arguments` object: call `.fmt()` on it.
The whole reason for std::Arguments' very existence is to call `std::Arguments::fmt` on it.
`.fmt()` is a trait implementation, but that doesn’t change anything (not sure what “kind of thing” refers to here). It’s still a function on std::Arguments.
I think you're quite muddled about what's going on here
The full name of this type is std::fmt::Arguments, not std::Arguments, and even so there's no such thing as std::fmt::Arguments::fmt - there is no function with that name; we can only talk about this name (since it doesn't exist) if we bring into context a specific trait such as Display or Debug
So the full name of the thing you think is the "one useful thing you can do with Arguments" is
<std::fmt::Arguments as std::fmt::Display>::fmt
or perhaps it's
<std::fmt::Arguments as std::fmt::Debug>::fmt
... as I said, Arguments implements both traits, and their sole function has the same name so we need to disambiguate somehow if we mean one of these functions or the other. For the function defined on Arguments itself, as_str, it's already unambiguous.
In the end the Debug and Display traits are all just ductwork, which is why as_str caught my attention.
It is not a shortcut because it can't be implemented without knowing the `Arguments` internals. `format_args!("{}", "boo").as_str()` returns None for example.
It’s a shortcut in the sense that most, if not all optimisations are shortcuts. This one allows you to shortcut the usual formatting machinery if the result of formatting is a static string.
Like all shortcuts, it’s not something you can always rely on.
Interesting viewpoint: I see this as a distinction without a difference. I’m interested to know why you see it differently? What is its use, if not as a shortcut?
C++20's concepts IMHO are a massive update over C++11. You can basically remove almost 90% of inheritance with them without incurring any issue (you could do that earlier too, but at the expense of incredibly hard-to-read error messages - now that's basically solved thanks to concepts).
I don't find the error messages produced by concepts much better than old school template errors. Maybe I got used to the latter with experience and definitely the compilers got better at generating useful error messages for templates as the years passed. On the other hand when I have to review code where a significant portion of the source relates to concepts, my heart sinks.
In my opinion, C++ "concepts" are the least useful C++20 addition to the language - awful syntax, redundancy everywhere (multiple ways of writing the same thing). And for what? Potentially better error messages?
Another gripe: of all the generic, overloaded words available to describe this C++ feature, "concept" must be the least descriptive, least useful. Why pick such a meaningless name that does absolutely nothing to even suggest what the feature does?
> In my opinion, C++ "concepts" are the least useful C++20 addition to the language - awful syntax, redundancy everywhere (multiple ways of writing the same thing).
They're not the least useful C++20 addition, in fact they're amongst the most useful ones.
In particular the addition of the "requires" expression is the real killer here.
> And for what? Potentially better error messages?
Removing even more enable_if and making template code even easier to read (you could do some of that with if constexpr + static_assert in C++17, but there were gotchas). Oh and it allows you to check for the presence of members in classes, which you couldn't do before.
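For example, a requires-expression checking for a member (standard C++20):

#include <concepts>
#include <cstddef>

template <class T>
concept HasSize = requires(const T& t) {
    { t.size() } -> std::convertible_to<std::size_t>;
};

template <HasSize T>
std::size_t count(const T& c) { return c.size(); } // a clean diagnostic if T has no size()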
>[...] "concept" must be the least descriptive, least useful. Why pick such a meaningless name
That's the name that Stepanov used 30 years ago to describe the informal type constraints of templates, and it has been in use in the community since then. Choosing anything else for the language feature would not make sense.
C++ concepts are a failure because they only check one side of the contract. And the other side is basically impossible to implement without breaking other parts of the language.
Came here to post the same thing. C++11 was a major and practical step up from previous versions. I haven't seen anything in later standards that looked like a tool I'd use day-to-day building actual production software. Much of what the subsequent versions added is probably interesting to compiler and language academics. "Default constructible and assignable stateless lambdas?" Really?
Off the top of my head, C++17 brought slicker notation for nested namespaces, digit separators for numeric literals (so you can more easily read 1'000'000'000), improvements in type deduction for pairs / tuples (so std::make_pair / make_tuple are basically unnecessary now), and guarantees in the standard for copy elision / return value optimization in specific circumstances. Oh, and structured bindings (so you can now write `for (const auto& [key, value] : map) { ... }`).
edit: I guess digit separators came in C++14; I'm always a little fuzzy there since at work we jumped straight from 11 -> 17.
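E.g. class template argument deduction plus structured bindings:

std::pair p{1, 2.5}; // C++17 CTAD deduces std::pair<int, double>; no make_pair needed
auto [i, d] = p;     // structured bindings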
C++20 brought a feature that C had decades prior: designated initializers, except it's in a slightly crappier form. Also, spaceship operator (three-way comparison).
Looking at cppreference, it looks like C++17 also brought if constexpr, and standardized a bunch of nonstandard compiler extensions like [[fallthrough]]. C++20 continued standardizing more of those extensions, and also brought concepts / constraints, which are a lot easier to use than template metaprogramming.
You're at least somewhat right though -- none of these are paradigm shifts the way C++11 was compared to C++03 (especially the notion of ownership, in the context of std::unique_ptr and std::move).
Optional is nice but slightly awkward in a non-garbage collected language.
IMO variant is one of those things that should not exist in standard.
It tries to implement discriminated unions in C++, but that feature is lame without true pattern matching. And you can't implement pattern matching without thorough syntax-level support. So in my books it's in this academic "let's pretend for a while we are using some other language…" category.
> It tries to implement discriminated unions in C++, but that feature is lame without true pattern matching. And you can't implement pattern matching without thorough syntax-level support. So in my books it's in this academic "let's pretend for a while we are using some other language…" category.
I agree, they should have made it a language/syntax feature. However: if you wanna do a sum type, it does do that. I'd rather have that than nothing.
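For what it's worth, the closest thing to pattern matching today is std::visit with the classic "overloaded" helper (a common idiom, not a std facility):

#include <cstdio>
#include <string>
#include <variant>

template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>; // deduction guide (needed pre-C++20)

void show(const std::variant<int, std::string>& v) {
    std::visit(overloaded{
        [](int i)                { std::printf("int: %d\n", i); },
        [](const std::string& s) { std::printf("string: %s\n", s.c_str()); },
    }, v);
}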
std::filesystem is useless because like so many platform abstractions it doesn't handle the last 1% correctly.
IMO the C++ standard should focus on platform-agnostic functionality and leave the nitty-gritty platform interaction to standalone libraries that can be patched and/or replaced as needed.
The safety and readability are nice; but WHY do they have to be in order? That is so typically clueless. Such an obvious feature, screwed up in a way that only a C++ committee member could.
Initializers for members are _always_ run in the order the members are declared - this applies even in constructor initializer lists, see https://en.cppreference.com/w/cpp/language/constructor#:~:te... - it doesn't matter what order you declare the initializers in, and clang will warn you if you write your initializers in an order other than the one they will be executed in. Designated initializers are the same way; it's probably best that the behavior is consistent across methods of construction.
Why is it this way? As best as I can find it's so that destructors always run in reverse order of construction, I guess there could be some edge cases there that matter. It's not the strongest argument, but it's not nothing.
> Why is it this way? As best as I can find it's so that destructors always run in reverse order of construction, I guess there could be some edge cases there that matter. It's not the strongest argument, but it's not nothing.
I have seen my share of teardown races that were fixed by reordering data members. There's nothing quite like having a mutex torn down before the object that it's guarding.
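A sketch of the fix (GuardedThing is a stand-in for anything whose destructor takes the mutex):

struct Service {
    std::mutex m;       // declared first, so destroyed last
    GuardedThing thing; // destroyed before m, so its teardown can still lock safely
};                      // swap the two declarations and thing's destructor could
                        // touch an already-destroyed mutex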
Running destructors in reverse order of construction is really the only thing that makes sense in an RAII world. It's the same argument as running the destructors in reverse order of construction when exiting a scope.
That's still not a great reason for designated initializers being order-restricted in a way C's never were, especially given the advantage of hindsight. It makes a ton of sense if you have explicit dependencies between data members created during construction, but I can't see a way to create those dependencies with designated initializers.
Why does construction and destruction order need to be deterministic?
Well, consider what would happen if you had members whose value depends on other members. For example, one member is a pointer to another. Or perhaps one member uses RAII to hold a lock and another controls a resource.
Deterministic construction and destruction order is a fundamental feature of C++. Calling it clueless is just an indication one does not know C++.
> Why does construction and destruction order need to be deterministic?
That question presupposes a particular compiler implementation of designated initializers. Indeed, C's designated initializers (added in C99) have never had a fixed-order requirement; the designators may appear in any order.
> Well, consider what would happen if you had members whose value depends on other members.
Can you specify a designated initializer in that way, though? Either you specify a value, or you don't; I'm not aware of a way to introduce dependencies between members with a designated initializer. Yes, you can add a default initializer to a specific member, but that only kicks in if it's unspecified by the designated initializer.
With a constructor initializer list, sure, you can absolutely introduce dependencies on previously-constructed members. But that's not the case with a designated initializer.
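Concretely:

struct P { int x; int y; };
P a{.x = 1, .y = 2}; // OK
P b{.y = 2, .x = 1}; // error in C++20: designators must follow declaration order
                     // (fine in C99, where designators may appear in any order)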
To me, moving from C++11 to 17 and then 20 was just a matter of convenience. When digging into how to do this and that, I've found a few things that just saved my time here and there. Also, a couple of valuable libs I wanted to use required newer C++ versions.
Initially constexpr was quite limited, but C++20 and C++23 extended its use cases, and combined with concepts it's a pretty sweet spot for compile-time metaprogramming without SFINAE or tag-dispatch tricks.
Much better than having yet another syntax for macros.
Is it a coincidence that all these quality-of-life things start to pop up after C++ is facing real competition for the first time? Seems a bit odd to add print after using std::cout for 30 years.
Nerd alt-history story: what if Graydon decides he should attend WG21, and so instead of Rust what we get is a decade of attempts to fix C++ and reform the process, followed by burnout?
Then we'd be supporting a different language that shares the same or similar ideals as Rust. Whether that's something already in existence or something entirely new.
Rust isn't really that unique; there are plenty of other safe languages out there. And if Graydon had been alone in wanting something like Rust, then Rust wouldn't have grown in popularity like it has.
Rust exists because enough people thought there was a need for Rust to exist. So if that wasn't Graydon with Rust, then it would have been someone else with something else.
This isn't meant to take anything away from Graydon or Rust. Just saying that innovations seldom happen in silos. They're usually the result of teams of people lusting for change.
I think Rust was helped by being part of Mozilla, and really helped when they got experienced devs (the guys who made Ruby's Bundler) to build the Cargo package manager pre-1.0.
And helped a bit when they took a lot of stuff out of the stdlib into packages for the new package manager.
And helped a lot by a heavy focus, pretty early on, on great compiler messages (inspired by Elm) and by a focus on tools and documentation more generally.
Like a lot of things in life, Rust was in the right place at the right time to get popular. I do think the deep want for something better and safer than C++ helped, but they made a lot of good choices (not necessarily the best choices, but good enough choices) and had some money backing them. I think it was far from inevitable that some other language to compete with C++ would have come out anytime soon if Rust hadn't been around (and hadn't made good enough choices). It might have happened, but decent chances it wouldn't have.
> I think Rust was helped by being part of Mozilla, and really helped when they got experienced devs (the guys who made Ruby's Bundler) to build the Cargo package manager pre-1.0
That's one of the reasons Rust became as widespread as it is now. However, we are talking about a "what if Rust never existed" scenario. I'm confident that in that kind of scenario we'd be talking about a different-yet-similar language, maybe one that never got invented in our version of reality, with the same or similar forces helping it along.
My point is that people wanted a successor to C++. So it was going to happen. In our reality it was Rust. But if Rust hadn't been created by Graydon, then someone else would have created something else to fill that void.
> I think it was far from inevitable that some other language to compete with C++ would have come out anytime soon if Rust hadn't been around (and hadn't made good enough choices). It might have happened, but decent chances it wouldn't have.
I very much disagree with this assumption. We have D, Objective-C, C#, Zig, Go, OCaml and others born out of the need to iterate and improve on what came before. But nothing had really caught on in the domain of safety + the zero-cost abstractions principle. And particularly not aimed at C++ devs. It's been a contentious point for years -- a void people have been looking to fill. So it was only a matter of time before something caught on.
But this is all hypothetical. Plus if you subscribe to the many-worlds interpretation of quantum mechanics, then arguably we're both right :D
Rust was helped by being a Mozilla language, and some of the personalities it had around it.
The big plus of the language was proving that the Cyclone ideas for improving C, from the AT&T research project, were sound and could be made mainstream.
And now other languages are building on it as well, that is why Swift, Chapel, Haskell, OCaml, D are also having a go at a mix of linear types, affine types and effects.
However many folks credit Rust for type system features that are actually available in any ML derived language, or Ada/SPARK, so it isn't as if knowledge is that well spread.
> Rust was helped by being a Mozilla language, and some of the personalities it had around it.
Indeed. But my point is there was already widespread movement behind building such a programming language. So if Mozilla hadn't taken charge, then I'm certain someone else would have.
My point is that Rust was born from a wider desire for change rather than that desire existing because of Rust. Thus that desire would have been met in one form or another regardless of the invention of Rust.
Yes. There are a very small handful of early adopters in the year 2025 for a feature ostensibly added in C++20.
So, like I said, modules don’t exist in practice and I’d be shocked if in 2030 modules were considered normal.
C++11 was pretty game changing. C++14 and C++17 only took a few years to reach widespread adoption.
It’s very safe to require C++17 today. C++20 was a little slower and because of the modules fuckup it’s a bit inconsistent. But it’s largely fine to use.
C++23 probably needs another year or two. But also C++20 and beyond haven’t added much that’s worth upgrading for.
Like I said, it is a matter of point of view, and yes such is the karma of ISO driven languages with multiple implementations, when one cares about cross platform code.
There are many folks that don't care though, for them it is "one platform, one compiler, language standard is whatever my compiler allows me to do, including extensions".
I am also quite bullish on the opinion that C++26 might eventually be the last standard - not that WG21 will stop working on new ones, rather that it is what many will care about when using C++ in a polyglot environment, as is already the case on mobile OS platforms, the two major desktop platforms, and in distributed computing (the CNCF project landscape).
> C++20 and beyond haven’t added much that’s worth upgrading for.
std::format is pretty nice (although not yet available on Ubuntu 24.04 LTS).
Lambda capture of parameter packs is actually huge!
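E.g. (C++20; use() is a stand-in):

template <class... Args>
auto defer(Args&&... args) {
    // capture the whole pack by value, forwarding each element
    return [...args = std::forward<Args>(args)]() { use(args...); };
}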
And ... I think it still remains to be seen what the outcome of modules will be.
One hopes (against hope) that the big payoff for modules will be in the tool-ability of C++. IDE support for languages like C#, Java, and TypeScript is vastly superior to C++ IDE tooling. Perhaps. Maybe. Modules will provide a path that allows that to change. I don't think the benefits of modules have yet fully played out.
Ironically, C++ had such tooling in the past but it got lost, a bit like Roman technology as the Empire fell.
Visual Age for C++ v4.0 had a Smalltalk-like experience with database storage for the code, and Lucid Energize C++ already had something that people now know as LSP (Cadillac was their implementation), with incremental compilation and linking (at method/function level).
They failed commercially due to high prices and hardware requirements.
We have had C++ Builder for decades for GUI RAD development, Delphi/VB style, but due to how Borland went after the enterprise and various changes of hands, very few are aware that it exists and its capabilities.
C++ Builder with VCL was Java/.NET before these were even an idea looking for an implementation.
Problem now is that C++ has become a specialized tool for high-performance code, language runtimes, drivers and GPGPU, so you write 90% of the code in Java/C#/nodejs/..... and then reach out to native libraries, for various reasons.
Still, CLion, Visual Studio, and C++ Builder are quite good as far as development experience goes.
> This is the sort of change that adds complexity to the language but reduces complexity in the code written in the language. We take those
An admirable statement of policy, but I'm not sure it's possible. Adding complexity to the language means there are more gotchas and edge-cases that a programmer must consider, even if they don't use the feature in question.
It depends on a case-by-case basis; I wouldn't generalize it to every case. As a daily C++ engineer, I think overall many features added over the years have mostly been positive. There are features that I don't use, and I don't think that really affects much. That said, I do get the sentiment of the language becoming too syntactically complex.
I like this feature as string formatting is something frequently used and this certainly looks cleaner and quicker to write.
> Adding complexity to the language means there are more gotchas and edge-cases that a programmer must consider, even if they don't use the feature in question.
Since this is C++, this is not a problem we have to consider
This is a meme by now, yet it isn't as if Python 3.13 is as simple as Python 1.0, Java 23 versus Java 1.0, or .NET 9 with C# 13 versus .NET 1.0 with C# 1.0 and a Framework reboot...
C# already has enough material for a pub quiz, and no, not all of it is syntactic sugar; much of it requires deep knowledge of the .NET runtime and the way it interacts with the host platforms.
I imagine you never went too deep into unsafe, cross language interop, lambda evolution since the delegate days, events infrastructure, pluggable GC, RCW/CCW, JIT monitoring, the new COM replacement, how the runtime and language features differ across .NET Framework, Core, .NET MicroFramework, UWP, AOT compilation, Mono, .NET standard versus Portable Class Libraries, CLS friendly libraries,...
On top of that, there are all the standard frameworks that are part of a full .NET install with Visual Studio, of which most C# developers are expected to have at least some passing knowledge of how to use them.
For other readers - more than half of these are irrelevant.
Writing general-purpose application code rarely involves thinking about the implications of most of these (save for Native AOT as of late, I suppose).
Writing systems C# involves an additional learning curve, but if you are already familiar with C++, it comes down to understanding the correct mapping between features, learning the strengths and weaknesses of the compiler and the GC, and maybe doing a cursory disassembly check now and then, if you care about it.
Many of those keywords, as you call them, are part of a technical interview at any .NET consulting shop worth their business.
And while I expect any junior not to know half of them, anyone claiming to be a senior better have an answer, regardless of what I throw at them.
Naturally I don't expect anyone versed in desktop frameworks to master backend and vice-versa, but they better know the bits that relate to desktop in that case, across the whole stack.
The original comment was about the divergence of the complexity of a language and the complexity of programs implemented in the language. I think the comment you replied to with all its keywords and jargon beautifully illustrated the point
No one of sane mind accesses even one tenth of these on a daily basis.
They simply do not matter. For example - CLS compatibility, seriously? I'd return the favour and ask the interviewer why they disagree with the .NET team's stance that this lost relevance in early .NET versions, more than a decade ago.
There are main framework and features to be aware of, there are some that may be relevant to legacy codebases you must avoid like fire, and there are those to which the only appropriate response would be "this never existed, if it did, forget about it".
(to Pjmlp - please do not equate knowing the terms with understanding them, and stop bringing up whatever was left by wayside of history to people who should have no business being bothered by this nonsense, thank you)
Yes of course, and I mean it somewhat seriously - C++ engineers are used to the language being too complex for anyone to completely understand. It's worth some more incremental language complexity to support niceties like fstrings without additional overhead.
How would this work with internationalized strings? Especially if you have to change the order of things? You'd still need a string version with argument ordering, I would think.
The question was, how would you use this if you have i18n requirements. Format strings are normally part of a translation. I think the bad answer is to embed the entire f-string for a translation as usual, except this can't work because C++ f-strings would need to be compiled. The better answer is, don't use f-strings for this because you don't want translators to monkey around with code and you don't want to compile 50 versions of your code.
Even if you told them, "just copy the names from the original string" it's still asking for trouble, and maybe even security holes if they don't follow instructions. But the biggest problem with the idea is surely that the strings need to be compiled.
Do what? Allow translators to reorder the appearance of arguments in a translated format string? It's a completely routine (and completely necessary) feature when doing translations.
C++ also has std::format, which was introduced in C++20. This is just sugar on top of it, except it also returns a container type so that printing functions can have overloads that format into a file or stream directly from an f-string, instead of going through the overhead of a temporary string.
I wonder what this mysterious application is that is doing heavy formatting of strings but can't afford the overhead of a temporary string, and therefore requires horrifying, inscrutable, and dangerous language extensions.
Being able to use string formatting without a heap is pretty cool.
Rust's string formatting machinery does not require any heap allocations at all; you can, for example, impl fmt::Write for a struct that writes directly to a serial console byte-by-byte with no allocations, and then you have access to all of Rust's string formatting features to print over a serial console!
I'm not sure about the horrifying and dangerous extensions part though, I'm not really a C++ expert so I don't know if there's a better way to do what they want to do.
For example, a high-performance logger can ship the tuple object to a background thread for the actual formatting and I/O, after converting the capture to by-value.
Formatting on the foreground thread would be a non-starter.
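A hedged sketch of that pattern (enqueue and write are assumed helpers):

template <class... Args>
void log_async(std::string fmt, Args... args) { // everything taken by value
    enqueue([fmt = std::move(fmt), ...args = std::move(args)]() mutable {
        write(std::vformat(fmt, std::make_format_args(args...))); // format off-thread
    });
}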
Boost is not allowed because of its complexity. So some people disallow Boost; here is the solution: just add the complexity directly to the language definition!
Making any changes to the core language is a sensitive thing as it inevitably imposes new demands on compilers, a learning curve for all users of the language, and risks breaking compatibility and introducing unforeseen issues that will need to be fixed with future changes to the language.
Personally, I'd much prefer a smaller and more stable language.
Learning curve can decrease as a result of better design; same re. the chance of those unforeseen issues (and it can even decrease the chance of existing bugs popping up).
But they got the type decay right without introducing further user-defined conversions, unlike this proposal. The syntax is ad hoc, thus so should be the typing rule.
I am tired of PDFs. They should have a dedicated website for presenting C++ proposals so everyone can comment and discuss. Reading GitHub issues is more enjoyable than reading PDFs.
I agree that we should have safe-by-default "decay" behavior to a plain ol' std::string, but I'm also picking up that many aren't certain it's useful syntactic sugar on top of the fmt lib? Many other languages have this same syntax, and it quickly becomes your go-to way to concatenate variables into a string. Even if it didn't handle UTF-8 out of the box, so what? The amount of utility is still worth it.
I'm going to make an asinine prediction: we will still be exploring f-strings in future languages in 100 years' time, encountering the same problems and questions.
I still use printf semantics in Python 3 despite trying to get with the program on symbolic string/template logic. I don't need to be told it's better; I need some Philip K. Dick-level brain rewiring not to reach for
"%d things I hate about f-strings\n" % (int(many()))
It's not broken (try it!). Any value is interpreted as an implicit 1-tuple if it's not a tuple nor a dict. A better example would have been `"..." % many()` where `many` returns a tuple or dict.
When I saw the title I thought "F-strings" might be some novel variant of P-strings. I was disappointed that this is just about formatting. I really would prefer safer string handling in modern C/C++.
F-strings is one of my favorite features of Python to be honest.
That doesn't automatically mean it's a good idea in C++ (knowing C++, there are gonna be a whole lot of gotchas which aren't in Python), but it means that, at least in my opinion, how f-strings worked in Python is an argument in favor of them rather than against them.
Looking at Ruby strings, it seems not. They have at least two special syntaxes: escaping and interpolation. They also seem to have some weird "<<-" thing, so that's another special thing. That's at least three things so far. Oh, and %q strings are a thing. Four things.
Swift has "normal strings", #"raw strings"#, and """ for multiline strings. For interpolation, it uses "The answer is \(1+2)", but in a raw string you need #"The answer is \#(1+2)"#.
So compared to Python, string interpolation is always "on" and doesn't need an f-prefix. Because it uses the string escaping syntax, it doesn't have to take over a regular character like {, which requires {{ escaping.
Is this supposed to be better than python? "Every string is an f-string, make sure you don't accidentally miss some interpolation." sounds like a step down, not like an improvement to me!
There are two errors you could make in Python. Accidentally using {} in a normal string where you wanted interpolation, and accidentally using {} in an f-string where you wanted literal {}. I definitely do the former a lot.
Yes it does, because normal strings aren't just a special case of raw strings when the number of #s is 0. The presence of #s change the string syntax. Otherwise you could as well say Python has one syntax, with optional f and r prefixes.
No, you can't, because r/f changes and/or adds escape sequences, AND you can't have double-raw strings, for example. In Swift there are 0, 1, 2, 3, 4, etc. numbers of #. It's not raw strings at all! There's no -1 number of #. So for example, to do regexes in Python you normally use r-strings, to avoid \ escaping in the regex. In Swift you could do the same the opposite way: not by removing \ escaping, but by changing it to \#. And if you want to regex-match for \#, you can use ##. And if you want to match for \## you can use ###, etc.
There is always a clean escape where you can write the literal that you want to write. Unlike in Python where there is no escape (amusingly).
Yeah, I really missed ubiquitous C preprocessor macros in C++, so let's bring them back, but now inside string literals. Sweet.
Seriously, I just keep being amazed that people are running with the idea of having a full-blown untyped and unchecked formatting mini language (that's what libfmt, which became C++20 format, literally calls it) inside string literals — i.e., the part of the source code that you're specifically telling the compiler to not treat as code.
Format strings in C++ are checked completely at compile time. There are no hacks or compiler intrinsics involved (like what C does for printf to verify format strings).
Eh? C++20 format is checked at compile-time. This has been possible ever since string literals became constant expressions. These features are within the standard compile-time capabilities. People have done impressive compile-time parsing and codegen using it.
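E.g.:

auto ok = std::format("{}", 42);           // format string checked at compile time
// auto bad = std::format("{:d}", "text"); // ill-formed: rejected at compile time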