
I've been engineering videogames in C++ for almost 20 years, working on AAA games, and I can tell you that modern C++ is not very appealing to the vast majority of our industry. Modern C++ is overcomplicated, and from what I can see all the best software engineers I've met in my career write very simple C++98 code, maybe a bit of C++11, but that's it. They keep it simple. They don't need a move constructor because they've already been clever enough to solve the problem from a different angle. Modern C++ is for experienced C++ developers, and it's solving problems that experienced developers already know how to solve without all this extra complexity. And because of that it's keeping away young developers... This new C++ syntax is just a gimmick that we don't need. The C++ community seems to be feeding on itself, creating a new standard every couple of years. It's just disappointing...



C++98 is a miserable language; I have no idea who would actually be resistant to improving it. I fucking hated it. C++14 and beyond are such a relief. Move semantics aren't fun, but they are necessary.

This kind of claim is extremely common on HN but utterly foreign to someone who actually writes C++ for a living.


I don't like replying to this type of comment, but I'd like to point out that you're replying to someone who likely grew up writing 6502 assembly (or the like). Furthermore, even mediocre engine programmers have an extremely good grasp of memory management et al.

I also don't think they're arguing cxx98 is a good language, especially by today's standards.

The argument they're making is that the tack C++ has taken in more modern revisions is actually _worse_ than writing code the shitty 1998 way. At least for the axes games care about.

You mentioned move semantics, which is a great example. For a lot of game/engine code, move semantics are just adding a pile of complexity for negligible gain. I.e., why not just use a pointer?


> The argument they're making is that the tack C++ has taken in more modern revisions is actually _worse_ than writing code the shitty 1998 way.

They didn’t have any argument. Just old man yelling at cloud.

> You mentioned move semantics, which is a great example. For a lot of game/engine code, move semantics are just adding a pile of complexity for negligible gain. I.e., why not just use a pointer?

Because it is error prone.


Once you're passing pointers around, you have to manually track lifetimes of things they point to.
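To make that concrete, here's a minimal sketch (Texture is a made-up type): with a raw pointer the signature says nothing about who owns the object, while a unique_ptr taken by value makes the handoff explicit in both the signature and at the call site.

    #include <memory>

    struct Texture { int id; };

    // Raw pointer: does the callee take ownership? The signature
    // doesn't say; you have to read the docs or the implementation.
    void register_texture(Texture* tex) { (void)tex; }

    // unique_ptr by value: ownership transfer is spelled out in the type.
    void register_texture(std::unique_ptr<Texture> tex) { (void)tex; }

    int main() {
        auto tex = std::make_unique<Texture>();
        register_texture(std::move(tex)); // transfer is visible at the call site
        // tex is now empty; the handoff point is explicit in the code.
    }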


I too have written C++ for years---I worked on online game server frameworks---and you are partially right: C++98 is not for mere mortals, but we still tended to use only the parts of C++11 and C++14 we saw fit. This is the point: not every new C++ feature is relevant to us, and the relevant fraction is ever diminishing.


> we still tended to use only the parts of C++11 and C++14 we saw fit.

I think everybody does this and it's totally ok. Nobody needs all of C++.


C++98 is the reason Java came to power.


No; Java's success is entirely due to Marketing.


> Java's success is entirely due to Marketing.

Congratulations, this is the stupidest thing I have read this week.

Of course the 20 years of enormous popularity and huge success in multiple industries must be due to marketing. I mean, what else.


Congratulations, you just proved that you know nothing about why a product succeeds/fails in the market.

It was the initial push (with gobs of money) by Sun that gave Java its momentum. No language was ever pushed so hard in the marketplace by any other organization. Without that push, the language would never have achieved the popularity it enjoys today. There is nothing inherently "superior" about the Java language. Far better languages have fallen into obscurity because they did not get the publicity that Java did (e.g. Eiffel).


That is not how logic works. "Better languages with less marketing are less popular than Java" means that "marketing matters" and that "quality isn't everything", not that "only marketing matters".


And step 1 in marketing a new language is a cute mascot, obviously.


Oh no. I was there. Cross platform C++ was hard. Cross platform Java was easy.


That wasn't it; there was nothing revolutionary about it. It was the most hyped/marketed language in history[1]. Sun threw ungodly amounts of money at marketing it and making it what it is today. Invented as an "Embedded Systems" language, pushed as a Browser "Applet" language, moved to a "Server App" language, and settled as an "Enterprise App" language.

[1] https://www.theregister.com/2003/06/09/sun_preps_500m_java_b...


I think the fact that it eventually ended up as a server app language took everybody by surprise - Sun never saw it as anything like that at inception I'm sure.

Adding JDBC and later nio is probably what got it there the most.


Yes. It was originally meant to be used for interactive television. It then transitioned to the web and to cross-platform GUIs. It then transitioned to servers.


And rewriting Distributed Objects Anywhere from OpenSTEP/Objective-C into Java EE.


Java was a reaction to C++. Go was too. C++ feels like it was created by the smartest person in the room without regard for the other 80% that would use it on a day-to-day basis.

I guess that's why I like simpler languages more, where I don't have to think about how the language or compiler is going to treat my code.


Rust was also a reaction to C++, and is a move in the opposite direction from Go.

Go is OK. It's a little faster to get a first version working, but the result tends to be slower and more likely to crash on memory/concurrency issues than a Rust version would be.

I use both; Go is fine for spaghetti ~architecting~ devops-ing piles of microservices together. Rust is much better for the data path though.


How do you like Rust for say SaaS type services? JSON marshalling/unmarshalling, socket behavior, etc? HTTP request/response processing?

I spent the last two months coding in Go pretty much not enjoying it. Nil is not always equal to nil in third party libraries through an interface -- and the compiler wouldn't warn about it.


Not OP, but if you have gripes about type safety, you're likely to enjoy Rust. The compiler will absolutely let you know if you are not handling optional or result types. As a bonus, they are built using regular generic and enum constructs with a little bit of syntax sugar to make them more ergonomic.

serde is also world class for serializing/deserializing, and can generate implementations for you based on struct definitions independent of data format (json/bson/yaml/toml/etc).

The only sharp edge you may bump into for a web service is in your choice to go either sync or async, and if you go async you must then choose a runtime (usually tokio) as these are libraries and not integrated into the std lib beyond just the `async` keyword and a few traits.


My hope for Rust is that they take their async coloured functions back to the drawing board, because they are really not fun to deal with, and improve the FFI story with C. An easy interface with C and its weird memory rules is of utmost importance if we want to replace C with something a little more solid.

But yeah, serialization in Rust is a breeze compared to Go. It's probably one of the things I hated the most, along with the "err != nil" boilerplate.


I've been working in the async space for a few years now, and it may just be survivor bias, but while I think there are definitely some issues, I'm still largely happy with it. It's still evolving, after all. If they can get the cancellation issues sorted and async functions in traits shipped, that would be a good place to be.

What were your issues with the C FFI? That usually gets praise from people.


> Not OP, but if you have gripes about type safety, you're likely to enjoy Rust.

Not type safety so much as program correctness. Type safety is just one aspect of it. Golang is "type safe" but still panics on nil pointer accesses.


It was a reaction to SunView failing in the marketplace. Sun wanted to give Gosling a new project to keep him around, so they had him write Oak, a language for set-top devices. That eventually became Java. Andreessen was so jealous of the marketing money Sun was throwing at it that he named his language JavaScript to get a free ride on that.


> This kind of claim is extremely common on HN but utterly foreign to someone who actually writes C++ for a living.

OP said:

>> I've been engineering videogames in C++ for almost 20 years, working on AAA games

It seems like you're implying that every single business domain uses C++ the same way that your business domain does. Instead of insinuating OP is a common HN commenter that doesn't write C++ for a living, which seems to be an incorrect assumption, maybe you should recognize that not every business domain uses the same subset of C++ as you?

Your comment just reads as an arrogant opinion hiding behind a false sense of authority. It sounds like you're saying, "anyone that disagrees with my opinion must be some hobbyist programmer that doesn't know what a real programmer that does this for a living actually codes like". As if coding in a language for your job automatically implies that the code will be of a higher quality than hobbyist code. It's gatekeepy and gross imo.

I've worked professionally with disgusting C++11 code that was an amalgamation of code from some people that clearly knew what they were doing, and a lot of code that seemed to be from people that had no clue what was really happening. They had no clue because of all the hidden complexities in modern C++ that Herb Sutter is trying to reduce.


The fact that these standards (?) or versions (?) of C++ are so wildly different as to summon these types of opinions is mind-blowing to me. I like languages like F#, Elixir, Erlang, etc. that have cores that basically never change and just add quality-of-life improvements, and so I have never experienced this. C++ seems like such a minefield that C++ developers have little in common with how other C++ developers work, experienced or not.


C++ people have a large propensity to complain, though. We're more than a decade after type inference, one of the largest QoL improvements possible, was introduced, and some people still claim it was a bad thing and that all types should be written down in their entirety. Do you imagine this in your language's communities? I have had discussions about this with dozens of C++ programmers in real life.
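For what it's worth, the usual flashpoint in those arguments is something like the following, where the spelled-out iterator type adds nothing that auto doesn't already carry:

    #include <map>
    #include <string>
    #include <vector>

    int main() {
        std::map<std::string, std::vector<int>> scores;

        // C++98 style: the full type, written out (and back then you
        // also needed a space in "> >")
        for (std::map<std::string, std::vector<int>>::const_iterator
                 it = scores.begin(); it != scores.end(); ++it) {}

        // C++11 style: same information, inferred
        for (auto it = scores.begin(); it != scores.end(); ++it) {}
    }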


> Do you imagine this in your language's communities?

Absolutely! C# introduced "var" back in 2008, and to this day there are people arguing that you shouldn't use it except when it's outright impossible to spell out the type.

From what I hear, it's the same story with "var" in Java 10.


FWIW, as a professional C++ programmer since I graduated university 7 years ago, I haven't really had that experience.

There's certainly a large learning gap, both for "classic" C++ and for the popular modern features. But once you get over that, I haven't found other people's C++ any harder to read than other people's C or JavaScript or Python.

No one's reaching for esoteric features like "..." or atomics unless they're some low-level library wizard (in which case all their friends are also low-level library wizards) or the situation actually calls for it.


Java was like that for so many years; it was fundamentally unchanged for decades, really. And now (for the last 5 years or so) it's changing rapidly; new code just doesn't look like old code ("var" type inference, lambdas rather than inner classes in many cases, multi-line strings with """, new switch statement syntax, etc.). I mean, I like it so far, but it's kind of weird to be honest!


This comment is classic "not even wrong".

>This kind of claim is extremely common on HN but utterly foreign to someone who actually writes C++ for a living.

The above comment is equally applicable to their comment.


> This kind of claim is extremely common on HN but utterly foreign to someone who actually writes C++ for a living.

This should be a huge klaxon for the C++ community. Whether or not it's true, it's a massive problem for C++ one way or the other.


In database engines, where C++ is pervasively used, modern C++ is a vast improvement over legacy C++. It is much simpler and safer. Writing a current C++20 database kernel in the equivalent legacy C++ would require several times more lines of code, and that code would be much more difficult to maintain aside from the much greater volume. A database engine implementation makes very good use of modern C++ features, the idiomatic legacy C++ equivalents were often very ugly and very brittle. I've done both.

I have never worked on games -- maybe that domain has simpler internals -- but there are important categories of C++ software outside of games that unambiguously benefit immensely from the modern C++ features. In databases it is common to ride as close to the bleeding edge of modern C++ as is feasible because the utility of the new features is so high.


> Modern C++ is overcomplicated, and from what I can see all the best software engineers I've met in my career write very simple C++98 code, maybe a bit of C++11, but that's it.

I'm sorry but this assertion does not pass the smell test.

Sticking with C++11 means you do not have std::make_unique, and any claim that these "best software engineers" not only fail to use smart pointers but also refuse, for no reason at all, to even consider instantiating a std::unique_ptr in a safe, standard way lacks any credibility.
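(For what it's worth, teams stuck on C++11 usually just drop in the well-known one-line backport, essentially what C++14 later standardized:)

    #include <memory>
    #include <utility>

    // The widely circulated C++11 stand-in for std::make_unique.
    // (The real C++14 version also handles array types; this sketch doesn't.)
    template <typename T, typename... Args>
    std::unique_ptr<T> make_unique(Args&&... args) {
        return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
    }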

> Modern C++ is for experienced C++ developers

It really isn't. "Modern" C++ is just the same old C++ with useful features that improve the developer experience (see aggregate initialization for starters, nested namespace definitions, utf8 character literals, structured bindings, etc.) and don't require developers to resort to in-house trickery passed around through tribal knowledge to implement basic features (move semantics, constexpr, etc).
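A couple of those in one place, for anyone who hasn't seen them; nothing exotic, just less ceremony:

    #include <map>
    #include <string>

    namespace game::render {        // C++17 nested namespace definition
        struct Color { int r, g, b; };
    }

    int main() {
        game::render::Color c{255, 128, 0};     // aggregate initialization

        std::map<std::string, int> hp{{"orc", 40}};
        for (const auto& [name, points] : hp) { // structured bindings
            (void)name; (void)points;
        }
    }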

> It's just disappointing...

Speak for yourself. It's fantastic that people continue to improve the language and make everyone's life easier, instead of being stuck in the C++98 mud. Each and every major standard release since C++98 has brought huge productivity and safety improvements from which everyone stands to benefit. Even standardizing stuff from Boost and the like is a major step forward.

It's totally fine that you personally prefer to not benefit from any of the improvements that sprung in the past two decades, but don't presume for a minute that you represent anyone beyond yourself when making Luddite-like claims.


> Sticking with C++11 means you do not have std::make_unique, and any claim that these "best software engineers" not only fail to use smart pointers

Pretty much every non-trivial C++ engine I've seen has its own equivalents for memory management. Even a game engine development book I bought in ~2001 (meaning it was written before then) had a chapter dedicated to implementing smart pointers.


Which doesn't mean that they are any better than the provided standard implementations.

They are there because it makes sense to have them. Now you don't have to build and maintain your own.


> Which doesn't mean that they are any better than the provided standard implementations.

That standard implementation is a thin wrapper over malloc(). That standard malloc() is not necessarily good enough. The performance is not great, but worst of all, when you have many small objects they are scattered all over the address space. Chasing many random pointers is expensive.

While it’s technically possible to use custom memory management with std::unique_ptr via that second template argument with a custom deleter, it complicates things. The code often becomes both simpler and faster when using custom smart pointers instead.

That’s not specific to videogames; it applies to all performance-critical C++. The standard smart pointers are good for use cases with a small count of large, long-lived objects. For a large count of small things the overhead is too large, so people usually do something else instead.
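To illustrate "that second template argument" for anyone who hasn't fought it, here's a rough sketch with a made-up Pool type. The deleter becomes part of the pointer's type and every handle drags a Pool* around, which is exactly the complication being described.

    #include <cstddef>
    #include <cstdlib>
    #include <memory>
    #include <new>
    #include <utility>

    // Made-up pool; stand-in bodies so the sketch compiles.
    struct Pool {
        void* allocate(std::size_t size) { return std::malloc(size); }
        void  release(void* p)           { std::free(p); }
    };

    struct PoolDeleter {
        Pool* pool;
        template <typename T>
        void operator()(T* p) const {
            p->~T();            // destroy in place
            pool->release(p);   // hand the memory back to the pool
        }
    };

    template <typename T>
    using pool_ptr = std::unique_ptr<T, PoolDeleter>;

    // Error handling omitted; a custom smart pointer (or a plain pointer
    // into an arena) avoids both the fatter type and the stored Pool*.
    template <typename T, typename... Args>
    pool_ptr<T> make_pooled(Pool& pool, Args&&... args) {
        void* mem = pool.allocate(sizeof(T));
        return pool_ptr<T>(new (mem) T(std::forward<Args>(args)...),
                           PoolDeleter{&pool});
    }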


> That standard malloc() is not necessarily good enough.

The operative word is "necessarily".

Let's not fool ourselves by claiming that all memory allocations take place in hot paths, and that all conceivable applications require being prematurely optimized to the extreme because they absolutely need to shave off that cycle from an allocation.

Meanwhile people allocate memory in preparation for an HTTP request, or just before the application is stuck in idle waiting for the user to click the button that closes the dialog box.

It makes absolutely zero sense to proselytize about premature optimization when no real world measurements are on the table.


Let’s not pretend all conceivable applications are, or should be, written in C++.

People mostly stopped using C++ to develop web servers which handle web requests, because they moved to Java, C#, PHP, Ruby, Python, etc. People mostly stopped using C++ to develop GUI apps which handle these buttons, because they moved to Java, C#, and now JavaScript/TypeScript.

What’s left for C++ is software (or sometimes individual DLLs consumed from other languages) which actually needs it to achieve the required performance. Even though the language is unsafe, low-level, and relatively hard to use, which directly affects software development costs.


But at the same time, straightforward C++ code with no tricks is still orders of magnitude faster than Python or PHP, and usually faster than Java and C#. So you still don't need custom allocators etc to be "good enough" most of the time.


I agree about Python or PHP.

However, for Java or modern C#, in my experience the performance is often fairly close. When using either of them, very often one doesn’t need C++ to be good enough.

Here’s an example, a video player library for the Raspberry Pi 4: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo As written on that page, just a few things are in C++ (GLES integration, audio decoders, and a couple of SIMD utility functions); the majority is in C#. Still, compared to the VLC player running on the same hardware, the code uses the same CPU time and less memory.


> However, for Java or modern C#, in my experience the performance is often fairly close.

Aren't you contradicting yourself? You started off complaining that malloc is not good enough, but now it's suddenly OK to tolerate Java's and C#'s performance drop relative to C++?

Which one is it?


> *Let’s not pretend all conceivable applications are, or should be, written in C++.*

This is a discussion on C++.

> People mostly stopped using C++ to develop web servers which handle web requests, because they moved to Java, C#, PHP, Ruby, Python, etc.

I'm not sure you understood what I said, or thought things through.

By the way, the top-performing web framework in the TechEmpower benchmarks is a C++ framework which uses C++'s standard smart pointers.

https://github.com/drogonframework/drogon

Also, one of the most popular web frameworks for Python started off as an April Fools joke. I'm not sure what your point is.

Lastly, the main reason why C++ ceased to be the most popular choice in some domains is that it had been, for a very long time, the most popular choice in those domains, and it still remains one of the most popular choices. Part of the reason C++ dropped in popularity is that some vendors decided to roll their own alternatives while removing support for C++. Take, for instance, Microsoft, which was once responsible for making C++ the only tool in town for professional software development. Since it started pushing C# for all sorts of web applications, multi-platform applications, and even desktop applications, and also pushing the adoption of those technologies as a basic requirement to distribute apps in its app store, developers can only use the technologies that exist. But does that say anything about the merits of C++?


> I'm not sure what your point is.

Over time, it became less important for C++ to be a good general-purpose language. When performance of idiomatic C++ is good enough, using C++ is often a bad idea: it delivers comparable performance to C# or Java, but it’s more expensive to use. While technically C++ has desktop GUI frameworks, web frameworks and others, they aren’t hugely popular: due to development costs, people typically prefer higher level memory safe languages for that stuff.

For use cases like videogames, HPC and similar, C++ has very little competition, because that level of performance is borderline impossible to achieve in other languages. It’s for these use cases people care about costs of malloc, cache-friendly RAM access patterns, and other things which are less than ideal in idiomatic C++.


Ironically Microsoft is the only OS vendor for mainstream platforms that still ships a GUI SDK that gives tier 1 treatment to C++ with WinUI, and even then the tooling is really clunky (back to VC++ 6.0 COM days).

On the Apple and Google side that ship has long sailed, with C++ used only on the lower OS levels, and as basis for MSL.

Naturally there are still the GUIs for game consoles left (although the Xbox dashboard uses React Native, with the previous generation being UWP, and the PS4 famously used WebGL), and special-purpose embedded devices.


For the longest time I've written my own smart pointers to manage the lifetime of pretty much anything that needs a cleanup - database connections, query objects, file handles, threads, mutexes and yes raw pointers.

All of this code built just fine on pre c++11 compilers, ran reliably, was performant and easy to maintain.

Rolling your own smart pointers is not something I'd discourage.


It might be easy to maintain to you. Somebody who has to pick up that codebase later would have to spend time and effort figuring out all those custom smart pointers and their idiosyncrasies. If all they do in the end is the same as unique_ptr & shared_ptr, it's all wasted time.


> Somebody who has to pick up that codebase later would have to spend time and effort figuring out all those custom smart pointers and their idiosyncrasies.

If we're still on the topic of game engines, someone who cannot easily figure out a smart pointer implementation has no place working with the game engine's code in the first place.

Unreal Engine has its own implementation for that stuff and it took me literally minutes to get to grips with it - same with the custom engines at companies I worked at before. This has absolutely never been a real problem in practice.


To be fair, video games are different to a lot of other types of software, in the sense that performance is absolutely critical but (certain types of) bugs can often be excused.

One could see why it's more important for a game dev to have a language that forces your mental models to more closely match what the hardware is actually doing, rather than using abstractions that make it less likely that you'll introduce bugs.


I am not the guy you're replying to, but I have 10+ years in the area.

You are voicing something that was true a very long time ago. In a world where you must be cross-play and cross-platform to cover enough players, there is no room for "what the hardware is actually doing". Only the few games made by platform holders with exclusivity in mind are still there.


Games that use RAD game tools tend to ship a bunch of platform-specific SIMD code.


So do most games using any middleware. Unreal's math, string and core libraries use platform specific instructions, and do so in a c++ friendly way. That's not a justification for writing a bunch of hyper low level dangerous code to run a raycast!


yep, middleware authors have more time to write SIMD code because they have fewer moving targets like platform generations. Nobody needs to read that code anyway ^^


Agreed. If I want to write maximally performant code I'll write my own abstractions and use C or very little of C++.

If I want to write fairly performant code quickly I'll reach for C++ and use more of the toolbox.


I've been programming professionally in C++ for 15 years and can say I wholeheartedly disagree. I'm looking forward to whenever our engineering team adopts a newer compiler version in our legacy codebase (sadly we're stuck at C++17 for now).

Whenever I move from the old-style C++98 code units to newer modern C++ ones I breathe a sigh of relief. It's beautiful, safe and modern; it reads like poetry.

If I get a code review where someone manually calls new/delete I will fail that code review.

That said, I agree this proposed "new syntax" is ugly and unreadable and unnecessary.


Well put!

Much of the C++ code out there is still C++98, and a lot of the "so called problems" had already been solved by using established code patterns (not GoF). "Modern C++" has added a lot of complexity but not much benefit; its fanboys always use hand-wavy phrases like "much nicer" and "makes programming simple"(!) which mean nothing. After the introduction of the STL and the explosion in the usage of "Generic Programming", the only thing the language needed was Concurrency support for Synchronous/Asynchronous programming. Instead the ISO committee went off the rails into "modernizing" the language to try to make it like Python/whatever. The result is that experienced C++ developers are very cautious about adopting "Modern C++" while the larger "know little/nothing" crowd are raucously running around making all sorts of unwarranted claims.

IMO, today the greatest enemy of the language is the ISO committee itself.


I am not sure you have used C++ that much.

Claiming that move semantics, structured bindings, lambdas, designated initializers, non-implicit self, the override marker for virtual functions, [[nodiscard]], coroutines, modules, smart pointers, safer bit-casting, non-ambiguous bytes (std::byte), scoped enums, delegated and inherited constructors, constexpr, consteval, and much, much more are not improvements for day-to-day work... well, that shows me that you must not have used it that much.


Nice list ... but that doesn't mean you understand what you have listed. Your list reads like what a noob "Modern C++" programmer would focus on (keywords and features to get through an interview) rather than any long-term (I have been programming in C++ since the early nineties) real work experience.

Lots of good C++ based systems have been written before any of the above existed. That proves that they are not "needs" but merely "wants". Only some of the items in the above list are worth adding to the language, while others are just noise (for example, keywords like "override"/"default"/"nodiscard" just add syntactic noise rather than any actual benefit). The members of the ISO C++ committee simply pushed through their pet "wants" for brownie points.


When override was introduced it immediately solved a few hidden bugs here, so it definitely has value. I'd say that when doing development with inheritance it catches at least a bug or two a month at compile time. Same for nodiscard: I started using it recently and immediately found bugs. So for me their value is infinite.
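The nodiscard case in miniature (try_connect is a stand-in function):

    [[nodiscard]] bool try_connect() { return false; }  // stand-in body

    int main() {
        try_connect();            // compiler warns: result discarded --
                                  // exactly the silently ignored error
        bool ok = try_connect();  // fine, the result is used
        return ok ? 0 : 1;
    }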


As a counter-anecdote, I have not found any uses for the above and yet my code runs perfectly fine. That just reinforces my point that they are "wants" and not "needs".


Well, that is ok. It depends a lot on your codebase.

The fact is that, without it, you have to check by hand or by inspection what the compiler can tell you, right? That saves time and effort, especially when refactoring.

Go refactor the signatures of 2 or 3 virtual functions with, let us say, a couple of parameters each, in two classes and 4 or 5 derived classes, one two levels deep (this is real code I am talking about), and do it with and without override.

Try to measure the time it takes you with and without the override keyword. You could use some refactoring tool, fair. But you do not have that available in every context.


>Go refactor the signatures of 2 or 3 virtual functions with, let us say, a couple of parameters each, in two classes and 4 or 5 derived classes, one two levels deep (this is real code I am talking about), and do it with and without override.

This is just trivial and does not really support your arguments.

The way it was done before was to use the "virtual" prefix keyword for virtual functions across the entire class hierarchy. Merely a discipline which was enforced religiously via coding guidelines; it gave the programmer the needed cue while the compiler didn't care. You then did a Search and Replace as needed.


How can you claim you do C++ and not even know that virtual did NOT check whether it was an override of what you wanted or whether you were introducing an entirely new function? Your understanding of C++ is quite poor. You have a wrong mental model even of how virtual functions worked. There was no way for the compiler to say whether you were overriding or not, hence no way to emit an error!! No religions here, just objectively better features. I bet you did not refactor hierarchies pre- and post-C++11, otherwise you would not say that. It is not that trivial, and marking overrides catches all unintended fake overrides. Before, that was not possible.

It seems suspicious to me that you claim to have used C++ for so long. You do not even understand how virtual functions worked in the language.


You are making inflammatory statements without understanding what has been written; it is not appreciated.

I said;

>to use the "virtual" prefix keyword for virtual functions across the entire class hierarchy.

What is meant is that the entire signature of the virtual function, including the "virtual" keyword, is reproduced in all derived classes, thus ensuring that no mistakes are made.

I also said;

>it gave the programmer the needed cue while the compiler didn't care

It was just good programming discipline rather than depending on compiler crutches.

Just to drive it home; here is an example: https://stackoverflow.com/questions/4895294/c-virtual-keywor...


> The way it was done before was to use the "virtual" prefix keyword for virtual functions across the entire class hierarchy. Merely a discipline which was enforced religiously via coding guidelines

No, that did not enforce safety or solve the refactoring problem I told you about. It seems you do not want to hear that override fulfills the case where you assert that you are overriding (and if you are not, it is a compile error). You have a misunderstanding and a wrong mental model of how virtual worked. It did not enforce anything; it just declared a virtual function, no matter whether it was new or an override. virtual can introduce a new virtual function by accident.

Example:

    class MyClass {
    public:
       virtual void f(int) {}
    };

    class MyDerived : public MyClass {
    public:
       virtual void f(int) {}
    };
Refactor:

    class MyClass {
    public:
       // NOTE: signature changed to double
       virtual void f(double) = 0;
    };



    class MyDerived : public MyClass {
    public:
       // FORGOT TO REFACTOR!!! STILL COMPILES!!!
       virtual void f(int) {}

       //void f(int) override {} // COMPILE TIME ERROR!
    };

> You are making inflammatory statements without understanding what has been written; it is not appreciated

No, I was not. I was just showing, with facts, that you do not know how the mechanism works.

Above you have an example of why what you say does not work. I would recommend talking more concretely and not making overly broad statements about tools you do not seem to know in detail, but it is up to you; I would not stop you. Just a friendly recommendation ;)


Have you really not understood what was written, or are you just arguing for the sake of it? I cannot spell out every step of trivialities.

btw - Anybody who has been following this chain of responses can see who is the one using inflammatory language to hide their lack of comprehension and knowledge.

For the last time;

1) We did not depend on compiler crutches to help us.

2) We enforced coding discipline religiously so that all virtual functions are unambiguously identified with full signatures across the entire class hierarchy.

3) When you needed to refactor, you did a simple search/replace on the full signature using grep/ctags/cscope/whatever across the entire codebase.

That is all there is to it.

This might be the most valueless discussion I have had on HN, and that too over an utter triviality... sigh.


> 2) We enforced coding discipline religiously so that all virtual functions are unambiguously identified with full signatures across the entire class hierarchy.

> 3) When you needed to refactor, you did a simple search/replace on the full signature using grep/ctags/cscope/whatever across the entire codebase.

That can still break in a ton of ways. For example, if the grep pattern is not correct. ctags/cscope also do not fully understand C++, AFAIK. Then you are shifting from debugging your code to debugging your greps, etc.

Not sure how you did it exactly, but I see it as a fragile practice, because it relies on absolute human discipline.

> This might be the most valueless discussion I have had on HN, and that too over an utter triviality... sigh.

Sorry for that. Feel free to not reply. But do not take it to the emotional terrain so fast. I just argued since the start that C++ improvements are not an accumulation of "pet features" for the sake of it. It is you who presented that as facts without any kind of evidence in the first place.

To be clear, I do not care about C++ that much or anything particularly. But I have used it enough to identify one of your top posts as emotional and non-factual.

Greetings.


Your comment comes across as insincere. It reads like you have realized the triviality involved but are not admitting it, and are still arguing that somehow programming discipline is not enough in this case (do we really need to talk about how to specify grep patterns?).

I should have probably said something like "override is just a compiler crutch for lazy programmers who can't be bothered to be careful in their job" but i thought it might get me banned from HN :-)

>It is you who presented that as facts without any kind of evidence in the first place.

My comments in this thread alone refute your above statement.

>But I have used it enough to identify one of your top posts as emotional and non-factual.

I don't think you have understood the motivations behind the features you claim to have used, how those problems were solved earlier without them, or how to properly judge whether the change was worth it.


> Lots of good C++ based systems have been written before any of the above existed. That proves that they are not "needs" but merely "wants".

By the same argument, plenty of good code was written in C before C++ existed, so the entirety of C++ is "wants" rather than "needs".


Yes, in a very strict sense it is true for many experienced programmers and their domains of expertise. So when an expert embedded-systems guy tells me that he will not touch C++ for his project, I understand, and only ask him to look at C++ as a "Better C" and nothing more. All features are not equal, and some are obviously more beneficial than others, e.g. simple user-defined types, i.e. class value types vs class hierarchies.

A good example is Linus Torvalds' opposition to the use of C++ in the kernel.


I do understand what I listed because I have basically been doing C++ for a living for 13 years, and I started 20 years back at uni.

Of course good systems have been designed, but try to use the STL without lambdas. Or return big values by pointer because you don't have value semantics, with all the associated usability problems. Try to write SFINAE vs concepts, or to write a std::optional type without C++23 explicit self. Or go and add boilerplate for free because you did not have delegated constructors. Or try to build a table inside C++ at compile time pre-constexpr. Do generic programming without if constexpr. I have done all of that myself before C++11, and it was way more difficult to write a lot of code. Of course, under your view anything is "nice to have". Then just grab assembly. But for the people like me that use it, I'd rather see it evolve with useful features.

Those keywords do NOT add noise. They add safety, since defaults cannot be changed. C++ is as good as it can get within its constraints. You can think it is bad, but C++ is an industrial, real-world language with compatibility as a feature.

I can buy that you could disagree with some of the decisions, but mostly it fulfills its purpose well. Herb is just trying to create a C++ 2.0 that is simpler and 100% compatible. It is good, very good, not to start from scratch if you are in industrial environments. You just do not throw away 40 years of code that has stood the test of time pretty well, no matter whether it was written in C or C++.


I am not sure that you have understood my comment, hence let me explain;

I am a longtime programmer firmly in the C++ camp (as an example, see one of my earlier comments here: https://news.ycombinator.com/item?id=27854560). I am also not against the evolution of the language. But what I (and many other C++ programmers) are against is the messianic zeal of the "Modern C++" proponents hellbent on changing the language to be more "user-friendly" like Python/whatever, in the mistaken belief that "C++ programming is now made simpler". By adding more and more features, the interactions between them are now more complicated than ever (i.e. when and how do you use them correctly and efficiently? How do you combine them into a clean design?), making the programmer's job that much more difficult (as an aside, this is also the reason beginning/new-to-C++ programmers give up on the language).

The above problem is compounded because C++ is already a multi-paradigm and multi-usage language, i.e. situated in a coordinate plane with the following two axes.

The paradigm axis:

- a) Procedural Programming, i.e. "better C".
- b) Object-Oriented Programming.
- c) Generic Programming.
- d) Compile-time Programming.

The Usage axis:

- a) Low-level Interface Programming, e.g. MCU programming.
- b) Library Programming.
- c) Application Programming.

Every feature ideally sits at an intersection of the above two axes. Thus, for example, the "auto" keyword from C++11 is best suited to the "Generic Programming" paradigm and "Library implementation" usage (e.g. in the implementation/use of the STL). But what is happening is that the "Modern C++" proponents are using it willy-nilly everywhere (because "hey, I never declared any stinking types in <scripting language>"), making the code that much harder to comprehend. Similar arguments can be raised against a lot of other features. This is the reason many of us are cautious w.r.t. the new features; we know the existing potholes, have worked around them, and have our system under control and in hand. The ramifications of introducing new features just for their own sake are unknown, and that makes us very nervous.


I do understand it, but you put yourself in a false dichotomy at the same time. You say you have to master every corner just because features are added. This is not true most of the time.

Of course you layer features on top of existing things. It is the only way to "simplify" the language and keep it compatible. For example, now you still have raw pointers: no one recommends managing raw memory anymore; the recommended thing is to return a smart pointer or do RAII inside your class (this last one pre-C++11).

How about virtual functions? Oh, now you have to know more (and this actually does not hold true in every context; sometimes it is easier): of course you might need to know more! The recommendation is to mark with override! How about template metaprogramming with SFINAE? No! Hell, NO, if you can avoid it! That is why constexpr, if constexpr and concepts exist!

It is not more difficult to program in the latest standards; it is way easier. What is more difficult is that you need to know more. But you do not need to know absolutely every aspect of the language. They layered on features that are more general so that you stop using the legacy ones, and that is an improvement within the constraints of what C++ is allowed to do right now to stay useful (compatibility).

If what you want is to learn to program in permutations of all styles, including the old ones, in C++ and on all kinds of devices... I do not know people who are experts in all domains and at all levels of the stack, no matter the language. So again, you can use your subset.

> But what is happening is that the "Modern C++" proponents are using it willy-nilly everywhere (because "hey, I never declared any stinking types in <scripting language>"). Similar arguments can be raised against a lot of other features.

You basically said nothing concrete above.

BTW, no one prevents you from using many of the inferior styles. And by inferior I do not mean non-modern; I do not buy "modern" for the sake of it. I use the features that make my life easier, and the same feature or library thing that makes my life easier in one context is the one I discard in another (for example, dynamic memory in embedded, or fancy ranges in videogames that I might need to debug and understand down to the vectorization loop level if I do not have the parallel algos).

You can still, as I told you in the comment above, ignore override, for example, and try the refactoring exercise I described, making your life more difficult along the way.

Or ignore structured bindings in for loops and declare the variables yourself for pairs. Or you can do some fancy metaprogramming with SFINAE and spend 3 hours figuring out what is going on, or why you cannot use it in the parameter type of an overloaded operator and have to use it in the return type, or guess which overload you are selecting, because if there are two difficult things in C++, those are initialization and overloading.

In the meantime, some of us will be using concepts to constrain the overload set or, even better, presenting as the interface the weakest concept a function can fulfill, and using if constexpr inside to choose the optimized paths for each type as needed.

Those are improvements, BIG improvements, in how I write everyday C++ code. The language is big because there is no choice. But good training and taste for what to use and what not to use in your area of work are necessary.


My other response is also applicable here: https://news.ycombinator.com/item?id=32894154


> "modernizing" the language to try to make it like Python/whatever.

I've been writing C++ code professionally for almost 20 years now, and Python for 10 years. I tried to think of any C++ features post-C++98 that would make it "like Python", but I can't think of any. Can you give some specific examples?


We will not go down this path, since everything is highly debatable, but here is an article: https://preshing.com/20141202/cpp-has-become-more-pythonic/

It even states: C++ added tuples to the standard library in C++11. The proposal even mentions Python as an inspiration.


You say a lot of words, but there is no substance. Which “so called problems”? Which “established code patterns”? Why “it means nothing” and to whom?

I can continue.


This is the old trope of asking for proof from one side in a forum where things are discussed freely. I could turn it around and ask you to justify all the new "features" added to, say, C++11..20. I could be here for the rest of time.

But to give a few concrete examples;

>Which “so called problems”?

Apparently "variants" were introduced to solve problems with POD "unions". Mere complexity for not much benefit. Nobody i have worked with ever said "unions" were difficult to use.

>Which “established code patterns?”

RAII as a solution to people harping about memory problems. I have written entire protocol message libraries just using RAII to avoid memory leaks.

>Why “it means nothing” and to whom?

When "Modern C++" proponents say everything should be written with the new features using hand-wavy phrases, "it means nothing" to experienced programmers. If we find a feature useful to model a concept and/or express something we will use it; but always weighed against how complex it is to read and maintain. A good example is template meta-programming taken too far.

The key reason C++ took off was because it added minimal "Zero-Overhead abstraction" constructs over the baseline "low-level and simple" C core. Suddenly programmers could have their cake and eat it too. The evolution of C++ should have continued in the same spirit but instead in a misguided effort to compete with later and differently designed languages a lot of complexity has been added for not much apparent benefit.


Variants are meant to introduce a tagged-union-like data structure into C++.

You can do this without variants by manually defining your own unions and managing the tag yourself, but this is not at all type safe and requires extra boilerplate code to manage!

Maybe you have never used and don't care about this feature but it's actually pretty useful! Tagged unions make representing certain types of data very elegant, for example nodes of an AST or objects in a dynamic programming language. You can use OOP patterns and dynamic dispatch to represent these things as well, but I think tagged unions are a better fit, and you get to eliminate a virtual method call by using std::visit or switching on the tag.

I suspect that maybe you have never been introduced to sum types in general, which is not uncommon! I am curious whether you have experience with using them or not.

https://en.wikipedia.org/wiki/Tagged_union
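For a concrete taste, here's a tiny AST-node-style sum type along those lines. The tag management is generated for you, and std::visit checks exhaustiveness at compile time, which is the part that's painful to hand-roll:

    #include <iostream>
    #include <variant>

    struct NumLit  { double value; };
    struct BoolLit { bool value; };
    using Node = std::variant<NumLit, BoolLit>;  // tagged union, type-safe

    struct Print {
        void operator()(const NumLit& n) const  { std::cout << n.value << '\n'; }
        void operator()(const BoolLit& b) const { std::cout << b.value << '\n'; }
        // Omitting an alternative here is a compile error, not a runtime surprise.
    };

    int main() {
        Node n = NumLit{3.14};
        std::visit(Print{}, n);  // dispatches on the active alternative
    }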


Variants/sum types are not some earth-shattering concept; they have been implemented using tagged unions in C since the beginning of time. At one point in my career I had occasion to write a "spreadsheet-like data structure" (basically a linked list of linked lists) in three different ways, one of which used a tagged union for the data nodes. The point is that people trivially rolled their own when needed and did not clamour for language/library additions.

I have already pointed out in some of my other replies why it is wrong to consider a variant a replacement for a POD union.


I do realize people have been using tagged unions for a long time. Having some library helpers goes a long way in making them more usable and more expressive. Having them built into the language as a first-class feature would be even better, but std::variant is a nice middle ground.

Technically you can implement it manually, but the same could be said about all language features, even in C. We don't actually need structs built into the language; we can just allocate blocks of data and handle the offsets by hand. We don't need functions; we can just push stuff to the stack and use the call opcode. The same goes for various loop constructs: goto can do everything "for" and "while" can do!

I don't think "we used to roll our own X back then" is a strong argument for something being bad or unneeded. Abstractions are all about allowing the computer to do work for us, and making things less error prone and more expressive. This is why we have programming languages to begin with and don't write everything in assembly!


What you are expressing is a Sentiment and not an Argument. First see my other relevant comment here: https://news.ycombinator.com/item?id=32893171

Language design is a fine balance; the addition of Abstraction Features has to be balanced against the cognitive load it imposes on the Programmer. These abstractions also need to be tailored to the computation model supported by the language. For example, the Niklaus Wirth school of language design was famous for insisting on minimalist languages; you only added features if they were "needed" to support the model of computation, and all extraneous/trivial features were omitted. C++ took the opposite route from the beginning, which was ok to a certain extent since it increased the expressive power of the language (e.g. multi-paradigm). But over time the balance is being lost and the cost of cognitive load is outstripping the benefit of an added abstraction. This is bad and not welcome. So, looked at in this light, how many of the features in Modern C++ are essential and how many are extraneous (e.g. Concurrency features = Essential, variants/syntactic annotations/etc. = Extraneous)? That is the fundamental issue.


> Nobody I have worked with ever said "unions" were difficult to use.

I wouldn't trust anyone to write the correct boilerplate for

    union { std::string s; int v; }; 
unless they'd do just this all day long
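For reference, the boilerplate in question looks roughly like this, which is what std::variant generates for you:

    #include <new>
    #include <string>

    // Hand-written sketch of (part of) std::variant<std::string, int>.
    struct StringOrInt {
        enum class Tag { Str, Int } tag;
        union {
            std::string s;  // non-POD member: allowed since C++11, but...
            int v;
        };

        StringOrInt(int x) : tag(Tag::Int), v(x) {}
        StringOrInt(const std::string& x) : tag(Tag::Str) {
            new (&s) std::string(x);                 // ...construct by hand...
        }
        ~StringOrInt() {
            if (tag == Tag::Str) s.~basic_string(); // ...and destroy by hand
        }
        // Copy, move, and assignment are implicitly deleted; each needs
        // the same tag-checking dance written out manually.
    };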


std::string is not a POD.


... yes? One still needs to put it in sum types sometimes. This is what std::variant solves.


But that is the point; "union" is supposed to be a POD; if not, all language guarantees are off.


> But that is the point; "union" is supposed to be a POD

no it's not. it's fine to have non-POD unions - but you have to be careful to call constructors and destructors of non-PODs explicitly. Thus variant, which automates that.

Also, I didn't talk specifically about unions, but about sum types. There is a need for saying that an object X can be of type A OR type B, no matter the properties of these types.


>it's fine to have non-POD unions

Only from C++11 (or is it later?). So a problem was created by relaxing the existing requirements for a "union", to which a solution was then proposed by adding "variant"? Something which had no runtime overhead (but UB) now has runtime overhead.

Regarding your point about "sum types", agreed.


You mistakenly assume I am pro new "features".

I'm just pointing out that you didn't have anything concrete in your messages.

> Apparently "variants" were introduced to solve problems with POD "unions". Mere complexity for not much benefit. Nobody i have worked with ever said "unions" were difficult to use.

https://stackoverflow.com/a/42082456

First link on Google: how is moving away from undefined behavior, and not needing to manually declare the type of the union (sic!), "mere complexity for not much benefit"?

> Nobody I have worked with ever said "unions" were difficult to use.

Sure, enough people say that about for and while loops while avoiding functional patterns like the plague.

> RAII as a solution to people harping about memory problems. I have written entire protocol message libraries just using RAII to avoid memory leaks.

Which "modern, overhead features" are solved by RAII?

> When "Modern C++" proponents say everything should be written with the new features using hand-wavy phrases, "it means nothing" to experienced programmers. If we find a feature useful to model a concept and/or express something we will use it; but always weighed against how complex it is to read and maintain. A good example is template meta-programming taken too far.

Who are these experienced programmers and how do you define those? Your circle of people?

> If we find a feature useful to model a concept and/or express something we will use it; but always weighed against how complex it is to read and maintain.

Modern features are literally easier and less complex than the old way. Like in the variant example.

> A good example is template meta-programming taken too far.

And how often "modern c++ proponents" suggest meta-programming to solve ordinary problems? Because everywhere I've encountered C++ discussions, those were resorted either for internal usage or suggested to avoid at all.

> The key reason C++ took off was because it added minimal "Zero-Overhead abstraction" constructs over the baseline "low-level and simple" C core.

There's no key reason C++ took off. It took off because it took off.

> The evolution of C++ should have continued in the same spirit but instead in a misguided effort to compete with later and differently designed languages a lot of complexity has been added for not much apparent benefit.

There's a reason why those differently designed languages were defined differently. Maybe instead of blindly hating evolution, try understanding the reasons behind it.


And you have mistakenly assumed that I am "blindly hating evolution". The distinction I make is between "needs" and "wants"; much of what has been added in Modern C++ is "wants".

>First link on Google: how is moving away from undefined behavior, and not needing to manually declare the type of the union (sic!), "mere complexity for not much benefit"?

Because you have not understood the definition and guarantees of a "union"; and UB is not always a bad thing. A "union" is explicitly defined to be a POD with all that entails. The stackoverflow answer does not provide anything new. If you want something more, you code it explicitly when needed. No need to burden the language; in fact it creates more problems, because "variant" does not guarantee layout compatibility.

>Sure, enough people say that about for and while loops while avoiding functional patterns like the plague.

Of course; familiarity and clarity always trump "new patterns".

>Which "modern, overhead features" are solved by RAII?

The harping on "never use naked pointers" in your code.

>Who are these experienced programmers and how do you define those? Your circle of people?

Of course; It should be the same for you and everybody else too!

>Modern features are literally easier and less complex than the old way. Like in the variant example.

This is what we are debating; it is not a fact, though you seem to assume it is.

>There's no key reason C++ took off. It took off because it took off.

There is always a tipping point. In C++'s case it was compatibility with C and new language constructs for higher level abstractions.

>There's a reason why those differently designed languages were defined differently.

Exactly; each starts with a Computation Model and evolves a syntax for that model. The evolution should not be willy-nilly dumping everything and the kitchen sink into a language. C++98 was complicated enough but still manageable, but what has happened from C++11 onwards is just too much complexity, requiring even more effort from experienced programmers. You cannot hand-wave it away by saying "Modern C++ is a whole new language so forget the baseline C/C++ cores", which is quite silly.


>They don't need a move constructor because they've already been clever enough to solve the problem from a different angle.

How would you implement something like unique_ptr without move semantics?

Can you give an example of a different angle being used to avoid needing move semantics?


Exceptional engineers I've come across universally don't use smart/unique/whatever_ptr. A common theme is that they employ allocation strategies that avoid the need for 'micro' memory management in favor of 'macro' management strategies. Allocating large blocks of memory at a time and freeing it all at once is one specific example.


Having worked on a few decently large C++ codebases, I can confidently say that this is not how I view things. Shared pointers might be pretty drastically overused by some programmers but have use cases, and I think unique pointers are pretty invaluable.

I can't imagine writing a long-running, memory-conscious, and fast C++ program that uses whatever 'macro' management strategy you envision.


This is pretty simple actually! At startup, create some object pools, arenas, or bump allocators. Then, never heap allocate anything ever.

We also happen to do 0 other steady-state syscalls (not just 0 mmap, munmap, mremap, etc.), so we can just run the program under valgrind and any time valgrind prints something is a bug.

We didn't use C++ but some other software I've heard uses a similar approach is written in C++.
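A minimal bump allocator, to make the idea concrete (a sketch, not production code; align must be a power of two, and the capacity is picked up front as described):

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <utility>

    class BumpArena {
        std::byte*  base;
        std::size_t cap;
        std::size_t used = 0;
    public:
        explicit BumpArena(std::size_t capacity)
            : base(static_cast<std::byte*>(std::malloc(capacity))), cap(capacity) {}
        ~BumpArena() { std::free(base); }

        void* allocate(std::size_t size, std::size_t align) {
            std::size_t at = (used + align - 1) & ~(align - 1); // round up
            if (at + size > cap) return nullptr;                // arena exhausted
            used = at + size;
            return base + at;
        }
        void reset() { used = 0; }  // "free" everything in O(1)
    };

    template <typename T, typename... Args>
    T* create(BumpArena& a, Args&&... args) {
        void* mem = a.allocate(sizeof(T), alignof(T));
        return mem ? new (mem) T(std::forward<Args>(args)...) : nullptr;
    }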


Arena allocation and smart pointers tackle fairly different problems; not sure why you're conflating them. I'm using both of them extensively on a fairly large code base (>10M) on a daily basis. Without smart pointers, I'm confident that engineers would need to spend 2x more time figuring out the actual ownership of pointers.


In principle, with arena allocation you don't need to care about ownership at all, or at least you only care about it at the arena boundary.


If you're working on a nice code base that has very clean boundaries across teams and infrastructures, have the arena boundaries follow that principle, and the system has relatively straightforward lifetimes and ownership, then yes. Obviously this is not true for so-called "large scale software systems". Have you tried to bounce objects across arenas thanks to all the teams that wanted to "optimize" and "simplify" their memory allocation? Good luck debugging that.


Why would you track ownership of individual objects if you're not going to free them one by one?


You can also allocate an arena of memory and hand out a unique_ptr<std::span> for each CHUNK_SIZE slice of it.

So I also don't see how the two are related.


The point of arenas is usually to not care about ownership for object graphs that are allocated in the same arena, and to avoid immediate destruction for said objects.

The typical pattern is to create an arena whose lifetime is the lifetime of some important operation in your program (say, serving a single request or processing a single frame), do all of the logic of that operation via simple allocation in the arena (ideally bump-pointer allocation), don't free/delete anything at all, then when the operation is finished, just free the arena itself (or don't, and reuse it for the next operation by just resetting the pointer).

This implies, or at least allows, a very different style of writing C++ than usual - no need for tracking ownership, so no RAII, so no constructors or destructors, so no smart pointers.

Of course, you can also write this kind of code with RAII and smart pointers (with nop destructors for non-resource objects), using placement new to construct objects inside the arena and so on. But it's not necessary, and some find it easier to avoid it.
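
A hedged sketch of that per-operation pattern (FrameArena and Particle are illustrative names; alignment and capacity checks omitted):

    #include <cstddef>
    #include <new>
    #include <vector>

    // Minimal bump-pointer arena.
    struct FrameArena {
        std::vector<char> buf;
        std::size_t used = 0;
        explicit FrameArena(std::size_t n) : buf(n) {}
        void* alloc(std::size_t n) { void* p = buf.data() + used; used += n; return p; }
        void  reset() { used = 0; }   // "frees" every allocation in O(1)
    };

    struct Particle { float x, y, vx, vy; };

    void simulate(FrameArena& arena, int frames) {
        for (int f = 0; f < frames; ++f) {
            // everything for this frame comes from the arena: no delete,
            // no destructors, no ownership tracking, no smart pointers
            Particle* p = new (arena.alloc(sizeof(Particle))) Particle{0, 0, 1, 1};
            (void)p;                  // ... per-frame logic would go here ...
            arena.reset();            // end of frame: all of it is gone at once
        }
    }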


I think arenas are more of an alternative to shared_ptr than to unique_ptr.

A std::string is a good example; it acts fairly similarly to unique_ptr. Sometimes you just need to heap-allocate something, work with it and/or store it as a member, maybe pass it to a function or whatever, and then have it cleaned up at the end of the scope or when the object it's a member of is destroyed. I don't think an arena can replace this use case.

If we need something with multiple handles, that doesn't have a trivial and predictable lifetime (such as objects / variables in a programming language interpreter, or entities in a video game), you can reach for shared_ptr or an arena. In the specific example I gave I would certainly prefer the arena, and would implement a garbage collector to deallocate stuff when it became unreachable.

The arena pattern is fairly common in Rust, because in Rust even a reference counted smart pointer doesn't let you bypass the "shared ^ mutable" enforcement from the borrow checker! Having everything owned by the arena greatly simplifies issues with the borrowck, and you can just allow callers to take temporary references and drop them when they are finished with them. There is a crate called "SlotMap" that provides a data structure to assist with the pattern (there is a C++ SlotMap library as well somewhere).

Anyway, I have rambled a bit, but I think unique_ptr solves a different problem than the one you describe; what you describe is more of an alternative to reference-counted pointers (shared_ptr).
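
For the curious, a rough C++ sketch of that slot-map idea, with generational handles instead of pointers (illustrative only; real slot-map libraries are more careful):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Callers hold small handles instead of pointers; stale handles
    // are detected safely instead of dangling.
    struct Handle { std::uint32_t index; std::uint32_t gen; };

    template <typename T>   // assumes T is default-constructible and movable
    class SlotMap {
        struct Slot { T value{}; std::uint32_t gen = 0; bool alive = false; };
        std::vector<Slot> slots_;
        std::vector<std::uint32_t> free_;   // indices of dead slots to reuse
    public:
        Handle insert(T value) {
            std::uint32_t i;
            if (!free_.empty()) { i = free_.back(); free_.pop_back(); }
            else { i = static_cast<std::uint32_t>(slots_.size()); slots_.emplace_back(); }
            slots_[i].value = std::move(value);
            slots_[i].alive = true;
            return {i, slots_[i].gen};
        }
        T* get(Handle h) {   // returns nullptr for stale or erased handles
            if (h.index >= slots_.size()) return nullptr;
            Slot& s = slots_[h.index];
            return (s.alive && s.gen == h.gen) ? &s.value : nullptr;
        }
        void erase(Handle h) {
            if (get(h)) {
                slots_[h.index].alive = false;
                ++slots_[h.index].gen;        // invalidates all outstanding handles
                free_.push_back(h.index);
            }
        }
    };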


Most typically when using arenas, you don't want to pay the cost of deallocation at all while the arena is still alive. So you actually have to "fight" unique_ptr, since you don't want it to call delete or run any destructor, and you essentially get nothing in return.

If you use a bare pointer instead, not only do you get the same guarantees, but now if you need to modify the code to actually share that object, there's no need to modify any references from unique_ptr to shared_ptr.


Yeah! That's what I meant when I said arenas solve a different problem! I don't think I mentioned using unique_ptr with the arena and I didn't mean to suggest that if I did. My point was that arenas are not a replacement for unique_ptr at all, and instead solve a different problem where allocations don't have a simple and deterministic lifetime.

With an arena ideally you can just pass out actual references (T&) to callers instead of pointers!


I believe there are certain situations where one may choose between scope-based memory management and arena-based. Even for the example of the rendering code of a frame, instead of allocating inside an arena, you could choose to allocate an object graph owned by some frame-level object that is deallocated once the frame is rendered.


As someone who got scolded by Jeff Dean once for using smart pointers, I can attest to this.


Thanks for the +1 Justine <3 You're my flippin hero


unique_ptr does not require move semantics to be added to the language.

Boost has a full implementation, boost::movelib::unique_ptr, that is functionally identical to std::unique_ptr but works with C++98 [1].

https://www.boost.org/doc/libs/1_80_0/doc/html/boost/movelib...


According to the docs this is actually emulating move semantics on the compilers that don't support it.

I guess this means that you don't have to have it at the language level to support unique_ptr, but you do need some form of move semantics at least. I think having it built into the language as a proper feature is preferable personally.

https://www.boost.org/doc/libs/1_80_0/doc/html/move/what_is_...

>Rvalue references are a major C++0x feature, enabling move semantics for C++ values. However, we don't need C++0x compilers to take advantage of move semantics. Boost.Move emulates C++0x move semantics in C++03 compilers and allows writing portable code that works optimally in C++03 and C++0x compilers.
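
For a sense of what the emulation looks like in user code, here is a rough sketch based on the Boost.Move documentation (Buffer is an illustrative type, not part of Boost):

    #include <boost/move/core.hpp>
    #include <boost/move/utility_core.hpp>
    #include <cstddef>

    class Buffer {
        BOOST_MOVABLE_BUT_NOT_COPYABLE(Buffer) // disables copies, enables "moves"
        char* data_;
    public:
        explicit Buffer(std::size_t n) : data_(new char[n]) {}
        // BOOST_RV_REF(Buffer) expands to a real rvalue reference on C++11
        // compilers and to a tag-wrapper type on C++98/03 ones
        Buffer(BOOST_RV_REF(Buffer) other) : data_(other.data_) { other.data_ = 0; }
        ~Buffer() { delete[] data_; }
    };

    // Buffer b(boost::move(a)); // the same source works in C++98 and C++11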


C is overcomplicated, and all the really hardcore engineers I've met either write assembler or carve binary machine code into stone blocks with their bare teeth by moonlight.


The problem in the videogame niche seems to be that the debuggability of code has suffered, as well as its speed in debug mode.

There have lately been some improvements on that front, like making move/forward an "intrinsic" from the debugger's point of view. The SIMD interactions also seemed to be a problem for certain loop vectorizations.

As for "you are clever because you do not use move constructors": I can be clever and use assembly instead of C, but I am not sure it would be the right choice in all scenarios. If you have a tool and can take advantage of it, just make use of it.


That's an interesting perspective. I started writing C++ after C++11 came out and personally I can't even imagine writing C++ without auto, move semantics, lambdas, variadic templates, std::unique_ptr, std::atomic, etc.


I've been working on AAA games for a decade writing C++, and I couldn't disagree more.

An enormous amount of "smart" old-school C++ is at best equivalent to a well-written modern equivalent, and at worst more dangerous, more error-prone, and slower. Anything that uses references as writable out-parameters combined with a bool return value (or no return value) for success is a great example. On my last project, I rewrote a bunch of fundamental Unreal Engine path-handling methods, replacing the old-school bool GetXXX(TCHAR* In, TCHAR* Out); with FStringView GetXXX(const FString& In);

In the process of doing so, I found multiple engine and game bugs that in the best case were logic bugs and in the worst case were memory-safety issues. I also left the original function in place as a wrapper around the new version, with no perf overhead for the call sites I didn't change and a 30% improvement for the ones I did fix. Almost all of that came from no longer requiring null checks, leaning on RVO and move semantics, and avoiding unnecessary initialisations in many places in the code. Most of the sites I replaced were calling

    FString ThingStr;
    TCHAR* Thing = <...>;
    if (GetXXX(FooStr.c_str(), Thing))
        ThingStr = FString(Thing);

and most were replaced with

    FStringView Thing = GetXXX(Foo);

> write very simple C++98 code, maybe a bit of C++11 but that's it. They keep it simple.

Assuming, of course, that you're already using a smart-pointer library, modern C++ allows for simpler, more correct, and faster code in many, many places. One example is auto:

    for (map<string, int>::iterator it = map.begin(); it != map.end(); ++it)

becomes

    for (auto& entry : map)

nullptr now exists as a replacement for NULL; move semantics make it feasible to write logical code without jumping between logic and type manipulation for basic operations; static assertions exist; enum class; attributes (nodiscard, noreturn, deprecated); constexpr and all the evolutions that came with it; structured bindings; concepts; if/for initializers...
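
For instance, two of those combined in one C++17 snippet (names here are illustrative):

    #include <map>
    #include <string>

    void bump_count(std::map<std::string, int>& counts) {
        // the if-initializer keeps `it` scoped to the if statement, and the
        // structured binding unpacks the entry without first/second noise
        if (auto it = counts.find("hello"); it != counts.end()) {
            auto& [word, n] = *it;
            ++n;   // `word` is the key, `n` is the mutable count
        }
    }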

All of the above are just off-the-top-of-my-head features that I interact with daily, even if I don't write them daily, and all of them make my code and project safer, faster, and easier to understand.

> This new C++ syntax is just a gimmick that we don't need.

I disagree. It has many useful properties. It gave us the spaceship operator, which massively simplifies the code needed to write correct objects: a single well-written spaceship operator can mean numerous algorithms just work on containers of your type, rather than needing custom filter/search operations. The left-to-right syntax for function definitions seems superfluous, but it drastically reduces the amount of work the compiler has to do, which should translate into real-world compilation speedups. The import/include swaps could allow new code to interact with well-behaved old code in a safer manner. As someone else said in this thread, it's not always about one feature in isolation; it's about combining multiple features and the result being worthwhile.
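
As a hedged illustration of the spaceship point, a defaulted operator<=> on a hypothetical type (Version is not from any real codebase):

    #include <compare>
    #include <string>

    struct Version {
        int major = 0, minor = 0, patch = 0;
        std::string tag;
        // one defaulted operator<=> also implies a defaulted operator==,
        // so all six comparisons exist and containers/algorithms can sort,
        // search, and deduplicate Versions with no further code
        auto operator<=>(const Version&) const = default;
    };

With that single line, std::sort over a vector<Version> and lookups in a set<Version> just work.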


And that is why Rust is bad: it solves problems for you, just like modern C++ tries to do, whereas these problems first need to be intimately understood by the engineer before they can be solved. Otherwise you will be faced with lots of unexpected surprises down the line.


I hope you don't forget to prove the whole theory of integration whenever you are doing an FFT.



