Writing Good C++14 by Default [pdf] (github.com/isocpp)
121 points by adamnemecek on Sept 23, 2015 | 81 comments



Every time C++ is discussed here, I feel like I might be the only person on the planet that likes C++ and enjoys working in it. I get excited when Sutter publishes an article about some proposed new feature. I enjoy talking about code with new hires and giving them a set of Scott Meyers' books to read. I'm in awe of what the clever Boost contributors wring out of the language.

I don't think it's a stinking ball of meat. It's a living language with a deep history that's getting better all the time. It's a wonderful tool that I use daily.


Let's be friends! I did a lot of C++ back in the late 90s/early 2000s, and it kind of sat on the shelf for me for a while. I've recently come back and am discovering all of the joys of modern C++(11/14). I'm really really impressed. The first project was a cross-platform library on top of libusb. With all of the great concurrency things built-in now (std::thread/mutex/lock, etc), I managed to write the whole library without a single #ifdef! Generates Linux .so, OS X .dylib, and Windows DLLs. Amazing!
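
To give a flavor, here's a toy sketch (illustrative only, not code from the actual library) of the kind of thing that used to need platform #ifdefs and now doesn't:

    #include <mutex>
    #include <thread>

    std::mutex m;
    int counter = 0;

    void work() {
        std::lock_guard<std::mutex> lock(m); // RAII: unlocks on scope exit
        ++counter;
    }

    int main() {
        std::thread t1(work), t2(work);      // portable threads, no pthread/Win32 #ifdefs
        t1.join();
        t2.join();
    }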


>I managed to write the whole library without a single #ifdef! Generates Linux .so, OS X .dylib, and Windows DLLs. Amazing!

Could you expand on how you are doing this? You're still using a Makefile or a similar tool with some kind of platform detection for the build itself, right?


Not sure whether the parent does it like this, but you can use cmake to describe the build, and then (for instance) run cmake --build to build the project in a 'cross-platform' way using the platform's native compiler toolchain; it works the same on Windows, OS X and Linux. I think what the parent meant with 'no #ifdefs' is that the C++11 std lib comes with more modules that make a lot of platform-specific code paths unnecessary (for instance std::thread and std::atomic for threading, and std::chrono for high-resolution time measurement).


Yup, you nailed it. cmake for handling the builds, and the C++11 stdlib covers every platform-specific thing I needed. The only platform-specific thing I needed to do was point the CMakeLists.txt at the correct libusb (libusb.so, libusb.dylib, libusb.dll). Absolutely beautiful.

Edit: to elaborate a little more about cmake... You can use cmake to generate both Makefiles and Visual Studio build files.

On Windows:

mkdir build; cd build; cmake .. -G "Visual Studio 12 2013"; cmake --build .

On OS X or Linux:

mkdir build; cd build; cmake ..; make
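
And for completeness, a minimal CMakeLists.txt along these lines might look like this (the target and source names are made up for illustration, not my actual project):

    cmake_minimum_required(VERSION 3.1)
    project(mylib CXX)

    set(CMAKE_CXX_STANDARD 11)               # one standard, every platform

    add_library(mylib SHARED src/mylib.cpp)  # yields .so / .dylib / .dll as appropriate

    find_library(LIBUSB usb-1.0)             # the only platform-ish bit: locating libusb
    target_link_libraries(mylib ${LIBUSB})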


As the old saying goes: "There are languages that everybody complains about, and there are languages that nobody uses" ;)


You are not alone.

It is my favourite language for large scale and high performance projects.


I like it too and I'm happy to be using C++11 for software development.

It's mature, standardized, cross-platform and has a good tool & library ecosystem. When it comes to shipping something, I'll take C++ any day over the language that just turned 1.0 or the other one which doesn't have a debugger and can barely optimize. I guess there are those non-sexy languages you use to get stuff done and the ones you blog about.


It's good that C++ is evolving, but I'm not sure if it evolves in the right way.

Basically, consider a ball of meat. The meat is rotting, nobody would eat it anymore, so fresh meat is slapped on it. And the process repeats numerous times, for years.

In the end, you'll have a giant ball of meat, that'll look fresh, but once you poke the ball, you'll be hit by the stench of the rotten meat inside.

I'd prefer if they started anew, rather than keep slapping fresh meat on the ball.


This looks to me like making C++ more "rusty" which is a good thing IMHO, exactly for the reason that I don't need to bring all my existing code to a new language but instead can make it 'safe' in incremental steps.

There are many good reasons to evolve a language instead of inventing a new one, the situation is a little bit like with OpenGL: 'modern OpenGL style' has nothing in common with the original OpenGL, and many ideas have been introduced in past versions which are now considered bad or dead. Still I can take the OpenGL Gears sample (which is who-knows how old) and run it on a modern GL driver.

This is both a good and bad thing. The bad is that it is confusing to learn OpenGL if you haven't been following its entire history. Googling OpenGL questions will give you 10 different answers, all correct, but only for a specific time in its history. The good thing is that this approach allows me to bring a large piece of GL code incrementally to modern standards without having to rewrite everything.

Of course there are the new fancy 3D APIs (just like there are fancier statically typed, compiled languages compared to C++), but in general, people are underestimating the risks of rewriting a large piece of code (this is still just as true as when it was written 15 years ago: http://www.joelonsoftware.com/articles/fog0000000069.html)

What I would like to see in C++ is to have very detailed control over language features and enforce coding guidelines in my own code base. The more fine-grained this control, the better. Something like 'don't allow pointer arithmetic', 'don't allow C arrays', 'don't allow multiple inheritance', etc... Going with your example: as soon as the rotten meat layer is reached, the compiler would no longer accept the code unless you specifically allow it with an exception to the 'modern rule-set'.

A combination of such rules could result in a 'safe subset' which would have the same compile-time guarantees as Rust, and if the rules are violated, the compiler will complain (I think/hope this is what the presentation is mainly about).


    What I would like to see in C++ is to have very detailed control over
    language features and enforce coding guidelines in my own code base.
Clang/LLVM are meant for this. I haven't personally written anything that parses the AST yet, but there are tools that do (clang-modernize, clang-format, YCM, etc...).

What I think some people are overlooking is the practicality of C++. From a language standpoint, there are various inconsistencies, multiple ways to do things, arguably dubious design choices, etc..., but there isn't a genuine reason those can't be avoided.

I personally would probably prefer writing all my code in Haskell, but C/C++ is still vastly more productive.


>What I would like to see in C++ is to have very detailed control over language features and enforce coding guidelines in my own code base.

Maybe something similar to JavaScript's "use strict" would be good, including the ability to restrict it to function/class/source file scope.


We already have that: `gcc -Wall -Wextra`, and a few more flags for specific things that some projects find too restrictive.


Assuming "that" means "very detailed control over language features" I would say no, these flags don't provide that.

Only very few flags warn or disable the use of entire language features. Most are warnings against dangerous ways of using some of the features.
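
To be fair, a handful of flags do come close to banning a feature outright; a sketch of what's possible today with GCC or Clang (flag availability varies):

    # promote feature-banning warnings to hard errors:
    #   -Werror=old-style-cast  -> no C-style casts
    #   -Werror=vla             -> no variable-length arrays
    g++ -std=c++14 -Wall -Wextra -Werror=old-style-cast -Werror=vla main.cpp

But it's a short list compared to real per-feature control.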


I believe gp meant JavaScript's "use strict" equivalent in C++, which I would say is comparable to the compiler flags.


"use strict" isn't comparable to warning flags at all, because it changes the behavior of code. I only brought it up because it can be applied selectively within function scope, not specifically for what it does.

I think what we need is a way to selectively disable language features. Changing the behavior of existing features is a much more questionable thing to do in my view.

The big question is if it's possible to disable some features without affecting how other features must work. It failed miserably for exceptions, but exceptions are a cross cutting sort of concern.


once you poke the ball, you'll be hit by the stench of the rotten meat inside

The whole point of this article seems to be to teach people not to poke the ball by default, unless really really needed, and in such cases to do it while wearing a proper mask.

Which is not ideal (that would indeed be to start over I guess), but given the current situation probably the best they can do.


> The whole point of this article seems to be to teach people to by default not poke the ball

And it's sad that they have to teach people not to poke the ball and reveal the inner decay, rather than remove the rotten meat itself.


All serious languages have unsafe constructs that you can use as an escape hatch, even Haskell or Rust. Keeping the ability to do dangerous stuff in C++ isn't a compromise, it is vital. Building tools that makes it easier to separate the unsafe parts is a good thing, other languages got there first but having the possibility to clean up those billions of lines of C++ is great.

Also, it's time to lay off your metaphor, it obscures the issues and trades just on disgust value.


It's not inner decay. It's just more possibilities that C++ allows, which can give you more power if you know what you are doing.


I love this metaphor. I've been trying to explain to people that the new C++ is just another layer of features on top of the old language. You can't learn C++ by just learning the last layer added.

Modern C++ is just a subset of the language, and nothing prevents the old language from popping up everywhere.


Comparisons of full-spectrum C++ vs Rust or some theoretical C++ that drops support for the old stuff are subtly unfair.

If you are using a new language then you are by definition not maintaining/upgrading old code. If you are starting a new project in C++, you can follow the modern style guidelines and get most everything you are asking for.

At this point, someone usually says "but nothing prevents my horrible coworkers from using horrible features." If your coworkers are horrible, they will find ways to be horrible no matter what language you chain them to.


  > If your coworkers are horrible, they will find ways to be horrible no
  > matter what language you chain them to.
While this is true in some sense, it's also a relative degree of difficulty. While you _can_ obviously write bad Rust code, like in any language, you have to go really far out of your way to disable the compiler's assistance. Of course, there are other kinds of 'bad code' than just unsafe...


> Of course, there are other kinds of 'bad code' than just unsafe...

Exactly. Many languages make it hard to be unsafe. But, every language makes it easy to be horrible in one way or another.



Thank you! That pdf renderer is unbearably slow on Firefox.


my reasoning was that people prefer to read things in the browser over downloading them. afaik there is no way of achieving this only client side without installing a plugin.


I always use native PDF renderers. No need to have the browser turn my PC into an airplane.


It didn't render on my mobile.


Maintaining the legacy code written in the late 90s is the hardest part. It would be great if we could just press a button and the old code would magically become C++14.


My problem is that I work with a bunch of 90s-era engineers and they still want to code that way. It is what they know and they don't want to spend the time to learn any of that newfangled C++11. Most of them are 5-10 years from retirement if not working a shortened week already, so why waste their time.


I haven't actually tried it, but clang's C++ Modernizer tries to automate some of the boring manual tasks in bringing old code up to speed (such as using nullptr instead of NULL or 0, or using the for (T x : c) loop construct). See http://clang.llvm.org/extra/clang-modernize.html (integrated into clang-tidy now)
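
E.g. the kind of mechanical rewrite it attempts, roughly (toy sketch):

    #include <cstddef>
    #include <vector>

    // before
    int sum_old(const std::vector<int>* v) {
        if (v == NULL) return 0;
        int sum = 0;
        for (std::vector<int>::const_iterator it = v->begin(); it != v->end(); ++it)
            sum += *it;
        return sum;
    }

    // after (approximately what the tool produces)
    int sum_new(const std::vector<int>* v) {
        if (v == nullptr) return 0;
        int sum = 0;
        for (int x : *v)
            sum += x;
        return sum;
    }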


Using C++ well has always been a question of judiciously selecting the feature subset that does what you need the sanest way. It looks like the cognitive load of selecting that subset is increasing while the sane subset is shrinking in relation to the bulk of the language.

In other news, downloading a single PDF from GitHub is an atrocious pain in the abdomen. I had to edit the HTML to get at that progenitrix-penetrating URL.


> In other news, downloading a single PDF from GitHub is an atrocious pain in the abdomen. I had to edit the HTML to get at that progenitrix-penetrating URL.

I've usually found clicking on "raw" triggers the download; but possibly doesn't work across all browsers?


Yup, works in Safari. I'll have to remember that.


"As clean and direct as any other modern language ..."

LOL

Honestly I needed to read the C++98 code to understand the modern one. Yes, C++ has advanced but the inherited waste is obvious.

Do other modern languages still operate with pointers?

http://words.steveklabnik.com/pointers-in-rust-a-guide


In your blind love for Rust you seem to forget that it in fact operates with pointers - everywhere.

The thing is that Rust either tracks the lifetime of a naked pointer (similar to std::unique_ptr), or you wrap pointers in objects which facilitate reference counting to track the lifetime (similar to std::shared_ptr).

In the end both kinds operate on generic, unsafe raw pointers - safe only thanks to ownership tracking and "boxing".

The only difference here is that Rust thankfully made it opt-out while C++ unfortunately needs to use opt-in.

But yeah... I'm eager to hear your explanation of how Rust doesn't use those stupid pointers. Maybe you can create a new computer architecture too? I mean, since x86 uses pointers and stuff. Maybe we should use garbage-collected languages for that, huh?
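
To make the C++ side of the analogy concrete (a sketch; I'm not claiming the semantics match Rust's exactly):

    #include <memory>

    int main() {
        auto a = std::make_unique<int>(42);  // sole owner, roughly Rust's Box<T>
        auto b = std::move(a);               // ownership transferred; a is now empty
        *b = 43;                             // b is the owner now
        // *a;                               // C++ compiles this anyway; Rust rejects it

        auto c = std::make_shared<int>(7);   // ref-counted, roughly Rust's Rc<T>
        auto d = c;                          // refcount is now 2
        *d = 8;                              // c and d share the same int
    }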


In the end everything is "unsafe" assembler. "It's all unsafe if we penetrate the abstractions!" is basically "Everybody's naked under their clothes!" Yes, true, but profoundly irrelevant.

Safety is added by the higher layers. If Rust has pointers that are ownership tracked, then it is incorrect to think of them as raw pointers; they are safer than that. Safety is attained by creating compilers/runtimes/interpreters that can not be convinced to execute certain patterns of assembler code, or perhaps require rather explicit labeling.


> In your blind love for Rust you seem to forget that it in fact operates with pointers - everywhere.

I don't use Rust at all :-) I favor Nim, Haskell and Lisp. I am just pointing to Rust as a better alternative to C++14.

> I'm eager to hear your explaination how Rust doesn't use those stupid pointers.

Only in embedded systems, where direct hardware access, memory and performance are issues, do pointers make sense. In all other cases pointers are bad programming style.

http://words.steveklabnik.com/pointers-in-rust-a-guide


I don't use Rust at all :-) I favor Nim, Haskell and Lisp. I am just pointing to Rust as a better alternative to C++14.

This is some quality trolling. How can you recommend something that you don't use?


I used Rust for a while. _Now_ I don't use it because I discovered other languages that work better for my current applications.


I think what makes C++ so challenging for me isn't C++98, C++11, C++14 or whatever comes next. The challenge for me is that I need to know the superset of all the features (and past 'features') at the same time, doubly so to interoperate with existing code or libraries.

The more gets layered on without taking anything away the more difficult it is to reason about your code let alone that of anyone else. And without taking anything away, all the cool new features can't defend you at all, they're just more rope.

The challenge for me is just how much context C++ forces me to hold in my head at any given time. And thanks to operator overloading, I can't trust anything I see -- when you can change the meaning of the comma operator, any piece of code can do literally anything other than what it looks like.
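
A contrived sketch of the comma case:

    struct Sneaky {};
    int operator,(int, Sneaky) { return 666; }  // comma now runs user code

    int f() { return 1; }

    int main() {
        Sneaky s;
        int x = (f(), s);  // reads as "evaluate f(), then yield s"; actually yields 666
        return x == 666;   // true
    }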

Ultimately, I stopped developing in C++ because there's just too much cognitive load for me. In a professional setting you're not coding for yourself, you're coding for you six months from now, and that guy has no idea what kind of tricks you were playing back in the day.


> And thanks to operator overloading, I can't trust anything I see -- when you can change the meaning of the comma operator, any piece of code can do literally anything other than what it looks like.

I always laugh at this complaint specifically against C++.

Except for Go and Java, all other mainstream languages allow for either operator overloading or symbolic names for functions/methods.

Even JavaScript ES7 might get them, http://51elliot.blogspot.de/2015/01/fluent-talk-summary-bren...


There's a difference between overloading `+`, `-`, `<<`, (which must obviously run custom code when applied to custom types), and overloading `,`, copying, assignment, moving, and tons of other "already works on everything" operations.

Both can cause problems when used inappropriately, but the latter is just chaos.


How would you handle deep vs. shallow copying and assignment for classes without overriding those operators?
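
I.e., the deep copy that the overload buys you today would be something like (sketch):

    #include <algorithm>
    #include <cstddef>

    struct Buffer {
        std::size_t n = 0;
        int* data = nullptr;

        Buffer() = default;
        Buffer(const Buffer& rhs) : n(rhs.n), data(new int[rhs.n]) {
            std::copy(rhs.data, rhs.data + rhs.n, data);
        }
        Buffer& operator=(const Buffer& rhs) {  // deep copy on assignment
            if (this != &rhs) {
                int* fresh = new int[rhs.n];
                std::copy(rhs.data, rhs.data + rhs.n, fresh);
                delete[] data;
                data = fresh;
                n = rhs.n;
            }
            return *this;
        }
        ~Buffer() { delete[] data; }
    };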


My answer to that is "you shouldn't want to deep copy on assignment". It's a performance trap and makes the code harder to understand. Without pervasive copy/assignment/move ctors you would be surprised to see how little you actually want to explicitly invoke them (use cases may vary, of course).

Sadly this is the path that C++ has taken.


Shallow copy on assignment is dangerous because it leads toward double frees and therefore chaos in your heap.

Ironically, the original topic (upthread quite a ways) was pointer safety...


Yes, backwards compatibility with C makes this a hard problem, but this is exactly what affine typing fixes: assignment is a shallow copy, but the old copy becomes inaccessible.


I don't think it's "backward compatibility with C". It's that C++ deliberately lets you go down to raw pointers. This lets you write things like memory allocators in C++. Yes, that makes it a hard problem to do only shallow copy, which gets us back to overloading assignment operators.

It sounds like you want C++ to be something other than it is. That's fine, for you. Use something else. But C++ is the way it is for a reason. A lot of people find those parts that you don't like to make it a more useful tool for things that we actually do.


> I don't think it's "backward compatibility with C". It's that C++ deliberately lets you go down to raw pointers.

Being able to be copy-paste compatible with C is "backward compatibility with C".


Yes, C++ is that (almost). But that attribute of C++ is not the source of the problems that we're talking about here.

I mean, yes, in a sense it is, in that C had shallow copy of structs (IIRC), but... let's review, shall we?

pjmlp whined about operator overloading, about how it made code difficult to understand. Gankro specifically singled out copy and assignment overloading as being unnecessary and confusing. jawilson2 raised the question of deep vs. shallow copying (implying that you need to overload copy and assignment in order to easily do deep copy). Gankro replied that you shouldn't want to do deep copy on assignment. I pointed out the problem with his/her view, namely, possible memory corruption. Gankro replied, blaming this on backward compatibility with C. I explained that it's not backward compatibility per se that creates the problem, it's the ability to have pointers. My point was that C++ was deliberately going to have pointers - it wasn't just because of backward compatibility. You argue that C++ is in fact backward compatible. In the context of the conversation, so what? We're not questioning whether C++ is C-compatible. It is, but that's not the point.

The point is, even if C++ were deliberately not C-compatible, if it let you play with pointers (and given the intent of C++, it would) it would still have the issue of shallow vs. deep copy, and overloading assignment and copy would still be the solution.


You have completely ignored the actually interesting point of my argument, and focused on pointless details. In particular, my statement on affine types. Or, in C++ terms, move semantics by default, instead of copy semantics by default.

Raw pointers don't deep copy (what would that even mean?) so clearly you're only talking about structs that impose special semantics on raw pointers. If those structs were affine (assignment was a move which marked the old copy as unusable), then there would be no need to overload assignment for that facet of safety. They would just be in a different place, which would be fine.

The only really special case is having a pointer into yourself (really into yourself, not just into some fixed heap location you own -- basically, caring about your own location in memory). This is the only thing, as far as I can tell, that C++'s approach gains over affinity. Personally I consider it a bit of a nightmarish thing to do without GC (it's a bit nasty even with it TBH), but I know all too well that people will demand anything and everything; especially once they're used to it. Although this wouldn't completely preclude this pattern, it just means you can't wrap it up in a pseudo-safe way -- particularly if you want to pass it to arbitrary client code.

However this would be a massive break from C, which is copy semantics all the way down. I suppose C++ could have further given the `class` keyword meaning by making all classes affine. I dunno if people would have tolerated that kind of difference at the time. Sort of a soft breaking change, yaknow? Then again, maybe overloading `=` is already a breaking change in that regard.

shrug

language design is hard


True, I was not addressing the main point of what you said, and quibbling about your wording on a side point.

So before C++ assignment and copy operator overloading, you'd have a C struct, and assignment was a bitwise copy. If the original is destroyed after that, you're safe. If it's not, and the struct contains a pointer to allocated memory, you're going to have trouble unless you're very careful about who owns the memory.
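
For concreteness, the classic failure mode:

    #include <cstdlib>
    #include <cstring>

    struct Str { char* p; };

    int main() {
        Str a;
        a.p = static_cast<char*>(std::malloc(3));
        std::strcpy(a.p, "hi");

        Str b = a;      // bitwise copy: a.p and b.p point at the same buffer
        std::free(a.p);
        std::free(b.p); // double free: exactly the trouble described above
    }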

But even then, you had situations where you wanted to truly make a (deep) copy. That took a call to a function that knew the structure, and which parts to deep copy. So the need to do that was still there.

So if we move to affine types or move semantics or whatever, the need for deep copy doesn't go away. If you don't have an overloaded assignment operator or copy operator, you're going to have a function of a different name that does the exact same operations. So the need for both operations doesn't go away. But the move/affine approach does make the double-free problem go away (as does the current C++ overloaded copy approach).

> language design is hard

Yeah. Somebody wants every possible action to be doable, and different people want different actions to be easy or the default. Nobody's going to be completely happy.


I would personally argue that a deep copy necessitating an explicit function call is a good thing, though. It makes potentially expensive operations obvious, and improves your ability to reason about program behaviour.


> pjmlp whined about operator overloading, about how it made code difficult to understand

Learn to read, shall you?

I am 100% in favour of operator overloading.


I never said you weren't. I said that your statement about backward compatibility, while true, was completely irrelevant to the conversation.

[Edit: I take it back. I did say you whined about operator overloading, when in fact you were quoting arcticbull.]


Ok


... however, Rust's semantics specifically prevent this double free.


Which, if we were talking about Rust here, might be relevant. But in this thread of the conversation, we're talking about C++ operator overloading and why you need it for deep copying on assignment. In the immediate context, how Rust does it is not relevant.


The discussion is about strategies to avoid the double-free problem in a hypothetical "C++ with changes", and Rust's is one example. Seems fine and useful to give examples. (That's what Gankro's comment is too: Rust's strategy is affine types.)


Could be I missed something, but I looked all the way back to the root of this thread, and I didn't see (in any of the direct ancestors) anywhere where the topic was a hypothetical "C++ with changes".


In Rust, the = operator always represents a shallow copy (for types with move semantics, it will also statically invalidate the source) and there's no way to override this or otherwise permit user-defined code to run. Deep copying is done via a standard method, `.clone()`, which makes it obvious that user-defined code is running.


The languages that allow for symbolic names also apply them to built-in types.


That's all true. But there's a dilemma we can never completely avoid. The more expressive power a language gives you, the more productive you can be when you really know the code in and out. The same expressive power will make it more difficult for others or for your future self to understand what the code does.

It's always going to be a trade-off. The more focused you are on one piece of code, the more it makes sense to use an expressive language. The more you switch between projects, the greater the developer turnover, the dumber the code needs to be in order to stay close to maximum productivity.

If we really want to have an impact on productivity, we need to change how teams are put together and stay together.


What do you mean by having to "read the C++98 code to understand the modern one"? Isn't this how "here is the new way of doing it" examples work? Until you learn the new way, of course you will need to read it in terms of whatever version you do know. The point is that once you do learn it, you can use it.

And "inherited waste is obvious"? If you mean to say that it carries a lot of historic luggage, and you feel encumbered by it, then geez, the document in question was almost tailored for you.

C, D, Go also have pointers available. And, to be honest, thinking that pointers are "obsolete" suggests lack of understanding.

You know what, all your points seem to be against C++. A typical language evangelist.


> And, to be honest, thinking that pointers are "obsolete" suggests lack of understanding.

Who has a lack of understanding? Rust proves that pointers are not actually necessary anymore.

> You know what, all your points seem to be against C++. A typical language evangelist.

What is wrong with criticizing C++? I was a professional C++ developer for a long time but now I am really glad that I don't need to maintain any C++ anymore. At the beginning C++ was a real joy to program in, but it has grown into something which I don't like anymore.

There are other really modern languages that are much cleaner and almost as performant as C++. Rust and Nim, for instance. In Nim I am much more productive than in C++.


Rust has pointers all over the place - one of the core features is memory safety with pointers.


Pointers aren't the problem, null pointers are. Nullity (or the guaranteed lack thereof) should be baked into your type, and you should be forced to handle the null case when accessing a resource.
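
For example, C++ can approximate this with something like the GSL's not_null (a sketch; header and naming details vary by version):

    #include <gsl/gsl>  // Microsoft's Guideline Support Library, assumed available

    struct Widget { void draw() {} };

    void use(gsl::not_null<Widget*> w) {
        w->draw();       // no null check needed: the type rules out null
    }

    int main() {
        Widget wid;
        use(&wid);       // fine
        // use(nullptr); // rejected at compile time
    }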

That said, C++ is moving in that direction, too. For me the difficulty is that to maintain backwards compatibility the compiler can't force you to go with the language, and you can defeat the protections simply by not using them or by casting them away. It's the accrued years of cruft and never telling people to stop doing things they way they have been that encumber C++.

IMO I'd favor a clean break. C++next should just drop support for all the old, unsafe ways of doing things and provide a clean break. If you want to keep doing things the old way, stick to C++17. Moving forward there should only be one way to do things -- the safe way.


> Pointers aren't the problem, null pointers are.

There is also aliasing, nonobvious shared state, and ambiguous object ownership.

Getting rid of pointers is a no-go if you're interested in writing systems software. You need pointers just like you need goto and inline assembler. You do want to make doing unsafe things possible, but it needs to be contained and more obnoxious than doing the right thing.

Rust has one approach to this. Not many other modern languages actually want to let you point to arbitrary memory and start flipping bits, which is exactly what you need to be able to do to write a driver.


If C++ dropped C compatibility, I'm not sure there'd be any reason to use it. You might as well just switch to Rust at that point.


Hi. Please note that that blog post has a very prominent "Rust versions 0.8 - 0.9" at the top. Almost nothing in that post is accurate at this point, and even the stuff that is uses different terms now.

Also, if you don't use Rust and do use Nim, then please use Nim in these examples in the future rather than Rust. Otherwise you make inaccurate comparisons, like you do here.


You can just as well write large pieces of C++ without using pointers or heap allocations at all. In fact this is considered 'good style'. As soon as you have stuff living on the heap, Rust also needs (smart) pointers to track these heap objects, and the compiler is clever enough to guarantee compile-time safety (although I guess this is the main reason why compilation in Rust is so slow), while in C++ these checks are done at run-time (of course compile time checks would be better).


Tracking ownership doesn't make the Rust compiler slow. Compiling Rust is already faster in general than compiling C++ (where the lack of a module system and code explosion from templates is absolutely killer), with the caveat that the Rust compiler hasn't yet implemented incremental recompilation and so subsequent compilations can be slower than subsequent compilations in C++ if your C++ build system is reusing artifacts intelligently. Incremental compilation in Rust is slated for after the middle-end overhaul which is currently ongoing.


Hmm ok, compile speed has probably improved a lot since I last dabbled with Rust. This was about a year ago, and the speed reminded me of a C++ static analyzer, which kinda made sense to me :)

Even in C++ land there are massive differences in compiler speed: Clang on Linux/OSX is easily 10x faster than the Visual Studio compiler without advanced tweaks (however, for some reason, clang on Windows is also very slow).


Yes, when I talk about C++ compiler speed I generally mean Clang, since it's a widely-available high-quality implementation with compiler speed as an explicit priority. In the context of Rust it's doubly relevant since they both use LLVM. Compiler speed wasn't an explicit priority for the Rust developers in the run-up to 1.0, but these days it's essentially top priority (hence the aforementioned middle end overhaul, though it has more benefits than just incremental recompilation), and the speed of the compiler has already more than doubled since 1.0.


> Honestly I needed to read the C++98 code to understand the modern one.

Was that your first time reading C++14? If so are you surprised that there was some learning curve?

> Do other modern languages still operate with pointers?

As much as I'm rooting for Rust to succeed, C++ will be around for quite a while longer so it might be a good idea to make it a bit friendlier.


The problem with modern C++ is that people stick to their programming styles. Many will use some of the new good features but they will also use the old ones (pointers!) because it's so easy to use them. Rust however forces people to accept new programming styles because the legacy ones (pointers!) are not appreciated.


Pointers are great for resource saving, e.g. embedded. Don't underestimate them.


In embedded systems pointers make some sense but not in usual applications. This is how Rust can work:

http://mainisusuallyafunction.blogspot.de/2015/01/151-byte-s...


C++ is the monkey's paw of programming languages. Sure, it'll grant your wishes, but you'll end up paying some terrible price for it in the end.


How is this different from every other language? At least C++ is being maintained gracefully, so one can update code to newer ways of doing the same thing and it still works, versus a language that is dead, where you're stuck leaving it or reimplementing the program completely in something else. Or you're using a language that is new, where version 0.6 completely breaks 0.5, so you can't ever upgrade without rewriting parts of it; and yeah, you should upgrade, because 0.5 has some horrible security vulnerability.



