
Java and Go were both responses to how terrible C++ actually is. While there are footguns in python, java, and go, there are exponentially more in C++.



As a person who wrote Java and loved it (and I still love it), I understand where you're coming from; however, every programming language thrives in certain circumstances.

I'm no hater of any programming language, but a strong proponent of using the right one for the job at hand. I write a lot of Python these days, because I neither need the speed nor have the time to write a small user-facing utility in C++. Similarly, I'd rather use Java if I'm going to talk to bigger DBs, do CRUD, or develop bigger software that's going to be used in an enterprise or similar setting.

However, if I'm writing high-performance software, I'll reach for C++ for the sheer speed and flexibility, despite all the possible footguns and other not-so-enjoyable parts, because I can verify the absence of most of them, and more importantly, it gets the job done the way it should be done.


I've seen a lot of bad C++ in my life, and have seen Java people write C++ like they would Java.

Writing good C++ is hard. People who think they can write good C++ are surprised to learn about certain footguns (the static initialization order fiasco before main, exceptions escaping destructors, etc.).

I found this reference, which I thought was a pretty good take on the C++ learning curve.

https://www.reddit.com/r/ProgrammerHumor/comments/7iokz5/c_l...


> I've seen a lot of bad C++ in my life, and have seen Java people write C++ like they would Java.

Ah, don't remind me of Java people writing C++ like they write Java; I've seen my fair share, thank you.

> Writing good C++ is hard.

I concur; however, writing good Java is also hard. For example, Swing has a fixed, correct initialization/build sequence, and Java self-corrects if you diverge from it, but at a noticeable performance cost. Most developers miss the signs and never fix these innocent-looking mistakes.

I learnt C++ first and Java later. I also tend to hit my code pretty hard during testing (incl. Valgrind memory-sanity and Cachegrind hot-path checks), yet I don't claim to write impeccable C++. Instead I assume I'm worse than average, hunt vigorously for what's wrong, and fix it ruthlessly.


> Ah, don't remind me Java people write C++ like they write Java, I've seen my fair share, thank you.

I always find this remark amusing, given that Java adopted the common patterns of the C++ toolkits that preceded Java.

If anything, they are writing C++ like it used to be on Turbo Vision, Object Windows Library, MPW, PowerPlant, MFC, wxWindows, ...


The remark is rooted mostly in variable naming and code organization. I've seen a C++ codebase handed over to a Java developer who disregarded everything about the old codebase: he didn't refactor the old code, and the new additions were done Java-style. CamelCase file/variable/function names, every class in its own file, ClassName.cpp files littered everywhere; it was a mess.

The code was math-heavy and became completely unreadable and unfollowable. He remarked, "I'm a Java developer; I do what I do, and as long as it works, I don't care."

That was really bad. It was a serious piece of code, in production.


So basically " like it used to be on Turbo Vision, Object Windows Library, MPW, PowerPlant, MFC, wxWindows,..."


The biggest weakness of C++ (and C) is the non-localized behavior of bugs due to undefined behavior. Once you have undefined behavior, you can no longer reason about your program in a logically consistent way. A language like Python or Java has no undefined behavior, so if you have an integer overflow, you can debug knowing that only the data touched by that overflow is affected by the bug, whereas in C++ your entire program is now potentially meaningless.


That is a radically gross misunderstanding of what undefined behavior is and how it can (and mostly how it cannot) propagate.


Memory-write errors (sometimes induced by UB) in one place in the program can easily propagate and surface much later in a very different location, with absolutely zero diagnostics as to why your variable suddenly holds an out-of-range value.

This is why Valgrind, ASan, and friends exist: they move the diagnostic to the place where the error actually happened.
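
To make that concrete, here's a minimal sketch (hypothetical code; the exact effect depends on stack layout and optimisation level, and ASan reports the faulty write itself rather than the weird symptom):

    #include <cstdio>

    int main() {
        int limit = 100;
        int buf[4];
        for (int i = 0; i <= 4; ++i)  // off-by-one: the last iteration writes buf[4]
            buf[i] = 0;
        // Depending on stack layout and optimisation level, that stray write
        // can clobber 'limit'; the symptom shows up here, far from the bug.
        std::printf("limit = %d\n", limit);
    }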


Actually it's not, Chandler Carruth notwithstanding.

If your C++ program exhibits undefined behaviour, the compiler is allowed to format your entire hard drive. Or encrypt it and display a "plz pay BTC" message. That's called a vulnerability. Real and meaningful security checks have been removed as "dead code" because of signed integer overflow (which is undefined behaviour by default).

If anything, I would guess the gross misunderstanding sprouted somewhere between the specs and the compiler writers. Originally, UB was mostly about bailing out when the underlying platform couldn't handle a particular case, or explicitly ignoring edge cases to simplify implementations. Now, however, it's also a performance thing, and anything marked as UB is fair game for the optimiser, even if it could easily be well defined, like signed integer overflow on 2's complement platforms.
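
For the record, a hedged sketch (made-up names) of the shape of check that optimisers have been known to delete, because signed overflow "cannot happen":

    #include <climits>
    #include <cstdio>

    // Hypothetical guard: detect wraparound after adding a positive offset.
    int add_guarded(int base, int offset) {
        int sum = base + offset;             // UB here if the add overflows
        if (offset > 0 && sum < base) {      // "impossible" unless it overflowed,
            std::puts("overflow detected");  // so the whole branch may be deleted
            return INT_MAX;
        }
        return sum;
    }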


> If your C++ program exhibits undefined behaviour, the compiler is allowed to format your entire hard drive. Or encrypt it and display a "plz pay BTC" message.

No, it isn't. That's a fabrication. And if you had a compiler that was going to do that, then what the standard says, or whether there's undefined behavior, is obviously not relevant or significant in the slightest.

The majority of the UB optimization complaints arise because the compiler couldn't tell that UB was happening. It didn't detect UB, let out an evil laugh, and go insane. That's not how this works.

Compilers cannot detect UB and then do things in response within the rules of the standard. Rather, they are allowed to assume UB doesn't happen. That's it, that's all they do. They just behave as though your source has no UB at all. As far as the compiler is concerned, UB doesn't exist and can't happen.

When a compiler can detect that UB is happening it'll issue a warning. It never silently exploits it.

> Real and meaningful security checks have been removed as "dead code" because of signed integer overflow (which is undefined behaviour by default).

Real and meaningful security checks have been removed because the security check happened after the values were already used in specific ways, not because of UB. The values were already specified in the source code to be a particular thing via earlier usage. UB is just a shield for developers who wrote a bug to hide behind, to avoid admitting they had a bug.
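
A minimal sketch of that "check after use" pattern (hypothetical function; the dereference lets the compiler assume the pointer is non-null, so the later guard is provably dead):

    // Hypothetical sketch: the dereference on the first line lets the
    // compiler assume p is non-null, so the later guard is provably dead.
    int read_len(const char* p) {
        int len = p[0];        // use: p must be valid (non-null) here
        if (p == nullptr)      // may be removed as unreachable
            return -1;
        return len;
    }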

Use UBSAN next time.

> even if it could easily be well defined, like signed integer overflow on 2's complement platforms.

Signed integer overflow is defined behavior; that's not UB. Also, platform-specific behavior is something the standard doesn't define; that's why it was UB in the first place.

It is kinda ridiculous that it took until C++20 for this change, though.


> > UB allows the compiler to format/encrypt your entire hard drive.

> No, it isn't. That's a fabrication.

Ever heard of viruses exploiting buffer overflows to achieve arbitrary code execution? One cause can be a clever optimisation that notices that the only way the check fails is when some UB has happened. Since UB "never happens", the check is dead code and can be removed. And if the compiler notices only after the error-reporting passes have run, you may not even get a warning.

You still get the vulnerability, though.

> UB is just a shield for developers who wrote a bug to hide behind, to avoid admitting they had a bug.

C is what it is, and we live with it. Still, it would be unreasonable to say that the amount of UB it harbours isn't absolutely ludicrous. It's like asking children to cross a poorly mapped minefield and blaming them when they miss a subtle cue and blow themselves up.

Also, UBSan is not enough. I ran some of my code under ASan, MSan, and UBSan, and the TIS interpreter still found a couple of things. And I'm talking about pathologically straight-line code where, once you test all input sizes, you have 100% code-path coverage.

> Signed integer overflow is defined behavior; that's not UB.

The C99 standard explicitly states that left shift is undefined on negative integers, as well as on signed integers when the result overflows. I personally had to work around that one by replacing x<<n with x*(1<<n) in carry-propagation code.
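
For the record, a sketch of that workaround, assuming the product itself never overflows (as in the carry-propagation setting described):

    #include <cstdint>

    // x << n is undefined for negative x (C99 6.5.7); multiplying by a
    // power of two is defined whenever the product fits, and optimisers
    // emit the same shift instruction for it.
    int32_t shl(int32_t x, int n) {
        return x * (int32_t(1) << n);
    }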

Strangely enough I cannot find explicit mentions of signed integer overflow for regular arithmetic operators, but apparently the C++ standard has an explicit mention: https://stackoverflow.com/questions/16188263/is-signed-integ...

> Also, platform-specific behavior is something the standard doesn't define; that's why it was UB in the first place.

One point I was making is that compiler writers didn't get that memo: they treat any UB as fair game for their optimisers. It doesn't matter that signed integer overflow was made UB for portability reasons; it still "never happens".


> C is what it is, and we live with it. Still, it would be unreasonable to say that the amount of UB it harbours isn't absolutely ludicrous.

There's a lot of ludicrous stuff about C and I wouldn't recommend anyone use it for anything. Not when Rust and C++ exist.

But UB really isn't the scary boogeyman. There could probably stand to be an `as-is {}` block extension for security checks, but that's really about it.


I'm sorry, C++?!?

Granted, C is underpowered, and I would like namespaces and generics. But from a safety standpoint nowadays, C++ is just as bad. Not only is it monstrously complex, it still has all the pitfalls of C. C++ may have been "more strongly typed" back in the day, but compiler warnings have since made up for that small difference.

Granted, C++ can be noticeably safer if you go full RAII and smart pointers, but then you're essentially programming in Java with better code generation and a worse garbage collector.
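
For the record, a minimal sketch of the RAII-heavy style I mean, where shared_ptr's reference counting plays the role of the "worse garbage collector":

    #include <memory>
    #include <vector>

    struct Node {
        std::vector<std::shared_ptr<Node>> children;  // ref-counted ownership
    };

    int main() {
        auto root = std::make_shared<Node>();
        root->children.push_back(std::make_shared<Node>());
        // No explicit delete: counts drop to zero and everything is freed
        // deterministically when 'root' goes out of scope.
    }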

---

There's also a reason to still write C today: its ubiquity. It makes it easier to deploy everywhere and to talk to other languages. That's mostly a library thing, though, and the price in testing effort and bugs is steep.


> I'm sorry, C++?!?

C++ had a defined multithreaded memory model and 2's complement behavior before C did. Since you're all about UB, that kinda matters. A lot.


Well, let's see who gets rid of all undefined overflows first. 2's complement is fine and dandy, but if overflow is still undefined, that doesn't buy me much.

Point taken about multithreading.


I've written a whole bunch of code in all of those languages, and they each occupy a different order of magnitude on the footgun scale. From fewest to most: Go (1X), Java (10X), Python (100X), and C++ (1000X).


Go has many more footguns, in my opinion. Just look at the recent thread on the topic: https://news.ycombinator.com/item?id=31734110


Most of those aren’t “footguns” at all, but rather preferences (naming conventions, nominal vs. structural subtyping); many others are shared with Python (“magical behavior”; Go’s structural subtyping is strictly better for finding implementations than Python’s duck typing) or are non-issues altogether (“the Go compiler won’t accept my invalid Go code”).

The “forget to check an error” one is valid, but rare (usually a function will return data and an error, and you can’t touch the data without handling the error); moreover, once you use Go for a bit, you come to expect errors by default (most things error). But yeah, a compilation failure would be better. Personally, the thing that really chafes me is remembering to initialize maps, which is a rarer problem in Python because there’s no distinction between declaring and initializing (at least not in practice). I do wish Go would ditch zero values and adopt sum types (use Option[T] where you need a nil-like type), but that ship has sailed.

I’ve operated services in both languages, and the Python services would have tons of errors that Go wouldn’t have, including typos in identifiers, missing “await”s, “NoneType has no attribute ‘foo’”, etc., but also considerably more serious issues, like an async function accidentally making a sync call under the covers, blocking the event loop, causing health checks to fail, and ultimately bringing down the entire service (same deal with CPU-intensive endpoints).

In Go, we would see the occasional nil pointer error, but again, Python has those too.


Java is largely based on Objective-C, not on C++. It's a bit hard to tell because they removed messaging, though.

It is memory-safe, but otherwise I think it was an imitation of ObjC features, not a reaction to them.



