
It's not so much the raw performance of a given language, but the stuff that "managed" languages and their runtimes do under the hood that's the problem. I've spent a lot of time debugging things like ARC (Automatic Reference Counting) inserting hidden reference counting calls that were eating up a terrible amount of total CPU time. These hidden performance traps are much worse than having to think about (e.g.) proper memory management and data layout yourself. I'll take manual memory management over any 'magic solution' any time (they just add complexity, and when disaster strikes the problem is much harder to find and fix).

Details: http://floooh.github.io/2016/01/14/metal-arc.html
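A minimal Swift sketch of the kind of hidden retain/release traffic ARC can generate in a hot loop (the linked article is about Objective-C and Metal; the `Node` type and the value-type alternative here are invented for illustration):

```swift
// Illustration of hidden ARC traffic with a hypothetical Node class.
// Each time a class reference flows through the loop body, the compiler
// may emit retain/release pairs that never appear in the source.

final class Node {
    var value: Int = 0
}

func sumValues(_ nodes: [Node]) -> Int {
    var total = 0
    for node in nodes {          // ARC may retain `node` here ...
        total += node.value      // ... and release it at the end of each iteration
    }
    return total
}

// One way to sidestep the per-element refcounting entirely:
// keep hot-path data in value types with a flat layout.
struct NodeValue {
    var value: Int = 0
}

func sumValues(_ nodes: [NodeValue]) -> Int {
    var total = 0
    for node in nodes {          // plain copies, no refcount calls
        total += node.value
    }
    return total
}
```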




Kinda depends on the cost of an error vs the cost of a slowdown, though. And there's a big difference between, say, Objective-C or the once-relevant COM/ATL etc. from MS, where you get all the C bugs and manual memory management plus somewhat automated memory management in some places (with all the troubles of that), and, say, Java, where you at least have a fully working garbage collector that handles cycles and never leaks unreachable memory, plus guaranteed memory safety.
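A small Swift sketch of that cycle problem, with ARC standing in for Objective-C/COM-style reference counting (the `Parent`/`Child` names are invented); a tracing GC like Java's reclaims such cycles, a pure reference counter never does:

```swift
// Two objects that reference each other strongly form a cycle.
// Under reference counting (ARC, COM-style AddRef/Release) neither
// count ever reaches zero, so the pair leaks even when unreachable.

final class Parent {
    var child: Child?
    deinit { print("Parent freed") }
}

final class Child {
    var parent: Parent?          // strong back-reference: creates the cycle
                                 // declaring it `weak var` would break the cycle
    deinit { print("Child freed") }
}

func makeCycle() {
    let p = Parent()
    let c = Child()
    p.child = c
    c.parent = p
    // p and c go out of scope here, but neither deinit runs:
    // each object still holds a strong reference to the other.
}

makeCycle()
// Nothing is printed; the pair is leaked.
```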

I'd rather trust my credit card number to a Java program than to a C/C++ program - or an Objective-C program. See http://www.flownet.com/ron/xooglers.pdf for the early Google billing server disaster: not possible with a garbage collector doing the "thinking" about proper memory management for you, and rather hard to prevent with very good developers who merely think they have thought proper memory management through. (And git had a bug where you could execute code remotely, because nobody can really manage memory in C, etc. etc. etc.)



