
> Not thinking about ownership over references to heap allocations is very very much the big productivity win from GC.

I very much doubt that this is a big _productivity_ win. Languages in which it is idiomatic to "think about ownership over heap allocations" (C++, Rust) aren't obviously less productive than comparable languages where such thinking is not so idiomatic (C, Java, .NET, ObjC, Swift etc.).

It's somewhat common to use refcounting (shared_ptr<>, etc.) in the more exploratory style of programming where such "thinking" is entirely incidental, but refactoring the code to introduce proper tracking of ownership is quite straightforward, and not a serious drain on productivity.



GC might not be a productivity win for you, but for many people it definitely is.

I'm pretty sure that's true for the great majority of software developers; most of them don't even use a non-GC language!

Part of the reason they don't is that very productivity. Not that each of them chose it personally for that reason, but, e.g., enterprise code historically moved to Java and C# for related reasons.

(I also agree there are people who are equally productive in non-GC languages, or even more so - different people have different programming styles.)


Enterprise code moved to Java (and later C#) for memory safety, period. The level of constant bugginess in the C++ codebases just made them way too messy and outright unmanageable.


Wrong, and definitely not "period."

The enterprise world moved to Java and C# because:

- They were corporate languages with corporate support, and that matters a lot in many environments.

- They had, at the time, one of the best ecosystems of tools available.

- They were the mainstream fashion of the time, and nobody gets fired for buying Sun/IBM/Microsoft, right?

Most companies (and managers) couldn't care less whether your program crashes with a segfault (unsafe) or a null pointer exception (safe). It's the same result for them.


> It's the same result for them

Not in a security-related situation, it's not! And to a lesser extent, lack of memory safety also poses a danger of silent memory corruption. (Yes, usually the program will crash outright, but not always.) And it can be a lot harder to debug a crash when it doesn't happen until thousands of cycles after the erroneous access.

Sun and Microsoft wouldn't have built and pushed Java and C# in the first place if there hadn't been a real need for safer languages.


> Sun and Microsoft wouldn't have built and pushed Java and C# in the first place if there hadn't been a real need for safer languages.

Except there were safer languages before Java and C#: Ada, Lisp, the whole ML family... And none of them ever lifted off.

Java and C# have been successful because they were accessible and easy to learn (partially due to their memory model), not because they were safe.

As an aside, a beautiful side effect of that has been an entire generation of programmers with no clue about the memory model their language uses underneath, because "it's managed," because "it's GC"... without even realising that their 50-million-node nested/mutually-referencing object graph will bring the GC to its knees in production. With the results we all know today.


Maybe, but remember that computers were very, very slow and had little memory, so GC overhead used to be unacceptable (Emacs == eight megabytes and constantly swapping? I've seen it).

I think that Java came "at the right time": when computers became fast enough that the GC overhead didn't matter (except where low latency matters).



