Why Java Will Always Be Slower than C++ (jelovic.com)
11 points by theoneill on July 20, 2008 | 20 comments



This essay has been around since at least August 2001. http://web.archive.org/web/20010802062551/http://www.jelovic...

It is fairly well-known. I wasn't too surprised when I saw it show up on Reddit recently, but I was disappointed to see it show up here. It's getting harder each day to see any kind of difference between Reddit and Hacker News.


Java allows you to write better multi-threaded code, whereas C++ makes you fight with all those details, trading productivity for some performance. And thanks to multi-core machines, the slowdown compared to C++ will be insignificant.


More often than not it is not the language that decides the performance of a system, but the ability of the programmer to come up with sensible designs.

This becomes even more important as parallel and distributed computing become more widespread. Designing and implementing systems that run on a single CPU isn't all that hard, given all the tools, libraries and abundance of literature developers have access to at little or no cost. Writing software that spans CPUs and even computers is a different matter.

Writing an application like Twitter isn't hard. Making it scale shouldn't be, but apparently still is. Doing Twitter in C++ might offset the problems they're experiencing somewhat, but not meaningfully so.


"More often than not it is not the language that decides the performance of a system, but the ability of the programmer to come up with sensible designs."

A lot of people forget that. Most of the software I've encountered that displayed performance problems did so because of endemic architectural flaws and bad code. Pointless complexity usually leads to slow software.

In most well-written applications, you wouldn't be able to tell the difference between a version in C++ and a version in Java, because raw language speed is rarely the bottleneck. Obvious exceptions are applications like games and 3D animation systems, where there's a lot of computation going on all the time.

A well-written word processor in Ruby might well feel MORE responsive than a poorly written one in C++, for example.


It is much more interesting to know the performance model of a given language than to compare languages. If you know the performance model, you know what things cost, so you begin writing programs that avoid doing expensive things.

And it will always be the case that the clever algorithm or data structure is much better at getting some cycles back than switching language.
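
A toy Java sketch of that point (made up for illustration, not from the thread): the data structure, not the language, decides what a lookup costs.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class LookupDemo {
        public static void main(String[] args) {
            int n = 1_000_000;
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

            // Crude timing, but good enough to show the asymptotic gap.
            long t0 = System.nanoTime();
            boolean inList = list.contains(n - 1);  // O(n) linear scan
            long t1 = System.nanoTime();
            boolean inSet = set.contains(n - 1);    // O(1) expected hash lookup
            long t2 = System.nanoTime();

            System.out.printf("found=%b/%b, list: %d ns, set: %d ns%n",
                              inList, inSet, t1 - t0, t2 - t1);
        }
    }

No amount of switching languages buys back the difference between those two curves.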

Part of the article is also wrong. A dynamic lookup can often be avoided if the compiler has a dataflow analysis phase, for instance. Devirtualization is almost "the" optimization for OO languages like Java.
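
For instance, here is a made-up Java sketch of the kind of call site that benefits. A JIT such as HotSpot can prove (by class-hierarchy or dataflow analysis, or by profiling) that every receiver at a call site is the same concrete class, devirtualize the call, and inline it:

    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class Devirt {
        public static void main(String[] args) {
            Shape[] shapes = new Shape[1_000_000];
            for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i);

            // Every element is a Circle, so this call site is monomorphic.
            // A compiler that proves that can replace the dynamic dispatch
            // with a direct, inlined call, exactly the lookup the article
            // claims Java can never avoid.
            double total = 0;
            for (Shape s : shapes) total += s.area();
            System.out.println(total);
        }
    }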


Besides being very old, as heroev rightly pointed out, this is a quite obvious article.

Just like C++ is slower than assembly, which is slower than using microcode, which is slower than coding at the logic-gate level.

It's not that writing Java code is easier than writing C++ code. The fact is that writing memory leaks is a lot harder in Java than it is in C++.


The main difference being that finding memory leaks in Java (or rather, unintentional object retention) is far simpler than in most C++ environments.


That's what I said. "Writing the memory leaks" is, after all, a lot harder in Java (or Python, or Ruby, or Lisp) than in C.
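
For the record, here is the usual way people manage to write one anyway: unintentional retention through a long-lived collection (a made-up Java sketch, names and all):

    import java.util.ArrayList;
    import java.util.List;

    public class RetentionLeak {
        // Lives as long as the class does, so everything it references
        // stays reachable and the GC can never reclaim it.
        private static final List<byte[]> CACHE = new ArrayList<>();

        static void handleRequest(byte[] payload) {
            CACHE.add(payload);  // "cached" but never evicted: a leak in practice
        }

        public static void main(String[] args) {
            for (int i = 0; i < 100_000; i++) {
                handleRequest(new byte[10_000]);  // heap grows until exhaustion
            }
        }
    }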


Speed still matters. I still wait for my laptop to boot up. I wait for my compiler. I wait on Word when I have a long document.

Ummm, yeah, C++ can magically access the disk faster than Java. NOT!!


Last year we built a storage unit for IP surveillance cameras, for which we wrote our own filesystem optimized for continuous writing with zero fragmentation. We had only 4MB of RAM available, and the requirement was to never use more than 10% of a CPU about as powerful as a 386DX (remember those?). These things operate 24/7, mounted on telephone poles, for years, without human supervision.

Yes, in C++.

As a side thought: if Apple had been dumb enough to embrace Java instead of sticking with Objective-C, I doubt the OSX-powered iPhone (with a full-sized WebKit) would have been possible. Just look at Microsoft and their Vista disaster; this is what happens when you start believing that "megahertz and gigabytes are cheap".


"Yes, in C++"

My point being, he cited a lot of examples that would have been no slower in Java.

"this is what happens when you start believing that 'megahertz and gigabytes are cheap'"

As I say to the people I work with all the time, yes computational power is cheap, but it's cheap for everyone, our competitors included. The ability to throw raw power at a problem is not a competitive advantage.

As an aside, I wish CS was taught like that. "You are never going to need to write your own sort ever, but we are going to teach you about sorting so you understand that smart algorithms always beat brute force. Pay attention."

My personal opinion of Java is that its shortcomings were either necessary for, or worth putting up with in exchange for, a world of applications on demand running on heterogeneous client devices (i.e. the original applet vision). Now that it's mainly used in known environments (i.e. your own servers) it has to compete on its merits alone, and that's where it struggles. But I really don't mind if other people want to shackle themselves to it :-)


By today's standards, your surveillance solution would qualify as an embedded solution, and a fairly resource-constrained one at that. It would make sense to use C or C++ in that situation: you can afford to spend a lot of effort on a fairly minimal set of features.

The same may not apply for other projects. I for one spend most of my time worrying about IO, availability and how to best take advantage of multiple CPUs and cores. In this, C++ isn't exactly giving developers anything for free (concurrency still not even being part of the language).

As for Java on mobile: there are projects that are going that route. I'd wait and see before I postulated whether this is a good idea or not.

I don't really understand how Vista fits into this discussion at all. In my opinion, Vista is the result of developers being paralyzed by the sheer weight of their legacy code and interoperability with legacy code. Completely different issue.


"As for Java on mobile: there are projects that are going that route. I'd wait and see before I postulated whether this is a good idea or not."

I'm sure you've seen "Java: Please Wait..." on tiny mobile screens with the famous Java logo (a cup of coffee). I have yet to run into a "Please Wait: Objective-C..." screen on the iPhone.


I wasn't talking about J2ME. In my opinion J2ME is utterly pointless.


Could that be because the Objective-C runtime or the relevant apps don't have that functionality?


This essay is bullshit. The author doesn't understand what he is talking about, to the point of absurdity (allocating objects on the stack takes 0 time???), and he largely repeats the common knowledge on Slashdot, as geniuses like this are so often caught doing.


Allocation on the stack is just changing the stack pointer; space-efficient heap allocation usually involves tracing through a bunch of tables to find space. In either case, it's the initialization that gets you, especially complex object construction.


Umm, for stop-and-copy and compacting memory management, heap allocation is just a pointer increment.
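
A toy model of that allocation path (illustrative only; real VMs add thread-local allocation buffers, GC triggers, and so on):

    // After a copying/compacting collection, the free space is one
    // contiguous region, so "finding space" is a single increment.
    public class BumpAllocator {
        private final byte[] heap;   // contiguous free region
        private int top = 0;         // the allocation pointer

        BumpAllocator(int size) { heap = new byte[size]; }

        // "Allocates" n bytes; returns the block's offset, or -1 when full.
        int allocate(int n) {
            if (top + n > heap.length) return -1;  // a real VM would GC here
            int block = top;
            top += n;                              // the whole cost of allocation
            return block;
        }

        public static void main(String[] args) {
            BumpAllocator a = new BumpAllocator(1024);
            System.out.println(a.allocate(100));   // 0
            System.out.println(a.allocate(100));   // 100
        }
    }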


Exactly. In .NET, heap allocations are practically as fast as stack allocations, although of course there are other things you need to be aware of when using a compacting generational GC, such as avoiding medium-lived objects at all costs. .NET also provides value types that can be allocated on the stack, and you use them for reasons better than "heap allocations are slow".

I think the moral of the story is: to get the maximum performance out of your memory management system, you need to know how it works. This applies equally well to C++.


Incrementing a pointer is one op. Doing it in a loop takes O(N) time; doing it in a nested loop takes O(N^2) time. Those are the timings the author of the article considered terrible for heap allocation. If you have to trace through some tables, then allocating objects in a loop is not O(N), but rather some much more complex big-O, which would have to take into account things like how many objects have been allocated elsewhere and the time complexity of traversing the tables. Granted, he was wrong about the complexity here, but that is my entire point: he makes up all sorts of wrong facts.



