Does it matter if there's caching, though? All modern operating systems cache, prelink, etc. Would you say C has a "30 second lag" because a basic "Hello world" program took 30 seconds to load, counting the time it takes the computer to start up from cold to run that program? After all, the operating system provides the APIs that C uses, just as the JVM provides Java's.
If a computer starts an app in, say, about 0.2 seconds every time, it's disingenuous to keep referring to some "2 second lag", since that lag never really occurs in real use under production conditions. At most, it happens once.
Well... I haven't touched any JVM-based software in a few days on my 2.4GHz MBP with 4GB of RAM. Caches got cleared and prefetch "forgot" about it; I guess the OS figured the heavy Adobe software I've been running needed the memory more.
So I just timed hello world (in Java) and it came out at 1.89 seconds.
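For anyone who wants to reproduce the number, here's a minimal sketch of the kind of program being timed, under the assumption that it's compiled once with javac and then launched from the shell with something like `time java HelloWorld` (my assumption of the measurement setup, not something stated above):

    // HelloWorld.java -- the trivial program whose end-to-end launch time is
    // being measured. Assumed setup: compile once with `javac HelloWorld.java`,
    // then run `time java HelloWorld` on a cold machine. Nearly all of the
    // elapsed time is JVM startup and core class loading, not this code.
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, world");
        }
    }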
That's pretty lame if you ask me. In fact, it's incredibly lame. Imagine if it weren't just "hello world" but an actual piece of usable code! I can even tell you how long my Excel parsing tool (Java, command-line, uses Apache POI) takes to start up on a 100% "cold" machine: about 4 seconds.
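For context, a command-line Excel dumper built on POI doesn't need to be much bigger than the sketch below (the class name, argument handling, and cell-printing loop are my own assumptions, not the actual tool). The point is that even this little code drags the whole Apache POI and XML stack into a cold JVM start.

    // ExcelDump.java -- a hypothetical minimal POI-based command-line tool,
    // not the actual parser discussed above. Even this much code pulls the
    // full Apache POI + XML machinery into a cold start.
    import java.io.File;
    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.ss.usermodel.WorkbookFactory;

    public class ExcelDump {
        public static void main(String[] args) throws Exception {
            // Open the workbook named on the command line (.xls or .xlsx).
            try (Workbook wb = WorkbookFactory.create(new File(args[0]))) {
                Sheet sheet = wb.getSheetAt(0);
                // Print every cell of the first sheet, tab-separated.
                for (Row row : sheet) {
                    for (Cell cell : row) {
                        System.out.print(cell + "\t");
                    }
                    System.out.println();
                }
            }
        }
    }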
I wonder how this simple tool would fare on Linux-based netbooks with a total of 512MB of RAM and slow CPUs... or on something like an iPhone.
That's a one-off. Startup times are rarely included in benchmarks because they're not representative of day-to-day performance, which is the kind of performance most people care about.
An extra couple of seconds to boot up an entire framework you haven't used for a few days is nothing to worry about. If you were doing performance-intensive work, it would already be cached and ready to go.