It lacks the I in IDE. Those different tools are not integrated.
In a Java IDE, the debugger steps through your source in the same editor view you just used for editing. You can hover over any identifier, while coding or debugging, to get its docs in a floating panel. The editor incrementally computes compilation errors as you type. Content assist has complete knowledge of the types at your caret position and can therefore make accurate suggestions about what can be inserted there. Saving the file while debugging automatically compiles the code, runs the tests, and splices the new code into the running VM, so you can continue debugging the live process, while the editor also shows you which lines you changed relative to the most recent VCS commit. And of course there's more.
The I is Emacs. It adds a heapload of tools on top of what's already there, integrates them with each other, and is a frankly awesome environment. It has most of what you described, but it starts faster than an IDE and is generally better: I don't think an IDE will be beating Paredit and Slime/Geiser, or js2-mode, or gdb-mode, any time soon.
Not to mention, compared to most IDEs, Emacs is trivial to extend. You know those really simple plugins that provide a tiny amount of incredibly useful functionality? Yeah, we have those, but most of them are so trivial to implement that they're just snippets you can copy into your config. Once you get the hang of elisp, you can be writing real, useful commands in a matter of minutes. Sure, not the big stuff, but still things that matter.
I've used Emacs for years, and I've invested a great deal of time in learning it. I'm not sure it was worth it. When I think about the opportunity cost involved in internalizing the keyboard shortcuts and APIs, tuning my Emacs config, getting various plugins working together, setting up this or that language's support - and on and on - that time could have been better spent learning more useful things.
This feeling is especially strong when you use an IDE that does more out of the box, on a fresh install, than you could make Emacs do after six months of tinkering and tuning.
I have a guilty confession to make: I don't know how to use Visual Studio. That seems absurd, because I am a heavy Emacs user. The last time I tried to use Visual Studio (about six years ago), I found it kept getting in the way of what I was trying to do; it almost had too much complexity. I ended up throwing my hands in the air, saying 'forget this, I'm going back to what I know', which is Unix and Emacs. At this point I think I'm too entrenched in my habits to have the patience to give anything else a try. Maybe slavish adherence to my tools makes me a bad developer, but if it isn't broken, why replace it?
There's no shame in it, but that doesn't mean it's the best or most efficient way of doing things.
I've found that stepping through code in a debugger at a human pace, and getting to really understand what's happening when a bug occurs is invaluable.
One problem with this is that your code often ends up in a state where only step-through debugging works any more. It can become too complex to reason about just by looking at the code, from the types alone, or by printing data.
The same problem happens with other methods too: if you develop solely with unit and integration tests, it can actually be quite difficult to get a step-through debugger set up against your application. I worked at a company where I was the only one who used a step-through debugger, and some uses of compile-time metaprogramming would frequently break it.
Whereas if you use multiple techniques (unit tests, printf or its equivalent, step-through debugging), it's generally easy to reach for whichever one fits the problem at hand.
Like every other tool in programming, different people have different preferences and experiences. Symbolic debuggers have always ended up being a waste of my time, but I don't try to extrapolate from there to everyone else's preferences.
With prints and a backtrace function, you can make up for most debugger usage. But for tracking down heap corruption, a debugger with memory breakpoints is the bee's knees.
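To make that concrete, here's a minimal sketch of the kind of bug where a memory breakpoint (a hardware watchpoint, in gdb terms) earns its keep. The names and heap layout are purely illustrative; whether the neighbouring allocation actually gets hit depends on your allocator:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf     = malloc(8);
        int  *counter = malloc(sizeof *counter);  /* may land right after buf on the heap */
        *counter = 42;

        /* The bug: writes far more than 8 bytes, scribbling over whatever
         * happens to live beyond buf -- quite possibly *counter. */
        strcpy(buf, "this string is much too long for an eight byte buffer");

        printf("%d\n", *counter);  /* prints garbage, but not *who* wrote it, or when */

        free(counter);
        free(buf);
        return 0;
    }

A print only shows you the corrupted value after the fact; a watchpoint (watch *counter in gdb) halts the program at the exact instruction that writes to that address, which is the whole ballgame.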
It's funny, but it's scary how archaic things are in the embedded world.
For example, just a few hours ago I convinced a colleague to try using the debugger. Our hardware has had a functional JTAG-based debugging toolchain for years, but people still haven't picked up on it. I'm the new guy who spearheaded it in the team -_-
For my 200- and 300-level embedded software papers, I was essentially stuck with using flashing LEDs and printf to debug code.
For my 400 level embedded systems design paper (which was actually a hardware design paper, we weren't graded on the code), we built boards that we could program and debug with JTAG.
I was stuck on a pain point for hours until one of the tutors showed me how to use GDB with the boards via JTAG. It took me literally 5 minutes to fix the problem. Being able to step through the code line by line allowed me to see exactly where it was breaking, and why it was breaking.
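For anyone curious what that looks like in practice, a typical session runs something like the sketch below. I'm assuming an OpenOCD-style gdb server listening on port 3333 and an ELF called firmware.elf; the vendor tooling, toolchain prefix, and port will vary.

    $ arm-none-eabi-gdb firmware.elf
    (gdb) target remote localhost:3333   # attach to the JTAG gdb server
    (gdb) monitor reset halt             # OpenOCD command: reset and halt the core
    (gdb) load                           # download the image onto the target
    (gdb) break main
    (gdb) continue
    (gdb) next                           # and now you're stepping on real hardware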
If I'm ever doing embedded development again (unlikely as I'm now employed as a web developer), I don't think I'll be able to function at all without a proper debugging environment.
Back when I was working on embedded, I typically wouldn't even bother looking at the debugger. They never helped.
The general workflow, when presented with a new system, was something like:
(a) Board would arrive. Admire it for a bit.
(b) Look suspiciously at the supplied CD. Gingerly insert it into computer. Oh, look, a Windows install.exe. Insert it into the Windows computer next to mine (with its screen and keyboard slaved to mine with x2vnc, which is great). Install.
(c) Load the terrifying, buggy, proprietary IDE. Close it again. (This was in the pre-Eclipse days. You really had no idea what you were going to get here.)
(d) Search through the vast pile of useless guff which it had installed for the embedded copy of gcc. Find it. Also find the BSP libraries, and link scripts.
(e) Realise it's a terrifying, buggy, proprietary-patched version of gcc where the source package doesn't match the binary.
(f) Attempt to find whatever terrifying, buggy, proprietary tool actually downloads images onto the board.
(g) From the command line, write a tiny makefile (sketched after this list) which uses everything found in (d) plus (f), plus the terrifyingly misspelt quote documentation unquote (supplied as a PDF on the CD), and attempt to produce a 'Hello world' image. Download and run it.
(h) Assuming (g) worked, bolt it all onto our existing gcc-and-make based build automation and actually start work.
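For flavour, the tiny makefile from (g) would be something in the spirit of this sketch. Every path, tool name, and flag below is invented for illustration, since the real ones came off the vendor CD:

    CC      = /opt/vendor/bin/m68k-elf-gcc    # the patched gcc unearthed in (d)
    FLASH   = /opt/vendor/bin/flashtool       # the image downloader from (f)
    LDFLAGS = -nostartfiles -T bsp/link.ld -L bsp -lbsp   # BSP link script and libraries

    hello.elf: hello.c
            $(CC) -o $@ $< $(LDFLAGS)

    run: hello.elf
            $(FLASH) hello.elf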
Any debugger was so tightly integrated into the IDE, which was always set up to assume a particular project layout that didn't match our source layout, that it was usually more trouble than it was worth; particularly as our product had a lot of JIT stuff in it, which a source debugger couldn't help with much anyway.
The very best boards had an on-board monitor which gave you a Commodore PET-style assembler debugger. One even had hardware watchpoints and breakpoints! The ability to single-step through stuff, via a serial terminal, with no prior setup required, was amazing. It was sufficiently robust that even when the board crashed really badly it would drop into the monitor and you could examine what had gone wrong.
I have a particular hatred for the debuggers which required a slave task running on the board itself. (a) Sucks to be you if you were running a different OS; (b) oddly enough, if your app crashed and scribbled all over memory, the debugger tended to stop working...
</rant>
Edit: Oh, I forgot to say --- this was mostly before JTAG was ubiquitous, so the debugging options were either a terrible, proprietary serial monitor or a terrifyingly expensive ICE unit. JTAG did come along later, and it was miraculous; you could even single-step through interrupt handlers with it! But it wasn't standardised, and each board typically had its own interface, which wasn't supplied, plus its own software. Then, not long after, I got out of the game.
...I'm having flashbacks now. Downloading Java images onto an M-Core development board at 100-200 bytes per second with a 1 in 10 chance of a failed download for every megabyte. You think I'm exaggerating. I'm not. I still have the CPU from that board somewhere; I ripped it off once we were finished to make sure that nobody would ever have to use it again.
Fortunately, most new design starts in the embedded world these days are ARM SoCs. Most (not all) ARM chips use a standard 20 pin JTAG connector. The Segger J-Link supports a large number of parts for debugging.
The BSPs are still as scary as they ever were, though.
This is just like the anti-car people. "People ought to be restricted to slow and unpleasant methods of transportation because they should not want to leave their immediate vicinities."
There's a grain of truth in both cases - there's no energy efficiency like not needing to move around at all, and there's no memory/CPU/cost efficiency like not having sophisticated functionality in the first place.
But once you are used to the functionality that comes from heavyweight tools, "do without - it's better for you" is frustrating at best and more likely engenders hostility for the person saying it.