>This is a bit of a personal rant, but we're living in an age with multi-gigaflop CPUs and multi-teraflop GPUs, and we have somehow regressed to a point where scrolling is laggy again? IMO this is absurd as a general industry trend. For this reason, I generally dislike the trend of writing everything (including local apps) in HTML/CSS/JS. These technologies really weren't designed to support this sort of complex dynamic UI. While it's quite impressive what's possible despite this, the strain of forcibly warping HTML/CSS/JS to accomplish some of these UI feats shows through as horrible inefficiency.
I doubt the issue is simply the use of web technologies... the fact is, the more powerful our computers become, the more demand is placed on them. Your web browser is just one more thing running on your computer; there are a bunch of other things influencing overall performance. You could replace the entire web stack of technologies and you would probably end up in the same place performance-wise.
I envision a future where we have even faster computers, but by then your OS will have integrated a bunch of AI that'll bring performance to a crawl :)
> I doubt the issue is simply the use of web technologies... the fact is, the more powerful our computers become, the more demand is placed on them.
It's probably not simply the use of web technologies, but I'm quite sure it makes a difference. There's a lot of buzz about how slow GitHub's Atom is compared to Sublime Text, for instance. And BBEdit -- an old Mac text editor which often gets ignored these days as an unfashionable relic -- just loaded a 24MB, 181,000-line MySQL dump and is having no perceptible speed issues either scrolling the file or letting me insert new text on its first screen, even with syntax coloring enabled. The same was also true of Vim and Emacs (despite Emacs kvetching about the file's size when it loaded).
It's my suspicion that the age of these warhorses may help them here rather than hurt them: serious editors twenty years ago required serious optimization work. While I do most of my programming in Sublime Text, there are certain times BBEdit is simply the fastest tool for the job. If I were starting a new text editor (which I wouldn't), I'd be much more concerned about editing speed than the developers of the current batch appear to be.
> It's my suspicion that the age of these warhorses may help them here rather than hurt them:
Of course it does. Back in the day you couldn't take a megabyte file and say, "I'll just load it all into memory." When I worked in Visual Studio, it annoyed me to no end that VS had trouble loading large text files when ancient FoxPro (another MSFT dev product, whose origins date back to the '80s) would load them just fine. FoxPro didn't try to cram the whole thing into memory, just the parts it needed. I'm guessing Vim and Emacs work in a similar manner. I don't use VS anymore, so I don't know if they ever fixed that. What's particularly annoying is that it ever needed to be fixed, rather than being designed correctly (for which there was plenty of prior art) from the start.
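The "load only the parts it needs" approach can be sketched as windowed file access: seek to the region the editor actually needs to display and read just that, so memory use is bounded by the window size rather than the file size. This is just an illustrative sketch (the function name and window size are my own, not anything from FoxPro or VS):

```python
import os

def read_window(path, offset, size=64 * 1024):
    """Read only a small window of a potentially huge file.

    Instead of loading the whole file into memory, seek to the
    region the editor needs and read just that much.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

# Build a demo file far larger than the window we'll read.
demo = "demo_large.txt"
with open(demo, "w") as f:
    for i in range(200_000):
        f.write(f"line {i}\n")

# Peak memory use is bounded by the window, not the ~1.5MB file.
chunk = read_window(demo, offset=0, size=4096)
print(len(chunk))  # 4096
os.remove(demo)
```

An editor built this way keeps an index of line offsets and pages windows in and out as you scroll, which is why file size barely affects responsiveness.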
Certainly true. (I'm old enough to remember TRS-80 word processors, only one or two of which were able to handle files larger than memory. SuperScripsit's ability to handle files as large as your floppy disk -- all, what was it, 180K of it? -- seemed amazing!)
Re: FoxPro vs. VS, it's probably quite unfair to suggest that it's in part because FoxPro didn't start its life at Microsoft, but I admit it was the first thing that came to mind...
Modern applications have a host of demands placed upon them that older ones did not. Whether it's complex theming capability, accessibility support, or special font/Unicode support, things aren't so simple.
I say this as a long-time vim/emacs user... (yes, I've learned and use both although I primarily use vim).
So while I'll readily agree that they probably don't spend as much effort on optimisation, they're also being asked to do significantly more.
I think you're right, but Vim and Emacs and BBEdit all seem to have managed this without unduly choking. I'm not sure where the "fault" actually is in these specific cases -- if I had to place a bet it might be on the syntax scoping algorithms, as nearly every post-TextMate editor seems to re-implement TextMate's language files. (I don't know if Zed does.)
Let's not pretend emacs/vim are perfect; there are certainly situations where they slow down. I certainly have files where the syntax highlighting isn't so speedy or where it's temporarily broken.
I'm just trying to point out that attempting to compare applications from a certain period in computer history to modern applications is doomed to fail.
While I certainly believe optimisation work would help some modern editors, it would be silly to attempt to blame that all on optimisation work instead of recognising that the world is different now.
I can barely push my PC above 10% usage, and yet from time to time I still get laggy interfaces and lockups. Worst offender right now: Eclipse after being out of focus for more than 10-12 hours.
I've noticed that too on OSX - it's not just Eclipse; any memory-heavy app such as Xcode causes the same. I don't see this happening with Eclipse on either my Windows or Linux installs. I think it's just very aggressive paging to disk or something similar.
Ah... weird, I have no idea then. I've got an Eclipse instance on Win7 with an SSD that has been open for over a month, using 3GB of RAM out of 16GB. It's very, very snappy. I also have Eclipse running on a MacBook Pro (SSD, 16GB), and that one has about a 2-3 sec freeze when swapping back to it after backgrounding it for a while, so I thought that was your issue. Maybe Win 8.1 works more similarly to OSX than Win 7 does?
Something in OSX paging is broken. See http://workstuff.tumblr.com/post/20464780085/something-is-de... for details. Apple has refused to fix this for years. It rendered my old Core2Duo/2GB RAM/spinning-disk MBP useless even for web browsing. This sort of system rot is one of the reasons I'm running Arch on my Haswell MBP with 16GB.
No doubt, as computational supply increases, so does the demand. Normally, this is a sign of health/progress; it only becomes a problem when demand shoots so far past the supply that we actually regress important user experience metrics (e.g. interaction latency or battery life experienced by the user). So why does this happen?
"HTML5" has rightfully exploded in popularity. In fact, these days it's hard to justify NOT building your front end on this stack. But as with any explosion of a popular technology, the developer community will tend to push the limits of what it can reasonably do. This exploration is a good thing, but it can be frustrating until people settle into best practices for which applications do and do not suit a given tech stack.
> I doubt the issue is simply the use of web technologies... the fact is, the more powerful our computers become, the more demand is placed on them. Your web browser is just one more thing running on your computer; there are a bunch of other things influencing overall performance. You could replace the entire web stack of technologies and you would probably end up in the same place performance-wise.
Running a profiler would seem to suggest otherwise.