I have tried. And I can't. It's just too slow when working with tons of output (which is common in my work). (term-mode is no better.)
Emacs is one of my favourite tools on a computer. I love Lisp (even Elisp, which is getting closer to Common Lisp as time goes on) and I love the interaction, but Emacs is not good for serious terminal/shell work and it, sadly, is pretty annoying as a tiling "window" manager.
Maybe with multi-threading it will get better, but until then, I can't see a compelling reason to work exclusively in Eshell.
>Emacs is not good for serious terminal/shell work
I am interested in this aspect of Emacs (since there're very few things traditionally done by Emacs that Emacs is too slow at), but confused by your comment.
My first guess was that the tons of output take longer to get inserted into an Emacs buffer than they take to get inserted into the buffer of a good terminal-emulation application.
But surely you realize that multi-threading wouldn't help with that, hence my confusion.
Can you give an example of a program that produces too much output for Emacs to keep up with?
Does the program generating the tons of output do a lot of cursor addressing (like, e.g., the progress bar of homebrew or curl does)?
> My first guess was that the tons of output take longer to get inserted into an Emacs buffer than they take to get inserted into the buffer of a good terminal-emulation application.
Correct.
> Can you give an example of a program that produces too much output for Emacs to keep up with?
Anything that produces a few hundred lines you didn't expect, e.g. a compile gone bad or an unexpectedly large diff. Unless you add something to "comint-preoutput-filter-functions" to discard output, you get to lean on C-c and wait for things to settle down.
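For concreteness, a sketch of the kind of filter the parent means, assuming you're willing to simply truncate oversized chunks (the function name and the 10,000-character threshold are arbitrary, untested choices):

```elisp
;; Sketch: truncate any single chunk of process output over 10,000
;; characters before comint inserts it into the buffer.
(defun my/truncate-huge-output (output)
  (if (> (length output) 10000)
      (concat (substring output 0 10000)
              "\n[... output truncated ...]\n")
    output))

(add-hook 'comint-preoutput-filter-functions #'my/truncate-huge-output)
```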
I've run into similar issues. It seems to be the draw speed that is very slow: each 80-character line takes a good 10ms to render, so with a fast command that outputs a lot of text, the command will have been done for thirty seconds while eshell is still sputtering along showing output.
This is especially noticeable with my on-modified-run-tests script: if there is a long stack trace, it's often faster to kill the task and start it again than to let it finish.
If the output is getting parsed in some way then it can get really slow. For example, automatic highlighting of matching parentheses is great 99% of the time, but can hang the process for several minutes on large outputs (e.g. a MySQL dump of serialised PHP objects).
On a related note, the "clear" function at http://www.khngai.com/emacs/eshell.php can help speed things up when your buffer gets large; it's basically like "reset" in bash. The reason it's useful is that the shell prompts in an eshell buffer are marked as read-only, so trying to clear a region containing prompts will fail. I'm sure there are other workarounds for this, but running a "clear" command is easy enough :)
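I can't vouch for the exact code on that page, but the usual version of such a command looks like this; binding `inhibit-read-only` is what lets it delete past the read-only prompt text:

```elisp
(defun eshell/clear ()
  "Erase the eshell buffer, including read-only prompt text."
  (let ((inhibit-read-only t))
    (erase-buffer)))
```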
The thing that's so annoying is that it seems to process data that it really should ignore. It has nothing to do with display at all, it's a pipe.
(I wanted to update that example with using star-prefixed grep and sort since the info manual says that should avoid emacs built-ins, but after five minutes of waiting I gave up.)
I use ansi-term for quick tasks (e.g. re-running a build command), but you are absolutely right: if there's going to be a lot of output, I'm using my old trusty urxvt.
I love Lisp, and I am also quite fond of Elisp, but I am still hoping that Guile Emacs will become a reality soon. I have missed a nice, well-functioning FFI in Emacs once or twice, and Guile would provide that.
And of course, ever since SICP I have been eagerly waiting for the day I never have to program in anything other than Scheme.
I see two sides to shells: either you want a high-performance, high-density UI, or you want a design-oriented, low-bandwidth one. For the former, Emacs is not good (I'd even say it's not lispy to visually crunch through lots of raw data rather than write abstractions and queries and think at a higher level).
What I didn't mention in my post was that I did use SQLAlchemy Core to write some pretty complicated queries. It's actually quite good. I like it.
There were some spots that things got hairy though, and the code was pretty hard to follow. I don't fault SQLAlchemy here, but I wrote the query in SQL and it was simpler to work with.
SQLAlchemy is absolutely on the right track, but using the core doesn't diminish the fact that you need to know SQL to use it effectively.
I agree with your assertions that an ORM alone is not enough, that raw SQL is hideous, and that raw SQL spits in the face of programming language advancements. Sadly, it's the assembly language of databases and, unlike CPUs, doesn't have a good abstraction model.
So you didn't use the SQLAlchemy ORM at all, yet you wrote a whole article about how ORMs "don't work", naming SQLAlchemy (strongly implying the ORM) as an example. If so, it would explain why all the complaints you have about ORMs seem to indicate a misunderstanding of the SQLAlchemy ORM:

- "Attribute creep": query for individual attributes, or use `load_only()`, `deferred()`, or other variants.
- "Foreign keys": the ORM only selects from the relational model you've created. If your model has N foreign keys and the objects you're querying span M of them, that's how many it will use; there is no "overuse" or "underuse" of foreign keys possible.
- "Data retrieval": SQLAlchemy's Query object maps to SQL joins in a fully explicit fashion, no bending over necessary (see http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#quer... ).
- "Dual schema dangers": use metadata.create_all() in one direction, or metadata.reflect() in the other. The "dual schema" problem exists only in systems like Hibernate that don't offer such features (and actually it does, just not as easily).
- "Identities": manual flushing and hand-association of primary key values to foreign keys is not necessary; use relationship().
- "Transactions": ORMs don't create this problem; they only help to solve it by providing good transactional patterns and abstractions.
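To make the `load_only()` point concrete, here is a minimal sketch (a hypothetical `User` model, in-memory SQLite, SQLAlchemy 1.4+ import paths):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, load_only

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    bio = Column(String)  # a wide column we don't always want to load

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
session.add(User(name="alice", bio="very long text..."))
session.commit()

# Load only the columns you need; other attributes load lazily on access,
# so there is no "attribute creep" in the SELECT that gets emitted.
users = session.query(User).options(load_only(User.name)).all()
print(users[0].name)
```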
I'd appreciate it if you would amend your article to clarify that you only used SQLAlchemy Core, if this is in fact the case. Your key point that one needs to know SQL in order to use an ORM is absolutely true. However, the value of the ORM is not that it hides awareness of relational databases and SQL; it lies in automating the generation of database-specific SQL, in mapping SQL statement execution and result sets (specific to the database driver in use) to object-oriented application state, and in keeping these two states in sync without the need for explicit and inconsistent boilerplate throughout the application. I discuss this in many of my talks (see http://www.sqlalchemy.org/library.html#talks).
If you worked in a soda bottling company, you probably still know how to fill a bottle of soda by hand. It's the complex machinery that does this automatically which allows the task to scale upwards dramatically. Configuring and using this machinery wouldn't make much sense if you didn't understand its fundamental task of putting soda in bottles, however. The situation is similar when using an ORM to automate the task of generating SQL and mapping in-application data to the rows it represents. Removing the need for "knowledge" has nothing to do with it. The goal instead is to automate work that is tedious and repetitive at a manual scale.
The OP wrote: "What I didn't mention in my post was that I did use SQLAlchemy Core to write some pretty complicated queries."
What you seem to have read: "What I didn't mention in my post was that I did use SQLAlchemy Core to write some pretty complicated queries and didn't use SQLAlchemy ORM at all."
It seems to me that it would be more plausible to read: "What I didn't mention in my post was that I did use SQLAlchemy Core to write some pretty complicated queries, in addition to using SQLAlchemy ORM."
> Where RIM went wrong was underestimating the importance of software and the surrounding ecosystem.
RIM is quickly discovering that it is not a software company -- and never has been. They are also discovering that software is hard and is the future of mobile.
I say this as someone who was happily employed there as little as three weeks ago (I left to join a startup -- and yes, I absolutely loved my job at RIM). They have some good people, but the majority of upper management is lost without a map.
They opted to create another mobile OS/ecosystem. The talent is certainly there, but time is quickly running out. Apple has been working on iOS for years. Microsoft has been in the OS business for decades. Android is based on Java, which has countless man-years behind its tools. RIM is working from QNX, which is nice but is not a full-blown OS/toolchain. There's a lot of work to do there, and they've really only been at it for less than a year.
And let's face it: BBOS was something only its mother could love.
Even taking the snark into account, you're still presenting a false dichotomy.
Ungar isn't saying that we have to give up determinism. What he's saying is that to take advantage of massively parallel systems, determinism comes at a high cost.
Clearly, we can write programs on current architectures that are deterministic (well, perceived as such, at least), and we don't have to forgo that. But there is no reason to believe it has to be the only architecture. Ungar is looking at what happens when you try to do computation in a highly networked environment with low latency (like, say, a brain).
Also, if you don't see it helping for the practitioner, perhaps practitioners aren't asking good (or enough?) questions.
More like: the more processors we add, the more performance can be gained by switching to nondeterministic algorithms for problems where getting a solution that is within 99% of the perfectly correct one and arrives in 1 second is better than getting a perfect solution in 2 hours.
I like the way you've phrased this, so let me give a contrived but quasi-realistic example: I have a network of nodes, each of which calculates and returns some floating-point value, and I accumulate those values in the arbitrary order in which they come back from the network. Floating-point arithmetic is not associative, so (1111.0000000000001 * 0.2) * 1111.00000000000001 and 1111.0000000000001 * (1111.00000000000001 * 0.2) can round to slightly different results. By using floating point we've already decided it's OK to give up some precision; but by also reordering the operations as if the commutative and associative properties held exactly, we've given up determinism.
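The effect is easy to reproduce. An illustration using addition, where the rounding difference is well known and easy to verify:

```python
# Floating-point arithmetic is not associative: grouping changes rounding.
left = (0.1 + 0.2) + 0.3   # 0.1 + 0.2 rounds up to 0.30000000000000004
right = 0.1 + (0.2 + 0.3)  # 0.2 + 0.3 is exactly 0.5 in binary rounding

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Accumulating results in whatever order the network delivers them is exactly this regrouping, done nondeterministically.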
What I do to save my coder sanity is not try to write serious code in my web browser.
I've recently switched to [Solarized](http://ethanschoonover.com/solarized), which offers both options, but I think it excels at keeping contrast comfortably low.
Perhaps you should showcase some of those themes in the screenshots... I, too, dismissed your extension out-of-hand, and only noticed the theme selection dropdown after reading your comment.