For reference, this was a course project done in my class at Stanford (CS 242: Programming Languages) last fall, not a peer-reviewed publication. Overall a great project. The title is a bit general, but the goal was to identify what resources people found most useful when learning Rust and why. My favorite bit:
> Participant behavior indicates that they found the in-line Rust Enhanced compiler errors that showed upon saving to be quite useful. Participants were not told about this feature, and only discovered it, on average, 1058 seconds (17 minutes, 38 seconds) into the task. However, after finding the Rust Enhanced compiler error feature, participants began saving every 30.6 seconds, on average, with an average of 24.8 total saves per person.
I had previously automated it, from simple timers to saving after every X seconds of inactivity. In my experience, the automation cost time whenever the editor/compiler interrupted my flow (erroring out, sometimes fatally, because I hadn't finished my current typing). More importantly, that setup either collided with my key-press save muscle memory (leading to some frustration when the editor paused slightly for analysis twice), or slowly eroded that muscle memory when working with other editors/setups that didn't have the feature enabled.
I turned back ages ago, and now prefer this mental trigger of saving and checking compilation/analysis at a pause in the flow.
I really feel that works best for me; I'm sure automatic save will work for others...
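For a Rust project, that save-and-check step at a pause is roughly equivalent to running cargo check yourself (a rough sketch; presumably the in-line Rust Enhanced errors surface the same rustc diagnostics):

    cargo check                         # type- and borrow-check without producing a binary
    cargo check --message-format=short  # terser one-line-per-diagnostic output for a narrow terminal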
I do Angular and I 100% agree, as I keep all compiler error outputs on for every save, and you find/squash a trillion bugs before they can turn into anything bad.
> “Given the positive sentiments towards in-line compiler errors and the lowered work-flow overhead resulting from removing the requirement of compiling within the shell, we recommend language developers create packages to make compiler messages accessible in-line. This is consistent with prior work suggesting that the location of error messages and their structure is critical [8].”
Ugh, this type of result is so depressing to me. I am a “young” developer (early thirties, most experience with modern C, Python and Haskell).
I just want to work in the shell! Stop foisting shitty UI tools on me to code with. They are not good. They just aren’t.
I just want a shell, non-window mode Emacs, and command line search tools. My workflow is read read read, type type type, ctrl-z from Emacs to the shell, run run run, fg 1 back to Emacs. Always use git grep and ag to find code, never ever use “jump to definition” features, not even in huge codebases (it’s just too slow mentally to rely on that context switch instead of ctrl-z git grep ... emacs -nw ...).
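In shell terms, that loop is roughly the following (the identifier and the build command are made up):

    # C-z suspends Emacs and drops back to the shell
    git grep -n 'handle_request'   # find code; -n adds line numbers
    ag 'handle_request' src/       # same search with the silver searcher
    make test                      # run run run whatever build/test applies
    fg 1                           # bring the suspended Emacs job back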
The absolute most I’ll tolerate is syntax highlighting.
The idea of compiler errors popping up in my editor with little Xs or squiggly underlines like a MS Word doc makes me throw up a little in my mouth. Gross.
Every single time I’ve invested effort to become a proficient user of these types of tools (IntelliJ stuff, Sublime, VS Code), it just kills productivity and I go right back to Emacs + shell programs.
Got to ask, what makes you use this workflow? What is it about your work that makes it more productive than more standard ones?
I get it to some extent; I'm a heavy Emacs user myself. But for instance, why C-z / fg 1 between shell and Emacs, instead of using compilation mode[0], or having your shell in an Emacs buffer via M-x shell, M-x ansi-term or M-x eshell? It works well with non-GUI Emacs; in fact, that's how I do most of my work when traveling (terminal Emacs via SSH to home desktop).
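Concretely, the traveling setup is just something like this (host name made up):

    ssh -t me@home-desktop emacs -nw   # -t allocates a tty so terminal Emacs works over SSH
    # once inside, M-x compile (e.g. running "cargo check") gives a compilation buffer,
    # and next-error / previous-error jump straight to the offending source lines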
I’ve always found the experience of using a shell within Emacs to be poor. I spawn a lot of shell tabs in my workflow as well. If I want anything like responsive compilation or testing, I’ll just have a separate tab open running some conttest command.
I also find that when I use tmux, I avoid splitting windows or panes. It’s always just one singular, dumb window per shell tab, and just multiple sessions. I’ve tried and tried with it, and the overhead of manipulating window placement is just worse than flipping between shells and treating every window as if it were always the full shell window.
I use tmux the exact same way. Partially this is because opening multiple terminals is trivial, and it allows all the normal sizing functions that window managers have figured out over years (or that I specifically coded as shortcuts into FVWM2).
Why would I want to use a shitty pseudo-window system inside of a real window manager, when I can just use multiple terminals and connect to a tmux session in each?
I also do a lot of my development remotely, so it doesn't really matter what computer I'm on as long as it has either PuTTY or OpenSSH on it. All it really takes is the remote system being set up initially with vim (which generally exists on the remote system already), optionally with a single .vimrc copied over, or, for a heavier environment, a .vim directory with a few pathogen plugins.
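Concretely, the initial setup is roughly this (host alias made up):

    scp ~/.vimrc devbox:~/    # minimal setup: a single config file
    scp -r ~/.vim devbox:~/   # heavier setup: pathogen plus a few plugins
    ssh devbox                # after that, any machine with OpenSSH or PuTTY will do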
It is a trade-off. There are some things that are somewhat harder to do in a text environment (although it's generally just "harder", not anywhere near impossible), but there's also quite a bit of versatility. I have quite comfortably worked from a netbook for a month before. The smaller screen and keyboard were the only problems.
> Why would I want to use a shitty pseudo-window system inside of a real window manager, when I can just use multiple terminals and connect to a tmux session in each?
With my workflow, Emacs is the window manager. It's good at that, especially in text mode.
While terminal emulation / alternatives in Emacs have some rough corners, all your shell interactions are within Emacs buffers, which gives powerful workflow advantages if you want to refer to previous results, or if you want to do some ad-hoc scripting.
Yeah, I've tried that with vim windows. It wasn't to my liking.
> all your shell interactions are within Emacs buffers, which gives powerful workflow advantages if you want to refer to previous results, or if you want to do some ad-hoc scripting.
I'm not sure I'm understanding the advantage you see. I use named tmux sessions that survive the terminal closing, which I reconnect to. I have history for months in those. I can use two of them and have an editing window and a window to see compile/test results. If it's a remote connection and the network fails, I just reconnect and reattach the tmux session, and I'm in the same position I was, with the exact same screen shown and history. I can move from home to work with the same amount of ease as well.
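For anyone who hasn't used tmux that way, the named-session workflow is just this (session name arbitrary):

    tmux new -s work       # create a named session once
    # ...the terminal closes, the network drops, or I switch machines...
    tmux attach -t work    # reattach from any terminal: same screen, same history
    tmux ls                # list the sessions that are still alive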
Since I would run tmux anyways just for that capability, I'm not sure what benefit I get out of folding both those windows into the editor, other than harder-to-manage resizing, and more hassle if I want to add or remove windows temporarily.
I'm not sure what making the buffer part of the editor is gaining there specifically, but I think there is something that I'm not gleaning from how you're explaining it. I'm just not sure it's enough to counteract the pain of in-editor window management (for me).
I work somewhat similarly to you (or at least, closer to you than the environment suggested in the OP), but I really don't understand why you're so disturbed. Don't use tools you don't want to use.
The reason it bugs me is that popularity drives what will be supported by companies. I used to have plenty of linux and Emacs support from official helpdesk channels. In my last several jobs, not at all. At my current job, even though we rely heavily on linux servers and there’s a strong minority of Emacs users, the company (over 300 engineers) mandated that all support for configuring our in-house build tooling and monorepo tooling will be 100% Sublime on Mac, no support of any kind for Emacs.
In one company, the primary language was Scala, and one of the primary Emacs tools for Scala is Ensime. But the company had its own bizarrely wrapped version of sbt running inside of pants tasks, so without explicit support from IT, it was not possible for a developer to use Emacs+Ensime for that Scala codebase.
I am perfectly happy for people who use these UI-heavy tools productively. But the marketing-shtick attempts to raise its popularity and promote it, like this very paper’s recommendation, seem harmful to me.
Code tooling should be extremely fractured, bespoke and customized, by its very nature, and that should be embraced and supported. Instead it seems like PR battles and a rise of (IMO) ineffective proprietary tooling.
My observation is that many geeks try to abstract away the complexity that they don't understand. Those proprietary tools start as a way to understand, but then become monsters unto themselves.
I flip back and forth between "working in the big, working in the small" (paraphrasing Bill Joy). Sometimes closer to the metal is best. Other times I prefer the IDE with all its conveniences.
But I am the first to admit that sometimes I do get trapped by the IDE, distracted by fighting the tool instead of solving the real problem, and have to take a step back.
Said another way:
I only really object to IDEs and other tools of convenience when they hide the metal.
Like when you have the wrong mental model for a problem, and you're fighting the model, instead of finding a better model.
Because different people are productive with different tools.
I think your reasoning is completely backwards on this: it’s not expensive to support many different sets of tooling. No, what’s expensive is paying a lot of money for people who can be very productive for you, but then preventing them from giving you that productivity due to poor tooling. When the poor tooling has been standardized, your good developers are heading to the exits.
I used to be like this a bit, but I've come to really embrace the extent to which Visual Studio (not "Code", the real one) can help. The integration does have to be above a certain quality level in order to help though.
Why is "jump to definition" a context switch? In VS it's a single keypress.
Yeah, especially compared to interrupting the editor, grepping code, parsing results, getting back in, opening the file and line number...
Find in Files/Project in Visual Studio makes that so much faster, and you can easily jump through all results without ctrl-z and fg a thousand times.
Versus a single hotkey press. Where is the context switch? :)
I use the first workflow myself pretty often, only with vim, because I work a lot over SSH. And it is just massively slower.
The single hotkey press is virtually never applicable though. If all you need to do is navigate to the definition, after the single first time you find it, it’s a matter of ctrl-b <file> enter.
At least in my experience, 99.9% of the time, I need a fast and organized way to see all the places where something is used or mentioned. The one special case of the definition is only a super tiny fraction such that the difference between a hotkey or an emacs key combination is utterly meaningless.
I do also think that being transported to some location of some file is a context switch that requires you to stop and think about where the hotkey took you, what part of the source tree you're in, and what else is there.
When you git grep and open a file of your choice, the experience is just different. The path there aids you in understanding what to look for or do, at virtually no extra cost, because it’s so fast and easy even in large codebases.
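For what it's worth, that loop is roughly the following (identifier and path made up):

    git grep -n 'FooWidget'         # every mention, printed as path:line:match, across the repo
    vim +87 src/ui/foo_widget.cc    # open a chosen hit directly at its line number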
Maybe there is some slight benefit to the IDE handholding and hotkey approach in a codebase you've literally never looked at before, during the first few times you search for something. But that “burn in” effect isn’t enough to offset how the UI gets in the way: features that have to be accessed by menus or clicks, and inevitable failures of integration with linters, compilers, or runtime programs.
No mention of Mozilla IRC?! If it weren't for the seasoned, patient Rust evangelists helping in #rust-beginners, I probably wouldn't be using Rust full time right now.
Have not read this yet, but I'm very excited to find some time to do so. However, it seems that this does not reference any work done in natural language acquisition. I assume because the study authors are not conversant in the field. I'd love to know if anybody has seen any work that does relate the two.
I suspect if you dig into the origins of Perl, you'll find some interesting stuff. If not on purpose, as a case study of what happens when a linguist makes a language (and what he thinks worked and what didn't).