I remember when I was starting out that people talked about this problem a lot. But it's been years since I've heard it said unless I was the one saying it.
I still routinely wonder if the right solution is to build up an 'action bar' the way video games often do. Microsoft got into this neighborhood with the Ribbon but I feel like they missed the target.
You graduate to more sophisticated options and the ones you use all the time are mapped to whatever keys you want them to be. Add a hidden flag for setting up a new machine or user where you jump straight to expert mode and you're done. This costs the expert one more setting to memorize, but the payoff is quite lucrative.
Shortcuts seem to work great on QWERTY but less awesome for everyone else. Just let me set my own so I don't have to use an extra finger on Dvorak or a Japanese keyboard.
There are some examples of this UI pattern - advancing users' skill with your program and meeting them where they are - and I'm a fan of it:
* Old-school programs used to have Basic/Advanced/Expert mode
* Games like LoL etc. introduce heroes/champions/characters slowly so you learn to play them
* In an IDE like IntelliJ with KeyPromoter or a variant installed, you start off clicking through the UI and it tells you each time what the shortcut key could have been
* Clippy was a failed attempt. It's really hard to guess at intent.
I feel like there are three paradigms that can cover 100% of software.
The first is re-mappable hotkeys. An action bar would be great, but failing that - obvious tooltips of the hotkey on every single action. Also a complete and searchable reference that's easily discoverable.
The second is a quick-actions panel (a la Alfred, Spotlight, Cmd-P in Chrome Inspector and VSCode). Having this as a generally available searching hub is a huge timesaver, and lets you quickly find actions that you might not have bound. It's also (IMO) the best interface for quickly navigating between objects in different trees.
The third is a bog-standard menu system. It should use the same patterns that are ingrained in every software user's head from their first program (File, Edit, View, ..., Window, Help).
If every program had these features I feel like there would be very little UI friction at the expert level and a good deal of comfort at the beginner level.
Providing a good default "action bar" and mapping of keys for the beginner is left as an exercise for the UX team :)
That may be the other thing. Apps have gotten a lot bigger on average. How many shortcuts are really consistent across applications? A dozen? That might have been half of the shortcuts 20 years ago. Now that’s a quarter and nobody bothered creating bindings for half of the menu entries.
Hell, Outlook gets it wrong and MS used to harp on this loudly. I can’t count how many new folders I’ve created while trying to start an email.
I find that a lot of less experienced devs I work with like to prioritize "ease of use" in API design over other things, such as testability, orthogonality, decoupledness, reversibility, etc. If an API is "easy to use" from a client perspective, they often deem it a good one. API ease of use is definitely important, but it has to be weighed against other constraints, which are fuzzier and more about long-term maintainability. Sometimes making an API slightly harder to use (often requiring additional client knowledge of the domain) is a worthwhile trade-off because it makes the API easier to extend in the future.
It's definitely a skill to learn what helps long-term usability vs. short-term usability.
IMO "public" facing APIs should always be easy to use and only require the minimum information from the user necessary. An example of an outstanding public API would be nlohmann's json library for C++[0].
Whether that API is merely a wrapper for an internal API that is more testable (i.e. allows injection, etc.) or whatever is another matter.
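For what it's worth, the reason that json library gets held up as easy to use shows in just a few lines. A minimal sketch (the values are made up):

    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    using json = nlohmann::json;

    int main() {
        json j = { {"name", "widget"}, {"tags", {"a", "b"}} };  // build JSON from initializer lists
        j["count"] = 3;                                         // assign like a std::map
        std::string s = j.dump(2);                              // serialize with 2-space indent
        json back = json::parse(s);                             // parse it back
        std::cout << back["name"].get<std::string>() << "\n";
    }

The caller needs to know little more than "it behaves like an STL container", which is the kind of minimal required knowledge the comment above is pointing at.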
I think there can be debate on what is "minimum information". I'd also say "easy" for one developer may be challenging for another developer if the domain of the model is foreign to them.
A lot of frameworks require up-front knowledge to work with. To some, that's not "easy", but it allows the client to do so much because what the framework is providing is not simple.
In other places, the API can be dead easy because what it's providing is so simple.
I agree with the sentiment in the headline, but want to offer a counter example.
Consider Emacs vs. Notepad++, for the purpose of editing code. Emacs in this example represents "maximal flexibility" for having a configuration interface where almost every aspect of its function can be (re)programmed, and Notepad++ represents "maximal cooperation" for having a configuration interface and limited toolset tailored specifically to the task at hand (editing code). I'm not going to contribute code to either project; submitting patches to a "maximally cooperative" system to adapt it to your use case is just an advanced and inconvenient form of "maximal flexibility".
In my experience this has the opposite relationship to that described in the article. Getting started with Emacs is a significant investment (as per the article, "everything is possible but nothing is easy"), while Notepad++ is pretty much pick-up-and-go out of the box, but over time the extensibility of the former pays off in a better functionality/amount of work ratio.
There is an example of "maximal flexibility" in the article, but none for "maximal cooperation", and I'd like to see one.
I recently worked with a developer who proudly told me he has over 25 years experience. Unfortunately, he wasn't a very good developer.
In talking to him, he told me he switched jobs on average every 3-5 years. And I realized this was the problem. He didn't have 25 years of experience, he had 5 years of experience, 5 times over because he never went deep enough to master his craft. He stuck with Notepad++, never bothered to understand topics like hard tabs vs. soft tabs, never added automated linting or reformat to his coding workflow, didn't bother writing tests (or testable code).
I'm sure many of us have worked with someone whose terminal is misconfigured, whose PATH is a mess, or whose computer is in a general state of disarray. This is an example of short-term usability. Long-term usability assumes general mastery of a tool/system... but this isn't always the case.
The general point the author is making sounds true, but I am sceptical regarding any criticism against make. Building and/or handling dependencies was basically a solved problem as soon as make was invented, and all of the new stuff in this area just seems plainly unneeded to me. Also, when people invent new build systems, one can end up with projects where one part is built using one build system and another part uses another. Since things can depend on each other in arbitrarily complex ways because of code generation, this will lead to building either too much or too little. In particular, having a build system for a particular language is such a strange concept.
Make works pretty well; that's why it is used. But there are a few points it does not cover:
- Recursive building is complicated. It is not really easy to compose a small unit into a larger project so that one just needs to copy the sub-unit into a folder.
- in some instances, it is necessary to type extra instances of "make clean", because the dependency analysis of make does not understand that dependencies have changed.
- say you build a program which includes /usr/include/ncurses.h. Then, you do a system update which replaces your ncurses library. Or, you do a git pull which changes that header. Make will not rebuild your binary without "make clean".
- files appearing or disappearing along the include path can change the result of a "make clean; make all", but are not detected by make's dependency analysis. Here is an explanation by D.J. Bernstein: http://cr.yp.to/redo/honest-nonfile.html
- doing atomic rebuilds: http://cr.yp.to/redo/atomic.html, so that a new dependency is either there, and complete, or not there
- parallel builds (which are related to the previous point, and also to the necessity of having complete dependency information).
Make has no way by itself to know which headers a C file needs; it would need to parse C code to do that. C files typically compile independently and then get linked together, so there are no dependencies between C files that make needs to know about when compiling the object files. The only dependencies make knows about are the ones you tell it.
Gcc does parse your C files and knows which headers are needed. It can print out the list of headers used to compile a C file into an object, and it prints that list in make syntax.
You can add a short auto-dependency rule that creates and includes your header dependencies automatically.
Look up make auto-dependencies.
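Roughly, the standard pattern is the one below - a sketch for GNU make with gcc or clang, where "prog" and the flat file layout are just illustrative (and recipe lines need a real tab):

    # -MMD makes the compiler write foo.d next to foo.o with the headers it actually read;
    # -MP adds phony targets for those headers so deleting one doesn't break the build
    CFLAGS += -MMD -MP
    SRCS   := $(wildcard *.c)
    OBJS   := $(SRCS:.c=.o)

    prog: $(OBJS)
    	$(CC) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    # pull in whatever .d files exist already; the leading dash keeps the first build quiet
    -include $(OBJS:.o=.d)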
> in some instances, it is necessary to type extra instances of "make clean", because the dependency analysis of make does not understand that dependencies have changed.
The only sense in which Make will analyze a dependency is that it will run rules for which the target does not exist or has a less recent mtime than at least one of its dependencies. When what you describe happens, it's not a failure of Make's dependency analysis, but your failure to specify the dependency. This is easier said than done with the Make+CC combo, but I attribute that problem to quirks of C and CC, not to Make's dead-simple and basically infallible dependency analysis.
The only way to cause Make's dependency analysis to "fail" is to modify the mtime of an already built target to be more recent than that of a dependency that would otherwise be more recent. That's only a failure in the sense that it might not be desirable; it's still well-defined behavior of Make.
> say you build a program which includes /usr/include/ncurses.h. Then, you do a system update which replaces your ncurses library. Or, you do a git pull which changes that header. Make will not rebuild your binary without "make clean".
If this happens, it's because you have a dependency on /usr/include/ncurses.h that you have not specified for the target. Again, it's easier said than done to specify the location of system-wide header files that are maybe only resolved using pkg-config because you have no idea of their location on the users' systems, but that's not a problem that Make purports to solve for C - that's for the ecosystem of applications that exist to patch up its deficiencies.
Even worse: say you've specified your dependency on the ncurses header, then update the system and end up with a new ncurses header that has new includes of its own that you have not specified. The only way you'll ever know is by reading the ncurses header file. If you don't, the next system upgrade might update those unspecified indirect dependencies and your Makefile will shrug, because you haven't specified those as dependencies. Broken, but really not on the Make end.
> files appearing or disappearing along the include path can change the result of a "make clean; make all", but are not detected by make's dependency analysis. Here is an explanation by D.J. Bernstein: http://cr.yp.to/redo/honest-nonfile.html
If the developer in the scenario described had simply added "vis.h" as a dependency to the rule that depends on it, it would not have happened. Creating "vis.h", adding it as a dependency, and running make again would have solved the problem.
> doing atomic rebuilds: http://cr.yp.to/redo/atomic.html, so that a new dependency is either there, and complete, or not there
GNU Make, for example, does parallel builds fine. Yes, it's necessary to have complete dependency information (which in Make is achieved only by specifying it), but that's a necessity for any build system that purports to reliably track dependencies in every case.
In general, your criticism seems aimed specifically at Make+CC. In that sense I completely agree that it's not a great combination and that there are probably solutions much better tailored to building C code that address its quirks specifically. Make works well, however, when you know what and where your dependencies are. C, when used to build software for n>1 frequently updated operating systems, presents a header and library labyrinth that makes that a non-trivial problem.
Compilers like GNU cc (and clang, FWIW) have a flag you can pass to have them dump GNU make compatible dependency files, and you can have them include system header files (I turn this on). In the other direction, there is a feature of make designed to include such files if they exist and, if not, to treat them as targets for the current run to quickly build first - mostly for these dependency output files. If you use GNU automake - which is not at all required (I do not tend to use it, for example) but is the intention of the ecosystem - this is all set up for you (though maybe without turning on the feature for system header files to be included?). If you are having issues with C code and make not knowing the dependencies, you are using the tools wrong.
The problem with make is that it's yet another DSL to learn.
Every single tool in your toolbox could introduce a new one, and it leads to fatigue.
Task runner? New DSL (make)
Test runner? New DSL (robot)
Deployement? New DSL (Ansible)
Batch spawning? New DSL (tox)
Etc
Of course, development in half of those DSLs is a total pain because, as with most DSLs, the tooling sucks: terrible completion, linting, debugging and composing experience.
So people write/leverage tools they can use with their favorite language. And why not? You have to install it anyway (make isn't there by default on Windows; it's not even installed on vanilla Ubuntu!).
Doit is in Python, the language of my projects. So I can use the same tooling, the same libs, the same install and testing procedures. And so can people contributing: no need to train them. Most devs don't know how to use make (most of them are on Windows, after all).
Using make adds zero benefit for me: doit just does what make does (ok, that sentence is hard to parse :)). But make adds the extra step of asking people to install it, while I can just slip doit in among my other Python project dependencies. I have to google the syntax. I can't use tab to complete names, or right-click to get documentation. And if (actually when) I need to debug my Makefile, God have mercy.
It's not against make. It just doesn't provide enough value to justify the cost of it.
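For reference, a doit task file is roughly this shape - a minimal sketch with made-up file names, run as "doit lint":

    # dodo.py
    def task_lint():
        """Run the linter, but only when the listed files have changed."""
        return {
            'actions': ['flake8 mypkg/'],                        # plain shell command, like a make recipe
            'file_dep': ['mypkg/__init__.py', 'mypkg/core.py'],  # doit's equivalent of prerequisites
        }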
A snippet from the Meson tutorial is why I'm still using make (and not cmake) in all my personal C projects. I have no idea what flags will be passed to gcc or how to change them (-mms-bitfields was required on Windows for GTK). Second, I may have no pkg-config environment when I link a Windows executable against msys2-installed libs in plain cmd. These shorthands may be a long-term win for a regular project in a strict unix-like environment or msvc-env.bat, but it is not a build system. It is a fixed recipe book (as seen from the tutorial page; don't take that as criticism). It replaces simple knowledge of -I, -L and -l with a cryptic set of directives. You spoke gcc very well; now you have to speak Meson's local dialect and be able to catch and fix subtle errors in translation.
The problem is, one has to dig into the seemingly easy build system to tune it to one's needs. That is much harder than just fixing CFLAGS+= or LDFLAGS+= in a Makefile.
For me, a better build system would look like a set of rules, not in a Makefile but in a general-purpose language.
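Something like the following sketch, say in Python, where rule() is a made-up helper that only exists to show the shape:

    # purely illustrative: declare build rules as plain data in a general-purpose language
    RULES = []

    def rule(target, deps, command):
        """Register a rule; a real tool would topo-sort these and rebuild what is out of date."""
        RULES.append({'target': target, 'deps': deps, 'command': command})

    rule(target='hello',
         deps=['hello.c', 'hello.h'],
         command='gcc -I. -o hello hello.c -lncurses')   # the flags stay fully visible and editable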
I'd recommend you take a look at how Bazel works (Meson may be similar if you look further, but I haven't used it much myself). The default interface you get is relatively "high-level", but everything behind the scenes is a general-purpose system like what you describe, and you can customize it pretty deeply.
What makes it really great IMO is that the language and tool are designed for best practices. For example, your scripts can't actually execute anything: they can tell the build system what command would be used to build the file and what the dependencies would be. The sandboxing allows the build system to be pretty hermetic without much effort. This means that it can always parallelize your build, and incremental builds are always fast and correct.
> Perhaps the best known example of this kind of tool is Make. At its core it is only a DAG solver and you can have arbitrary functionality just by writing shell script fragments directly inside your Makefile.
I am still fascinated by redo (https://redo.readthedocs.io/en/latest/), which serves exactly the same purpose as Make but turns its interface inside out: Make is a DSL for describing an acyclic dependency graph, with attached command lines to compile, link, install or run stuff with a minimum number of operations. The fact that Make is not language-specific, and one just uses shell commands to build stuff, makes it versatile. But there are edge cases which have the effect that people often prefer to do "make clean", because dependencies might not be captured completely.
"redo" is internally more complex, but basically it is a set of just ten shell commands which are used declarative to capture dependencies, and are part of build scripts. For example,
has the effect of rebuilding the program if the file "/usr/local/include/stdio.h" or the source file "helloworld.c" has changed - be it by the programmer or by a system update. That's it. No "clean" command is needed.
The result is, in terms of user interface and usage, fascinatingly simple (I tried it with a ~20,000-line C++ project which generated a Python extension module using boost). The lack of special syntax needed is just astonishing.
But I wonder how well it would work with all the additional complexities like autoconf and autotools - knowing that all the complexity of these tools is usually there for a reason.
I don't see how this follows from what you've said. IME, `clean` is for when I want things rebuilt despite "nothing" having changed. Maybe it's actually the case that nothing's changed, and I want a full build because I want to know how long my build takes. Or maybe I've changed implicit dependencies - compiler version or similar.
> IME, `clean` is for when I want things rebuilt despite "nothing" having changed.
Of course, you can use clean in this case. You can do that also with redo, just delete all the intermediate files and start a normal build.
However the point is that you do a "make clean" most often to be sure that all dependencies are refreshed, because you are actually not sure that your Makefile captures them all.
A good example is builds in Yocto. This is a system to build complete embedded Linux images based on make, with many subprojects. But if you change, for example, the kernel to support large files, or a 32-bit time_t, you can't be sure that a simple "make" delivers a correct system, because the dependencies on the system headers are not included in all the makefiles.
Please forgive the confusion, as I've only dabbled in Make, but...
> You can do that also with redo, just delete all the intermediate files and start a normal build.
...isn't that pretty much what "make clean" generally does? I mean, in a project of any significant size, "all the intermediate files" aren't going to be trivial to delete without a pre-built "delete all the intermediate files" script...which is what "make clean" is (at least partly) for.
Or are you just saying that "redo" (which I'd never heard of before, so, again, please forgive my ignorance) can also process a pre-built list of "intermediate files" to delete with some particular command or option...?
> ...isn't that pretty much what "make clean" generally does?
Yes, exactly.
The point is that "make clean" is used because the Makefile does not capture the dependencies perfectly. So, "make clean" ensures a clean rebuilt. But that can cost a lot of time, especially in large projects or when using C++ header libraries such as boost::python or Eigen.
Now, if you were sure that your dependency description is correct, you could skip "make clean". And this is what redo provides because, on the one hand, it allows you to use dependency information provided by the compiler, and moreover, it covers a lot of corner cases precisely. Those corner cases not always being handled is what "make clean" is typically used to work around.
If I understand, the real point was that clean is less often needed.
Having said that, you could certainly list system headers in a Makefile (... hopefully a generated one) if that's the behavior you want, so I am not sure what difference is actually being pointed to.
> Having said that, you could certainly list system headers in a Makefile
You can. But that's a lot of headers. You can instruct gcc to tell you the dependencies, but this information is hard to pipe into make because you'd need to generate a new Makefile from it (which is more or less what some larger projects do).
Using redo, it is easy to use this compiler-generated dependency information, and the paramount reason is that it is not a DAG dependency DSL with added shell snippets but just a shell script with added shell commands which define dependencies. This is what I mean by "it is Make turned inside out".
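Concretely, the pattern the redo documentation suggests for C objects is roughly the following sketch - a default.o.do script that builds any .o from the matching .c and feeds the compiler's own dependency list back to redo:

    # default.o.do -- $1 is the target, $2 the target without extension, $3 the temp output file
    redo-ifchange "$2.c"
    gcc -MD -MF "$2.d" -c -o "$3" "$2.c"   # -MD records the headers actually used into $2.d
    read DEPS <"$2.d"
    redo-ifchange ${DEPS#*:}               # drop the "target:" prefix and declare the rest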
I should note that in general, decomposing things into multiple shell utilities is something I'm a huge proponent of, and assuming redo does it well that's really cool!
That said, I don't think "depending on system headers" is somewhere make is lacking. The -M argument to gcc or clang gives you a well-formatted Makefile fragment which includes system headers. You have to explicitly ask that they be excluded (-MM) if that's what you want (which it was often enough for the option to be added, it seems).
Ah, and no, it does not follow from what I said, I am just reporting about redo's properties.
As said, using redo is very very simple, but it is also mind-blowingly different from make. It is not difficult to understand, but the original documentation explains it far better than I could here.
I (ab)use Make for anything but compiling C sources to binaries. E.g. all my Python projects use Make to conditionally create virtualenvs, install packages, and run code linting/tests, for which Make is just the most ubiquitous and simple tool available.
But one thing that still annoys me with Make is the workarounds needed for stuff whose state cannot be derived from a file (or only via convoluted paths). Things like having a background process running (e.g. a test db) or an external package that needs to be installed (which can easily be queried using apt, but is much harder to determine reliably from files on disk). I end up having to create intermediate hidden files which sometimes drift from the real state. Then things really become messy with Make for me.
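For anyone who hasn't run into this: the workaround looks something like the sketch below (paths and commands are illustrative), with a hidden stamp file standing in for "the virtualenv is up to date":

    .venv/.stamp: requirements.txt
    	python3 -m venv .venv
    	.venv/bin/pip install -r requirements.txt
    	touch $@

    test: .venv/.stamp
    	.venv/bin/pytest

    .PHONY: test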
Would redo be a better solution in this case as well?
> But one thing that still annoys me with Make is the workarounds needed for stuff whose state cannot be derived from a file
I am not sure about that one. I think the reason is that make is not made for general scripting; it is made for generating a deterministic build product, in minimum time, from a deterministic input (the source files). In that sense, it is something like a "pure function" in functional programming. If you build software, you do not want the result to depend on your CPU fan's speed today and on the moon's phase tomorrow. In other words, you really try to make the result deterministic.
What you certainly can do is first generate a kind of task file from the current system state (e.g., the test db needs to be re-created), and then let redo build those targets.
For installing / updating stuff, I do not see problems, but I believe package managers such as guix or nix or pip have more specialized options for this.
If you trust yourself to capture the dependencies totally then it doesn't really matter whether you use make or redo, you could add the stdio.h dependency to either. But the only way to actually achieve reliable clean-less builds is to run it in an environment where you either take away all access to non-dependencies (like Nix[0]) or automatically record all accessed files (like Tup[1]).
GCC also supports emitting header lists in a format that Make understands, but that won't cover non-GCC targets, or be as comprehensive as doing it in the build system.
GCC actually gets it wrong, as does almost every compiler and makefile combination.
Consider this realistic example. Realistic in that I've encountered it in the real world and seen buggy demos shipped because of it, with non-reproducible bugs.
We have a search path for headers, like so:
-Imyinc -I../dep1/inc -I../dep2/inc -I/usr/include
We compile a source file which contains:
#include "creative.h"
GCC outputs a dependency which is read into the Makefile next time:
object.o: ../dep2/inc/creative.h
Looks good! Later that day we update from our upstream dependencies:
git pull
Which adds a file ../dep1/inc/creative.h
Recompiling, the project works:
make
...
make test
=> 144,123 pass, 0 fail - well done, you earned a coffee!
We captured automatic dependencies, this should be good to ship.
But no:
make clean
...
make
...
make test
=> FAIL!
What happened? Turns out someone upstream moved (by copying) creative.h to ../dep1/inc and forgot to remove the old copy in ../dep2/inc. Then someone edited a data structure in that file. Automatic dependencies gave everyone false confidence in the build results seen by different people.
That kind of automatic dependency is insufficient because it doesn't capture "negative file results" during path searches, where output depends on a file not existing, and changes if a new file is created.
It is possible to use Makefiles with this handled accurately in the automatic dependency tracking, but I haven't seen it done often.
An equivalent problem occurs in caches which automatically track data dependencies used to generate results. For example web pages generated from a combination of files, templates and data, build artifacts such as container/VM images, or at a much lower level, data dependencies inside calculations with conditional branches. To do caching right, they must validate negative tests as well as positive that arise in searches. I've seen a surprising number of cache implementations which don't try particularly hard to get this right.
> But the only way to actually achieve reliable clean-less builds is to run it in an environment where you either take away all access to non-dependencies (like Nix[0]) or automatically record all accessed files (like Tup[1]).
You can also let the compiler record dependencies. Gcc, for example, does have an option for that.
I should mention that I think Nix or Guix are indeed good complements to things like redo! Guix System and NixOS are geared toward defining the whole system, while make or redo build a single program or a set of artifacts.
One thing that one usually does with make is to also figure out, during the make run, which header files helloworld.c depends on. Is the above code fragment hand coded or generated? I am also wondering if this works if one only wants to build part of the project. make can take a list of files to rebuild as arguments. This also provides the ability to only create some set of intermediate files.
> Is the above code fragment hand coded or generated?
It is a handwritten example, there is no code generation involved. However, standard rules can be defined by patterns for specific file extensions.
Basically, how redo works is that it runs the build scripts to generate a target for the first time (these are shell scripts ending in .do), and along the way it records every dependency of each target in an sqlite database, using a very small number of shell commands to define them in a declarative way. To re-build, it then uses this database of dependencies.
Because it knows all the dependencies, and because the build product is always written to $3 and then mv'd to the target, it can build everything in parallel by default.
I don't see how that's any different. You've made your dependency on stdio.h explicit, so the build tool knows about it - and if you'd done the same thing for make, it would've done the same thing.
Context matters, and context changes over time, making the whole process more difficult. When Apple was exploring how to build the Lisa (their first GUI system) they did user testing on both a single button mouse and a multi-button mouse. New users were confused by a multi-button mouse, so Apple went with a single button mouse. But as the years go by, and users get more sophisticated, it's clear that multi-button mice are better. There's no clear right choice that spans time. How fast are users going to come up to speed? What level of difficulty are you foisting on new users? Will they persist through that difficulty?
I don't know. There is some truth to this, but it's the same kind of argument I've heard from Rails people, EmberJS people, maven people, and many others; it's the famous "you're holding it wrong" argument.
Sometimes you really are the best judge of what you need, and when the system works against you because for whatever reason the designer of the original system thought your problem is not an actual problem, you have to resort to weird monkeypatches or weird workarounds.
I would be fine with the "this problem has been solved" argument if we could all agree about objectively right solutions, and in some cases we can (e.g. you don't want to handroll your own crypto), but more often we just don't.
I like the two graphs to organize the conversation. IMHO the key to success (max bang for the buck) is better composability and as little boilerplate code as possible. Haskell and Julia (for example) do this fantastically well.
The essential theme of long-term usability is very close to what Guy Steele talks about in his fantastic talk/article Growing a Language - you want as little as possible embedded into the language, and as much as possible farmed off to the libraries, so that users can compose the pieces they like without drowning in too much code.
You could make the same argument for presales vs postsales.
There are numerous examples of this:
The look of the Apple keyboard in a store vs. day-to-day functionality (or admittedly many Apple products, such as a glossy display)
Any RGB product for sale now -- RGB keyboards, RGB mouse, RGB computer, RGB system memory. (get it home, turn it off)
Meanwhile, a trackball or weird vertical mouse might be completely unapproachable, but for the folks who need them they are usable forever after putting in the time.
Interestingly I find that this is quite embodied in two of my favourite languages - Go and Haskell.
Haskell is all about composition. And it does so by dictatorship of its type system.
Go is dictatorial in the sense that there is one way to do things, but it encourages compositionality, coming from a Unix world where pipes are everything.
I wouldn't say Rust is 'struggling'. Compared to other languages, like say D and Nim, it's doing well.
No language could possibly topple its incumbent rivals overnight. To do this, it would have to have great advantages over existing languages, and excellent interoperability, and an approachable learning-curve. That's essentially a contradiction. If your language offers a new and better way to do things, it pretty much must be unfamiliar to those who use older languages. If the concepts involved were similar, you'd be releasing a library, or publishing a compiler optimisation, rather than developing a whole new language.
Perhaps that's a bit of a generalisation though. TypeScript isn't doing anything new, it's just adding an old and familiar feature (static type checking) to an old and familiar language that lacks it. In TypeScript's case, the feature isn't ground-breaking, but it's valuable enough that it may be worth the pain of using a different language.
It seems like C++ offered all of "it would have to have great advantages over existing languages, and excellent interoperability, and an approachable learning-curve" and worked hard and made whatever compromises were needed to do it. Which is probably why it took off relatively quickly.
For the deeply conservative domain that infrastructure code is, it has seen fantastically fast and enthusiastic adoption. It will take a while until it is used as a standard choice for building Python extensions and such (which also makes sense; this stuff should last for a while).