As someone who uses neither Just nor Make, I'm trying to understand the value proposition.
From what I can tell, Just accomplishes what a set of scripts in a directory could also do, but in a more ergonomic fashion. Since the justfile is a single file, you're not cluttering up your directory. You don't have to look for all *.sh files, but can instead do a "just --list". And using "just target" is probably easier to type than "./target.sh" since the just version has no punctuation.
What are some of the other benefits of Just that make it superior to "a set of scripts in a directory"?
Both model dependencies as a directed graph, i.e. if you run `make a` or `just a`, and a depends on b and b depends on c, both tools will build c first, then b, then a.
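For example, a minimal justfile sketch of that chain (the recipe names and bodies are hypothetical, just to show the dependency syntax):

    a: b
        echo "building a"

    b: c
        echo "building b"

    c:
        echo "building c"

Running `just a` executes c, then b, then a.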
What often happens is that people should use `make` (or `just`), but don't, and instead end up writing a poor replica of `make` as a custom python script for example.
And then that Python script perhaps shells out to a subprocess because you can't run something in Python directly as a lib, so now you have to import and invoke subprocesses, and so on. Building and deploying a static website using just or make is 4 lines.
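As a rough sketch of that claim (the generator and the deploy destination are assumptions, not from the comment):

    build:
        hugo

    deploy: build
        rsync -avz public/ user@example.com:/var/www/site

`just deploy` rebuilds the site and syncs it up, and that's the whole file.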
This should be in their README. After looking at the repo and some of the comments here, yours was the moment I went from “what exactly is the value prop?” to “that could be useful for me”
I don’t know where to start when the script is in ./Scripts/ or ./tools or ./bin or ./shared/tools. Make has a convention that its file is called Makefile; Just has Justfile. Easy to find. With a justfile I don’t even have to find it: the «just» executable will search up the directory tree.
Most people with a programming or Unix sysadmin background know about Make and what it is for (i.e. compiling source code, plus building and installing system tools).
Once people know that 'Just' is an alternative to Make for running project utility scripts, it may gain popularity and become just as well known.
I'm not sure `make` is any less of a mess, to be fair. Especially when you're trying to use `make` for general project automation rather than the standard set of build tasks.
Then this is good, but hopefully scope creep doesn't set in.
I'm not a software developer but a scientist, and eventually something like 5 shell scripts fill a directory. Having them all in one place sounds neater and easier to document, and being able to come back a few months later without opening each individual file to read its comments, or maintaining a separate readme, seems good for small projects.
You can also run it from any subdirectory as if you were in the root of a project (it searches the parents for the nearest justfile). E.g. 'just build' from /proj is the same as when it is executed from /proj/lib1/module1/, without having to think about what the relative path should be.
Because task runners come from the lineage of build systems, I (previously) would have thought that smart management of dependencies was a critical feature, especially skipping outputs whose inputs haven't changed, so that tasks can be run efficiently. Make and rake both do this.
But I guess, enough of the time, what people want is just a nice tidy way of organising tasks to run. Not drastically different to a set of shell scripts, but organised differently, with a few extra features and less boilerplate per task.
Job pipelines - it's easy to say that the build-release step requires the test step. Admittedly this is still possible in a shell script, but it is either messy (folder of helper scripts?) or requires code duplication.
> Since the justfile is a single file, you're not cluttering up your directory.
You can do the same with any other scripting language, because with most popular languages it's quite easy to handle cli-parameters on that level.
And using a single messy file is usually not a good pattern. But on the other hand, "just" comes with built-in tab completion, which you would not get for any random script out of the box. And if you are only using half-one-liners or very short scripts, the messiness of a single file does not matter that much. Though how well this will scale over time is a different topic.
> And using a single messy file is usually not a good pattern.
I find my Justfiles rarely ever get large enough to become messy, and then I just factor out the large tasks as scripts and it's all nice and tidy again. Almost all of my projects with a Dockerfile have docker-build and docker-run-local just tasks and those are pretty much always one-liners, and usually that's as complex as it gets. I treat Justfiles more like runnable readmes than a framework for homebrew build systems; if I need a proper build system, I'm probably going to invoke it via a one-liner in a just task.
I also love just, but I try to restrict my usage to projects that don't have larger communities or user populations, as getting it installed is nowhere near as universal as make. My favorite is mixing scripting languages and shell in the same file; albeit it's got some rope, it's productive and intuitive. https://github.com/kapilt/aws-sdk-api-changes/blob/master/ju...
You don’t need to use all of make’s features (I’ve been using it for something like 30 years and still can’t fathom parts of it). But TBH I just don’t see anything in those Justfiles I couldn’t do with make without needing to do anything special…
Most people don't have the advantage/burden of three decades of experience with the tool. Make's only advantage as a task runner I see is being ubiquitous. If that isn't a concern, no reason to use upper Pleistocene Make that isn't even a task runner over a more ergonomic tool. Make is so warty I have yet to see a team using Makefiles not run into any of its idiosyncrasies, but with just? Smooth sailing. Just works. No surprises. I'm a fan.
Getting the list of make targets by default is pretty nifty. As is not having to preface everything with PHONY.
Do any of these make alternatives reinvent the paradigm? No, but they do offer some quality of life improvements I wish were within reach without jumping through hoops.
What's the problem with ".PHONY"? Just the name? Would it be alright for you if it had a different name, like "INTERFACE" or "COMMANDS", or "NOFILE"?
I never thought about .PHONY as a workaround to anything. Just a slightly unnecessary annotation that you may add to the makefile if you want to be pedantically correct.
Invoking scripts with a language env and arguments both stand out to me.
Both are possible in make, but are extremely non-obvious, come with a bunch of caveats, and require some heady code blobs at the beginning of the file.
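For comparison, both are close to trivial in just; a hedged sketch (recipe names are hypothetical):

    # positional argument, interpolated with {{...}}
    release version:
        git tag {{version}} && git push --tags

    # shebang recipe: the body runs under the interpreter named on its first line
    stats:
        #!/usr/bin/env python3
        print("computing stats...")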
I've been using Just since at least mid-2018 (that's the oldest commit I can find), and we're using it on almost every single project at $WORK. It's easier to comprehend than make, doesn't have random GNUisms or BSDisms, it's easier to work with than a collection of random 5-10 line scripts, and despite being a bespoke tool, it's intuitive enough to a point where it immediately feels familiar.
Adding .PHONY targets and so forth is a bit inelegant, but I can share a makefile with confidence that any Linux/Mac OS/BSD user can use it without needing additional software, and I will never have to worry about make becoming unavailable or no longer maintained. Just my personal opinion.
> but I can share a makefile with confidence that any Linux/Mac OS/BSD can use it without needing any additional software
I'm sure you're kidding, but in the case that you're not: Make portability is gross.
We have ./configure steps precisely because Make is difficult w.r.t. portability, but even if that wasn't the case and you were just using make as a command executor: you still have enormous warts.
Oh, and yeah, you'd need whatever additional software too.
Be it: headers, linters, formatters, libraries or test suites that you've bundled.
Valgrind is a popular make target, but "Make" does not bring in valgrind (for example).
Honestly one of the most backwards things the Go community did was adopt "Make", it's so kludgey even as a pure command executor that I can't really take anyone seriously who argues for its use.
I'm not saying "Just" is a replacement, I don't know what is.
Most of the arguments for using Make boil down to "I enjoy typing `make <something>`" and "you probably have it installed already?".
This. People here are acting like make is installed by default on all Linuxes, but it absolutely is not. And the various BSD makes are very different to GNU make.
Make portability is bad enough on UNIX-like OSes, to say nothing of what a crapshoot it is on Windows. Even if you do have make installed on Windows, there's no guarantee that what it shells to is going to be able to run all the commands people tend to put in there.
Plain old shell scripting is much more portable than make, because a shell is definitely installed on every UNIX-like OS, and there is a very clear baseline of functionality that works in every Bourne/POSIX descendant. And it's quite likely to exist on a modern Windows developer machine too because bash is bundled together with Git.
My theory of writing developer scripts is to prefer the tool that already exists in the language you're developing. Gradle for JVM, npm for Node etc. Otherwise just use shell. Make feels like wrong tool for the job.
> there is a very clear baseline of functionality that works in every Bourne/POSIX descendant
Where is the best place to learn what this is? I'd love to make sure I'm writing portable shell scripts when I do have to.
There's also the issue that "shell scripts" often involve using binaries which you might not even realise are binaries (is "echo" a shell builtin or a binary? I forget) and which may differ from system to system. I've been bitten by grep issues before writing scripts across Ubuntu and OSX.
https://www.shellcheck.net/ and its accompanying cmdline tool / LSP integrations are a lifesaver for preventing that kind of thing. It'll warn you if you're doing anything not portable and even smartly changes its behavior depending on whether your shebang line uses bash or sh (iirc).
Do you need portable shell scripts though? IMHO, it really depends on the context.
If I was about to ship an open source application that came bundled with some shell scripts, then I agree portability is good, so that I know the script would run for people who might not have Bash installed.
But at ${DAYJOB} I much prefer to let Bash run all my scripts, and I make that explicit via ‘#!/usr/bin/env bash’.
Bash is still evolving, and the Bash devs are adding new features that I would miss in pure sh. Case in point: a (somewhat) reasonable way of working with arrays.
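For instance (a trivial sketch), arrays let you collect and iterate items safely, which has no clean POSIX sh equivalent:

    #!/usr/bin/env bash
    set -euo pipefail
    logs=( *.log )                 # glob results into an array
    for f in "${logs[@]}"; do      # quoted expansion survives spaces in names
        echo "processing $f"
    done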
Make works remarkably well. You’re just confusing it with C compilation. None of these complaints have anything to do with make.
Autotools, which generates configure scripts, was built to work around the specific issues associated with old-school cross-platform C compilation (with shared libraries, version differences, and misc libc editions). Ditto valgrind, et al.
So, yeah. Make’s fine. You just don’t like C. Which, that’s cool, just unrelated.
I'm not writing C, so I'm not sure what you mean; I mentioned ./configure as a solution to a problem because it was obviously a big enough problem.
To go into issues though:
Make itself executes by default with `sh` which is wildly different between platforms.
Even if you write portable enough shell, paths are still incompatible between OSes and distros.
You still must ship your tools, which is a direct contradiction of what was claimed.
In fact, I just googled it, and a chapter from Managing Projects with GNU Make talks about the issues in making Make portable (GNU Make, as opposed to BSD Make, which is different enough to have broken things for me!).
./configure works around C cross-compilation issues, where different platforms have different files with the same names.
Fixing that within Make would require it to be platform aware. Not just “is this Linux” but “which flavor of Linux is this and what version of that flavor”. It’s also highly specific to C.
Perhaps your other complaints here are valid, but they’re issues I’ve never run into myself, in my 20 odd years with it.
EDIT: Would you blame Just for not handling C cross compilation capability built in? Would you blame Just if someone automates the creation of Just command files to work around a particularly nasty workflow?
I don't need to google to find fault, but I figured the make book might have something to say about portability and in better words than I can construct at 3am.
Look, I'm not taking your tools away, there is no need to be defensive.
Make isn't going anywhere, but it is a bad tool, the syntax is completely arcane and it's designed for things few people actually need these days.
The most common case I've seen of modern Make usage is `make docker` and for Golang, where it doesn't get an opportunity to stretch its legs as a dependency manager at all -- making it a glorified task runner.
The portability aspect is all I mentioned, because that was all that was in question.
But if you really want me to get into it, I can be quite cruel.
Just because you spent 20 years learning or using something does not mean it is a good tool. I'm glad it works for you, truly, but it is an abomination, and people only continue to use it out of sunk cost fallacy or by telling themselves that "most people have it installed already".
I’m...talking about using make for its intended purpose, which is selectively building artifacts whose dependencies have changed, e.g. compile executable A if source code files X Y or Z have changed and leave it alone if not.
What exactly are you talking about? Are you upset that make isn't a cross-platform package management system?
You don't know what alternative there is to Make and then in the next breath you say the only arguments for it are personal preference?
> What exactly are you talking about? Are you upset that make isn't a cross-platform package management system?
Ok, so you were serious with your claims of portability, that is concerning.
The majority of times I've seen Make used it's primarily been as a task runner.
For example:
    TAG=some-service
    SVC=website.com/$(TAG)
    BUILDER=golang:1.19-alpine

    export REVISION_ID?=unknown
    export BUILD_DATE?=unknown

    RUN=docker run --rm \
        -v $(CURDIR):/opt/go/src/$(SVC) \
        -w /opt/go/src/$(SVC) \
        -e GO111MODULE=on

    build:
    ifeq ($(OS),Windows_NT)
    # Workaround on Windows for https://github.com/golang/dep/issues/1407
    	$(RUN) $(BUILDER) rm -rf vendor vendor.orig
    	$(RUN) $(BUILDER) rm -rf vendor vendor.orig
    endif
    	$(RUN) -e CGO_ENABLED=0 -e GOOS=linux $(BUILDER) \
    		go build -o service -ldflags "-s -X main.revisionID=$(REVISION_ID) -X main.buildDate=$(BUILD_DATE)" \
    		./cmd/some-service/...
    # $(RUN) $(BUILDER) rhash --sha256 service -o service.sha256
    	docker build --tag="$(TAG):$(REVISION_ID)" --tag="$(TAG):latest" .

    run:
    	docker-compose up

    serve:
    	go run ./cmd/some-service/...

    dev:
    	ulimit -n 1000 # increase the file watch limit, might be required on macOS
    	reflex -s -r '\.go$$' make serve
^ the only thing this is "using" of Make is the name, and it's so much worse to actually debug than a bash script; it's even got workarounds for various platforms inside it.
Edit: we're actually using this, and I remember there was a modern version with colors and a cool short form, but I can't find it. Anybody else got nice examples?
(btw, most of the time I've used Make it has been where the pain was done for me, or I was using Python, Go, Docker, or in one case Rust. I have never touched Autotools or C compilation myself with Make; my arguments have nothing to do with C compilation at all.)
These are just examples of poor portability (thus the workarounds) and of tools that are not installed when you run `make`, as the GP suggested; the claim was that you could write a makefile and kinda not have to worry about anything else.
Not the case.
I could just as easily talk about the pathing issues between macOS and Linux, or the multiplicity of issues surrounding `sed` (GNU) vs `sed` (BSD), or that the `black` Python formatter won't be installed by default.
It is not as easy as the author claimed, you need whatever dependencies you call on, obviously.
Unlike OP, I get the impression you’re referring to the good faith clause. I think they did that admirably, balancing a stated assumption that the intent was well meaning and knowledgeable jest with a friendly and informative rebuttal clearly intended to be helpful if their assumption was flawed.
But you could be referencing any number of other things in the guidelines. It would be incredibly helpful if people on HN who engage in community moderation actually explain their reasoning. Here I’ll help:
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
If you’re bothered enough by someone’s comments to link the guidelines, sharing a thought about which part of them is pertinent would be more thoughtful and would facilitate a healthier discussion.
> Please don't post shallow dismissals […] A good critical comment teaches us something.
This is one I’m working on too. If it’s worth challenging something, it’s probably worth challenging it with some proverbial meat. It might seem obviously wrong to you, or to me, but it doesn’t always seem that obvious to everyone else.
I’m certain you meant well with this, but I also think OP would benefit from clarification. I know I would too, and I expect the discussion would benefit as well.
> I can share a makefile with confidence that any Linux/Mac OS/BSD user can use it without needing additional software
Makefile portability can be tricky. Especially if you try to do something fancy with the makefiles.
GNU Make has features and syntax that the other Makes don’t. Likewise there are features and syntax that some Make programs have that GNU Make doesn’t.
Make is not _that_ portable. If you're using high-level languages and only need a task runner to kick off your compiler, watch rules, or similar, and need portability, you could write an executable bash script with functions that serve as your commands:
    #!/bin/bash
    set -e

    function build() {
        echo "Your build steps"
    }

    function clean() {
        echo "Your clean steps"
    }

    # dispatch: run the function named on the command line, e.g. "./task build"
    eval "$@"
Which you can run with
$ ./task clean
$ ./task build
You can also write these kinds of task scripts in JavaScript or Python, which might make compatibility with Windows easier to manage.
Yeah, that's fair. I only use make as a task runner for high-level languages where the compilers take care of those aspects of compilation. My makefile commands are never much more complicated than "cargo build" or adding compiler flags to my "go build" command.
I actually do exactly this in some of my personal projects. You don't need the eval: store $1 in a variable and shift $@. You might also want to check that $1 is a function defined in the script, lest the task runner execute some other arbitrary command; see the sketch below.
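A sketch of that safer dispatch, assuming Bash (`declare -F` succeeds only if the name is a function defined in this script):

    #!/usr/bin/env bash
    set -euo pipefail

    task="$1"; shift
    if declare -F "$task" > /dev/null; then
        "$task" "$@"               # run the named function with the remaining args
    else
        echo "unknown task: $task" >&2
        exit 1
    fi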
No reason why you couldn't do that too - it's to taste. I generally like to have fewer files in the top level of my repos to keep them approachable. You could also have a folder that contains scripts like: `./scripts/build`
Actually, I did a hack that does that too (this one is nicer, though). I can’t find mine offhand (too many makefiles), but it is also a Makefile target that runs grep -B1 on the Makefile itself and spits out each target’s name and the comment I usually add above it… And my Makefile template uses .env files too. I think I cover most of the list in my day-to-day use.
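Something in the spirit of this rough sketch (not the poster's exact rule; the recipe line needs a real tab):

    help:
    	@grep -B1 -E '^[A-Za-z0-9_-]+:' Makefile | grep -v -e '^--'

which prints each target line together with the comment line directly above it (the second grep filters out grep -B1's `--` group separators).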
That works well if you have targets with simple comments that are directly above the target and don't extend beyond a single line. It may be important to provide several lines of help. This gist has other solutions as well.
I like similar tricks. Another trick I like to use is being able to do a `make showconfig` and have it print the list of variables & their values that I care about.
You can see that here. You can also see my Make+BASH solution for documenting targets.
If you're going to use it like Make without the build system parts, why not just have a directory of tiny scripts? More portable than either, the only boilerplate is a shebang line, and you can use static analysis tools (shellcheck), formatters (shfmt) and the like.
Take the example they've screenshotted in the readme[1]:
    alias b := build

    host := `uname -a`

    # build main
    build:
        cc *.c -o main

    # test everything
    test-all: build
        ./test --all

    # run a specific test
    test TEST: build
        ./test --test {{TEST}}
The equivalent would just be something like the following:
build.bash:

    #!/usr/bin/env bash
    set -o errexit
    cc *.c -o main

test-all.bash:

    #!/usr/bin/env bash
    set -o errexit
    "$(dirname "${BASH_SOURCE[0]}")/build.bash"
    ./test --all

test.bash:

    #!/usr/bin/env bash
    set -o errexit
    "$(dirname "${BASH_SOURCE[0]}")/build.bash"
    ./test --test "$@"
If you want to get really fancy you could make a common.bash with safety pragmas and things like the host string.
As a matter of practice, I always install git-bash or MinGW or Cygwin on my Windows boxes. There's only one instance I've found where these solutions were not sufficient for my use cases, and it was an admittedly horrible corner case where a customer's network forced me to not "do the right thing(tm)".
Having tried all 3, I’ll warmly recommend WSL2 instead, which is basically linux. But many people do work on windows, even write software for it. While I’m in your Linux based corner, some people aren’t, and it’s nice to share a task runner with them.
Last time I checked, git bash doesn’t ship make, either.
Huge fan of just; I add a Justfile to pretty much every new repo I create regardless of language or stack.
My personal favorite feature is the ability to load environment variables from a `.env` file and set them for all commands run. Just have to add this to the top of your `Justfile` to make it happen:
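Presumably that line is just's documented dotenv setting:

    set dotenv-load

With that at the top of the Justfile, variables from a `.env` file next to it are exported into every recipe's environment.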
As someone who _extensively_ uses Makefiles everywhere to speed up things (why bother remembering how to start a server in a particular language when “make serve” will work anywhere), I almost understand why this exists, but then I remember that make is available everywhere and has tab auto completion and I have to wonder why…
Because make is dog shit if you need to intertwine make with bash. You have to remember various escaping rules (double $$ signs or not, depending on whether you want to refer to a make variable or interpolate a bash variable), tabs instead of spaces that new devs often (quite rightly) get tripped up by, and various other idiosyncrasies you can waste hours on.
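A minimal sketch of the $$ rule (variable and target names are hypothetical; the recipe line needs a real tab):

    TAGS = v1 v2 v3

    print-tags:
    	@for t in $(TAGS); do echo "tag: $$t"; done  # $(TAGS) is make's, $$t is the shell's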
sure, let's stop looking for better ways to do things because this one 30-year old way is capable of being twisted into what you need, no matter what it is.
I get the reverence around make but don't blind yourself to potentially better ways of doing things.
I would even go so far as to say that intentionally avoiding new ways to approach these things will necessarily blind you to better ways once they do come along, and I wonder how many good tools have died because "[x] does all I need." (replace "x" with whatever you like.)
This simple-ish guide made me realise that make is not as suited to my needs as a mostly-web developer, and that just would work better. npm scripts are doing just fine for me now, and since some of the routines would map to npm commands anyway, I don't know how much benefit it would actually bring me.
What use case is that? I looked over the README, including the parts where it claims to do things Make can't (I don't agree with much of it). I remain unconvinced that it supports use cases Make can't.
It's not that Make can't work as a task launcher, it's that it's not designed as such.
The simple fact that a build target isn't executed if the build file already exists, requiring a workaround to run it, points to the difference in purpose and philosophy. Those little details, together with the more modern design, accumulate to make Just a tool that is simpler to use for that purpose.
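For example (a minimal sketch; the script name is hypothetical, and the recipe line needs a real tab):

    # if a file or directory named "test" exists, plain `make test` reports
    # "'test' is up to date" and does nothing, hence the .PHONY workaround
    .PHONY: test
    test:
    	./run-tests.sh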
Just is built in Rust rather than C. Rust is generally nicer to work with and maintain than C, so that's a huge win development-wise. Feature-wise, I only see a couple of things in Just right now that differentiate it from Make.
I'm not sure what you're confused about, maybe you missed the point of this being shared on ycombinator news? Just being written in Rust versus C is part of an open source movement to modernize these old tools and the significance of using a modern language versus an ancient one is better support and features going forward. The reason this tool is currently top voted on ycombinator news is purely because it's written in Rust and of interest to programmers who care about their open source tooling.
This is a weird opinion to me, but I reckon you're an ops person who doesn't code a lot so I respect not wanting to have to look at code if you're an ops/admin type. Most programmers would grasp the relevance of modernizing their tools and the maintainability and feature gains from it. The open source movement in general is based entirely upon being able to look at the source code of your tools and modify and update them.
I hate to be the bearer of bad news, but your assumption is incorrect. I possess deep expertise in a number of areas; code flows with ease. Reading code is its own skill, too. I'm not easily intimidated by any programming language or problem. On the technical front I've done everything from development, SRE, ML/AI, and founding a company, to being a leader and executive at small and very large companies... it's all fascinating in its way. But the most fascinating things I've found in the universe are people.
When something doesn't work as expected, I dive in as deep as the rabbit hole goes to get across the line.
Curious what led you to arrive at "Aha! They must be an ops person", will you humor me with an explanation?
Would you throw out sqlite, written in C for a Rust clone?
It is possible to reimplement a relatively simple tool like make, and having learnt from its historical shortcomings, it can be better irrespective of the implementation language. But that’s a different point.
This is actually very different than `make`, since it will always run the tasks even if their inputs haven't changed[0].
It's a bit bizarre to me that their example involves building code, since that application generally benefits from this "idiosyncratic" behavior of `make`.
You probably would want the behavior of `make` for the `test` command too, to avoid running potentially time-consuming tests unnecessarily. Bazel caches test results to avoid this, for example.
Caching test results sounds like a horrible idea, since the test could rely on external state or dependencies that weren't checked. For example, if you're testing something that opens a socket.
People often come up with this absurd argument. If all your tests are opening sockets, it must be a nightmare as they may fail randomly every time you run them.
Keep tests that open sockets and otherwise interact with resources not controlled by the test environment itself separate; these are what most people call integration tests. Only those cannot be cached, and it's a real trade-off: while you get a lot of value from those tests, exactly because they can't be cached you want to keep them to a minimum.
Most applications I've ever worked on talk to the internet or to other servers on the network, you can't close your eyes and pretend everything runs on one machine. You eventually have to test all of your software.
Certainly, you can optimize some of your tests by caching them! But they should be fast anyway, because your software should be fast. The software I currently work on has massive test suites containing around 50k tests that run in about 3 minutes per suite on my PC. It's time consuming, I suppose, but caching would mean that I'm not actually flushing out latent bugs. Instead, we run all of the tests every time and do things like randomize the execution order, so we find bugs before customers do.
At the point where your software interacts with threads (or the kernel!) you've already brought in non-determinism and trying to act like your tests are really deterministic is naive.
I've written so many tests in my life I think I know what I am talking about. It's not naive, it's what professionals do.
Learn to separate tests that use outside resources and behave nondeterministically from the ones that are never flaky, and make sure to cache the latter. If you don't do that, you're still in the dark ages of testing, where tests are a pain to touch and fail all the time for no reason.
Many folks believe that builds and tests should ideally be "hermetic", and not depend on external state. This can make tests more robust, and facilitates tasks such as bisecting to identify the cause of regressions.
I'm a big fan of fast, self-contained hermetic tests, but at the end of the day you should be testing the actual behavior of your software. This means all your dependencies should be versioned so you can bisect actual behavior, not mocked test behavior.
I've also rarely ever touched software that is actually free of nondeterminism, so I am deeply skeptical of caching anything but the simplest test case. And those are fast.
How do you propose testing software that is meant to use sockets? Just don't?
Mock the entire socket API, so you're not actually testing your software?
At the end of the day you need to run some tests with external interactions, and you shouldn't cache those.
Also keep in mind any time you perform syscalls, you have external interactions. Their behavior could change and you want to know if they have intermittent failure scenarios. Using threads? You're nondeterministic, you should keep running tests.
I do love me some make, and have since forever. But we do have to look for a successor, as cleaning out all of make's historical baggage would be a disservice to too many (really, any is too many). If you are not sure what I am talking about, try typing `make -p`. Those builtin rules can be disabled if you are not building ancient artifacts, but the fact that we have to disable them is why one of these work-alikes is going to win some day.
I dig the general idea, but question the value add over a directory of `scripts` that follow sane conventions (i.e. `script/test`, `script/build`, etc.). Is the main thing that you can do `just -l` to see available commands? I have never really reached for `make` when I've had a choice, as I've done mostly Ruby, JS, or Java, where you have more sane, humane tools (i.e. Rake, Yarn, Maven, though that one is never fun).
My general approach is every repo should have something that follows https://github.com/github/scripts-to-rule-them-all, written in sh (maybe bash, it's 2023), linted with shellcheck. When you need something fancy, Rake is great, or grab some nice bash command-line helper and source it from all your scripts. Is a command listing really worth another dependency over what you get from `ls script` or `cat script/README`?
I've migrated from a ./scripts directory to a justfile and greatly appreciate the concision and modularity. I understand and have tried source'ing common bash files, but have never managed to make it feel quite right. `just` has bought my current project a lot of time until we replace it with some behemoth like Bazel.
I use https://github.com/davetron5000/gli for this, since I work in ruby. Adding something like just or gli to your project is a huge win. Every dev can just `just update_db` to refresh their dev db, `just update_secrets` to update dev secrets. Whatever. So much better than putting snippets in a wiki or whatever.
I like gli because it gives you subcommands, like `gli database refresh` etc.
Another simple tool similar to this is makesure[1]. It’s written in shell so the idea is you include makesure itself in your repo, which avoids needing to install another tool to run commands on your project.
It’s very simple so isn’t good for everything, but works well as a simple command runner.
Looks cool! Including the script in the project directory is the way, and I created one, makesh[1], with that in mind. Since it's a submodule it can be easily pulled around with a repo and updated.
Glad to see people are finally going back to shell scripts, since they are very ergonomic, and with a little portability in mind they can be cross-platform too.
I've been using it for a while and it's pretty flexible:
- dependencies
- parallelism
- programmatically generated tasks (since the config file is just a Python file)
- "udf" to specify when a task is up-to-date and can be skipped
I like the idea of trying to rethink the Make interface, but it just seems like most projects could actually benefit from build targets (conditional execution based on file existence/age), even if that's not the first thing you need to automate. I don't want to give that up just because .PHONY is confusing.
Nah, just does not support automatic parallelization of the dependency graph. This is a deal breaker for me. Also, I'm not a huge fan of yet another built-in language that tries to mimic a full-featured programming language.
I have been using Just for 3 months and it is such a fantastic tool. I never looked back at make. I don't even look at npm scripts anymore. I love Just.
That looks cool but I fundamentally hate yaml so it’s a no go.
I would rather lose my hair screwing with a makefile like thing than add more yaml to my day to day; I currently say “I hate yaml” at least 4 times a day.
If it’s not too much trouble and I don’t need comments, I just write JSON as YAML.
Tools like these are handy, for sure. Problem is: if you're collaborating on a project, then you're requiring a new dependency to be installed on everyone's machine.
I feel like this is a pretty non-issue so long as you document both that the dependency is required and how to install it. Just is much easier to install than Python!
Is it? Python is basically everywhere, and if it's not already installed, your package manager has it ready. `apt install python3`, done.
Also, with Python you have... Python, all the language's power to extend your scripts as needed. `Just` works with a shell like bash, and I pretty much prefer python for scripting. Bash scripts get complicated very quickly.
It does, but you probably don't want to use the Python from your package manager. That tends to cause all sorts of problems when you need a newer version or different versions for different projects down the line. This is especially the case if you intend to use any packages in your python code. Managing dependencies in python gets complex quickly.
These issues are part of my daily work. I’ve started converting the make targets/commands to shell scripts because the hacks and ugliness that you have to do to provide make with arguments isn’t worth it. It seems like the more advanced shell features you want to use in a makefile, the more make gets in your way.
Not that I fault it. It’s supposed to be for making programs, hence the name. We’re abusing it by turning it into a script collection.
My personal favorite for small projects is invoke: https://www.pyinvoke.org/. I prefer it with python because it is just another lean dependency I can directly install along with other dependencies. Works pretty well unless you wanna chain and run long shell commands.
My biggest pet peeve with pyinvoke is that you can’t pass arbitrary arguments through to the underlying task. For something like invoking pytest you need to replicate the arguments you use in the task definition.
Yup! I totally get that, as I ran into it the first time I used pyinvoke. I got around this limitation by using pyinvoke to just specify the tasks, their arguments, and what other tasks they rely on, and letting tasks that share arguments delegate their core to a common function. It is an inconvenience, so I was planning on contributing this missing feature upstream to the library.
Make is firmly in a category of 'better the devil you know' for me. Not that I deeply know it; I use just a subset of it anyway, and for that it's fine.
If there was a short Make: The Good Parts O'Reilly book, then I would probably read it.
Just is designed for running commands, make is designed for tracking dependencies. I use them both in the same project calling each other if need be: there isn’t a competition here.
That’s often pragmatic if you have 3-4 commands. But when you have more, you eventually end up with a cluttered Makefile and a poor command runner. `just -l` shows all the tasks available. Just commands take CLI arguments instead of forcing you to use environment variables, which are susceptible to typos, etc. I can write a 3-line script in normal bash syntax instead of slightly different make syntax, and then easily move it to a separate shell script when it gets bigger.
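A hedged sketch of such a recipe taking positional arguments (names are hypothetical):

    # invoked as: just deploy staging v1.2.3
    deploy env tag:
        ./deploy.sh --env {{env}} --tag {{tag}}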
I use github.com/TekWizely/run for this use case. It's a robust way to build one-off command-line "APIs" for managing projects and documenting processes. I deploy it in production environments along with application-specific Runfiles for devops.
It makes life a lot easier when onboarding new folks, or for remembering how to do something months later.
I also use make, but limit it to just building software. All of my repos have a Runfile and a Makefile.
I use make extensively for a company wide build system across most/all products. Normalized everything.
Someone at the company introduced Just in their projects. I’ve used it quite a bit now, and it’s great EXCEPT that you cannot include other Justfiles. So abstraction is impossible: if I want to implement something like a push feature, it has to be implemented everywhere, with no way to centrally update all projects.
Support for including other Runfiles was recently introduced, with support for globbing and the ability to indicate if an error should be generated if no files are found.
I've come across this a few times, it seems to cope very well with all the things I'm abusing Make to do. I'm hesitant to add niche tooling requirements to my projects though.
Can anyone comment with their experience using this? (in particular the social ramifications)
Well, let me put it this way: I never use a tool that isn’t bundled with the OS or the runtime I’m developing for, because I don’t want my environment to be a special snowflake (and I develop stuff on Linux and macOS, with essentially the same CLI tools).
Using the Nix package manager solves these problems.
With nix, all sorts of niche packages are available, so installing just isn't difficult (and nix-installed packages only go in /nix/store, so the filesystem isn't messed up).
If you want the same programs (the same version of programs, even) across different Linux distributions, and macOS, nix is the best tool for that.
If you're worried about your environment being an unreproducible snowflake, nix's main advertised feature is reproducing package installation.
Although, yeah, Nix suffers the same cost of being one more niche thing to install.
Direnv is the real QOL improvement. You just add a .envrc file, write a flake.nix file listing your dependencies, and any time you enter a project directory you have every project dependency and tool instantly available, but they otherwise don’t bother you at all.
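A minimal sketch of that setup, assuming nix-direnv is installed:

    # .envrc
    use flake

After a one-time `direnv allow`, entering the directory drops you into the flake's dev shell automatically.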
It’s funny how things evolve. This is clearly something similar to make.
All sorts of tools use xxxxfile for their config. But the tool “make” is actually short for “make file”: that’s what make does, it makes files (and has modified-date dependencies).
What would it take to unseat Make? Make is installed everywhere, so it is really hard for me to justify leaning on new tooling. Make is ok, but it has enough deficiencies that I longingly look at tools like this.
I definitely don't think Just will ever unseat Make. Just doesn't have file-based dependencies, so it's not a build system, just a command runner.
As far as unseating Make as a command runner, I think that might just take Just being available in more places, since one of the main advantages of Make that many users cite is that it's available everywhere. Just is already available in a lot of package repos, but not all of them. Finally packaging Just for Debian[0] would help a lot.
The issue appears to be that if you make a public GitHub repo, his bot will crawl it and add it to a repository of Justfiles for him to check against? That really isn't a privacy violation in my mind. As soon as you add something to a public GitHub repo it is instantly indexed and made available on a bunch of (shady and legit) GitHub mirrors.
I'm still waiting on a tool that installs an activate.sh/activate.bat file that will bootstrap the tool when it's not installed, and then load up the environment variables so that clean/build/deploy/test become active.
Manually installing stuff sucks. It should be cross platform, automated and local by default whenever possible.
This looks incredibly useful. Random question, since I couldn't find it in the README: can I define a recipe (or recipe list) in a common location in my home folder and then use it from anywhere? While a per-project config is the intended use case, I'd also like a bunch of global ones.
Make is acceptable for running arbitrary bunches of commands. For anything grander, I just write shell scripts that call shell scripts. Once you know shell well, it's very easy to throw together a simple build system. For a complex build system I'd go find a complex build tool.
Hey, this is really cool! Coincidentally I’ve been doing something really similar with `alias j='make -f ~/Makefile'` for a single giant makefile across multiple projects; I also combine it with fzf to allow fuzzy-searching for target names.
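Something like this, presumably (a hypothetical sketch, not the poster's actual setup):

    alias j='make -f ~/Makefile'

    # fuzzy-pick a target name from the Makefile and run it
    jf() {
        local t
        t=$(grep -oE '^[A-Za-z0-9_-]+:' ~/Makefile | tr -d ':' | fzf) && j "$t"
    }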
I think looking at the features, you see "minor quality of life UX improvements", and it's not mind-blowing. (just commands implicitly run with the workdir of the justfile, just --choose uses fzf, soft tabs not required, etc.).
Make never stuck for me - I couldn't quite get it to fit inside my head.
Just has the exact set of features I want.
Here's one example of one of my Justfiles: https://github.com/simonw/sqlite-utils/blob/fc221f9b62ed8624... - documented here: https://sqlite-utils.datasette.io/en/stable/contributing.htm...
I also wrote about using Just with Django in this TIL: https://til.simonwillison.net/django/just-with-django