I have single letters like r and b aliased in my bash profile to check for a bash script in the current project directory and run it if it exists (r = ./run.sh, b = ./build.sh).
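Roughly, a minimal version of that in ~/.bashrc looks like this (written as functions rather than literal aliases so they can check for the file first; the messages are just placeholders):
r() { if [ -f ./run.sh ]; then bash ./run.sh "$@"; else echo "no run.sh in $PWD" >&2; return 1; fi; }
b() { if [ -f ./build.sh ]; then bash ./build.sh "$@"; else echo "no build.sh in $PWD" >&2; return 1; fi; }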
In each of those scripts I typically have a one-liner, depending on what the project requires. A simple build.sh is:
#!/usr/bin/env bash
make build
And run:
#!/usr/bin/env bash
docker run foo/bar
Or maybe:
#!/usr/bin/env bash
python manage.py runserver
I might also add (source) environment variable settings, etc. It's sort of like my own personal, decentralized Makefile.
Then I add each script to .git/info/exclude in each project. It saves so much time when switching between projects to not have to remember any particular project's build or run commands.
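.git/info/exclude takes the same patterns as .gitignore, so per project the exclude file just needs a couple of lines:
run.sh
build.sh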
One slight modification: name the build and run scripts something you would never expect to already be in the repo (e.g. run-xyz.sh, where xyz is your initials, 10 random characters, etc.).
Then, the filename can be excluded in a global gitignore file.
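Something like this would do it (core.excludesFile is the actual git setting; the path and script names are just examples):
git config --global core.excludesFile ~/.gitignore_global
printf 'run-xyz.sh\nbuild-xyz.sh\n' >> ~/.gitignore_global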
Yeah, good points. Maybe putting the script(s) inside a .whatever directory at the project root, like some other dev tools do, is worth considering. What do you think?
Really, to do this right, you should make it function like direnv and keep a whitelist of scripts you trust (bonus points for including hashes of the scripts). On first run it would ask you to review and trust the script, and then just work on subsequent runs.
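As a rough sketch of that idea (assuming sha256sum is available and using a made-up trust file at ~/.config/run-trust/trusted; a real version would give build.sh the same treatment):
r() {
  local script=./run.sh
  [ -f "$script" ] || { echo "no $script in $PWD" >&2; return 1; }
  local trustfile="$HOME/.config/run-trust/trusted"
  mkdir -p "$(dirname "$trustfile")" && touch "$trustfile"
  # Key each entry on the content hash plus the absolute path, like direnv's allow list.
  local hash entry answer
  hash=$(sha256sum "$script" | awk '{print $1}')
  entry="$hash $PWD/run.sh"
  if ! grep -qxF "$entry" "$trustfile"; then
    echo "Untrusted or changed script:" >&2
    cat "$script"
    read -rp "Review the above. Trust and run it? [y/N] " answer
    [ "$answer" = "y" ] || return 1
    echo "$entry" >> "$trustfile"
  fi
  bash "$script" "$@"
}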