This works in some situations, but the OP's use case seems to suggest near real-time access to records that have only recently been created. I'm unsure if CSV could work well here.
That seems to stem from the fact they killed off pretty much the entire engineering team after the acquisition (judging by tweets I remember reading shortly afterwards).
In a real emergency, there are all sorts of ways to get a fix out fast. But that almost never happens, so it's not something to optimize for.
That may also depend on how many commits are waiting in line. If the team is large or you're in a monorepo, you have to wait for everyone else's pushes to pass before your own push gets tested, which can take a long time even if a single test run is fairly fast.
Serious question: why would I want to use this instead of tests as part of the CI process? Or would the use case be to use both but just get faster feedback from Lefthook?
You should use both. Basically, it's a bad idea to push broken code to the remote, but running tests and linters on all files is too time-consuming.
So you set up Lefthook to run your tests and linters only on changed files (which is fast and prevents 90% of problems), and then once your code is pushed you still run CI checks on the whole codebase to make sure some dependency in unchanged files isn't broken.
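For concreteness, here's a minimal lefthook.yml sketch along those lines, assuming a JS/TS project with ESLint and Jest (swap in whatever your stack uses):

```yaml
# lefthook.yml -- cheap local checks on just the files you touched;
# the full suite still runs in CI after push.
pre-commit:
  parallel: true
  commands:
    lint:
      glob: "*.{js,ts}"
      run: npx eslint {staged_files}   # {staged_files} = files staged for this commit
pre-push:
  commands:
    test:
      glob: "*.{js,ts}"
      run: npx jest --findRelatedTests {push_files}   # only tests related to pushed files
```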
> Basically, it's a bad idea to push broken code to the remote
This argument is void if CI is set up to run the complete test suite on branches, with parallel pipelines. When I want to do something, I branch, code, and push. The server does all the funky stuff without me having to install or understand anything, which is a huge time saver. With parallel pipelines you don't even block others with this behavior.
Things not on trunk can be broken; that's exactly one of the reasons we have branches.
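As a sketch of that setup (assuming GitHub Actions here; other CI systems have equivalents), the point is just that every branch push gets the full suite, split into parallel jobs so nobody blocks anyone else:

```yaml
# .github/workflows/ci.yml -- illustrative only
name: ci
on:
  push:
    branches: ["**"]        # run on every branch push, not just trunk
jobs:
  lint:                     # separate jobs run in parallel
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx eslint .
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```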
I'm chuckling inside thinking about how many people will go and install/use this versus how many people actually work at a scale where they need it.
Given that e.g. AWS charges you for cross-region ECR image pulls, this can make a difference for scrappy companies that push large images on every green build (i.e. multiple times a day, with lots of cache misses) to multiple regions. That's true even if your deployments have just tens of replicas. Larger companies probably worry about other parts of the bill.
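A rough back-of-envelope sketch (every figure below is an assumption for illustration, not AWS list pricing; check the data-transfer rate for your region pair):

```python
# Illustrative cross-region ECR pull cost estimate; all inputs are assumptions.
image_gb    = 4      # uncached layers pulled per deploy
replicas    = 30     # nodes pulling the image
deploys_day = 5      # "push on green" several times a day
regions     = 3      # regions pulling cross-region
usd_per_gb  = 0.02   # assumed cross-region transfer rate

monthly = image_gb * replicas * deploys_day * regions * usd_per_gb * 30
print(f"~${monthly:,.0f}/month in cross-region pull traffic")  # ~$1,080 with these numbers
```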
It makes sense to plan ahead for increased scale. If you are working for a VC-backed company whose mission is to grow grow grow, scale scale scale, then you can't exactly build only for the infrastructure load you have today. It's perfectly acceptable to overbuild your infra, as long as the cost isn't shooting you in the foot. You know what's worse than paying too much for infrastructure? Losing money and clients because your infrastructure breaks any time you get a real workload on it.
But even worse is not being able to release because the system complexity has shot through the roof. Plan (and test!) for 10x scale at a time, then optimize to squeeze another 5-10x while you build the 1000x system.
To some extent testing this out when you need it is a bit too late. If you anticipate having a problem, it's useful to play with solutions before you actually have said problem.
That's different from applying a 50,000 node solution to a 50 node problem though.