webmonkeyuk's comments | Hacker News

This works in some situations, but the OP's use case seems to suggest near real-time access to records that have only just been created. I'm not sure CSV could work well here.


I suspect:

- memcached if you don't need to persist the data

- Redis if you don't know whether you need to use Redis or FoundationDB

- FoundationDB if you learn that Redis doesn't do what you need

I don't mean this in any kind of derogatory way, but I suspect that if you need to ask then you probably don't need FDB.

The principle of keeping tech stacks boring and using well-established components is less exciting as an engineer, but it is usually the best choice.
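
To illustrate the "persist the data" line: in Redis, durability is a configuration knob, whereas memcached simply doesn't have one. A minimal sketch (these are real redis.conf directives; the values are just illustrative):

    # redis.conf -- the persistence knobs that separate "cache" from "store"
    appendonly yes          # enable the append-only file (AOF) for durability
    appendfsync everysec    # fsync roughly once per second: bounded loss window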


What's the solution? k8s!

What's the problem? Anything!


Nigel Richards learnt the words from the French dictionary but can't speak French


It looks like the website is proxied via a DDoS protection company called DDOS-GUARD https://gwhois.org/190.115.31.151

It looks like the website is being served from an Ubuntu server running nginx/1.18.0 https://parler.com/404
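
For anyone wanting to reproduce that check: the banner comes from the HTTP Server header. A quick sketch, assuming the site still responds (URL from the comment above):

    import urllib.request
    import urllib.error

    # Error pages still carry the Server header, so a 404 works fine.
    req = urllib.request.Request("https://parler.com/404", method="HEAD")
    try:
        resp = urllib.request.urlopen(req)
        print(resp.headers.get("Server"))
    except urllib.error.HTTPError as e:
        print(e.headers.get("Server"))  # e.g. "nginx/1.18.0"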


"A CPU for use in space must first be MIL-STD-883"

Is this just for NASA craft? Are there any regulations for private craft or international standards?


Depends, but mostly no. No regulations.


Speaking from experience: they're not even managing to service paying customers well.


That seems to stem from the fact they killed off pretty much the entire engineering team after the acquisition (judging by tweets I remember reading shortly afterwards).


The Idera way: buy the product, fire the devs, offshore to a feature factory.


I will never doubt what "purchase by private equity" means again; it is clear.


It really depends on who is buying.

But with Travis and Idera it was pretty clear it would go this way, looking at past Idera purchases.


What exactly made you "doubt" this fact before now?


I thought maybe there were sometimes exceptions to the rule.


There are NEVER exceptions to this particular rule, at least not in tech.


Having to wait 30 mins for a build to run sounds like the real issue here. How would you push out an emergency release in a timely way, for example?


In a real emergency, there are all sorts of ways to get a fix out fast. But that almost never happens, so it's not something to optimize for.


That may also depend on how many commits are waiting in line. If the team is large or it's a monorepo, you have to wait for all the pushes ahead of yours to pass before your own push is tested, which can take a long time even if a single test run is fairly fast.
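
As a purely hypothetical illustration of how a serial queue compounds a slow pipeline (the queue depth is made up):

    # Hypothetical: serial merge queue in front of a slow pipeline
    pipeline_minutes = 30   # one full CI run, per the comment above
    commits_ahead = 5       # assumed number of pushes queued before yours
    wait = (commits_ahead + 1) * pipeline_minutes
    print(f"~{wait} minutes until your own push is verified")  # ~180 minutes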


Serious question: why would I want to use this instead of tests as part of the CI process? Or would the use case be to use both but just get faster feedback from Lefthook?


You should use both. Basically, it's a bad idea to push broken code to the remote, but running tests and linters on all files is too time-consuming.

So you set up lefthook to run your tests and linters only on changed files (which is fast and prevents 90% of problems), and then once your code is pushed you still run CI checks on all the code to make sure that some dependency in unchanged files isn't broken.
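
A minimal sketch of what that lefthook.yml could look like (the globs and commands are illustrative; {push_files} is lefthook's template for the files being pushed):

    # lefthook.yml -- adjust globs and commands to your stack
    pre-push:
      parallel: true
      commands:
        lint:
          glob: "*.{js,ts}"
          run: npx eslint {push_files}
        test:
          run: npm test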


> Basically it's bad idea to push broken code to remote

This argument is moot if CI is set up to run the full test suite on branches, with parallel pipelines. So when I want to do something, I branch, code, and push. The server does all the funky stuff without me having to install or understand anything, which is a huge time saver. With parallel pipelines you don't even block others with this behaviour.

Things not on trunk can be broken; that is exactly one of the reasons we have branches.


I'm chuckling inside thinking about how many people will go and install this versus how many actually work at a scale where they need it.


Given that e.g. AWS charges you for cross-region ECR image pulling, this can make a difference for scrappy companies that push large images on every green build (multiple times a day, with lots of cache misses) to multiple regions. That's even if your deployments have just tens of replicas. Larger companies probably worry about other parts of the bill.
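
A back-of-envelope sketch of that cost; every number below is an assumption, not AWS's actual pricing:

    # Illustrative only; plug in your own image sizes and rates.
    image_gb = 2.0          # uncached layer data pulled per deploy, per replica
    deploys_per_day = 10    # "push on green" several times a day
    replicas = 30           # tens of replicas, as above
    extra_regions = 2       # regions pulling across a region boundary
    usd_per_gb = 0.02       # assumed cross-region transfer rate

    monthly_usd = image_gb * deploys_per_day * replicas * extra_regions * usd_per_gb * 30
    print(f"~${monthly_usd:,.0f}/month")  # ~$720/month with these numbers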


It makes sense to plan ahead for increased scale. If you are working for a VC-backed company whose mission is grow, grow, grow and scale, scale, scale, then you can't exactly build for the infrastructure you are currently using. It's perfectly acceptable to build out overbuilt infra, as long as your costs aren't shooting you in the foot. You know what's worse than paying too much for infrastructure? Losing money and clients because your infrastructure breaks any time you get a real workload on it.


But even worse is not being able to release because the system complexity has shot through the roof. Plan (and test!) for 10x scale at a time, then optimize to squeeze another 5-10x while you build the 1000x system.


To some extent, testing this out when you already need it is a bit too late. If you anticipate having a problem, it's useful to play with solutions before you actually have said problem.

That's different from applying a 50,000 node solution to a 50 node problem though.

