Hacker News | xyzzy123's comments

Quietly create a retrovirus which will write your data into the DNA of every living human.

Have thought about this a bit. I think there's a "business model" where a non-profit foundation charges a very high price (say $50 or $100/gig) and the interest on that pays for the hosting and admin. One issue is startup risk: if you don't get enough people wanting to store data "forever", it won't be sustainable.

The foundation has a remit to also do some related "good works". The idea is that the pot of money (and the interest it throws off) acts as an incentive to keep the foundation going. Eventually the cost of hosting "legacy" data should drop close to zero. You could run it as an overlay on two clouds initially to avoid capital outlay.

I think you would want librarians / archivists on the board. It wouldn't require much in the way of software, making something that could last in the long term is more of a governance problem than a technical one.


Get a Mitutoyo, it'll be accurate AND the battery will last 5 years. It only hurts once. Cheap calipers hurt every time you use them :/

> Australia will probably be the first "western" country to block Tor.

The Australian way would be to "ban" tor without any particular concern for enforceability or technical feasibility. Any actual blocking would be pushed onto industry somehow, which would then proceed to half-ass it, doing the absolute minimum possible to demonstrate they are complying with regulation.

I like Australia a lot, but a lot of the time it feels like the political priority is to "make it look like something is being done". No one would actually care whether the blocking worked or not unless the media made a big song and dance about it.

I also wonder how much of this ban is about "punishing" X and Meta in particular - Meta for its refusal to pay for news, and X because they didn't jump to immediately remove stuff the government wanted taken down.

> What even counts as social media?

Anything the government needs more leverage over or wants to shake down for money.


> The Australian way would be to "ban" tor without any particular concern for enforceability or technical feasibility. Any actual blocking would be pushed onto industry somehow, which would then proceed to half-ass it, doing the absolute minimum possible to demonstrate they are complying with regulation.

Just because it wouldn't be well-implemented doesn't mean it's nothing to worry about, not to mention that such things are almost always just 1 step on a path of many.


> X because they didn't jump to immediately remove stuff the government wanted taken down

Yes they did. X has always capitulated to the whims of governments.


Except that one time Elon Musk strongly disagreed with a government who wanted him to take down some Nazi stuff and got his app blocked in a whole country.

When it's about taking down left wing stuff he just complies.


Sort of; at medium scale you can blue/green your whole system out of the monorepo (even if it's, say, 20 services) in k8s and flip the ingresses to cut over during release.

Of course, k8s isn't required; you can do it in straight IaC etc. (i.e. deploy a whole parallel system and switch).

It's still "mixed fleet" in terms of any shared external resources (queues, db state, etc) but you can change service interfaces etc with impunity and not worry about compatibility / versioning between services.

Throwing temporary compute at the problem can save a lot of busywork and/or thinking about integration problems.

This stops being practical if you get _very_ big but at that point you presumably have more money and engineers to throw at the problem.


> Sort of; at medium scale you can blue/green your whole system out of the monorepo (even if its say 20 services) in k8s and flip the ingresses to cut over during release.

That's overall a bad idea, and negates the whole point of blue-green deployments. This is particularly bad if you have in place any form of distributed transaction.

There are very good reasons why deployment strategies such as rolling deployments and one-box deployments were developed. You need to be able to gradually roll out a change to prevent problems from escalating and causing downtime. If you use all that infrastructure just to flip global switches, you're negating all of it with practices that invariably cause problems.

And this is being done just because someone thinks it's a good idea to keep all code in a single repo?


> You need to be able to gradually roll out a change to prevent problems from escalating and causing downtime.

From my perspective I prefer it to rolling updates, it just costs a lot of resources. It gives you a chance to verify all the parts of your system together during release.

You deploy an entire new parallel copy of your system and test it. This can be manual or automated. Once you have confidence, you can flip traffic. Alternatively you can siphon off a slice of production traffic and monitor metrics as you gradually shift traffic over to the new release. Note that you can also nearly "instantly" roll back at any point, unlike with a rolling update.

Again, this is only "sort of" a mixed fleet situation because you can maintain the invariant that service A at v1.23 only talks to service B at v1.23. This means you can atomically refactor the interface between two services and deploy them without thinking too hard about what happens when versions are mixed.
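The gradual-shift part is just a weighted router. A toy sketch (illustrative names, not from any real setup):

```python
import random

def route(green_weight, rng=random.random):
    """Send a `green_weight` fraction of requests to the new stack."""
    return "green" if rng() < green_weight else "blue"

# Ramp the new release up while watching metrics; "rollback" at any
# point is just setting the weight back to 0.
random.seed(0)
for w in (0.05, 0.25, 1.0):
    hits = sum(route(w) == "green" for _ in range(1000))
    print(f"weight {w}: roughly {hits}/1000 requests to green")
```

In real life the weights live in a load balancer or service mesh config rather than application code, but the rollback property is the same: shifting weight is instant in both directions.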

Distributed transactions I'm not so sure about, we would have to talk specific situations.

> And this is being done just because someone thinks it's a good idea to keep all code in a single repo?

It's more like, if you have a monorepo this is one option that becomes simpler for you to do, should you happen to want to do it.


Most (but OK, not all) ultra-rich people will be diversified and hold some of their wealth in securities, real estate and other assets. They'll have family offices and expected rates of return, they can arrange for enough of their wealth to be liquid to pay the taxes... they just don't want to.

Given that wealth tends to appreciate at around 4-5% a year (OK, that's Piketty - but pick your favourite number), a 1% wealth tax is a "moderate headwind" on wealth growth for the very rich.
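Back-of-envelope, with my own illustrative numbers (4.5% growth, 20 years):

```python
# Rough compounding sketch: a 1% annual wealth tax on a fortune that
# grows ~4.5%/year. Numbers are illustrative, not from any dataset.
growth, wealth_tax, years = 0.045, 0.01, 20

untaxed = (1 + growth) ** years
taxed = ((1 + growth) * (1 - wealth_tax)) ** years

print(f"after {years} years: x{untaxed:.2f} untaxed vs x{taxed:.2f} taxed")
```

The taxed fortune still roughly doubles over 20 years - hence "headwind" rather than confiscation.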

On the other hand, I agree that a founder in their first company, with most of their wealth tied up in it, is in the worst position, and also that the threshold in Norway seems very low (170k USD) and would hit many small business owners - who will also find it much more difficult to change their tax residency than the truly wealthy :/


Selling off securities, real estate and other assets is extracting wealth from companies.

One reason the general ROI of wealth is higher than 1% is that the sort of people who accumulate it are allowed to do so without it all being taken away in taxes, so they're incentivized to grow their investments. The assumption that average ROI remains constant even after changes to the tax system is the sort of bad economic modelling that the article criticizes, and apparently someone won a "Nobel prize" (AIUI economics isn't a real Nobel prize) just for pointing that out. A lot of rich people are rich on the back of investments, so clearly if they leave there are fewer people around creating that 4-5% to begin with.

You're right that wealth taxes interact very badly with startups. In fact in Zürich, Switzerland they had to adjust the wealth tax rules because it was basically killing any chance of a US-style startup scene. The moment you raised money from investors you would be considered really rich, not just paper rich, thus forcing the company to give huge payments to the founders so they could settle the wealth tax, and those payments would themselves be considered income, immediately pushing the founders into the highest possible tax bracket, etc. Even if they're actually living on ramen!

Unfortunately the fix for this took the form of the government granting special privileges to companies classed as "startups", which leads to a strange bureaucratic process in which the taxmen try to decide if your business model is "innovative" or not, using some internal definition, because "startups" are defined to have "innovative" business models. This is well beyond the scope of what a tax official should be deciding IMO, but it's the kind of thing that seems inherent to trying to implement a wealth tax.


I agree efficiency vs latency, but it's also "exploit vs explore" balance, work/life balance and more, leaving time for research, exploration, shooting the sh*t, decorating the office, buying christmas gifts at lunchtime, helping out co-workers etc.

I think "slack" is a much more general concept than efficiency vs latency. The slack itself allows low-latency response to emergencies but the activities that fill the slack time can be valuable in ways that often aren't legible to the org hierarchy.


I think this comes down to a lack of shared understanding of what a PR approval means; it sounds like it didn't mean the same thing to you as it did to your team. There's no one standard meaning of the "rubber stamp". It's a culture thing, and teams should discuss and eventually agree on what they think it means. Mature teams will have guides, checklists and documented expectations.

It's moderately likely that behind the scenes someone was complaining to your manager that you were blocking their work and that the "metrics" discussion was just a cover for that.

Usually when I review a PR I am just sanity checking the overall approach, figuring out or asking how they tested it, and making sure there's nothing crazy that will cause pain for everyone else later. Detail correctness I don't consider my problem because there's no economic way I can verify it. I can usually approve in minutes and I don't consider it a waste of time, I did the things the process needed of me.

Unless something irreversible is happening (and it should be reviewer's job to be aware of that), the fix for a bad PR is more PRs.

There are projects where almost the only thing that matters is approving quickly because this will let your co-workers get on with their job, this culture tends to evolve in orgs with lots of related repos where you need 5 MRs and pipelines (that flow on to each other) to deploy the tiniest unit change. It's completely dysfunctional but it happens a lot.

I imagine there are places where reviewers are expected to spend an hour "raising the bar" on every PR but I've never worked at one. I'm also not sure if I'd want to.

Note that there's not one thing or process that makes sense, it's very context dependent with lots of exceptions. For example, if someone from a "far away" team is contributing to a particular repo for the first time I will probably reach out to them to see what they're trying to do and review more carefully because they likely have limited context vs a core contributor.


Well then why does the PR template used at the company require manual validation steps on every PR if it's not expected that reviewers do them? Do you really think 5 seconds was enough time to "sanity check" the PR? Do you think 5 seconds was enough time to even click the code tab? Why are two "reviews" even required if the reviewers are actually expected to simply rubber stamp every single PR that comes through?

Edit: "the fix for a bad PR is more PRs" has got to be the worst take I've seen in years.


The review requirement often comes from compliance, many standards require a control to stop developers getting code into production by themselves without anyone else looking at it. Sometimes it's because an engineering manager read a book that said it was "best practice" or because "that's what we did at $lastjob". Sometimes it's because someone set up the SCM to require it and never looked at it again. Maybe it made sense to someone 5 years ago but it doesn't make sense now, for the teams and processes you have. It's worth asking and finding out!

I agree 5 seconds is not enough for even a cursory review - why even bother, unless it's a compliance thing? In a functional team, the thing to do is raise this in retro and discuss what everyone expects and wants from reviewers.


If your take is that blindly approving merges with 0 care is actually a good thing and that I "lack understanding" then I simply have to disagree lol. Good luck with that.


Sorry that I was not clear, my bad, I did not mean to imply that you personally "lack understanding" in any way.

What I should have written was that there seemed to be a lack of shared understanding among the team (i.e., agreement) on the value, meaning and expected process for PR reviews.

I don't think there is any particular level of care that makes sense in all cases for PR reviews, I believe it depends on industry, criticality, the particular repo and who is doing what. I think the most important thing is that everyone is on the same page about what's expected.


This horse has long since been flogged to death.

My list: mostly fits in my head, GC, not horribly slow, boring concurrency, low-effort cross-compilation, good distribution story.

It's contentious but I like the "low abstraction ceiling". Go punishes people who want to turn everything into a framework or abstraction and rewards people who just knuckle down and write the code that solves the actual problem instance.

Is it the "best" programming language on any single axis? Absolutely not. Are the ergonomics right for getting stuff done? Yep, at least for this commenter.


Go is terrible for writing frameworks, because time and time again it picks "simple". Look at generics: people inside Google were practically begging Pike to consider adding them, as they would make Go code a lot more flexible, but he held out for years.

It's "simple" if you're just gluing other people's code together, and don't get me wrong, having a single portable artifact + runtime is amazing, but as someone who's built a KV store engine in Go, you hit the rough edges very quickly.


Even if you do discover a better abstraction, I find Go's tooling makes inevitable refactoring less terrible than other languages. I can easily see where I "broke" code--gopls makes fixing things really easy.


Go is not the only language with an LSP implementation. What other ones are you comparing gopls to?


It's also, what kind of startup are you? What kind of workload do you have?

If you are bootstrapping a CRUD app business, then one beefy Hetzner box (or something slightly more reliable) with PostgreSQL is probably fine until you reach the scale where you sell the business. You care about burn rate above all.

If you are VC backed, go all in on GCP or AWS, because that's what you're expected to do and what the expensive people you hire are going to know.


I agree but would slightly modify it in that if you have taken VC money, growth probably matters above all else. Don't waste time on activities not related to the product being sold.


I really wonder whether a VC would rather invest into a startup with an architect focusing on KISS or one where the architect goes all in on cloud.

