
I tend to use byobu as a wrapper around tmux wherever possible - this combination might have gotten the author close enough to where they wanted to be.

Well, they'd still need to bring their own ASCII teacup.

https://www.byobu.org/
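
Getting started is about as simple as it gets - a minimal sketch, assuming a Debian/Ubuntu-ish system (package names vary by distro):

  sudo apt install byobu        # or your distro's equivalent
  byobu-select-backend tmux     # wrap tmux rather than screen
  byobu                         # launch; F-keys map to the common actions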


I also am a fan, and find it much easier to successfully suggest byobu to others with its simple "F-key to action" mapping of common actions. I work with a bunch of non-developers, so the lower barrier to use is important :)

I will check this out. Thanks for the link!

Yeah, that post was hard to read.

I'll concede that I'm hugely empathetic towards people who suffer data loss. The pithy aphorism about there being two types of people -- those who haven't lost data, and those who do backups -- is doubly droll because only the second group really appreciates the phrase.

But it's surprising to find people with more than a decade in IT who don't appreciate the risks here.

The timeline reveals there were 13 days from when the first signs of trouble surfaced to when the account was deleted. So, nearly a fortnight of very unsubtle reminders to do something AND a fortnight in which to act.

(I recently learned the phrase BATNA[0], and in modern <sic> IT, where it's Turtles as a Service all the way down, it's amazing how often this concept is applicable.)

The author seems very keen to blame his part-time sysadmin rather than his systems architect. I can understand the appeal of that blame distribution algorithm, but it's nonetheless misguided.

The phrasing:

> But here’s the dilemma they’ve created: What if you have petabytes of data? How do you backup a backup?

puts the cart before the horse. If you have a petabyte of data that's important, that you can't recreate from other sources, your concern is how to keep your data safe.

If you're paying someone to keep a copy, pay (at least one other) person to keep another copy. Even that isn't something I'd call safe though.

[0] https://en.wikipedia.org/wiki/Best_alternative_to_a_negotiat...


The _new_ price has doubtless gone up, as inflationary valuation benefits the manufacturer / vendors.

The resale value of your item has gone down.


This story - Diamonds are Bullshit - comes up regularly, and I bookmarked it back in 2013 because it's so good.

https://news.ycombinator.com/item?id=5403988

There is nothing to miss about the impending death of the 'diamond industry'.

(Oh, the link is broken on the HN 2013 story -- try this one: https://priceonomics.com/diamonds-are-bullshit/ )


When I saw the headline I immediately thought: "couldn't have happened to nicer guys /s"

Why is it that everyone seems to have a soft spot for industries that have some kind of monopoly suddenly losing that monopoly?


I was both a) delighted and b) disappointed to discover, in my late 40s, the negroni, at a hotel in Brisbane, Australia - because a) it became my favourite cocktail, and b) I'd somehow never had one before.

It's not, at first glance, a complex drink - but it has somehow become surprisingly popular over the past few years.

Experimenting with variations on the original brand-name elements is doubtless a big part of the fun - Campari is a very sweet bitters, and Rosso Antico is a competent but not spectacular vermouth. The third component is gin, of course.

Here in Australia there's some delightful local options for each of those three ingredients.

(Switching out the bitters for Fernet-Branca turns it from a Negroni into a Hanky Panky -- which is worth knowing about, but perhaps not worth the experience of drinking.)

TFA mentions in passing a 'rusty nail', which is even simpler - barely worth the moniker of cocktail - a blend of whisky and Drambuie (itself a blend of whisky with honey, herbs, and spices).

I recall my first fondly, even if my liver does not -- a generous pour, foisted into my hands by the spitting image of John Cleese (along with his partner - an American version of Patsy from Ab Fab) at a bed and breakfast, somewhere near Loch Ness, and sometime towards the end of the last century.


I discovered the negroni sbagliato on holiday in Italy just a few weeks before it blew up on TikTok. Still my go-to summer drink.

In Australia, a nip of alcohol (spirits) is 30ml - so anything involving more than one ingredient is generally multiples of that.

> And?

And you set up your own monitoring systems for your own infrastructure, as you have always done.

Or better yet, set up auto-renewal, as per the vendor's recommendation.

Vendors - especially vendors you aren't paying - may provide some reminder services, but assuming those to be your sole method for 'managing' your renewals is a deeply poor operational position.

This is going to get really important as cert longevity gets reduced, eg https://news.ycombinator.com/item?id=43693900

If you're using Prometheus or Prom-friendly systems, there's https://github.com/ribbybibby/ssl_exporter
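
For an ad-hoc check without any dedicated tooling, openssl alone will do - a minimal sketch (example.com is a placeholder for your host):

  # exit non-zero, and complain, if the served cert expires
  # within 30 days (2592000 seconds)
  echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
    | openssl x509 -noout -checkend 2592000 \
    || echo "certificate expires within 30 days"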


> Let's Encrypt stopped its certificate expiration email notification service a while ago, and I hadn't found a replacement yet.

This sounds like an easy problem to identify the root cause of.

I think I received about 15 'we're disabling email notifications soon' emails over the past several months - one of which was interesting, but none were needed, as I'd originally set this up, per documentation, to auto-renew every 30 days.

Perhaps create a calendar reminder for the short term?
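
If it's certbot doing the issuance, the documented approach is a cron entry (or the systemd timer most distro packages ship) that runs regularly and only acts when a cert is nearing expiry - a sketch:

  # /etc/cron.d entry: attempt renewal twice daily; certbot is a
  # no-op unless a certificate is within 30 days of expiry
  0 0,12 * * * root certbot renew --quiet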


It's not clear whether application state is being stored in other places, outside your database, that you'd need to capture. Do you mean things like caches? (I'd hope not.)

pg_dump / mysqldump both solve the problem of snapshotting your live database safely, but can introduce some bloat / overhead you may have to deal with somehow. All pretty well documented and understood though.
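
Both are one-liners for a consistent point-in-time dump of a live database - a sketch, with 'app_db' as a placeholder name:

  # pg_dump takes a consistent snapshot of the live database by default
  pg_dump -Fc -d app_db -f app_db.dump

  # for MySQL/InnoDB, --single-transaction gives the same consistency
  mysqldump --single-transaction app_db > app_db.sql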

For larger PostgreSQL databases I've sometimes adopted the other common pattern of a read-only replica dedicated to backups: you pause replication, run the dump against that backup instance (where you're less concerned about how long it takes, and what cruft it leaves behind that'll need subsequent vacuuming), and then bring replication back.
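
That pattern looks roughly like this - a sketch, assuming a streaming replica on PostgreSQL 10+, with names and paths as placeholders:

  # on the replica: pause WAL replay, dump, then resume
  psql -c "SELECT pg_wal_replay_pause();"
  pg_dump -Fc -d app_db -f /backups/app_db_$(date +%F).dump
  psql -c "SELECT pg_wal_replay_resume();"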


Feels weird to talk about strategy for your backups without mentioning RPO, RTO, or even RCO - even though TFA nudges up against some of those concepts.

Those terms are handy starting points for anyone not familiar with the space to go and do some further googling.

Also odd to not note the distinction between backups and archives - at least in terms of what users' expectations are around the two terms / features - or to even mention archiving.

(How fast can I get back to the most recent fully-functional state, vs how can I recover a file I was working on last Tuesday but deleted last Wednesday.)


  > without mentioning RPO, RTO, or even RCO

  > Those terms are handy for anyone not familiar with the space to go do some further googling.
You should probably get people started:

  RPO: Recovery Point Objective
  RTO: Recovery Time Objective
  RCO: Recovery Consistency Objective
I'm pretty sure they aren't mentioned because these aren't really necessary for doing self-hosted backups. Do we really care much about how fast we recover files? Probably not. At least not more than that they exist and we can restore them. For a business, yeah, recovery time is critical as that's dollars lost.

FWIW, I didn't know these terms until you mentioned them, so I'm not an expert. Please correct me if I'm misunderstanding or being foolishly naive (very likely considering the previous statement). But as I'm only in charge of personal backups, should I really care about this stuff? My priorities are that I have backups and that I can restore. A long running rsync is really not a big issue. At least not for me.
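
(For concreteness, that long-running rsync is roughly the following - a sketch, with paths as placeholders.)

  # archive mode preserves permissions/times; progress2 shows overall progress
  rsync -a --info=progress2 /home/me/ /mnt/backup/home-me/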

https://francois-encrenaz.net/what-is-cloud-backup-rto-rpo-r...


Fair that I should have spelled them out, though my point was that TFA touched on some of the considerations that are covered by those fundamental and well-known concepts / terms.

Knowing the jargon for a space makes it easier to find more topical information. Searching on those abbreviations would be sufficient, anyway.

TFA talks about the right questions to consider when planning backups (but not archives) - eg 'What downtime can I tolerate in case of data loss?' (that's your RTO, effectively).

I'd argue the concepts encapsulated in those TLAs - even if they sound a bit enterprisey - are important for planning your backups, with 'self-hosted' not being an exception per se, just having different numbers.

Sure, as you say 'Do we really care about how fast we recover files?' - perhaps you don't need things back in an hour, but you do have an opinion about how long that should take, don't you?

You also ask 'should I really care about this stuff?'

I can't answer that for you, other than to turn it back to 'What losses are you happy to tolerate, and what costs / effort are you willing to incur to mitigate them?'. (That'll give you a rough intersection of two lines on your graph.)

This pithy aphorism exists for a good reason : )

  > There are two types of people: those who have lost data,
  > and those who do backups.

