If #2 is correct, holy shit did GitLab get lucky that someone snapshotted 6 hours before.
Dear you: it's not a backup until you've (1) backed up, (2) pushed to external media / S3, (3) redownloaded and verified the checksum, (4) restored back into a throwaway, (5) verified whatever is supposed to be there is, in fact, there, and (6) alerted if anything went wrong. Lots of people say this, and it's because the people saying this, me included, learned the hard way. You can shortcut the really painful learning process by scripting the above.
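Something like this, as a rough starting point (a minimal sketch, assuming Postgres and the aws CLI; the names mydb, mydb_restore_test, my-backup-bucket, and the users table are all made up, swap in whatever you actually run):

    #!/usr/bin/env python3
    # Rough sketch of the backup -> push -> redownload -> restore -> verify -> alert loop.
    # All names here (mydb, my-backup-bucket, users) are placeholders.
    import hashlib, subprocess, sys

    DB = "mydb"                       # source database (placeholder)
    THROWAWAY = "mydb_restore_test"   # scratch database to restore into (placeholder)
    BUCKET = "s3://my-backup-bucket"  # external media (placeholder)
    DUMP = "/tmp/mydb.dump"
    REDOWNLOAD = "/tmp/mydb.redownloaded.dump"

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def run(*cmd):
        subprocess.run(cmd, check=True)  # any failing step raises

    def alert(msg):
        # (6) wire this up to whatever actually wakes a human: email, pager, chat bot
        print("BACKUP VERIFICATION FAILED: " + msg, file=sys.stderr)
        sys.exit(1)

    try:
        run("pg_dump", "-Fc", "-f", DUMP, DB)                      # (1) back up
        run("aws", "s3", "cp", DUMP, BUCKET + "/mydb.dump")        # (2) push to external media / S3
        run("aws", "s3", "cp", BUCKET + "/mydb.dump", REDOWNLOAD)  # (3) redownload...
        if sha256(DUMP) != sha256(REDOWNLOAD):                     #     ...and verify the checksum
            alert("checksum mismatch after S3 round trip")
        run("dropdb", "--if-exists", THROWAWAY)                    # (4) restore into a throwaway
        run("createdb", THROWAWAY)
        run("pg_restore", "-d", THROWAWAY, REDOWNLOAD)
        out = subprocess.run(                                      # (5) is the data actually there?
            ["psql", "-At", "-d", THROWAWAY, "-c", "SELECT count(*) FROM users"],
            check=True, capture_output=True, text=True)
        if int(out.stdout.strip()) == 0:
            alert("restored users table is empty")
    except subprocess.CalledProcessError as e:
        alert("step failed: " + str(e))
    print("backup verified")

Run it from cron, and test the alert path hardest of all; a verification script that fails silently is just a slower way to learn the same lesson.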
Do you have to download the entire backup or is a test backup using the same flow acceptable? I'm thinking about my personal backups, and I don't know if I have the time or space to try the full thing.
For DB backups, until you've actually loaded it back into the DB, recovered the tables, and tested that a couple of rows are bit-identical to the source, it's a hope of a backup, not a backup. Things like weird character set encodings can cause issues here.
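One way to do the row check, assuming Postgres and the same made-up names as above: hash each sampled row rendered as text on both sides, so character-set mangling during dump/restore shows up as a mismatch. Not literally bit-level, but it catches the encoding problems mentioned here.

    #!/usr/bin/env python3
    # Spot-check that a sample of rows survived the dump/restore round trip.
    # mydb, mydb_restore_test, users and the id list are all placeholders.
    import subprocess

    def row_hash(db, table, pk):
        # Hash the whole row rendered as text; encoding or type mangling changes the hash.
        sql = "SELECT md5(t::text) FROM {} t WHERE id = {}".format(table, pk)
        out = subprocess.run(["psql", "-At", "-d", db, "-c", sql],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    for pk in (1, 42, 100000):  # arbitrary sample of primary keys
        src = row_hash("mydb", "users", pk)
        restored = row_hash("mydb_restore_test", "users", pk)
        if not src or src != restored:
            raise SystemExit("row {} differs between source and restore".format(pk))
    print("sampled rows match")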
If time and space for the full thing are an issue, being able to recover the most important bits first is still worth testing; after an incident, that's what lets you get going again quickly.
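If that's the situation, it's worth knowing ahead of time how to pull just the critical table out of the dump. A hypothetical example using pg_restore's -t flag, with the same placeholder names as above:

    #!/usr/bin/env python3
    # Sketch: bring back the most important table first, restore the rest later.
    # mydb_recovery, users and the dump path are placeholders.
    import subprocess

    DUMP = "/tmp/mydb.dump"
    TARGET = "mydb_recovery"

    subprocess.run(["createdb", TARGET], check=True)
    # -t restricts the restore to the named table, so the piece that gets the
    # service limping again comes back before the multi-hour full restore.
    subprocess.run(["pg_restore", "-d", TARGET, "-t", "users", DUMP], check=True)
    # ...the full restore of everything else can run after this.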