Happened to me when buying an NVMe Samsung 970 EVO (if I'm not mistaken). I was getting a whole new PC and only opened the SSD box after all the components arrived, two weeks after receiving it.

I opened a ticket with Amazon explaining what happened, got a call from their support to confirm that the tampered box was the product's and not the carrier's, and was reimbursed by the end of the same day.


During the week I don't shut down my Mac, I only put it to sleep. So in the morning, Slack (and everything else I was working on) is already open, and it's easy to jump back into whatever I was doing the day before.

Each day varies... Usually, early morning is the least crowded time to deploy, which is when I mostly do mine (we do trunk-based development). It's also the best time for code reviews before the daily meetings begin. Then there are meetings, incidents to follow up on, interviews to conduct, features to implement, and so on.


Datadog


It depends on how much you can invest (time and money) vs. the risk to your business.

I have 2 projects running a setup similar to yours: everything on a single machine. Both have been running since 2014 without any major issues.

Once, I got locked out of the provider (my fault, but it doesn't matter). They kept the server running, even though I couldn't SSH into the machines. I was ready to redeploy everything on another provider in less than an hour, with a DB backup from the day before the incident.

If my DB crashes, gets corrupted, etc., I have backups for the last day, week, and month, all saved with a different provider.

So, in short, I don't think a managed DB is really worth it for me. I also don't have HA, but I can cope with that. I'd rather spend the extra money on beefier machines.


Thanks for your advice and for the links!

What would be the advantage of setting up something (either a company or a sole trader)?

I want to spend as little effort as possible on those "administration tasks", but I'm also afraid that skipping them could bite me in the future. Do you have a sense of when the right time is to start caring about them? After 1 customer? 10? Or is it more a function of MRR?

Right now the system should be stable (with tested backups), and the only bad thing that comes to mind is not being paid. I'm totally fine if that happens and will worry about it only if/when it does.

Am I being naive?


Hey, thanks for your suggestion!

It did get me thinking, though: what's the worst thing that can happen if I have no contract at all?


If you're in Italy and your client is in Brazil, and this is a $200/month service, neither you nor they are going to pursue international litigation on a contract dispute. It's too expensive and unpredictable.

To some degree, that means your contract doesn't matter, and if it doesn't matter, you don't really need it. But a contract sets expectations, and generally, reasonable people will follow reasonable contract provisions when they're written down.

Your lawyer and theirs will have a lot of stuff that's important to say in a contract for legal reasons, but you also want to define the scope, the service level, the payment terms, and the cancellation terms. Even if both sides know that nobody will be held to the terms, you can feel OK about turning off their service (which may include deleting their data) if they don't pay after N days, if you said that would happen in the contract and there are no extenuating circumstances.


I used to freelance without a contract. Bad stuff eventually happens, much of it down to miscommunication. And when it does, it's usually your own fault for not having written a contract.

You don't need a huge contract in legalese. Just 2 sentences will do. Something dumb like "I will give you this, and in return you will give me $300. Once you stop giving me $300, I will stop giving you this."

But yeah, lawyers give better advice, and they're often cheaper than programmers.


They can drag you into a legal battle of loopholes if they want to be assholes. Hopefully they won't, but hey, it's 2021.


My experience says to backup to a different vendor.


This is part of my threat model. Is there an easy way to do this with e.g. hosted RDS or block storage device snapshots?


In my case I'm dumping and zipping the entire database at the application level. It's as simple as adding a library [1], scheduling the job, and transferring the dump to AWS S3 (my main application is on DigitalOcean).

[1] https://github.com/spatie/laravel-backup
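
For the curious, the scheduling side is tiny. Here's a minimal sketch, assuming spatie/laravel-backup's stock backup:run and backup:clean commands and an S3 disk listed as a destination in config/backup.php (the times are arbitrary):

    <?php
    // app/Console/Kernel.php -- minimal sketch, assuming spatie/laravel-backup
    // is installed and config/backup.php lists 's3' among its destination disks.

    namespace App\Console;

    use Illuminate\Console\Scheduling\Schedule;
    use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

    class Kernel extends ConsoleKernel
    {
        protected function schedule(Schedule $schedule): void
        {
            // Dump + zip the DB (and app files, per config) and ship them
            // to the configured disks.
            $schedule->command('backup:run')->dailyAt('03:00');

            // Prune old backups per the retention rules in config/backup.php.
            $schedule->command('backup:clean')->dailyAt('04:00');
        }
    }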


> Don't: manage your own DBs for production. [...]

Why, though?

If you're really on a low budget, the extra $$ for a managed DB doesn't really pay for itself. I'd rather spin up a container running whatever DB I need and run daily backups to an external vendor.

In my case, the website is free and lives on ad revenue, so every penny counts.


Good question, and I have an answer for you: it depends on your risk tolerance and your resiliency needs.

If you compare it to a single machine with multiple tenants and no replication, it doesn't make sense. But that comes with risks, and given that the OP's question was in the context of a k8s cluster (which for HA means a minimum of 5 servers), I figured they are prioritizing time savings over money.

Losing customer data, downtime, and data breaches can kill some companies before they even take off.

If your particular business is not sensitive to that, then you can run your own database. Or better yet, if you're a content business you may not even need a database at all.

But even so, if you run your own DB you need to worry about backups and the like, so it may still be worth the money. And if you're going for high availability (HA), running a properly replicated cluster can be time-consuming.

For instance, a t3.medium RDS instance on AWS costs $0.068/hour, versus $0.0416/hour for a t3.medium EC2 instance: an extra $0.0264/hour, or about $19 per month. For my use case, paying an extra $19 a month is well worth the time saved and the peace of mind of knowing my backups are running and that I can recover quickly if the server crashes. But consider your own use case; yours might be different.


You bring up good points, fair enough.

I might be being naive here, but still:

   - Losing customer data: backups must run frequently enough, so I'm not risking losing much data.
   - Downtime: I'd expect that RDS could still go south as well, maybe less frequently, but still. In my case I start another VM, run the container again, and apply the backup.
   - Data breaches: of course one can misconfigure something here, but I'm not sure how that would be different with a managed database.

Regarding cost, I know there are companies spending hundreds of dollars on their infra, and for most use cases that just baffles me. I still feel one can get really far on a budget of $50/month.


True. You have downtime risk with RDS and other managed databases as well. I have taken down RDS instances with a bad query or bad indexes once or twice.

$50 total would be tough with fully managed AND high availability, I agree. But to digress from the tech side a bit and speak from my own experience, the question I ask myself is: as a solopreneur, can I use that time to do something that will make me more than $X per month? If the answer is "yes, immediately!", I go with the managed service. If the answer is "it would take me years to break even"... I may roll my own.

To address your points, though: I think managed databases do help here. Data breaches and outages are usually configuration issues, and since managed services start from a known-good configuration, they do help. Also, I see not being able to SSH in as a positive in this scenario.

For example: say you misconfigure a database and break backups (it happens all the time, even to good DBAs). That's much harder to do on a managed database.

If you're managing just one machine and being down for 10-30 minutes while your backup is restored is fine, you probably don't gain much by using a managed service. In that case I'd take frequent backups, upload them immediately off the server (probably to S3), and containerize the database. In my architectures, though, I always run at least 3 servers.

With most database engines, if you have three servers you need to lose two to cause an outage. But a 3-server cluster is much harder to manage manually, which tilts the scales further towards a managed database.

Also, to your point, that would blow a $50 budget out of the water.

If I can, I use a managed serverless database like DynamoDB (there are others) that is pay-as-you-go. You can easily run a service on just the DynamoDB free tier if your use case is well suited to that kind of key/value store.

You can also always start with your own and switch to managed later, as you add machines to the cluster.

Whichever you choose, though, and "especially" when rolling your own, make sure you run fire drills, i.e. test your backup and restore process regularly.
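
If you're on the single-machine, dump-to-S3 setup described elsewhere in the thread, the drill itself can be a scheduled job. Here's a hypothetical sketch in Laravel (the 'drill' connection, the backups/ prefix, and the users table are all invented for illustration): fetch the newest dump, load it into a throwaway database, and check that real rows come back.

    <?php
    // app/Console/Commands/RestoreDrill.php -- hypothetical fire-drill sketch.
    // Assumes gzipped SQL dumps under backups/ on the 's3' disk and a throwaway
    // MySQL connection named 'drill' defined in config/database.php.

    namespace App\Console\Commands;

    use Illuminate\Console\Command;
    use Illuminate\Support\Facades\DB;
    use Illuminate\Support\Facades\Storage;
    use Symfony\Component\Process\Process;

    class RestoreDrill extends Command
    {
        protected $signature = 'backup:drill';
        protected $description = 'Restore the latest dump into a scratch DB and sanity-check it';

        public function handle(): int
        {
            // Newest dump first (works when filenames embed a sortable timestamp).
            $files = Storage::disk('s3')->files('backups');
            rsort($files);
            if (empty($files)) {
                $this->error('No backups found: the drill already failed.');
                return self::FAILURE;
            }

            // Download locally (fine for small dumps; stream for big ones).
            Storage::disk('local')->put('drill.sql.gz', Storage::disk('s3')->get($files[0]));
            $path = Storage::disk('local')->path('drill.sql.gz');

            // Load into the scratch database, never the production connection.
            Process::fromShellCommandline(
                "gunzip -c {$path} | mysql --defaults-extra-file=/etc/mysql/drill.cnf drill_db"
            )->mustRun();

            // Sanity check: the restored data should actually contain rows.
            $count = DB::connection('drill')->table('users')->count();
            $this->info("Restored {$files[0]}: {$count} users in the scratch DB.");

            return self::SUCCESS;
        }
    }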


If you have any migrations, you probably want to roll them back as well.


That's sort of a pet peeve of mine: migrations are done separately from code deploys. Version 1 of your code runs on schema version 1. Schema version 2 does not make changes that break code version 1. Code version 2 can use the changes made in schema version 2, but you're still able to roll back the code.

Each schema migration should also come with its own rollback script.

The downside is that you might need three migrations for some operation, but at least you won't break stuff.
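
To make that concrete, here's a hypothetical "expand" step written as a Laravel migration (table and column names invented): the new column is nullable so code version 1 keeps working untouched, and down() is the rollback script for this step.

    <?php
    // database/migrations/2021_xx_xx_add_display_name_to_users.php -- hypothetical
    // "expand" migration: schema v2 adds a nullable column, so code v1 keeps working.

    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    return new class extends Migration
    {
        public function up(): void
        {
            Schema::table('users', function (Blueprint $table) {
                // Nullable, so inserts from code v1 (which doesn't know about
                // this column yet) still succeed.
                $table->string('display_name')->nullable();
            });
        }

        // The rollback script for this migration.
        public function down(): void
        {
            Schema::table('users', function (Blueprint $table) {
                $table->dropColumn('display_name');
            });
        }
    };

Backfilling the new column, and any later "contract" step that drops whatever it replaces, would then each be their own migration with their own down() methods.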

The assumption that you can do a schema migration while deploying new code is only valid for very limited database sizes. I've seen Flyway migrations break so many times because developers assumed it was fine to just run complicated migrations on a 200GB database. Or a Django migration lock up everything for hours because no one thought about the difference between migrating 100MB and 100GB. And I've never seen anyone seriously consider rolling back a Flyway migration.


Agree with this, and I have practiced and advocated for it. Make the schema changes to support the new feature first, then verify the existing software still works. Deploy the schema change. Then develop the new feature, test, and deploy the program. That way you can deploy and roll back without needing to synchronously run a bunch of migration routines.


I currently have 2 projects with some traffic (5k req/day) running in production.

Both have a similar setup:

   - 1 droplet from DigitalOcean with Docker pre-installed
   - the repo is cloned directly from GitHub
   - alongside the code, I have a folder with a bunch of Docker images (Caddy, MySQL, PHP, Redis, etc.) that I can easily spin up
   - for each release, I manually SSH in, git pull, run the migrations (if any), and rebuild any Docker image if needed
   - I have daily jobs that dump and zip the entire DB to S3
   - if I know a deployment will break some functionality while it runs, I warn the users beforehand and accept the downtime
   - I've never had to handle a "hard" rollback so far
I've been planning to change this setup for a while, but so far I haven't found any reason that justifies the effort.

I spend $20/month with them ($10 per droplet), plus a few cents on S3.

