Have you never connected to your live database using `mysql` (pg, whatever) with a privileged user? Using the framework's shell is almost the same, except that all the client-side validations still apply.
Yeah. Changing the schema is not something I'd do with code running against the database.
Also, my database typically refuses to start a transaction if the entire database is locked, and my application handles failing transactions by waiting a while and retrying.
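The wait-and-retry behavior described here can be sketched roughly like this (Python; `txn`, the backoff parameters, and the catch-all exception handling are illustrative assumptions, not any particular framework's API):

```python
import random
import time

def with_retry(txn, attempts=5, base_delay=0.5):
    """Run `txn` (any callable that raises when the database refuses or
    aborts the transaction), retrying with exponential backoff plus a
    little jitter. Illustrative sketch, not a real framework's API."""
    for attempt in range(attempts):
        try:
            return txn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            # wait a while, then retry, as described above
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

In a real application you would catch only the driver's lock/serialization errors rather than bare `Exception`, so genuine bugs still surface immediately.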
Assuming you have tested the deploy script that alters the schema, to check that the transaction locking the tables doesn't hold them for too long, there is no reason to take down the application. In my experience this is the case for virtually all of our table changes.
If you mean why is it a problem at all, it's usually because of database size. On any large-scale deployment (i.e., you have at least a million users), schema modifications will take hours. The only way to do schema modifications reliably is to have extra capacity and do them in stages. Also, your forward changes have to be backwards compatible (i.e., you're not allowed to both add and remove a column at the same time).
The way to do it is to take some of your slaves out of the request pool and run the alter tables on them. You do this many times depending on your available capacity. (You probably can't just rip out half your slaves, you probably need to do at least 3 batches.) After you've altered all your slaves you can promote one to master and take the master offline to do its own alter. Then you push the code changes to production and add the old master back into the pool as a slave once it's done its alter.
In this scenario you need 3x the time the alter takes. So if the alter takes 6-7 hours (common in mysql if you have a large-ish table) it's going to take you at least 18 hours before you can push your code that depends on a database change.
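The batched procedure above can be sketched as a step plan (Python; the step names, the choice of new master, and the three-batch default are assumptions for illustration, not a real orchestration API):

```python
def plan_rolling_alter(master, replicas, batches=3):
    """Build the step sequence described above: alter replicas in
    batches so the rest keep serving traffic, promote an already-altered
    replica to master, alter the old master offline, then rejoin it as
    a replica. Names and granularity are illustrative assumptions."""
    size = -(-len(replicas) // batches)  # ceiling division
    steps = [("alter_batch", replicas[i:i + size])
             for i in range(0, len(replicas), size)]
    new_master = replicas[0]             # any already-altered replica
    steps.append(("promote", new_master))
    steps.append(("alter_old_master", master))
    steps.append(("rejoin_as_replica", master))
    return steps
```

An automated deploy tool would execute each step and verify replication health before moving on, which is exactly why doing this by hand is so error-prone.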
Doing this manually at scale, instead of through an automated deployment process, is extremely risky and will get screwed up sooner or later.
This is one of the main reasons people are hoping schemaless databases work out in practice.
Agreed on backwards compatibility, but for application semantics, not for the types of schema changes allowed.
There's no reason to not be allowed to both add and remove a column at the same time, or to merge and split whole tables. In your example, these kinds of changes would not be possible.
There's also no reason to not be able to run old and new code at the same time, or to revert a schema change.
With ChronicDB we reduced schema changes to:
$ chd change -f upgrade_map mydb
Schemaless databases don't solve this, just as an instantaneous ALTER TABLE won't solve this.
A lot of us are guilty of a quick data-munging hack at the command line from time to time, but the idea of altering the schema from a REPL makes me want to cry.
It depends: if you're working with a smaller database, a smaller team, and a smaller client base, there's not as much harm. There could well be more harm in adding ceremony to simple tasks.
I disagree with this. Making a script and testing it against a copy of the database is a good idea in 100% of cases. The only reason you don't do that is because you're lazy and stupid. (I've done this because I'm lazy and stupid.)
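As a minimal sketch of that rehearsal workflow, here is the idea using SQLite (from Python's standard library) as a stand-in for a real server; with MySQL or Postgres you would restore a dump onto a staging host instead:

```python
import sqlite3

def rehearse_migration(prod_path, migration_sql):
    """Clone the database and run the migration against the copy first.
    Raises if the migration script is broken, without ever touching the
    original. SQLite stands in for a real server here; in practice you
    would restore a dump to a staging server and run the script there."""
    prod = sqlite3.connect(prod_path)
    copy = sqlite3.connect(":memory:")
    prod.backup(copy)                      # throwaway copy of the live data
    try:
        copy.executescript(migration_sql)  # rehearse the schema change
    finally:
        copy.close()
        prod.close()
```

Only once the rehearsal passes (and you've eyeballed the result) do you run the same script against production.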
When you defeat a safety interlock system, expect to be injured.
No one said it'll work 100% of the time. But it most likely would have saved the OP a lot of pain. Just because something isn't a perfect solution, it doesn't mean it's not worth doing.