Generally the hardest thing with version control on a database (for an evolving codebase) is separating unrelated changes - such as schema changes vs content updates - and branching and merging those changes in sync with the code dependencies. Another issue is non-destructively replaying development changes into test/production environments.
So for example, you might have a feature branch that includes some schema changes and some value modifications, and a content branch that includes a bunch of inserts into a few content tables that happen to include foreign key references to each other (so you need to maintain referential integrity when replaying those updates/inserts).
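To make that concrete, replaying FK-linked inserts typically means something like the following (a rough Postgres/psycopg2 sketch with made-up table and column names):

```python
import psycopg2

# Hypothetical content tables: articles(id, ...) and tags(id, article_id REFERENCES articles)
replay_statements = [
    "INSERT INTO tags (id, article_id, name) VALUES (10, 7, 'postgres')",
    "INSERT INTO articles (id, title) VALUES (7, 'Deferred constraints')",
]

conn = psycopg2.connect("dbname=app_dev")
try:
    with conn, conn.cursor() as cur:
        # Only works if the FK was declared DEFERRABLE; checks then run at COMMIT,
        # so the replay order of parent/child rows no longer matters.
        cur.execute("SET CONSTRAINTS ALL DEFERRED")
        for stmt in replay_statements:
            cur.execute(stmt)
finally:
    conn.close()
```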
I don't see anything in the description that indicates this tool addresses those problems. For me, those are really the only problems a DB version control system ought to focus on. Snapshot speed isn't all that important in a development environment, since you typically work on a cut-down dataset anyway. A minute or so to take a snapshot a few times a day isn't a huge deal, and taking more frequent snapshots doesn't add any value if it doesn't address any of the other problems.
"Another issue is non-destructively replaying development changes into test/production environments."
"...you need to maintain referential integrity when replaying those updates/inserts"
I think most of your misgivings about this tool are grounded in the expectation that it would provide merge features like Git. Since it claims to be "git for databases", I suppose that's a fair expectation.
However, the tool does provide the core functionality of Git, which is to implement a tracking system for hash-tagged objects (or, in this case, tables). This does have value. Not only is it faster than standard db dumps, it's also more space-efficient. For some people, this could be really valuable.
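To illustrate the kind of tracking I mean (a toy sketch, not the tool's actual code): if you can hash a table's contents, you can tell cheaply whether it changed since the last snapshot, which is what makes this faster and smaller than re-dumping everything.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Hash every row of a table so identical contents give an identical digest."""
    h = hashlib.sha1()
    # NOTE: table comes from a trusted list here; it cannot be bound as a query parameter.
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY rowid"):
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

before = table_fingerprint(conn, "users")
conn.execute("UPDATE users SET name = 'bob'")
after = table_fingerprint(conn, "users")
print(before != after)  # True: a snapshot tool would know this table needs re-copying
```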
I think it's an interesting tool and could serve as a launching point for more powerful/useful functionality.
Sqitch sounds like just another migration tool, not a DB change tracking tool.
I have been looking for a tool that will allow me to track, diff and revert changes to the content of specific "business logic" tables so that we can accurately track and test those changes.
It doesn't look like anything like that exists, so eventually I'll have to roll my own.
I back up this recommendation whole-heartedly. Sqitch is a fantastic database tool. I stopped using Rails-type migrations because of it (I write PL/pgSQL functions and wanted a migration tool to handle that well, amongst other reasons.)
I would disagree that referential integrity and schemas are the only issues a DB version control should focus on.
For example, I would be very interested in having git-like branching on top of something like this - http://sandbox.substance.io/docs/lorem_ipsum.json. Basically a version-controlled, schemaless object tree.
I recently implemented something like this, backed by mongodb and exposing an HTTP API that mimicked git. I had to relax these two requirements, but it is still worth it.
However I would be very much interested in using libgit2 with a database backend instead of filesystem.
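The commit model I have in mind is roughly this (a toy sketch, no mongodb or libgit2 involved): content-address each JSON snapshot and chain it to its parent.

```python
import hashlib
import json

def commit(snapshot, parent=None):
    """Content-address a JSON document and link it to its parent, git style."""
    body = {"tree": snapshot, "parent": parent}
    digest = hashlib.sha1(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest, body

store = {}  # in a real system this would be mongodb, or libgit2's object database

h1, c1 = commit({"title": "Lorem ipsum", "sections": []})
store[h1] = c1
h2, c2 = commit({"title": "Lorem ipsum", "sections": ["intro"]}, parent=h1)
store[h2] = c2

# Walking parent pointers gives you history; branching is just two commits sharing a parent.
assert store[h2]["parent"] == h1
```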
I'd love a tool that solves that problem. Schema / data / user / permission migrations, support for both production and development environments and integration with CM tools like ansible would be fantastic.
So it's yet another toy project with grandiose, overblown claims ("git for databases"? seriously?!)
An honest update to the README would be to take the whole thing down several notches, if nothing else then to avoid the kind of confusion the post you're replying to expresses.
If anyone's interested in git-like storage systems then it's worth checking out Irmin [1]. Previous discussion is at [2].
Excerpt: "Irmin is a library to persist and synchronize distributed data structures both on-disk and in-memory. It enables a style of programming very similar to the Git workflow, where distributed nodes fork, fetch, merge and push data between each other. The general idea is that you want every active node to get a local (partial) copy of a global database and always be very explicit about how and when data is shared and migrated
Irmin is not, strictly speaking, a full database engine. It is, as are all other components of Mirage OS, a collection of libraries designed to solve different flavours of the challenges raised by the CAP theorem. Each application can select the right combination of libraries to solve its particular distributed problem."
I've spent a day looking at Irmin, trying to figure out the JSON CRUD interface, but there's just no documentation, and the irc channel is pretty much dead. The code looks good, but it's my first introduction to OCaml, coming from Clojure. Would love to see some more documentation on it.
Irmin's still being developed, hence not many docs at the moment. Apologies there was no-one responding in irc (I assume you mean #mirage - there are usually 30+ folks there). The mailing list is quite active and is a good place to pose questions.
So, it appears to just copy tables around within the database. I wouldn't want to use this on a DB over a few MB in size. Sure, restores are "fast" (a table rename), but copies are not so much.
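If I'm reading it right, the mechanism is roughly the following (my own sketch of table-copy snapshotting in Postgres, not the project's actual code):

```python
import psycopg2

conn = psycopg2.connect("dbname=app_dev")
with conn, conn.cursor() as cur:
    # "Snapshot": a full physical copy of the table inside the same database
    # (note: indexes and constraints are not carried over by this form).
    cur.execute("CREATE TABLE users_snapshot AS TABLE users")

# ... work, migrations, mistakes ...

with conn, conn.cursor() as cur:
    # "Restore": throw away the working table and rename the copy back.
    # The rename is nearly instant; it's the copy above that costs time and disk.
    cur.execute("DROP TABLE users")
    cur.execute("ALTER TABLE users_snapshot RENAME TO users")
conn.close()
```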
I can't imagine this would be kind to a production database (lots of cleanup from copied and deleted tables), and it would consume a lot more space than a gzipped logical backup of the tables in question.
I have regularly used this with a database that's nearing 1000 megabytes. I don't particularly mind slow snapshotting because my workflow is more about restoring the database back to a baseline than taking copies.
Please don't use this in production. It isn't stable enough and you'll only end up losing data.
We use our own LVM solution at work and it works great; we run something like pgsnap to swap to the snapshot, or pgsnap master to work on the master. I'll check this out.
Why does the author compare it to Git? The functions this software performs are nowhere near those performed by Git. Nor is it a proper version control system.
Suppose I'm working on a new feature branch and I run a bunch of migrations and add new data. Then I need to go make a bug fix on the master branch. Using Stellar, I could just take a snapshot before starting my feature branch, restore the DB state, do my bug fix, then go back to the DB as it was in my feature branch. I haven't tried Stellar yet, but this seems like a scenario where it would be useful. (And definitely not supported through Postgres transactions.)
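Roughly, and assuming the snapshot/restore subcommands work the way the README suggests (I haven't tried it), the dance would be:

```python
import subprocess

def stellar(*args):
    # Thin wrapper around the CLI; the subcommand names are taken from the README.
    subprocess.run(["stellar", *args], check=True)

stellar("snapshot", "master-baseline")   # before starting the feature branch
# ... run feature migrations, insert test data ...

stellar("snapshot", "feature-x-wip")     # save the feature-branch state
stellar("restore", "master-baseline")    # database back to what master expects
# ... fix the bug on master ...

stellar("restore", "feature-x-wip")      # pick the feature branch back up where you left off
```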
True. Although then you would have to worry about changing your DB configuration whenever you change branches. You'd also have to make a copy of your database manually any time you want to start modifying it. Stellar seems like an easier solution.
A transaction is only reverted if the migration fails. Stellar helps you when your migration succeeds but does the wrong thing (dropping the wrong column, or an UPDATE missing its WHERE clause).
The use case which this really excites me about is automated testing from the GUI level. If the performance is good enough, this would be really useful for restoring DB state in between tests.
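Something like an autouse fixture that rolls the database back before every test, e.g. (hypothetical snapshot name, and shelling out to the CLI is just one way to do it):

```python
import subprocess
import pytest

@pytest.fixture(autouse=True)
def clean_database():
    """Put the database back to the 'baseline' snapshot before every GUI test."""
    subprocess.run(["stellar", "restore", "baseline"], check=True)
    yield
```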
While a single restore is fast enough, you will have problems doing multiple restores in a row, as the background process needs to finish before another restore is possible (which may take several seconds when you are dealing with big databases).
What if you're writing a migration that includes DML, and it fails partway through? Wouldn't it be good to be able to restore the DB state automatically and try again?
What's wrong with that? Assuming you escape the table names correctly, that seems reasonable. And if you can't escape the table names, you're going to have a hard time dynamically generating queries anyway. Parameterized queries are a baseline requirement for values, but are rarely supported by client libraries for things like table or schema names.
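For what it's worth, some client libraries do give you a safe way to splice identifiers in without hand-rolled escaping; psycopg2's sql module, for example:

```python
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("dbname=app_dev")
table = "users_snapshot"   # comes from config or code, not from a user

# Identifiers go through sql.Identifier; ordinary values would still use %s parameters.
query = sql.SQL("CREATE TABLE {dst} AS TABLE {src}").format(
    dst=sql.Identifier(table),
    src=sql.Identifier("users"),
)
with conn, conn.cursor() as cur:
    cur.execute(query)
conn.close()
```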
Nothing wrong with the syntax, it's just that if you're going to create an entire mirror of your database, you might as well use mysqldump. (Plus it'll actually be portable).
It takes snapshots and computes diffs between snapshots or the live database. It lets me drop and re-import some of my app's tables, then compute the minimum set of changes between the previous import and the new import. I wouldn't call it "git for ActiveRecord models" but it appears to be similar to this project.
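The diffing step itself is simple enough to sketch (a toy version keyed on the primary key, not my tool's actual code):

```python
def table_diff(old_rows, new_rows, key="id"):
    """Minimal change set between two imports of the same table."""
    old = {r[key]: r for r in old_rows}
    new = {r[key]: r for r in new_rows}
    inserts = [r for k, r in new.items() if k not in old]
    deletes = [r for k, r in old.items() if k not in new]
    updates = [r for k, r in new.items() if k in old and r != old[k]]
    return inserts, updates, deletes

before = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
after  = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bobby"}, {"id": 3, "name": "carol"}]
print(table_diff(before, after))
# (inserts=[{'id': 3, 'name': 'carol'}], updates=[{'id': 2, 'name': 'bobby'}], deletes=[])
```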
Comments welcome! The docs, as always, could use some help.
The implications for this extend beyond backing up your database.
Imagine a world where daily time-series data can be stored efficiently:
This is a lesser-known use case, but it works like this: I'm a financial company and I want to store 1000 metrics about a potential customer. Maybe the number of transactions in the past year, the number of defaults, the number of credit cards, etc.
Normally I would have to duplicate this row in the database every day/week/month/year for every potential customer. With some kind of git-like storing of diffs between the row today and the row yesterday, I could easily have access to time series information without duplicating unchanged information. This would accomplish MASSIVE storage savings.
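A back-of-the-envelope version of what I mean: store the full row once, then only the fields that changed each day, and fold the diffs back together for a point-in-time view.

```python
def diff(prev, curr):
    """Only the metrics that changed since the previous observation."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def reconstruct(base, diffs):
    """Fold daily diffs back into a full row for a point-in-time view."""
    row = dict(base)
    for d in diffs:
        row.update(d)
    return row

day0 = {"transactions_last_year": 412, "defaults": 0, "credit_cards": 3}
day1 = {"transactions_last_year": 415, "defaults": 0, "credit_cards": 3}

stored = diff(day0, day1)            # {'transactions_last_year': 415} -- 1 field, not 1000
assert reconstruct(day0, [stored]) == day1
```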
FWIW, efficiently storing time series data is a big problem at my company. No off-the-shelf solution makes this easy for us right now, and we would rather throw cheap hard disks at the problem than expensive engineers.
There are a lot of existing compression algorithms for time series data that do just this. I'm not sure how well any of these are implemented however. I think the problem is not necessarily how the data is stored, since that's fairly easy to fix with a bit of engineering effort if you're willing to write your own system. The harder part is rewriting query engines to take advantage of this sort of compression. Although ideally this could just be abstracted away by the storage layer.
What about Cassandra? I believe it efficiently stores multiple time values for each (row, column) value as it changes. Google's BigTable design does this, and I believe you can use BigTable through Appengine.
Column-oriented databases virtually all feature this in the form of column compression (e.g. "repeat this value for the next 1000 rows"). And if you don't want column compression, they have sparse data filling/interpolation -- e.g. use the last available value from a time series. This is pretty much their bread and butter. Interpolation is essentially making the query engine smarter, so you don't end up in the situation you're apparently facing where you have to insert duplicate records purely to satisfy a simplistic join.
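For example, the run-length idea is only a few lines (a toy sketch, not any particular engine's on-disk format):

```python
from itertools import groupby

def rle_encode(column):
    """'Repeat this value for the next N rows' -- the classic column compression."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(column)]

daily_defaults = [0] * 300 + [1] * 65          # a year of a mostly-constant metric
print(rle_encode(daily_defaults))              # [(0, 300), (1, 65)] instead of 365 rows
```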
Back to this product (which appears to simply wholesale copy databases?): I use LVM for exactly what it is doing -- I create, roll back, access and update LVM snapshots of databases. The snapshots are instant, and in most situations the data duplication is very limited. LVM is one of the coolest, most under-appreciated facets of most Linux installs -- http://goo.gl/J2mIvG
Shameless plug for mite: https://github.com/jdc0589/mite-node
Simple migrations that take advantage of everything you already know about git and sql, plus some other cool stuff.
It's not too mature yet, the readme is mediocre at best, and it has some issues that will pop up when working with a team, but it's pretty damn useful.
While not exactly the same thing, I've recently found and started using https://github.com/nkiraly/DBSteward to specify schema and then store the spec in my repo with the code. It also supports diffing the current schema against a previous one, so that nice upgrade sql scripts can be generated.
This is a nice project. I used to have my database dump tracked by git (in binary mode); any time my db changed, I had to overwrite the file with the new database dump and include it with the commit.
I'm just wondering if this project offers anything special/better than the method I described.
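For comparison, my routine is basically the following (assuming pg_dump and an existing git repo, with a hypothetical database name):

```python
import subprocess

# Overwrite the tracked dump with the current database state...
subprocess.run(["pg_dump", "-Fc", "-f", "db.dump", "app_dev"], check=True)
# ...and record it alongside the code change.
subprocess.run(["git", "add", "db.dump"], check=True)
subprocess.run(["git", "commit", "-m", "Update database dump"], check=True)
```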