Yes, the db has been the most challenging aspect for us. We have 3 situations -
1. "Common baseline". With a relatively stable product, most branches (as in ~51%) do not impact the schema. For testing / QA purposes, these share one central QA db and pollute each out. Turns out, a lot of the times this is quite ok because the PR is about how the data is displayed, or improved logging, or UX change, or a security layer or anything else other than core domain knowledge - they don't care for the data that much.
2. "I'm special". Some branches do modify data (whether the format or the structure). To handle these, the manifest.json file has an option to request a separate database. If present, the rollout script will do "pg_dump + copy" of the shared staging DB, and duplicate it into "qa_$BRANCH", then update the config file (or .env for Docker) with the appropriate connection value. Additionally, it will all *sql files in a dir specified in manifest.json against the clone DB. This is done on every release, which does get annoying by resetting the qa data (we could add another manifest switch here I). On the upside, it forces you to codify all data migration rules from the start.
3. "I am very special". Some changes transform data in a way that requires business processing and cannot be done with easy SQL. Sorry, out of luck - we don't automate special cases yet. The developer has to pull the QA database to localhost, do his magic, and push it back. Not ideal, but hasn't caused any problems yet. If ain't broke...
1. "Common baseline". With a relatively stable product, most branches (as in ~51%) do not impact the schema. For testing / QA purposes, these share one central QA db and pollute each out. Turns out, a lot of the times this is quite ok because the PR is about how the data is displayed, or improved logging, or UX change, or a security layer or anything else other than core domain knowledge - they don't care for the data that much.
2. "I'm special". Some branches do modify data (whether the format or the structure). To handle these, the manifest.json file has an option to request a separate database. If present, the rollout script will do "pg_dump + copy" of the shared staging DB, and duplicate it into "qa_$BRANCH", then update the config file (or .env for Docker) with the appropriate connection value. Additionally, it will all *sql files in a dir specified in manifest.json against the clone DB. This is done on every release, which does get annoying by resetting the qa data (we could add another manifest switch here I). On the upside, it forces you to codify all data migration rules from the start.
3. "I am very special". Some changes transform data in a way that requires business processing and cannot be done with easy SQL. Sorry, out of luck - we don't automate special cases yet. The developer has to pull the QA database to localhost, do his magic, and push it back. Not ideal, but hasn't caused any problems yet. If ain't broke...