There can be more to data integrity than foreign keys and transactions.
We have an application to track and distribute work to employees, and there can only be one active work item per item type on an account. We use MSSQL for this and originally enforced the rule with a CHECK constraint, which ultimately failed (a CHECK constraint only sees the row being modified, so it can't really enforce a rule that spans rows). We switched to a partial index (CREATE UNIQUE INDEX ON activities(account_id, activity_type) WHERE activity_state = 0), which works flawlessly. There is absolutely no sane way to implement this at the application layer, and even if there were, we have additional applications and administrative work that access this database directly, so we can't rely on the application enforcing data integrity anyway. MySQL doesn't even enforce CHECK constraints (it parses and silently ignores them), nor does it have partial indexes - while even SQL Server has supported partial (filtered) indexes since SQL Server 2008 (nearly a decade after PostgreSQL implemented them, but hey, still at least a decade before any of the MySQL family will have them).
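To make the trick concrete, here's a minimal sketch of that kind of setup in T-SQL - the table layout, the index name and the "0 = active" encoding are assumptions for illustration, not the actual schema:

  CREATE TABLE activities (
      activity_id    INT IDENTITY PRIMARY KEY,
      account_id     INT NOT NULL,
      activity_type  INT NOT NULL,
      activity_state INT NOT NULL   -- assuming 0 = active, anything else = closed
  );

  -- At most one *active* row per (account_id, activity_type); closed rows fall outside the filter.
  CREATE UNIQUE INDEX ux_activities_one_active
      ON activities (account_id, activity_type)
      WHERE activity_state = 0;

A second active row for the same account and type then fails at insert time with a duplicate key error, no matter which application or ad-hoc script tried to write it.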
Should you put application logic in your database? No, probably not. But you should always make sure your database prevents ANY invalid data from being put into it; in all likelihood, your little application won't be the only one touching that database forever.
Ha... that's a bit of a hack! I've had to do a similar trick for a unique constraint on columns where we didn't care if the value was null. SQL Server (for some reason) treats NULL as a value in this case; in other words, there can only be one NULL value in the column for the purposes of a unique constraint.
E.g. (index, table and column names here are just placeholders):

  CREATE UNIQUE INDEX ux_my_table_col ON my_table (col) WHERE col IS NOT NULL;
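With that filter in place, repeated NULLs go through while genuine duplicates are still rejected - roughly (same hypothetical my_table with a nullable col):

  INSERT INTO my_table (col) VALUES (NULL);  -- ok
  INSERT INTO my_table (col) VALUES (NULL);  -- ok too: NULL rows fall outside the filtered index
  INSERT INTO my_table (col) VALUES (42);    -- ok
  INSERT INTO my_table (col) VALUES (42);    -- fails with a duplicate key error on ux_my_table_col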
Stuff like this always makes me glad PostgreSQL takes adherence to the SQL standard seriously: NULLs are supposed to be treated as non-equal for unique indexes, so any number of them can coexist. The SQL Server behaviour is probably some stupid design from the Sybase days that MS decided was easier to keep for compatibility than to change, though.
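For comparison, a plain UNIQUE constraint in PostgreSQL already allows multiple NULLs with no filtered-index workaround - a toy example:

  CREATE TABLE t (col int UNIQUE);
  INSERT INTO t VALUES (NULL), (NULL);  -- both rows accepted: NULLs are distinct for the constraint
  INSERT INTO t VALUES (1), (1);        -- rejected: duplicate key value violates the unique constraint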
b) just the fact that you're (rudely) giving me the religion on CHECK constraints means you've probably never actually scaled a DB for high transaction volume. Once you have to start sharding and denormalizing for performance, you've left ACID territory, and most of the validation has to live in the application anyway.