I think you have to think hard about what you want out of a history-tracking system, and there isn't a one-size-fits-all answer. We did something like the article describes, but much more elaborate. At the core, though, we addressed the concern you raise in two ways:
1. We forced fully serialized write transactions, because we determined that users needed simplified semantics they could understand: a linear history model for the whole DB, not a weird graph of MVCC states leaking out on different rows. This obviously has significant performance and scalability implications. Our worldview is more for small, long-tail use cases, so we scale by running multiple independent databases for different communities rather than one large database with optimized write concurrency.
2. We tracked the version timestamps generated by all write transactions in another system table with a uniqueness constraint. A theoretical collision here would abort the second write transaction with a conflict, requiring the application to retry (with a later timestamp). I am not sure it has ever happened in practice, given the high-precision timestamps available in a modern Postgres deployment. Both mechanisms are sketched just after this list.
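Roughly, in Postgres terms (hypothetical names here; our real schema is more involved):

    -- the uniqueness constraint means a timestamp collision
    -- aborts the later transaction, which then retries
    CREATE TABLE _version (
      snap_ts timestamptz PRIMARY KEY
    );

    BEGIN;
    -- serialize all writers: one write transaction at a time, whole DB
    SELECT pg_advisory_xact_lock(0);
    -- record this transaction's version timestamp
    INSERT INTO _version VALUES (clock_timestamp()) RETURNING snap_ts;
    -- ... the actual writes, each history row stamped with that snap_ts ...
    COMMIT;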
But we went wild in other ways: we store model information (much like what is in pg_catalog) and keep history tracking of that too. This allows different snapshots to add or remove schemas, tables, columns, key constraints, and foreign-key constraints, as well as rename schemas, tables, and columns. The history storage uses JSONB, and the model snapshot is needed to properly project this content back into the specific table definition at a given time.
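A minimal sketch of that layout, again with hypothetical names and a simplified shape:

    -- the catalog itself is versioned alongside the data
    CREATE TABLE _model_history (
      snap_ts timestamptz PRIMARY KEY REFERENCES _version (snap_ts),
      model   jsonb NOT NULL  -- schemas/tables/columns/constraints as of snap_ts
    );

    -- tuple history keyed by stable IDs, so renames don't orphan old rows
    CREATE TABLE _tuple_history (
      table_id bigint NOT NULL,     -- stable ID, survives table renames
      row_id   bigint NOT NULL,
      during   tstzrange NOT NULL,  -- [version created, version superseded)
      rowval   jsonb NOT NULL       -- keyed by stable column IDs, not current names
    );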
Our system's application is a relatively low-level web service that provides access to a shared database. So we're not really targeting hand-written SQL use cases (though a privileged system operator can feasibly run custom SQL against the "live" database). Instead, in our regular use case, all queries are expressed through our web service API, which generates the SQL. An extra parameter can specify a snapshot ID, making the API operate read-only over an earlier snapshot; it then generates the different SQL necessary to perform the same kind of query over historical data.
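To give a flavor of that rewrite, with made-up names and stable column IDs ('c12'/'c13' would really be resolved through the model snapshot):

    -- what the API might generate against the live table:
    SELECT name, height FROM demo.samples WHERE height > 2.0;

    -- the same API call with a snapshot timestamp $1, regenerated over history:
    SELECT h.rowval->>'c12'           AS name,
           (h.rowval->>'c13')::float8 AS height
      FROM _tuple_history h
     WHERE h.table_id = 7                   -- stable ID for demo.samples
       AND h.during @> $1::timestamptz      -- version live at the snapshot
       AND (h.rowval->>'c13')::float8 > 2.0;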
And, since our user-visible concept is a web service rather than a raw database, we inject our own web authentication and authorization scheme. So authenticated web users can be tracked for the "who did it" metadata, and we also built our own fine-grained access control scheme that the web access layer understands. This policy model is written in terms of the database model, so it too gets tracked as snapshots, each appropriate to one version of the model. Querying an older snapshot means reconstructing both the older model and the older policy written in terms of that model.
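One plausible shape for that (hypothetical, and simpler than what we actually do):

    -- policy versioned alongside the model it refers to
    CREATE TABLE _policy_history (
      snap_ts timestamptz PRIMARY KEY REFERENCES _version (snap_ts),
      policy  jsonb NOT NULL  -- ACLs written against stable table/column IDs
    );

    -- serving a query at snapshot $1 means loading the latest model and
    -- policy at or before that snapshot, then compiling SQL under both
    SELECT policy FROM _policy_history
     WHERE snap_ts <= $1::timestamptz
     ORDER BY snap_ts DESC
     LIMIT 1;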
A final kink is the possibility of needing to amend history. You might need to go back and REVISE the policies applied to older timestamps, because you realize your policy had a flaw, or because organizational policies have changed and you want to enforce them on any subsequent queries of older data. You might also need to REDACT older data entirely for some confidentiality or compliance reason. This could mean scrubbing historical tuples to null out certain fields, or even making some historical tuples disappear entirely.
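In terms of the hypothetical tables above, redaction is just surgery on the history store:

    -- REDACT one field from every historical version of a row:
    UPDATE _tuple_history
       SET rowval = jsonb_set(rowval, '{c13}', 'null'::jsonb)
     WHERE table_id = 7 AND row_id = 42;

    -- or erase the tuple's history outright:
    DELETE FROM _tuple_history
     WHERE table_id = 7 AND row_id = 42;

    -- REVISE is the softer case: rewrite the relevant _policy_history rows
    -- in place, since snapshot reads reconstruct policy from them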