Edit: TL;DR: No, not with replication in the picture.
As for whether replication shows fresh data, it sort of can, but not in a very practical way. synchronous_commit = on, when synchronous_standby_names has been configured, does indeed make COMMIT wait until some number of standbys has flushed the transaction to disk, but not until the transaction is visible to new transactions/snapshots on the replicas. In other words, it's just about making sure your transaction is durably stored on N servers before your client is told the COMMIT succeeded; a new transaction on a replica after that might still not see the change (if the startup process hasn't got around to applying it yet). As a small step towards what you want, we added synchronous_commit = remote_apply, which makes COMMIT wait until the configured number of standbys has also applied the WAL for that transaction, so then it will be visible to new transactions there. The trouble is, you either have to configure it so that a dying/unreachable replica can stop all transactions on the primary (blocking the whole system), or so that it only needs some subset of replicas to apply, but then read queries don't know which replicas have fresh data.
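To make that concrete, here's a rough sketch of a remote_apply setup (standby names and table are made up, and the ANY quorum syntax needs PostgreSQL 10 or later):

    # primary's postgresql.conf (illustrative)
    synchronous_standby_names = 'ANY 1 (replica1, replica2)'

    -- then per session or per transaction on the primary:
    SET synchronous_commit = remote_apply;
    BEGIN;
    INSERT INTO t VALUES (42);   -- hypothetical table
    COMMIT;  -- returns only once a listed standby has applied (not merely flushed) the WAL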
I have a proposal to improve that situation: https://commitfest.postgresql.org/22/1589/ Feel free to test it and provide feedback; it's touch and go whether it might still make it into PostgreSQL 12. It's based on a system of read leases, so that replicas only have a limited ability to hold up future write transactions, before they get kicked out of the synchronous replica set, a bit like the way failing disks get kicked out of RAID arrays. Most of the patch is concerned with edge conditions around the transitions (new replicas joining, leases being revoked). The ideas are directly from another open source RDBMS called Comdb2. In a much more general sense, read leases can be found in systems like Spanner, but this is just a Fisher-Price version since it's not multi-master.
As for whether that gets you strict serializability, well, no, because PostgreSQL doesn't even support SERIALIZABLE on read-only replicas yet, and although REPEATABLE READ (which for PostgreSQL means snapshot isolation) gets you close, anomalies are possible even with read-only transactions (see the famous paper by A Fekete, and search for that name in the PostgreSQL isolation tests, for an example). Some early work has been done to try to get SERIALIZABLE (actually SERIALIZABLE READ ONLY DEFERRABLE) working on read-only replicas (https://commitfest.postgresql.org/22/1799/), but some subproblems remain unsolved.
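For what it's worth, this is roughly what you get if you ask a hot standby for SERIALIZABLE today (error and hint text quoted from memory, so treat as approximate):

    -- on a streaming replica (hot standby)
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    ERROR:  cannot use serializable mode in a hot standby
    HINT:  You can use REPEATABLE READ instead.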
Maybe with all of that work we'll get close to the place you want. It's complicated, and we aren't even talking about multi-master.
First, note that PostgreSQL uses SSI, an optimistic strategy for implementing SERIALIZABLE, so it needs to be able to nuke transactions that cannot be proven to be safe. This is a trade-off: other systems use (pessimistic) strict two-phase locking and similar, and we are taking the bet that total throughput will be higher with SSI than with S2PL. Usually that optimism turns out to be right (though not for all workloads). Note nearby comments about another RDBMS whose SERIALIZABLE implementation is so slow that no one uses it in practice; that matches my experience with other non-SSI RDBMSs too.
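In practice that means clients of SERIALIZABLE have to be prepared to catch serialization failures (SQLSTATE 40001) and retry the whole transaction. A minimal write-skew sketch of that, adapted from the pattern in the PostgreSQL docs (the table is made up):

    -- setup (hypothetical table)
    CREATE TABLE t (k int, v int);
    INSERT INTO t VALUES (1, 10), (2, 10);

    -- interleaving of two concurrent sessions:
    -- S1: BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- S2: BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- S1: SELECT sum(v) FROM t WHERE k = 1;
    -- S2: SELECT sum(v) FROM t WHERE k = 2;
    -- S1: INSERT INTO t VALUES (2, 100);
    -- S1: COMMIT;                             -- succeeds
    -- S2: INSERT INTO t VALUES (1, 100);
    -- S2: COMMIT;                             -- fails here (or sometimes earlier) with:
    --     ERROR:  could not serialize access due to read/write dependencies among transactions
    -- S2 is expected to catch SQLSTATE 40001 and retry from the top.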
Next, you have to understand that a read-only snapshot isolation transaction can create a serialization anomaly. Take a look at the read-only-anomaly examples here, showing that a single-node PostgreSQL database can detect that: https://github.com/postgres/postgres/tree/master/src/test/is... (and ../expected shows the expected results).
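Spelled out (this is my paraphrase of the classic Fekete/O'Neil/O'Neil scenario those tests are based on; the schema and numbers are just illustrative):

    CREATE TABLE bank (account text PRIMARY KEY, balance int);
    INSERT INTO bank VALUES ('checking', 0), ('savings', 0);

    -- T1, T2, T3 all run at REPEATABLE READ (i.e. snapshot isolation):
    -- T2: BEGIN; SELECT sum(balance) FROM bank;          -- sees 0
    -- T1: BEGIN; UPDATE bank SET balance = balance + 20
    --            WHERE account = 'savings'; COMMIT;
    -- T3: BEGIN; SELECT * FROM bank; COMMIT;             -- sees savings=20, checking=0
    -- T2: UPDATE bank SET balance = balance - 11         -- withdraw 10 plus a 1 overdraft fee,
    --            WHERE account = 'checking';             -- because the total it saw was 0
    -- T2: COMMIT;                                        -- final: checking=-11, savings=20
    --
    -- No serial order of T1, T2, T3 is consistent with what T3 saw: if T1 had really run
    -- before T2, T2 would have seen a total of 20 and charged no fee. Under SERIALIZABLE,
    -- single-node PostgreSQL aborts one of these transactions; under REPEATABLE READ all
    -- three commit and the anomaly stands.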
Now, how is the primary server supposed to know about read-only transactions that are running on the standby, so it can detect that? Just using SI (REPEATABLE READ) on read replicas won't be good enough for the above example, as the above tests demonstrate.
The solution we're working on is to use DEFERRABLE, which involves waiting until a point in the WAL that is definitely safe. SERIALIZABLE READ ONLY DEFERRABLE is available on the primary server, as seen in the above test, and waits until it can begin a transaction that is guaranteed not to be killed and not to affect any other transaction. The question is whether we can make it work on the replicas. The reason this is interesting is that we think it needs only one-way communication from primary to replicas through the WAL (instead of, say, some really complicated distributed SIREAD lock scheme that I don't dare to contemplate).
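For reference, the existing syntax on the primary looks like this; the open question is making the equivalent work when you're connected to a standby (table name carried over from the sketch above):

    -- on the primary, today: block until a snapshot is known to be safe, then run
    -- with no risk of serialization failure and no SSI bookkeeping overhead
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    SELECT sum(balance) FROM bank;
    COMMIT;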