Yep, you need Google-scale for your shitty startup from the get-go. Otherwise why bother? Especially now, when you have single servers with only a few terabytes of RAM at your disposal.
Many of the problems with databases that I outlined in that post are about how they create complexity, which is not necessarily related to performance or scale. Complexity kills developer productivity, which reduces iteration speed, which can be the difference between an application succeeding or failing.
I can imagine Codd saying the exact inverse: any sufficiently complex data model quickly becomes intractable when developers have to hand-assemble the ideal indexes and algorithms for each new query, which kills productivity and reduces iteration speed - particularly as the scale and relative distributions of the data change. The whole idea of declarative 4GLs / SQL is that a query engine with cost-based optimization can eliminate an entire class of such work for developers.
Undoubtedly the reality of widely available SQL systems today has not lived up to that original relational promise in the context of modern expectations for large-scale reactive applications - maybe (hopefully) that can change - but in the meantime it's good to see Rama here with a fresh take on what can be achieved with a modern 3GL approach.
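To make the declarative point concrete, here's a toy sketch (schema and names made up, using SQLite only because it's handy): the query states what we want, and the planner picks the access path instead of us wiring up the index lookup by hand.

```python
import sqlite3

# Hypothetical toy schema; the point is only that the query stays the same
# while the engine decides how to execute it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Declarative: we state *what* we want, not which index to walk.
# The cost-based planner should choose the customer index here.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)

print(conn.execute(query, (42,)).fetchone())
```

If the data distribution changes (say customer_id becomes nearly unique vs. heavily skewed), the plan can change without any application code changing - that's the class of work the optimizer absorbs.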
In my department's experience, event sourcing has brought complexity, not taken it away. In theory, when done right - by skilled and disciplined teams, used to store the state of one system and build projections off of it, not to publish state changes to other systems - I think it might work. But in most cases it's overkill: a big investment with an uncertain payoff.
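For what it's worth, the shape I mean is roughly this (a minimal sketch, all names made up): an append-only event log for one system, with state derived by folding projections over it.

```python
from dataclasses import dataclass
from typing import Iterable

# Hypothetical event types for illustration only.
@dataclass(frozen=True)
class AccountCredited:
    account_id: str
    amount: int

@dataclass(frozen=True)
class AccountDebited:
    account_id: str
    amount: int

Event = AccountCredited | AccountDebited

def balance_projection(events: Iterable[Event]) -> dict[str, int]:
    """Rebuild current balances by folding over the event log."""
    balances: dict[str, int] = {}
    for e in events:
        delta = e.amount if isinstance(e, AccountCredited) else -e.amount
        balances[e.account_id] = balances.get(e.account_id, 0) + delta
    return balances

log = [AccountCredited("a1", 100), AccountDebited("a1", 30)]
print(balance_projection(log))  # {'a1': 70}
```

The sketch is trivial; the complexity shows up around it - versioning events, replaying large logs, keeping projections consistent, and resisting the temptation to turn the log into an integration bus.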
Interesting - what if one creates multiple databases, but on the same instance (or even in the same process group)? Could that resolve these conflicting concerns somehow? Is there a super-proxy that would only log transaction phases and otherwise offload the work to the database servers (or SQLite wrappers)?
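Roughly the shape I'm imagining (purely a sketch, every name here is hypothetical, and the commit step is naive compared to real two-phase commit):

```python
import sqlite3

# A "super-proxy" that only records transaction phases; the actual work is
# offloaded to per-database SQLite connections on the same instance.
class TxnProxy:
    def __init__(self, paths):
        self.shards = {name: sqlite3.connect(p) for name, p in paths.items()}
        self.phase_log = []  # the proxy's only state: which phase each txn reached

    def execute(self, txn_id, shard, sql, params=()):
        self.phase_log.append((txn_id, shard, "execute"))
        return self.shards[shard].execute(sql, params)

    def commit(self, txn_id):
        self.phase_log.append((txn_id, None, "prepare"))
        for conn in self.shards.values():
            conn.commit()  # naive; a real coordinator needs prepare/abort handling
        self.phase_log.append((txn_id, None, "commit"))

proxy = TxnProxy({"users": ":memory:", "orders": ":memory:"})
proxy.execute("t1", "users", "CREATE TABLE u (id INTEGER)")
proxy.execute("t1", "users", "INSERT INTO u VALUES (1)")
proxy.commit("t1")
print(proxy.phase_log)
```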