If you really need to scale beyond what Postgres/PostGIS can handle, then you might want to check out GeoMesa[1], which is (very loosely) "PostGIS for HBase, Cassandra, or Google BigTable".
That being said, you may not need it, because Postgres/PostGIS can scale vertically to handle larger datasets than most people realize. I recommend loading your intended data (or your best simulation of it) into a Postgres instance running on one of the extremely large VMs available on your cloud provider, and running a load test with a distribution of the queries you'd expect. Assuming the deliberately over-provisioned instance is able to handle the queries, you can then run some experiments to "right-size" the instance to find the right balance of compute, memory, SSD, etc. If it can handle the queries but not at the QPS you need, then read replicas may also be a good solution.
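For the load-test step, a sketch like the one below can approximate your query mix before you commit to an instance size. It's only a rough harness, not a benchmark tool: it assumes a hypothetical points(id, geom) table in SRID 4326, a placeholder connection string, and an invented 70/30 split between radius and bounding-box queries. Swap in your real schema, DSN, concurrency, and query distribution (or use pgbench with custom scripts if you prefer).

    # Rough load-test sketch for right-sizing a Postgres/PostGIS instance.
    # Assumes a hypothetical table points(id bigint, geom geometry(Point, 4326));
    # the DSN, query mix, and coordinates are placeholders -- adjust to your workload.
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    import psycopg2

    DSN = "host=localhost dbname=geo user=postgres"  # placeholder connection string
    WORKERS = 32            # concurrent clients to simulate
    QUERIES_PER_WORKER = 500

    def radius_query(cur):
        # "Points within 1 km of a random location" -- a common hot path.
        lon, lat = random.uniform(-122.5, -122.3), random.uniform(37.7, 37.8)
        cur.execute(
            """
            SELECT count(*) FROM points
            WHERE ST_DWithin(geom::geography,
                             ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                             1000)
            """,
            (lon, lat),
        )
        cur.fetchone()

    def bbox_query(cur):
        # Bounding-box lookup, e.g. what a map-tile request might issue.
        lon, lat = random.uniform(-122.5, -122.3), random.uniform(37.7, 37.8)
        cur.execute(
            """
            SELECT id FROM points
            WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)
            LIMIT 1000
            """,
            (lon, lat, lon + 0.01, lat + 0.01),
        )
        cur.fetchall()

    def worker(_):
        conn = psycopg2.connect(DSN)
        latencies = []
        with conn.cursor() as cur:
            for _ in range(QUERIES_PER_WORKER):
                # Weight the mix to mirror your expected traffic (70/30 here is made up).
                fn = radius_query if random.random() < 0.7 else bbox_query
                start = time.perf_counter()
                fn(cur)
                latencies.append(time.perf_counter() - start)
        conn.close()
        return latencies

    if __name__ == "__main__":
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            results = [lat for lats in pool.map(worker, range(WORKERS)) for lat in lats]
        elapsed = time.perf_counter() - start
        results.sort()
        total = len(results)
        print(f"{total} queries in {elapsed:.1f}s -> {total / elapsed:.0f} QPS")
        print(f"p50 {results[total // 2] * 1000:.1f} ms, "
              f"p99 {results[int(total * 0.99)] * 1000:.1f} ms")

Run it against the over-provisioned instance first, then rerun on progressively smaller instance types and watch where the p99 latency or QPS falls below what you need.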
Yeah, at my current job we run RDS in AWS and scale it up to an m5.12xlarge when we need to get sh*t done fast. It normally sits at around a 4xlarge simply because the 12xlarge is far too expensive to run all the time.
[1] https://github.com/locationtech/geomesa