Look at every mobile gaming company and you will see document stores: Dynamo, Bigtable, DocumentDB. Google's own analytics is built on Bigtable. The examples are countless.
While a document store is a subset of NoSQL databases, NoSQL is not synonymous with document store. It also includes graphs, key-value stores, etc. That's what the original person was getting at - you should've started with "document store".
While I may use JSON for storing click data, I wouldn't do it using a NoSQL database.
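To make the distinction concrete: storing click data as JSON just means serializing events, which any language and any JSON-capable column can do without a NoSQL database. A minimal sketch (the field names here are illustrative, not a real schema):

```python
import json
from datetime import datetime, timezone

# A hypothetical click event; the keys are made up for illustration.
event = {
    "user_id": "u_1234",
    "page": "/pricing",
    "button": "signup",
    "ts": datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc).isoformat(),
}

# Serialize once; the resulting line can go to an append-only log file
# or into any database column that accepts JSON text.
line = json.dumps(event, sort_keys=True)
print(line)
```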
And there is no reason Postgres couldn't handle this kind of workload. Sure, it doesn't come with the needed scaling tools built in, but it's doable. I'm running billions of rows through a Postgres analytics setup.
A purpose-built system can be more efficient, but that doesn't have to mean NoSQL either. You can see some of the serious-scale datastores adopting SQL-like query interfaces; for example, Hive for Hadoop or PlyQL for Druid.
NoSQL is a loose term for a mish-mash of technologies.
Dynamo started as a document store. Bigtable is not one, but I included it because it is another very good option for analytics and is still lumped under the NoSQL umbrella.
Using a relational database for event analytics means that you have a trivial use case. You might have a lot of rows, but your data is dead simple. Otherwise you would be refactoring every week to change the schema and sharding. Not to mention how much money you would spend on hardware.
No, it didn't. The original Dynamo paper doesn't even include the word "document" once! The title of the paper makes it obvious what kind of system it is: "Dynamo: Amazon’s Highly Available Key-value Store".
You should really do some fact checking on your statements.
Bigtable alone does not make an analytics system. You'll need a lot more around that.
I don't have a trivial use case. We use the jsonb data type in Postgres and have no upfront defined schema for metrics.
Postgres' jsonb can out-perform many of the open source NoSQL databases out there. Our system consists of a single-digit number of nodes that can handle hundreds of thousands of writes per second and, as I mentioned already, billions of rows. A single CPU core can scan through more than a million rows per second. Is it the fastest there is? No. Is it good enough for most people? Absolutely.
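The pattern being described is a plain SQL table with one JSON column, so each event can carry different keys and no schema migration is needed when a new metric appears. A minimal, self-contained sketch of that pattern, using SQLite's JSON1 functions so it runs anywhere (in Postgres the column type would be jsonb and the extraction operator `->>` would replace `json_extract`; the event fields are made up for illustration):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# No upfront schema: each event can carry a different set of keys.
events = [
    {"type": "click", "page": "/home", "ms": 12},
    {"type": "purchase", "sku": "A-1", "amount": 9.99},
    {"type": "click", "page": "/pricing", "ms": 40},
]
conn.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [(json.dumps(e),) for e in events],
)

# Aggregate over a key that only some rows have, straight from SQL.
total_clicks = conn.execute(
    "SELECT COUNT(*) FROM events "
    "WHERE json_extract(payload, '$.type') = 'click'"
).fetchone()[0]
print(total_clicks)  # 2
```

In Postgres the equivalent filter would be `payload->>'type' = 'click'`, and a GIN index on the jsonb column keeps such scans fast at scale.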
While Bigtable can be classified as a database that scales to extremely large sizes, I would not recommend it for the backend of a game at all.
It would be extremely expensive to run a workload like this, and it would not be very performant.
This is assuming a very high transaction rate, lots of concurrency, and likely highly volatile data. Bigtable is just not really for that, and neither is Dynamo. Both of these are great as giant-scale, multi-purpose databases, however.