I too believe it is the best fit, but the lack of aggregate functions is what gets most people. Counter columns are very limiting, and many engineers don't want to struggle with storing state during streaming writes to precalculate aggregates on write. Engineers also tend not to want to do large reads to rebuild big aggregate values on small data changes.
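To make the counter-column limitation concrete, here is a minimal sketch of pre-aggregating on write from the JVM. It assumes the DataStax Java driver 4.x on the classpath and a local node; the metrics keyspace, hit_counts table, and metric name are made up. Counters can only count and sum next to their primary key, which is exactly why they fall short for richer aggregates.

    import java.time.LocalDate
    import com.datastax.oss.driver.api.core.CqlSession

    object CounterRollup {
      def main(args: Array[String]): Unit = {
        val session = CqlSession.builder().build() // defaults to 127.0.0.1:9042

        session.execute(
          """CREATE KEYSPACE IF NOT EXISTS metrics
            |WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""".stripMargin)

        // A counter table may only hold counter columns outside its primary key,
        // so it works for counts/sums but not for avg, percentiles, or EWMA.
        session.execute(
          """CREATE TABLE IF NOT EXISTS metrics.hit_counts (
            |  metric_id text,
            |  day       date,
            |  hits      counter,
            |  PRIMARY KEY ((metric_id), day))""".stripMargin)

        // Pre-aggregate on write: every incoming event bumps its day bucket.
        val bump = session.prepare(
          "UPDATE metrics.hit_counts SET hits = hits + 1 WHERE metric_id = ? AND day = ?")
        session.execute(bump.bind("api.requests", LocalDate.now()))

        session.close()
      }
    }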
It is what we use, and we use Spark Streaming for the rollups. We also evaluated Influx, OpenTSDB, and Druid. So long as you know your client's exact read patterns, I think Cassandra is definitely the best fit for most things.
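For flavour, a minute-average rollup boiled down to a sketch. It assumes the spark-cassandra-connector is on the classpath and that a metrics.minute_avg table already exists with a bigint minute column; the socket source and all names are placeholders, not our actual job.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import com.datastax.spark.connector.SomeColumns
    import com.datastax.spark.connector.streaming._ // adds saveToCassandra to DStreams

    object MinuteRollup {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("minute-rollup")
          .set("spark.cassandra.connection.host", "127.0.0.1")
        val ssc = new StreamingContext(conf, Seconds(60))

        // Input lines look like: "<metric_id> <epoch_millis> <value>"
        ssc.socketTextStream("localhost", 9999)
          .map { line =>
            val Array(metric, ts, value) = line.split(' ')
            val minute = ts.toLong / 60000 * 60000 // truncate to the minute
            ((metric, minute), (value.toDouble, 1L))
          }
          .reduceByKey { case ((s1, n1), (s2, n2)) => (s1 + s2, n1 + n2) }
          .map { case ((metric, minute), (sum, n)) => (metric, minute, sum / n) }
          .saveToCassandra("metrics", "minute_avg", SomeColumns("metric_id", "minute", "avg"))

        ssc.start()
        ssc.awaitTermination()
      }
    }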
I did not. The Cassandra schema appears similar. We had enough custom needs for how we aggregate data (e.g. EWMA) that we probably needed to build this ourselves anyway.
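By custom aggregation I mean things like the following, which Cassandra's built-in aggregates don't cover, so we fold it ourselves in the rollup job. Plain Scala; alpha and the sample values are arbitrary.

    // Exponentially weighted moving average over a time-ordered series,
    // seeded with the first sample; alpha controls how fast old values decay.
    object Ewma {
      def ewma(ordered: Seq[Double], alpha: Double): Double =
        ordered.reduceLeft((acc, x) => alpha * x + (1 - alpha) * acc)

      def main(args: Array[String]): Unit = {
        val samples = Seq(10.0, 12.0, 11.0, 15.0)
        println(ewma(samples, alpha = 0.3)) // prints 12.004 for these samples
      }
    }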
From what I read (so not confirmed), one problem is that it uses space inefficiently. Since I predefine the columns anyway, it might as well use an efficient storage layout instead of MongoDB-style key-value pairs.
I am also not convinced the partition key is necessary (although it doesn't hurt once you have it).
Finally, since my application runs on the JVM, I'd actually like to see a direct integration / an API that lets me skip the socket overhead and launch Cassandra directly on start-up. The advantage is mainly memory (you only need half of it) and latency, although I agree this is more difficult to maintain in a distributed scenario.
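The closest thing today is starting the server inside your own JVM, which some test tooling already does. A very rough sketch, assuming the full Cassandra server jar is on the classpath and a cassandra.yaml at the path shown; CassandraDaemon is an internal class rather than a supported embedding API, and drivers still connect over the native protocol (just via loopback).

    import org.apache.cassandra.service.CassandraDaemon

    object EmbeddedCassandra {
      def main(args: Array[String]): Unit = {
        // Point the server at a config file and data directory you control.
        // The paths and properties here are placeholders for whatever your app ships.
        System.setProperty("cassandra.config", "file:///opt/myapp/conf/cassandra.yaml")
        System.setProperty("cassandra-foreground", "true")
        System.setProperty("cassandra.storagedir", "/opt/myapp/data")

        // Boots a single node inside this JVM process, sharing one heap
        // with the application that started it.
        val daemon = new CassandraDaemon()
        daemon.activate()
      }
    }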
Writes are faster than reads, it's an AP system, and you shouldn't update rows frequently unless you want tombstone hell. There's also TTL.
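The TTL point as a sketch (DataStax Java driver 4.x assumed; keyspace, table, and values are made up): raw points age out on their own, so you never issue explicit DELETEs for old data.

    import com.datastax.oss.driver.api.core.CqlSession

    object TtlWrite {
      def main(args: Array[String]): Unit = {
        val session = CqlSession.builder().build() // defaults to 127.0.0.1:9042

        // Each raw point expires 30 days (2592000 s) after the write,
        // so old data disappears without any explicit DELETE statements.
        session.execute(
          """INSERT INTO metrics.raw_points (metric_id, ts, value)
            |VALUES ('cpu.load', toTimestamp(now()), 0.42)
            |USING TTL 2592000""".stripMargin)

        session.close()
      }
    }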
Are there any cons to using Cassandra as a time-series database? I'd like to hear them.
The biggest thing with Cassandra is that you should know your queries up front, before you model your data.
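Concretely, "know your queries first" means each table is shaped around one read path. A sketch under made-up names (DataStax Java driver 4.x assumed), where the only query you plan to run is "points for metric X on day Y, newest first":

    import com.datastax.oss.driver.api.core.CqlSession

    object QueryFirstModel {
      def main(args: Array[String]): Unit = {
        val session = CqlSession.builder().build() // defaults to 127.0.0.1:9042

        // Partition = (metric, day), rows clustered newest-first: the table
        // exists purely to serve the one query below.
        session.execute(
          """CREATE TABLE IF NOT EXISTS metrics.points_by_day (
            |  metric_id text,
            |  day       date,
            |  ts        timestamp,
            |  value     double,
            |  PRIMARY KEY ((metric_id, day), ts)
            |) WITH CLUSTERING ORDER BY (ts DESC)""".stripMargin)

        // The intended read hits exactly one partition and returns rows in order.
        val rows = session.execute(
          "SELECT ts, value FROM metrics.points_by_day " +
          "WHERE metric_id = 'cpu.load' AND day = '2016-01-01'")
        rows.forEach(r => println(s"${r.getInstant("ts")} ${r.getDouble("value")}"))

        session.close()
      }
    }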