The only real point here is that the threshold at which you actually need Cassandra / Dynamo-style scaling keeps rising. "Big data is dead" is a pretty stupid thing to say, typical clickbait marketing from a database that will probably be chucked aside for something trendier in another year.
But at a certain point, a 10,000-core, 5-petabyte single megamachine starts to run into CAP in practice from its internal scale alone: the internal interconnect is effectively a network, with its own latency and partial-failure modes. Arguably it already kind of does.
And no matter how big your node scales, if you need to globally replicate data, you have to replicate it over a network, and then you need Cassandra. (DynamoDB global replication looked shady last time I checked it; I have no idea how row-level timestamps can sensibly merge-resolve conflicting rows updated in separate global regions.)
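For what it's worth, the documented behavior is last-writer-wins reconciliation on write timestamps, and the worry above falls out of that directly. Here's a minimal sketch of timestamp-based LWW merging (the `Row` type and `lww_merge` function are hypothetical illustrations, not DynamoDB's actual API): one of the two concurrent writes is silently discarded, and cross-region clock skew decides which.

```python
import time
from dataclasses import dataclass

@dataclass
class Row:
    """Hypothetical replicated row: a value plus the wall-clock
    timestamp of its last write in some region."""
    value: str
    write_ts: float

def lww_merge(a: Row, b: Row) -> Row:
    """Last-writer-wins: keep whichever replica carries the later
    timestamp. The losing write is silently discarded."""
    return a if a.write_ts >= b.write_ts else b

# Two regions update the same row concurrently while partitioned.
us_east = Row(value="balance=100", write_ts=time.time())
eu_west = Row(value="balance=250", write_ts=time.time() + 0.001)

# On reconnection the replicas converge, but one write is gone,
# and clock skew between regions picked the winner.
merged = lww_merge(us_east, eu_west)
print(merged)  # eu_west wins; us_east's update vanishes
```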