For me the difference seems to be in handling a large number of distinct logs. In Kafka every log partition is a separate set of files, and moreover it keeps them open. So storing many logs means writing to many files, which eventually becomes random write IO; you may also hit the open-file limit. You can multiplex logical logs into each Kafka log, but then you read the other logs unnecessarily.
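To make the open-file concern concrete, here is a back-of-the-envelope sketch (the numbers and the helper function are mine, not from any benchmark; the three-files-per-segment factor assumes a typical Kafka segment layout of a log file plus two index files):

```python
# Kafka keeps every segment file of every partition open, so the file
# descriptor count grows with topics x partitions x segments.

def open_files_estimate(topics, partitions_per_topic, segments_per_partition,
                        files_per_segment=3):
    # files_per_segment: .log + .index + .timeindex per segment (assumed layout)
    return (topics * partitions_per_topic
            * segments_per_partition * files_per_segment)

# e.g. 1000 logical logs as separate topics, 8 partitions each, 10 segments each:
print(open_files_estimate(1000, 8, 10))  # 240000 -- far past a default ulimit of 1024
```

Even modest-looking topologies blow past default per-process descriptor limits, which is why multiplexing into fewer logs is tempting despite the read overhead.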
Keeping SSTables makes writes mostly sequential, as long as you have enough RAM to buffer multiple records of each log, so that they form contiguous blocks in the flushed file.
Actually you could get a very similar result using Cassandra, which also uses SSTables. The difference is that Cassandra keeps merging files, and that compaction actually generates much more IO traffic than the clients do. Cassandra will typically need around 16x more IO for merging than the actual data write rate. You can limit it a bit if you create time-sharded tables.
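The effect of that amplification factor is easy to see with a little arithmetic (a hedged sketch; the 16x figure is the estimate from the comment above, and the function is just illustrative):

```python
# With compaction, each byte a client writes gets rewritten several times
# as it moves through merge passes, so the device sees:
#   device writes = client writes x write amplification.

def device_write_rate(client_mb_s, write_amplification):
    return client_mb_s * write_amplification

# 10 MB/s of client writes at ~16x amplification:
print(device_write_rate(10, 16))  # 160 MB/s of actual disk IO
```

So a disk that nominally keeps up with the client write rate can still be saturated by compaction alone, which is why time-sharded tables (fewer overlapping SSTables to merge) help.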