That's all true, but at some point the combinations of paths explode. It is not possible to write tests for all of the combinations, but it is possible to cover them eventually with some probability. Fuzzing covers more execution path combinations over time.
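For illustration, here is a minimal sketch of that idea in Python. It is not QuestDB's actual test harness; the `Table` class and its operations are a hypothetical toy model. Each seed produces a different random operation sequence, which is checked against a simple reference model, so repeated runs explore path combinations that would be impractical to enumerate by hand.

```python
import random

class Table:
    """Toy storage model with staged (uncommitted) and committed rows."""
    def __init__(self):
        self.committed = []
        self.pending = []

    def insert(self, row):
        self.pending.append(row)

    def commit(self):
        self.committed.extend(self.pending)
        self.pending.clear()

    def rollback(self):
        self.pending.clear()


def run_fuzz_iteration(seed: int) -> None:
    rng = random.Random(seed)
    table = Table()
    expected = []   # reference model of committed rows
    staged = []
    for _ in range(rng.randint(1, 50)):
        op = rng.choice(["insert", "commit", "rollback"])
        if op == "insert":
            row = rng.randint(0, 1000)
            table.insert(row)
            staged.append(row)
        elif op == "commit":
            table.commit()
            expected.extend(staged)
            staged.clear()
        else:
            table.rollback()
            staged.clear()
    # Invariant: the implementation agrees with the reference model.
    assert table.committed == expected, f"divergence with seed {seed}"


if __name__ == "__main__":
    # Each run covers a different path combination; coverage grows over time.
    for seed in range(10_000):
        run_fuzz_iteration(seed)
```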
We write to the WAL and then register the transaction in the transaction sequence registry.
If a concurrent transaction was registered between the start and the end of our transaction, we update the current uncommitted transaction data with the concurrent transactions and retry registering it in the sequencer.
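A rough sketch of that commit path, using a toy in-process sequencer. This is not QuestDB's actual code; `TxnSequencer`, `wal_append` and `rebase_onto` are illustrative names standing in for the WAL write, the sequence registry, and the merge-with-concurrent-transactions step described above.

```python
import threading
from typing import Optional

class TxnSequencer:
    """Toy stand-in for the transaction sequence registry."""
    def __init__(self):
        self._lock = threading.Lock()
        self._last_txn = 0

    def last_txn(self) -> int:
        with self._lock:
            return self._last_txn

    def try_register(self, expected_last_txn: int) -> Optional[int]:
        """Register the next txn only if nothing committed since expected_last_txn."""
        with self._lock:
            if self._last_txn != expected_last_txn:
                return None           # a concurrent transaction won the race
            self._last_txn += 1
            return self._last_txn


def commit(sequencer: TxnSequencer, wal_append, rebase_onto) -> int:
    wal_append()                               # 1. durably write to the WAL
    seen = sequencer.last_txn()
    while True:
        txn = sequencer.try_register(seen)     # 2. try to take the next slot
        if txn is not None:
            return txn
        new_seen = sequencer.last_txn()
        rebase_onto(seen, new_seen)            # 3. fold in concurrent txns
        seen = new_seen                        #    ...and retry registration
```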
To scale to multi-master, we will move the transaction sequence registry into a service backed by a consensus algorithm.
The comparison with the old version is actually in the article, for the patient reader. It could go at the top, but I don't think it would make a difference. At the end of the day, it is an article on the official QuestDB website, which gives the reader a spoiler about the bias.
I am intrigued to see what Timescale is going to publish next.
There were two queries in the QuestDB benchmark over the same table. ClickHouse didn't even try to match both of them, choosing one as a victim. I guess that's what happens when you optimise the data storage for one query.
Why does the lack of indexes matter? Especially when the size on disk is so much higher? Defining a sensible index isn't an unreasonable or daunting task, and minimal effort in CH got a 4x speedup over QuestDB. "It's faster if you invest literally zero time making it efficient" doesn't offer any practical benefit to anyone.
If it were demonstrated that Quest did a better job overall in the majority of cases where an optimization would have been missed, that would be one thing. But this feels awfully nitpicky.
The article is not _just adding an index_. It embeds one of the search fields in the table's _primary key_, which likely means the whole physical table layout is tailored to that single specific query.
While that can help win this particular benchmark, it's questionable whether it's usable in practice. Chances are an analytical database serves queries of various shapes. If you only need to run a single query over and over again, you might be better off with a stream processing engine anyway.
The primary key is, in effect, an index. Specializing on the latitude field of a table of geographic data seems like an incredibly small thing to nitpick.
Doesn't matter, since that clearly wasn't the purpose of the article. After all, they were totally happy to add an index for another competing DB as long as they happened to win that comparison. Then they crowed about how they beat having an index.
So, maybe don't create specific scenarios for corner cases and then generalize the outcome? Write articles about common scenarios that are important for people who will use the technology on a daily basis.