
Both the HLL (Algebird) and TDigest implementations we're using have a simple way to serialize a compressed representation. So the update is basically just reading the row, merging the incoming sketch with the value currently stored, and writing the merged value back.
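
To make that concrete, the serialize/merge step looks roughly like this in Scala with Algebird (helper names and the 12-bit precision are just for illustration):

    import com.twitter.algebird.{HLL, HyperLogLog, HyperLogLogMonoid}

    // illustrative precision; Algebird keeps 2^bits registers internally
    val hllMonoid = new HyperLogLogMonoid(12)

    // build an HLL from a raw value
    def sketchOf(value: Array[Byte]): HLL = hllMonoid.create(value)

    // compressed representation that fits in a single FDB value
    def serialize(hll: HLL): Array[Byte] = HyperLogLog.toBytes(hll)
    def deserialize(bytes: Array[Byte]): HLL = HyperLogLog.fromBytes(bytes)

    // the read-modify-write step: merge what's stored with what we just saw
    def merged(stored: Array[Byte], incoming: HLL): Array[Byte] =
      serialize(hllMonoid.plus(deserialize(stored), incoming))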

Depending on how often you write to the row, you could avoid doing the merge on write entirely by using APPEND_IF_FITS and just merging the byte arrays when you read.
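
For example (a sketch, with the framing made up): length-prefix each serialized sketch so the reader can split the concatenated value back apart:

    import java.nio.ByteBuffer
    import com.apple.foundationdb.{MutationType, Transaction}

    // Hypothetical framing: each serialized HLL is written as a 4-byte length
    // prefix followed by its bytes, so the reader can split them apart later.

    // write path: atomic blind append, no read and no read conflict range; FDB
    // only applies it while the combined value stays under the value size limit
    def appendSketch(tr: Transaction, key: Array[Byte], sketch: Array[Byte]): Unit = {
      val framed =
        ByteBuffer.allocate(4 + sketch.length).putInt(sketch.length).put(sketch).array()
      tr.mutate(MutationType.APPEND_IF_FITS, key, framed)
    }

    // read path: split the concatenated frames and merge the sketches at query time
    def splitSketches(value: Array[Byte]): Seq[Array[Byte]] = {
      val buf = ByteBuffer.wrap(value)
      val out = Seq.newBuilder[Array[Byte]]
      while (buf.remaining() >= 4) {
        val chunk = new Array[Byte](buf.getInt())
        buf.get(chunk)
        out += chunk
      }
      out.result()
    }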

It's nice that FDB gives you so much low-level flexibility; you can do whatever you feel fits your use case.




Hey man, that's pretty cool, and we do exactly the same thing using Cassandra instead of FDB. Since Cassandra doesn't support transactions at high volume (100K tps), we do a shuffle so that all updates for the same key do the read/modify/write from the same machine. It seems like with FDB you can get away without that since it supports transactions? My question to you is: what volume is your system operating at? Also, how does it handle skew? Let's say you need to update the HLL for a key that is heavily skewed; does your FDB transaction unwind fast enough not to slow down the whole system?


Great questions!

> what is the volume your system is operating at?

This varies, as our workload is dynamic in that anyone can inject a query against the data stream at any time, but for the sake of argument let's say 5k.

> Also how does it work for skews?

FoundationDB does a magnificent job of automatically detecting skew and physically relocating data. However, to mitigate write skew, I use a time-bucketing technique where part of the key is a MURMUR3 hash of the minute_of_hour, so heavy write loads can only affect a given server for one minute. This has helped with certain metrics.
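
A rough sketch of that idea (the key layout here is illustrative, not our exact schema):

    import java.time.Instant
    import scala.util.hashing.MurmurHash3
    import com.apple.foundationdb.tuple.Tuple

    // Illustrative key layout: leading with a murmur3 hash of minute_of_hour
    // spreads consecutive minutes across different parts of the keyspace, so a
    // hot metric only hammers one set of storage servers for about a minute.
    def bucketedKey(metric: String, now: Instant): Array[Byte] = {
      val minuteOfHour = (now.getEpochSecond / 60) % 60
      val spread = MurmurHash3.stringHash(minuteOfHour.toString)
      Tuple.from(Long.box(spread.toLong), metric, Long.box(minuteOfHour)).pack()
    }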

> Lets say you need to update HLL for a key that is heavily skewed, does your FDB transaction unwind fast enough not to slow down the whole system?

There isn't really a concept of an HLL (or key) being heavily skewed. A key lives on a single server (or multiple, depending on replication). Essentially, when I want to merge additional HLL content into one already stored, I just read it, deserialize it, merge it with the one I have, and then write the result back to FDB. Because of transactions, I can ensure that nobody else is doing the exact same thing I am doing. If they were... then my (or their) transaction would fail and retry. The retry is important because it reattempts the same logic, except the result I get back from the database is now the merged result from somebody else. This lets you ensure that idempotent/atomic operations happen as you'd expect.
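
In code, that loop is roughly the following (Scala against the FDB Java bindings; names are illustrative, and db.run() is what drives the automatic retry):

    import com.apple.foundationdb.{Database, Transaction}
    import com.twitter.algebird.{HLL, HyperLogLog, HyperLogLogMonoid}

    val hllMonoid = new HyperLogLogMonoid(12)  // illustrative precision

    // read/deserialize/merge/write inside one transaction; db.run() re-runs the
    // closure if a conflicting writer committed first, so the retry naturally
    // picks up the other writer's merged result
    def mergeIntoFdb(db: Database, key: Array[Byte], incoming: HLL): Unit =
      db.run((tr: Transaction) => {
        val stored = tr.get(key).join()  // blocking for brevity
        val combined =
          if (stored == null) incoming
          else hllMonoid.plus(HyperLogLog.fromBytes(stored), incoming)
        tr.set(key, HyperLogLog.toBytes(combined))
        combined
      })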


Thanks for the reply, got a few more questions for you :-)

Let's say you are counting distinct IPs used by `users` with HLL, and you start getting DDoSed on certain users. Since I am assuming you are not doing a shuffle before writing to FDB, you will be locking the user's key, reading the HLL, deserializing, merging, and writing back to FDB from multiple machines, which will result in a lot of rejected transactions and retries. My question is whether retries unwind fast enough, or whether you end up dropping data on the floor because you exhaust the retry count.


Turns out we are doing a shuffle :) We're using Apache Flink for the aggregation step (5-second window), which performs a merge per key before writing the value out. So at the end of the day, we only read/deserialize/merge/write once every 5 seconds, assuming of course that we received data for that HLL aggregation.
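
The Flink side looks roughly like this (the event shape and names are made up for illustration):

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
    import org.apache.flink.streaming.api.windowing.time.Time
    import com.twitter.algebird.{HLL, HyperLogLogMonoid}

    case class Event(key: String, value: Array[Byte])

    val hllMonoid = new HyperLogLogMonoid(12)  // illustrative precision

    // key the stream, fold each key's events into a single HLL per 5-second
    // window, and hand only the window result to the FDB sink (not shown)
    def aggregate(events: DataStream[Event]): DataStream[(String, HLL)] =
      events
        .map(e => (e.key, hllMonoid.create(e.value)))
        .keyBy(_._1)
        .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
        .reduce((a, b) => (a._1, hllMonoid.plus(a._2, b._2)))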

However, due to the need for HA, we might run two or three clusters in different AZs, which means a few servers may each write a partial aggregation to the same row; this is where the awesomeness of FDB plays a role.

That being said, our P99 latency writing to FDB is typically very low (a few ms). We're usually doing 4,000-5,000 transactions a second at any given time.


Not the person you're responding to, but you can merge HLLs together, so if your workload were skewed, you could hash the value you're adding to the HLL and distribute the writes across more keys in FoundationDB.
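
A sketch of that idea (shard count and key layout are arbitrary):

    import scala.util.hashing.MurmurHash3
    import com.apple.foundationdb.tuple.Tuple

    val shards = 16  // arbitrary; more shards = less write contention, more reads

    // hash the value being added to pick one of N sub-keys, so concurrent
    // writers rarely collide on the same row
    def shardedKey(baseKey: String, value: Array[Byte]): Array[Byte] = {
      val shard = Math.floorMod(MurmurHash3.bytesHash(value), shards)
      Tuple.from(baseKey, Int.box(shard)).pack()
    }

    // a count-distinct read then fetches (baseKey, 0) .. (baseKey, shards - 1)
    // and merges the stored HLLs before reporting a value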

Additionally, depending on the write rate and the size of the data being written to the HLL, it may be worth only writing it out periodically and keeping a log of recent values that you read at query time.

There is a trade-off between needlessly re-writing mostly unchanged data and read performance, similar to the I/O amplification trade-off in storage engines derived from log-structured merge trees.
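
A sketch of that pattern (key names and buffer layout are made up): recent sketches get blind-written under a prefix, the query path reads and merges both the compacted sketch and the buffer, and a periodic job folds the buffer back in:

    import com.apple.foundationdb.{Database, Range => FdbRange, Transaction}
    import com.twitter.algebird.{HLL, HyperLogLog, HyperLogLogMonoid}
    import scala.jdk.CollectionConverters._

    val hllMonoid = new HyperLogLogMonoid(12)  // illustrative precision

    // periodic compaction: fold the buffered recent sketches under bufferPrefix
    // into the single stored sketch, then clear the buffer, all in one transaction
    def compact(db: Database, sketchKey: Array[Byte], bufferPrefix: Array[Byte]): Unit =
      db.run((tr: Transaction) => {
        val base = Option(tr.get(sketchKey).join())
          .map(HyperLogLog.fromBytes)
          .getOrElse(hllMonoid.zero)
        val recent = tr.getRange(FdbRange.startsWith(bufferPrefix)).asList().join()
          .asScala.map(kv => HyperLogLog.fromBytes(kv.getValue))
        val updated = recent.foldLeft(base)(hllMonoid.plus)
        tr.set(sketchKey, HyperLogLog.toBytes(updated))
        tr.clear(FdbRange.startsWith(bufferPrefix))
        updated
      })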



