Why didn't you just do more granular writes and let the different processes lock the file? SQLite is impressively fast; it can handle a lot before you need a full database server.
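
For example, a rough Python sketch (table and columns made up) of what granular writes look like: commit each sample in its own short transaction, so the write lock is only ever held for an instant.

    import sqlite3

    conn = sqlite3.connect("stats.db")
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS stats (ts REAL, value REAL)")

    def record_sample(ts, value):
        # "with conn" commits on success, so each sample is its own
        # tiny transaction and the write lock is released immediately.
        with conn:
            conn.execute("INSERT INTO stats (ts, value) VALUES (?, ?)",
                         (ts, value))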



I don't know how to do that.

Basically there is one process periodically dumping stats into an SQLite DB, and there is an HTTP API that can retrieve said stats. How do I make it so the HTTP API doesn't occasionally fail when trying to read from the DB?


Retry until it gets the lock, and don't hold the lock longer than you need to write to the DB.
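
Something like this (a Python sketch; the stats table and query are made up). Note that the sqlite3 module's timeout argument already makes SQLite wait for the lock before raising, so the loop only has to cover the rare leftover failures:

    import sqlite3
    import time

    def read_stats(db_path, retries=5, delay=0.1):
        # timeout=5.0 makes SQLite wait up to 5 s for a lock before
        # raising OperationalError; the loop retries on top of that.
        for attempt in range(retries):
            try:
                conn = sqlite3.connect(db_path, timeout=5.0)
                try:
                    return conn.execute(
                        "SELECT ts, value FROM stats").fetchall()
                finally:
                    conn.close()
            except sqlite3.OperationalError as e:
                if "locked" not in str(e) or attempt == retries - 1:
                    raise
                time.sleep(delay * (attempt + 1))  # back off a little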

Really though, if you are reading and writing to a DB concurrently and over the network, that sounds like a clear-cut case for something like Postgres.


I don't really like the idea of a retry loop. Why doesn't sqlite have an API to just block until the DB is available?


You asked the question and now you don't like the 'idea' of the answer. Use real numbers instead of feels, and again: if you need real concurrency, use a real database.


"Retry" is not a solution; it's a workaround. Imagine if that's how opening files worked: "Oh yeah, if the file fails to open just retry a few times until it works."

Even if you retry 3 times, there's a small chance you fail all 3 times in a row. WAL journaling is an actual solution.
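
For what it's worth, enabling WAL is a one-liner (Python shown, but the pragma is the same from any driver), and the setting persists in the database file:

    import sqlite3

    conn = sqlite3.connect("stats.db")
    # In WAL mode readers don't block the writer and vice versa;
    # readers just see the last committed state. The journal mode is
    # stored in the database file, so it only needs to be set once.
    conn.execute("PRAGMA journal_mode=WAL")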



