
> On performance: SQLite has a mechanism for canceling queries that take longer than a certain threshold.

Can't it consider the complexity of the query and the actual database (indices, table sizes, etc.) to guess how heavy the request is going to be in advance?
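For concreteness, the cancellation mechanism quoted above can be reached from application code through SQLite's progress-handler hook. A minimal sketch in Python (the `cancel_after` helper and the 0.05 s threshold are illustrative choices, not anything from the article):

```python
import sqlite3
import time

def cancel_after(conn, seconds):
    """Abort any statement on `conn` that runs past `seconds`.
    Hypothetical helper built on the standard progress-handler hook."""
    deadline = time.monotonic() + seconds

    def handler():
        # A nonzero return value makes SQLite abort the running query.
        return 1 if time.monotonic() > deadline else 0

    # Invoke the handler every 1000 virtual-machine instructions.
    conn.set_progress_handler(handler, 1000)

conn = sqlite3.connect(":memory:")
cancel_after(conn, 0.05)
try:
    # Deliberately unbounded recursive query; the handler aborts it.
    conn.execute(
        "WITH RECURSIVE n(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM n) "
        "SELECT count(*) FROM n"
    ).fetchone()
except sqlite3.OperationalError as e:
    print("query canceled:", e)
```

The handler fires on elapsed wall-clock time, so it catches slow queries regardless of why they are slow (I/O, contention, or plain size).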




Sure, but aren't there some things that can't be calculated in advance, like I/O, locked tables, or priority queries, just to name a few?


> sure, but aren't there some things that can't be calculated in advance

I don't mean a precise calculation, just a reasonable guess.

> like i/o

It can be benchmarked and estimated.
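SQLite does expose exactly this kind of advance guess: `EXPLAIN QUERY PLAN` reports, before execution, whether a query will use an index or fall back to a full-table scan. A small sketch (the table and index names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("CREATE INDEX t_v ON t(v)")

# EXPLAIN QUERY PLAN describes the chosen strategy without running
# the query: a SEARCH ... USING INDEX line suggests a cheap lookup,
# while a bare SCAN line warns of a full-table read.
for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE v = 'x'"):
    print(row)
```

This is only a coarse signal (it doesn't model I/O or concurrency), which is presumably why a runtime cutoff is still needed as a backstop.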

> locked tables

Why would tables be locked if we work in read-only mode?


How did the data get there?


The linked page says it's about immutable mode, where INSERT and UPDATE statements raise errors. For me that's a very common use case: I pre-populate a database with a huge data set (some GiBs) in a single batch, then work with the data (data science stuff) in read-only mode. Some time later I add another batch of data (usually much smaller than the initial one), but database writing and reading never happen simultaneously in this scenario. In fact, Datasette seems like a thing I've always wanted, as it will probably let me access my SQLite databases over a network this way.
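The batch-then-read workflow described above maps onto SQLite's URI open modes: `mode=ro` rejects writes but still takes read locks, while `immutable=1` additionally promises the file will never change, so locking can be skipped entirely. A sketch of the read-only half (file path and table name are invented for the example):

```python
import sqlite3, tempfile, os

# Simulate the batch load: create a small database, then close it.
path = os.path.join(tempfile.mkdtemp(), "data.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE batch (x INTEGER)")
rw.execute("INSERT INTO batch VALUES (1)")
rw.commit()
rw.close()

# Reopen read-only via an SQLite URI; reads work, writes are rejected.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(ro.execute("SELECT x FROM batch").fetchone())
try:
    ro.execute("INSERT INTO batch VALUES (2)")
except sqlite3.OperationalError as e:
    print("write rejected:", e)
```

For the truly-never-written phase, `?immutable=1` would be the closer match to Datasette's immutable mode, but `mode=ro` is the safer default while later batches are still possible.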



