> On performance: SQLite has a mechanism for canceling queries that take longer than a certain threshold.
Couldn't it consider the complexity of the query and the actual database (indices, table sizes, etc.) to estimate in advance how heavy the request is going to be?
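
For reference, here's a minimal sketch of how that kind of time budget can be enforced from Python's built-in `sqlite3` module via a progress handler; the limit, table, and query are made up for illustration, and Datasette's actual implementation may differ in its details:

```python
import sqlite3
import time

# Open any SQLite database; ":memory:" keeps the example self-contained.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nums (n INTEGER);
    WITH RECURSIVE seq(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM seq WHERE n < 100000
    )
    INSERT INTO nums SELECT n FROM seq;
""")

TIME_LIMIT_SECONDS = 0.05  # illustrative budget, not a real Datasette default
deadline = time.monotonic() + TIME_LIMIT_SECONDS

def check_deadline():
    # Returning a non-zero value aborts the running query.
    return 1 if time.monotonic() > deadline else 0

# The handler is invoked roughly every N SQLite virtual-machine instructions.
conn.set_progress_handler(check_deadline, 1000)

try:
    # A deliberately expensive cross join that should blow past the budget.
    conn.execute("SELECT count(*) FROM nums a, nums b").fetchone()
except sqlite3.OperationalError as exc:
    print("query cancelled:", exc)  # typically "interrupted"
```

The cutoff is based on elapsed time rather than any up-front cost estimate, which is presumably why the article frames it as canceling queries once they exceed a threshold.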
The linked page says it's about immutable mode, and that INSERTs and UPDATEs raise errors. For me that's a very common use case: I pre-populate a database with a huge data set (some GiBs) in a single batch, then work with the data (data science stuff) in read-only mode. Some time later I add another batch (usually much smaller than the initial one), but writing and reading never happen simultaneously in this scenario. In fact, Datasette looks like something I've always wanted, since it will probably let me access my SQLite databases over a network this way.
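
For what it's worth, a minimal sketch of that batch-then-read workflow using plain `sqlite3` URIs; the file name and schema are made up, and `immutable=1` makes the same "nobody is writing this file while it's open" promise that immutable mode relies on:

```python
import sqlite3

DB_PATH = "analysis.db"  # hypothetical file name

# Batch-load phase: the only time the file is opened for writing.
writer = sqlite3.connect(DB_PATH)
writer.execute("CREATE TABLE IF NOT EXISTS measurements (ts TEXT, value REAL)")
writer.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [("2024-01-01", 1.0), ("2024-01-02", 2.5)],  # stand-in for a multi-GiB batch
)
writer.commit()
writer.close()

# Read-only phase: mode=ro rejects writes, and immutable=1 additionally tells
# SQLite the file will not change while open, so it can skip locking entirely.
reader = sqlite3.connect(f"file:{DB_PATH}?mode=ro&immutable=1", uri=True)
print(reader.execute("SELECT count(*) FROM measurements").fetchone())

try:
    reader.execute("INSERT INTO measurements VALUES ('2024-01-03', 3.0)")
except sqlite3.OperationalError as exc:
    print("rejected:", exc)  # typically "attempt to write a readonly database"
reader.close()
```

Adding the next batch would just repeat the writer phase after the read-only connections (or the serving process) have been shut down, which matches the "never simultaneous" constraint above.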