Wouldn't it just be easier to use MySQL or Postgres in that situation? I mean if you are connecting to a network share, why not use a database over the network instead?
Because SQLite is way simpler to deploy. MySQL and Postgres need to be set up: you have to allow outside connections (while being careful not to open things up too much), create a database, create a user, set a password, grant that user privileges on the database, and configure the clients to use that user and database.
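Roughly the difference, as a sketch; psycopg2 is just one common Postgres driver, and every connection parameter here is a made-up placeholder:

    import sqlite3
    # import psycopg2  # third-party Postgres driver; needs the server-side setup above

    # SQLite: the "database" is just a file -- no server, no accounts
    db = sqlite3.connect("/mnt/share/app.db")

    # Postgres: every one of these values had to be configured server-side first
    # db = psycopg2.connect(
    #     host="db.example.com",
    #     dbname="app",
    #     user="app_user",
    #     password="s3cret",
    # )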
With a network share, you've already configured authentication. SQLite would be easier, and maybe even more secure, since it doesn't add a new network-facing attack surface?
However, SQLite isn't really designed for efficient concurrent use; a writer locks the whole database.
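You can soften that a bit with WAL mode and a busy timeout, as a minimal sketch, but note the SQLite docs say WAL explicitly does not work over a network filesystem:

    import sqlite3

    db = sqlite3.connect("app.db")
    # WAL lets readers proceed while one writer writes -- but only on a
    # local filesystem; WAL relies on shared memory on a single host.
    db.execute("PRAGMA journal_mode=WAL")
    # Wait up to 5 seconds for a competing writer instead of failing immediately
    db.execute("PRAGMA busy_timeout=5000")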
To be honest, you're replacing one set of problems with another. The rationale about configuration doesn't make a lot of sense either; better to learn how to do things right than to do what merely seems easier.
I haven't played with shares for a long time, but if I remember correctly only one node can write to a file at once. It's also questionable whether file read/write performance would be any good for this use case.
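The single-writer behaviour comes down to advisory file locks, which is also what SQLite relies on. A minimal sketch of the mechanism (the file path is hypothetical, and over NFS/SMB these locks are only as reliable as the server's lock handling):

    import fcntl

    # POSIX advisory lock: one exclusive holder at a time.
    with open("/mnt/share/shared.dat", "a+b") as f:  # hypothetical shared file
        fcntl.lockf(f, fcntl.LOCK_EX)  # blocks until no other node holds it
        f.write(b"one node writing at a time\n")
        f.flush()
        fcntl.lockf(f, fcntl.LOCK_UN)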
NFS or Samba servers have their own issues with attack surface.
You could add a queue and execute the queries one at a time. SQLite itself, or at least popular wrappers for it, probably already does something like this? But it will be slower than a DBMS that can run hundreds of queries in parallel.
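Something like this, as a rough sketch of the queue idea; all writes funnel through one thread and one connection, so they're serialized by construction (table name is made up):

    import queue
    import sqlite3
    import threading

    write_q = queue.Queue()

    def writer():
        # The only thread that ever touches the database
        db = sqlite3.connect("app.db")
        db.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
        while True:
            sql, params = write_q.get()
            db.execute(sql, params)
            db.commit()
            write_q.task_done()

    threading.Thread(target=writer, daemon=True).start()

    # Any thread can enqueue; the writer applies queries one at a time
    write_q.put(("INSERT INTO log (msg) VALUES (?)", ("hello",)))
    write_q.join()  # wait until queued writes are applied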
SQLite is in loads of things (Firefox, VSCode, etc.), and if you have NFS home directories you can run into issues. KDE's Akonadi uses MySQL via localhost, though it can't handle you being logged in in more than one place. My view is that a plain JSON or XML file would work a lot better.
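For the JSON-file route, the usual trick is to write a temp file and rename it over the old one, so readers never see a half-written file. A minimal sketch (rename is atomic on POSIX filesystems, though NFS client caching can still bite you; the file name is just an example):

    import json
    import os
    import tempfile

    def save_settings(path, settings):
        # Write to a temp file in the same directory, then atomically
        # replace the target, so a crash mid-write can't corrupt it.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f, indent=2)
        os.replace(tmp, path)

    save_settings("settings.json", {"theme": "dark"})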