The problem with SQL is that most products that implement it differ in subtle (and not so subtle) ways: custom functions, custom types, custom syntax for all sorts of things. SQLite is actually rather limited and lacks a lot of what you'd find in e.g. Postgres or MySQL, both of which tend to implement the same features very differently. Beyond the basics like joins and subselects, ANSI SQL only gets you so far.
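As a contrived example (table and column names are made up), even something as mundane as string concatenation has to be written differently depending on the engine:

    -- SQLite and PostgreSQL: standard || concatenation
    SELECT first_name || ' ' || last_name FROM people;

    -- MySQL: || means logical OR unless PIPES_AS_CONCAT is enabled
    SELECT CONCAT(first_name, ' ', last_name) FROM people;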
I'd argue that decoupling how you distribute data from how you store it and how you are going to query it is probably a good thing.
CSV is indeed not a great format because it lacks structure and types (everything is a string). Flattening complex data into CSV can get ugly quickly. I recently had to deal with some proprietary object database dumped out as CSV, and had great fun trying to reverse engineer their schema from the raw data.
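To make that concrete (a made-up row), nothing in a file like this tells you whether the leading zero matters, whether the third column is a date or a string, or whether "true" is a boolean or just the word:

    id,zip,joined,active
    42,01234,2019-03-01,true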
XML has the same problem unless you dream up some schema that explicitly states the types. JSON has some basic types for strings, booleans, and numbers, which makes it a bit more useful than XML out of the box. But both XML and JSON documents represent trees of information, while SQL databases represent tables of rows. Of course, some databases can store JSON (or even XML) and let you run queries over it. Elasticsearch, for example, does a decent job of guessing schemas for JSON documents: it will figure out things like dates, GeoJSON, numbers, strings, etc.
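A made-up record in both formats shows the difference: a JSON parser can tell that "01234" is a string, 3 is a number, and true is a boolean, while in the XML version every element's content is just text until some external schema (e.g. an XSD) says otherwise.

    {"zip": "01234", "count": 3, "active": true}

    <record>
      <zip>01234</zip>
      <count>3</count>
      <active>true</active>
    </record>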
So, distributing data as SQLite files is not necessarily a horrible idea, but it doesn't really solve the problem of how to move data between different data tools. What's lacking is a standard way to distribute strongly typed tabular data. Most databases can import/export CSV and native inserts for their flavor of SQL, but not much else. Most (open) data gets shipped using some custom schema on top of XML, CSV, JSON, or whatever else seemed fashionable at the time.
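To illustrate (file and table names are made up), dumping the same table from two different tools already gives you two incompatible "standards": SQLite emits CREATE TABLE plus INSERT statements, while pg_dump defaults to COPY blocks, and typically neither loads cleanly into the other without some massaging.

    # SQLite: dump one table as SQL
    sqlite3 mydata.db '.dump people' > people_sqlite.sql

    # PostgreSQL: dump the same table from a Postgres database
    pg_dump --table=people mydb > people_pg.sql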