>I think the bigger issue is that you can use an RDBMS to do NoSQL-like things, such as key-value stores, with the flexibility to structure your data if you need that later. So why not start with a relational database?
I've tried that in the past and failed miserably.
1) values in a key-value table will end up needing to hold nested data structures, such as a JS object/hash, i.e. mykey={...}
2) serializing values to JSON (or some other format) makes it impossible to concurrently update or search subkeys.
3) so you split your complex key-value data (sets, hashes) into several rows holding one subkey per row. Now you have updatable rows, but still no indexable solution, and a serious performance problem.
4) so you create a multi-column, type-segregated table (one column for each DB type) for indexing your key-values and making them searchable. That also requires a companion metadata table so that you know in which column each key or subkey is indexed (see the sketch after this list).
5) say you successfully implement your key-value store on your RDBMS. You still don't have usable joins (your data isn't relational) or a real migration path out of your key-values.
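To make steps 4-5 concrete, here's a minimal Postgres-flavored sketch of that type-segregated table and its companion metadata table; all names here are hypothetical:

    -- One column per DB type, so each subkey lands in exactly one of them.
    CREATE TABLE kv_entries (
        key       text NOT NULL,
        subkey    text NOT NULL,
        val_text  text,
        val_int   bigint,
        val_float double precision,
        val_bool  boolean,
        PRIMARY KEY (key, subkey)
    );

    -- Metadata table: records which column actually holds each subkey's value.
    CREATE TABLE kv_meta (
        key        text NOT NULL,
        subkey     text NOT NULL,
        val_column text NOT NULL,  -- 'val_text', 'val_int', ...
        PRIMARY KEY (key, subkey)
    );

Every read now has to consult kv_meta just to know which column to look at, which is exactly the dead end step 5 describes.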
Trust me, don't just put keys and values in a relational DB. Start with the right tool for the job: either make your schema relational from the beginning or use a proper KV or document store.
1. Postgres has json and jsonb datatypes specifically to be your values.
2. It has built-in query and modification tools that can manage nested objects, arrays, and whatnot.
3. It lets you build indexes to help with all this, including inverted (GIN) indexes for asking something like "is this string in the array inside this object?" (see the sketch after this list).
4. You don't have to specify your schema up front in some horrid way; you can just throw what you want at it, specify later, and get performance numbers that are competitive with, and sometimes exceed, some popular NoSQL stores.
5. You can mix and match freely, with all the niceties of a proper SQL database (transactions, joins, constraints, etc.), as you see fit to evolve.
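As a minimal sketch of points 1-3 (the table name is hypothetical; jsonb, GIN indexes, the @> containment operator, and jsonb_set are all standard Postgres):

    CREATE TABLE events (
        id   bigserial PRIMARY KEY,
        body jsonb NOT NULL
    );

    -- GIN (inverted) index: supports containment queries over the whole document.
    CREATE INDEX events_body_idx ON events USING GIN (body);

    -- "Is this string in the array inside this object?"
    SELECT id FROM events
    WHERE body @> '{"tags": ["urgent"]}';

    -- Modify a nested key in place, no read-modify-write in the application.
    UPDATE events
    SET body = jsonb_set(body, '{status}', '"done"')
    WHERE id = 42;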
So yeah, don't just throw your keys and values into some text table somewhere; instead, use one of the best proper KV stores, which happens to also be one of the best SQL databases, and figure out the schema you need as you go.
Postgres JSON(B) falls into nearly the same category. Do not use Postgres as a key-value store; use a KV store for that.
Postgres JSON is great for storing the original JSON data coming from external APIs and webhooks, such as Stripe payments. But it's not a silver bullet and really not a first-class citizen. SQL query syntax for JSON data is awkward and non-standard, and it requires casting types, which defeats the purpose of going schemaless. I believe the Postgres JSON types solve a different use case: an app based on a relational schema that occasionally needs to store JSON structures.
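To illustrate the casting complaint, a query against a hypothetical payments table holding raw Stripe webhook JSON might look like this (->> returns text, so numbers have to be cast back; Stripe amounts are in cents):

    SELECT (body -> 'data' -> 'object' ->> 'amount')::numeric / 100 AS dollars
    FROM payments
    WHERE body ->> 'type' = 'payment_intent.succeeded';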
I don't recommend basing your app on Postgres JSON from the start. Use fully relational Postgres instead and do it right from the beginning.
The same can be said of MySQL. I'm not comparing it to Postgres, but I've been amazed at how many people don't know that either of these has a powerful JSON type.
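For comparison, a rough MySQL equivalent of the earlier Postgres example (table name hypothetical; the -> path operator and JSON_CONTAINS exist since MySQL 5.7):

    CREATE TABLE events (
        id   BIGINT AUTO_INCREMENT PRIMARY KEY,
        body JSON NOT NULL
    );

    -- "Is this string in the array inside this object?"
    SELECT id FROM events
    WHERE JSON_CONTAINS(body->'$.tags', '"urgent"');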
I did this too. The issue with using an RDBMS as a KV store is that the data you're storing often has to be reduced to its lowest common form, or the database structure gets wildly complicated. So we initially had:
    subject (varchar), key (varchar), value (varchar)
But that gets very messy very quickly. So we added a field to specify the value's type. That worked, but then there was overhead reading and writing the data because of the constant conversion. So we then added the most common types as extra value fields, roughly as sketched below. That was the best option, but the problem is that it gets fiendishly complicated to extract the data. And this was the simplest version, which never made it to production.

In the end, reporting killed the idea: using SQL Server Reporting Services and needing to pivot the data to create any reports became a massive performance issue. Around that time EF Code First was arriving in the .NET world, so we went that route instead. And really, I feel it was the right solution.
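As a rough illustration (all names hypothetical), the evolved table and the kind of pivot that made reporting painful might have looked like this in SQL Server:

    CREATE TABLE kv (
        subject     varchar(100) NOT NULL,
        [key]       varchar(100) NOT NULL,
        value       varchar(255),   -- the original catch-all
        value_int   bigint,         -- later: common types as extra value fields
        value_float float,
        value_date  datetime2,
        PRIMARY KEY (subject, [key])
    );

    -- Reporting meant pivoting rows back into columns, e.g.:
    SELECT subject,
           MAX(CASE WHEN [key] = 'name' THEN value END)     AS name,
           MAX(CASE WHEN [key] = 'age'  THEN value_int END) AS age
    FROM kv
    GROUP BY subject;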