I'll bite. I think with most relational and document databases, my view of the "classical world", you are storing all of your data in tables or documents in a structured way, normalizing the data where possible. You then build indexes on that data so you can pull it out efficiently, optimizing for the most common operations. If a new use case comes up, you add some more tables, add some constraints, and create some new indexes.
I think with Redis, you start out knowing that your use case needs to be fast. You begin with the access patterns and fit those into key/value lookups (or search, with modules). You're typically denormalizing the data, duplicating it or aggregating it from other places, so that you can serve operational data fast. Otherwise, just use an in-memory SQL database.
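To make the contrast concrete, here's a minimal sketch of that access-pattern-first, denormalize-on-write style. It uses a plain dict as a stand-in for Redis (the key names and the order/user fields are hypothetical examples, not from any real schema); with redis-py you'd use the same keys with HSET/RPUSH/INCRBY.

```python
store = {}  # stand-in for Redis; every read below is a single key lookup

def save_order(order_id, user_id, total):
    # Denormalize: write the same fact under every key we'll read by.
    store[f"order:{order_id}"] = {"user": user_id, "total": total}
    store.setdefault(f"user:{user_id}:orders", []).append(order_id)
    # Aggregate at write time so the "total spend" read is O(1),
    # instead of a JOIN + SUM at query time.
    key = f"user:{user_id}:spend"
    store[key] = store.get(key, 0) + total

save_order("o1", "u42", 30)
save_order("o2", "u42", 20)

print(store["user:u42:orders"])  # ['o1', 'o2']
print(store["user:u42:spend"])   # 50
```

The trade-off is the classical one in reverse: writes do more work and data is duplicated, but each operational read is one lookup shaped exactly like the question you're asking.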