It has been pointed out before that json(b) has problems with indexing. IIRC the planner doesn't keep statistics on the contents of jsonb columns, so its selectivity estimates for jsonb operators are fixed constants and therefore very rarely accurate. I'm terribly sorry but I couldn't find a reference with 5 min of searching. I still like postgres over mongo.
I guess the workaround would be an index on a computed column derived from the json data, together with rewriting one's queries to use that computed column. For example, with a json column storing names in various places, a computed column could collect all of them into an array; an index on that column will have proper statistics. A sketch of what I mean is below.
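Something like this, for a hypothetical people table with a jsonb doc column (the table, column names, and JSON paths are invented for the example; generated columns need PostgreSQL 12+):

```sql
-- Hypothetical table: a jsonb document that stores names in several places.
CREATE TABLE people (
    id  bigserial PRIMARY KEY,
    doc jsonb NOT NULL
);

-- Computed (generated) column that collects the names into a plain text array.
-- The JSON paths here are made up; adjust them to the real document shape.
ALTER TABLE people
    ADD COLUMN all_names text[]
    GENERATED ALWAYS AS (
        ARRAY[doc->>'name', doc->'profile'->>'display_name']
    ) STORED;

-- GIN index on the array column; ANALYZE gathers ordinary
-- most-common-element statistics for text[], which the planner can use.
CREATE INDEX people_all_names_idx ON people USING gin (all_names);

-- Queries then have to target the computed column instead of the raw jsonb:
SELECT id FROM people WHERE all_names @> ARRAY['Alice'];
```

The point being that the planner now estimates selectivity from real statistics on the array column instead of falling back to a constant for the jsonb operators.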
Bottom line: if you want your queries to run fast, you have to tell your store what kind of data you have and what kind of queries you will run. Otherwise, there's little the store can do.
A traditional schema with typed columns and constraints is one way to give it that information. With json columns, you may have to do it another way (for now).