On the more general point, I think that the larger the scale of the abstraction, the larger the number of actually-encountered use cases needs to be to justify it. You shouldn't write an ORM because you feel existing ones don't serve your specific needs. You should write SQL, and then replace it with an ORM once it's clear you will benefit from the extra abstraction.
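As a concrete sketch of that ordering (Python with the standard-library sqlite3 module; get_user_by_email is a made-up example, not from any real codebase): keep the raw SQL behind a narrow function boundary, so that an ORM can later replace the internals without touching callers.

    import sqlite3

    # Plain SQL behind a small function boundary. If an ORM later
    # earns its keep, only this body changes; callers stay the same.
    def get_user_by_email(conn, email):
        cur = conn.execute(
            "SELECT id, name, email FROM users WHERE email = ?",
            (email,),
        )
        return cur.fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("alice", "a@example.com"))
    print(get_user_by_email(conn, "a@example.com"))

    # A later ORM-backed version could be, e.g. with SQLAlchemy:
    #     return session.query(User).filter_by(email=email).one_or_none()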
There seems to be a fairly consistent trade-off between the scale of an abstraction, the degree of over-engineering, and the leakiness of the abstraction. So if you're going for a large-scale abstraction, I'd want lots of proof that it's not a leaky one.
I'm kind of bothered by the framing of direct SQL versus ORM. I'd prefer something in between that assists with SQL but doesn't try to completely hide the RDBMS from you: helper APIs. I know many developers "hate" the RDBMS (perhaps because it's unfamiliar), but shifting the burden from potentially tricky RDBMSes to potentially tricky ORMs isn't problem solving, just problem shifting. But maybe the ORM debate is another topic in itself.
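To make "helper APIs" concrete, here's a minimal sketch (again Python with sqlite3; fetch_all and insert_row are hypothetical names, not a real library): thin utilities that cut the boilerplate while leaving the SQL, and therefore the RDBMS, in plain view.

    import sqlite3

    def fetch_all(conn, sql, params=()):
        # Run a query and return rows as plain dicts. The SQL stays
        # visible, so there's nothing to reverse-engineer when a
        # query misbehaves.
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(sql, params)]

    def insert_row(conn, table, values):
        # Build a parameterized INSERT from a column -> value mapping.
        # (Table and column names must come from trusted code, never
        # from user input, since they're interpolated into the SQL.)
        cols = ", ".join(values)
        marks = ", ".join("?" for _ in values)
        conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                     tuple(values.values()))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    insert_row(conn, "users", {"name": "alice"})
    print(fetch_all(conn, "SELECT * FROM users WHERE name = ?", ("alice",)))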
I think I phrased my opinion badly. I'm generally in favor of using ORMs, because they simplify 99% of the stuff you need to do in the database. There will always be stuff that doesn't fit your choice of abstraction well... that's just a fact of life.
I'm mostly railing against writing your own ORM (and other forms of "not invented here" syndrome), which is something I've seen done way, way too many times. You have to reinvent a ton of things you probably don't have much experience with, you'll probably get them wrong in ways you don't anticipate, and you'll miss scaling requirements you don't know about yet. Because of this, I think there should be a heavy burden of experience on writing "core" libraries.
ORMs "simplify" things until something goes wrong, and then they can be a pain to troubleshoot. A "light" set of utilities can also simplify a good portion of the database interaction, without giving you a false sense of power that lets you tie yourself into a knot that bites you 3 years down the road. But it depends on the shop: if yours always has a couple of good ORM experts around, they can deal with and/or avoid such snafus as they happen. It's hard to use ORMs "casually" and use them right.
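For what "until something goes wrong" tends to look like in practice, the classic case is the N+1 query problem. A minimal, self-contained sketch, assuming SQLAlchemy 2.x (User and Post are invented models, not from this discussion):

    from sqlalchemy import ForeignKey, create_engine
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                mapped_column, relationship, selectinload)

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]
        posts: Mapped[list["Post"]] = relationship()

    class Post(Base):
        __tablename__ = "posts"
        id: Mapped[int] = mapped_column(primary_key=True)
        user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))

    engine = create_engine("sqlite://", echo=True)  # echo=True logs every query
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([User(name="alice", posts=[Post(), Post()]),
                         User(name="bob", posts=[Post()])])
        session.commit()

        # Innocent-looking loop: 1 query for the users, then 1 more per
        # user as each .posts is lazily loaded -- the N+1 pattern.
        for user in session.query(User).all():
            print(user.name, len(user.posts))

        # The fix means learning the ORM's loader options, i.e. you end
        # up reasoning about the generated SQL anyway:
        for user in session.query(User).options(selectinload(User.posts)).all():
            print(user.name, len(user.posts))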
I suspect our experiences differ, or maybe it's our philosophies. I'm of the opinion that if your self-tied knot bites you 3 years down the road, that's better than the status quo: it means the thing survived 3 years. :P
That's situational, of course. My current projects are tech-giant things, so some choices actually do have a >3-year window, but then there's a tech-giant-sized stack of use cases to design against.
When my projects were startup problems, though, I'd have been happy if a component still worked well after 3 years of growing and pivoting. In that kind of environment, getting more for free is by itself valuable for your ability to pivot quickly.
The environment does indeed matter. With startups, the short term is critical: if you don't grow fast enough, there will be NO "3 years from now" in which to fix things. A big bank, on the other hand, will care about glitches 3 years from now.
I'm not sure what you mean in the middle paragraph, per "stack of use cases to design against". Do you mean that getting it done "now" overrides problems 3 years from now? Your org making that trade-off is fine, as long as it's clear to everyone and perhaps documented.