For Mongo I write a data layer that has methods like GetFoo() or UpdateFoo(), where the methods have implicit knowledge of the key structure (and use query model objects for validation, etc.)
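A minimal sketch of the pattern (the names `FooRepository` and the compound `tenant:foo` key scheme are just illustrative, and the in-memory stub stands in for a real pymongo collection):

```python
class FooRepository:
    """Data-layer methods with implicit knowledge of key structure.

    Callers pass domain values; only this class knows that documents
    are keyed by a compound "tenant:foo" _id.
    """

    def __init__(self, collection):
        self._coll = collection  # any pymongo-like collection

    @staticmethod
    def _key(tenant_id, foo_id):
        # Key structure lives in exactly one place.
        return f"{tenant_id}:{foo_id}"

    def get_foo(self, tenant_id, foo_id):
        return self._coll.find_one({"_id": self._key(tenant_id, foo_id)})

    def update_foo(self, tenant_id, foo_id, fields):
        self._coll.update_one(
            {"_id": self._key(tenant_id, foo_id)}, {"$set": fields}
        )


class FakeCollection:
    """In-memory stand-in for a pymongo collection (sketch only)."""

    def __init__(self):
        self._docs = {}

    def find_one(self, query):
        return self._docs.get(query["_id"])

    def update_one(self, query, update):
        doc = self._docs.setdefault(query["_id"], {"_id": query["_id"]})
        doc.update(update["$set"])


repo = FooRepository(FakeCollection())
repo.update_foo("acme", 42, {"name": "widget"})
print(repo.get_foo("acme", 42))  # {'_id': 'acme:42', 'name': 'widget'}
```

The point is that callers never build queries or see `_id` layout; if the key scheme changes, only `_key` changes.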
For SQL I would do something similar but use (for example) SQLAlchemy Core (not ORM) to do DB-agnostic SQL generation. That would essentially give me a SQL API somewhat similar to pymongo.
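As a rough sketch of what that looks like (assuming SQLAlchemy 1.4+; the `foos` table and its columns are made up): Core lets you define the table once and compose queries as Python expressions, which it compiles to the dialect of whatever engine you bind.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select

metadata = MetaData()

# Table defined once; queries are composed as Python expressions.
foos = Table(
    "foos",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# No ORM, no sessions: just an expression that compiles to SQL.
stmt = select(foos.c.name).where(foos.c.id == 1)
print(stmt)
# Compiles to something like:
#   SELECT foos.name FROM foos WHERE foos.id = :id_1
```

Wrapping statements like this in data-layer methods gives you the pymongo-ish feel: callers get domain-level calls, the layer owns the SQL.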
For some things ORMs get in the way, but avoiding them by writing your own data layer sounds like a pretty big time-suck.
Being able to write a single line of SQLAlchemy to build relationships into your models seems way simpler than implementing relationships in a custom data layer.
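For example, with the declarative ORM (assuming SQLAlchemy 1.4+; the models are hypothetical), a single `relationship()` line wires up both sides of a parent/child association that a hand-rolled data layer would have to manage itself:

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Parent(Base):
    __tablename__ = "parents"
    id = Column(Integer, primary_key=True)
    # The single line that builds the relationship into the model:
    children = relationship("Child", back_populates="parent")


class Child(Base):
    __tablename__ = "children"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parents.id"))
    parent = relationship("Parent", back_populates="children")


p = Parent()
c = Child()
p.children.append(c)   # SQLAlchemy keeps both sides in sync
print(c.parent is p)   # True
```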
I don't deny that it's more work. The question is whether it's worth the investment. I think for large, complex codebases (not little one-off apps) the gains in performance and long-term maintainability are worth it. To me, knowing the exact performance implications of every query, and not wasting brain cycles on questions like lazy vs. eager loading, outweighs the convenience an ORM provides.