But doesn't that essentially mean that you've simply shifted the problems of fetching data (overfetching, N+1 problems and resource expansion) a little bit to the right, without solving the actual issue?
It would technically be faster because of lower latency between services in the same data center and sending the final results to the user in one go, but at the same time if your DB is hit 50 times to service that request, you still have a problem.
Essentially you just give the client the ability to run dynamic queries (whose complexity, in the form of resolvers and their logic, you now have to manage), without actually improving anything else, at the cost of noticeably increased complexity and yet another abstraction layer.
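To make the N+1 part concrete, here's a minimal sketch in TypeScript (the types, the in-memory "tables" and the fetch helpers are all hypothetical stand-ins, not any real schema) of a naive resolver that hits the DB once per post author, next to the usual mitigation of batching with a DataLoader. Note that batching doesn't remove the complexity, it just relocates it into the resolver layer:

```typescript
import DataLoader from "dataloader";

// Hypothetical in-memory stand-ins for a real database.
interface User { id: string; name: string; }
interface Post { id: string; authorId: string; }

const usersTable = new Map<string, User>([
  ["u1", { id: "u1", name: "Ada" }],
  ["u2", { id: "u2", name: "Linus" }],
]);

// One query per user id: resolving 50 posts means 50 of these calls.
async function fetchUserById(id: string): Promise<User | undefined> {
  return usersTable.get(id);
}

// One query for a whole batch of ids.
async function fetchUsersByIds(ids: readonly string[]): Promise<User[]> {
  return ids.map((id) => usersTable.get(id)!);
}

// Naive resolver: the N+1 pattern. Each Post.author field resolution
// fires its own DB call, so a 50-post query produces 50 user lookups.
const naiveResolvers = {
  Post: {
    author: (post: Post) => fetchUserById(post.authorId),
  },
};

// The usual mitigation: a per-request DataLoader coalesces all load()
// calls made in the same tick into a single batched fetch.
const userLoader = new DataLoader(async (ids: readonly string[]) => {
  const users = await fetchUsersByIds(ids);
  const byId = new Map(users.map((u) => [u.id, u]));
  // DataLoader requires results in the same order as the requested keys.
  return ids.map((id) => byId.get(id) ?? new Error(`no user ${id}`));
});

const batchedResolvers = {
  Post: {
    author: (post: Post) => userLoader.load(post.authorId),
  },
};
```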
That tradeoff seems like a tough sell to me, much like an SQL engine that's not tightly coupled with a solution to actually store/retrieve the data would be. Then again, there are solutions out there that attempt to do something like that already, so who knows.
> Which is a pretty big deal when you have practically the whole planet as users, many of whom are found where low latency is a luxury.
This is an excellent point!
Different scales will need vastly different approaches, and the concerns will also differ. Like the anecdotes about a "Google scale problem", where serving a few GB of data won't faze anyone, whereas some companies out there don't even have prod DBs that size.
Another thing to consider is that while a query may be dynamic during the discovery/development phase, once it is solidified in the application for deployment it becomes static and can be optimized as such.
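As a hedged sketch of that idea (roughly the persisted-queries pattern; registerQuery/lookupQuery are made-up names, not any particular framework's API): once the query string is frozen into the shipped app, the client can send only a hash of it, and the server can parse, validate and plan the known query once up front instead of handling arbitrary query text per request:

```typescript
import { createHash } from "node:crypto";

// At build/deploy time: each now-static query the client ships with is
// registered under its hash, so the server can pre-parse and pre-plan it.
const persistedQueries = new Map<string, string>();

function registerQuery(query: string): string {
  const hash = createHash("sha256").update(query).digest("hex");
  persistedQueries.set(hash, query);
  return hash; // the client sends only this hash at runtime
}

// At request time: anything that isn't a known, pre-optimized query
// can simply be rejected.
function lookupQuery(hash: string): string {
  const query = persistedQueries.get(hash);
  if (!query) throw new Error("unknown persisted query");
  return query;
}

// Usage sketch:
const hash = registerQuery("{ posts { title author { name } } }");
console.log(lookupQuery(hash)); // the server-side, pre-planned query text
```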
If you are a small team of developers, that is no doubt a lot of unnecessary indirection. When you are small you can talk to each other and figure out how to create a service that returns the optimized results right away. At Facebook scale, if everyone tried to talk to each other there would be no time left in the day for anything else.