Hey, thanks for your reply. I think you guys have done an amazing job creating a very powerful GraphQL client. However, to me "smart" GraphQL clients don't make much sense. My approach with WunderGraph is the following: you write down all your Operations using GraphiQL. We automatically persist them on the server (WunderNode) and generate a "dumb" typesafe client. This client feels like GraphQL and behaves like GraphQL but doesn't use GraphQL at all. It's just RPC. This makes using GraphQL more performant and more secure at the same time. Additionally, it's a lot less code and a smaller client because those RPCs are a lot simpler.
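To make that concrete, here's a rough sketch of what such a generated "dumb" client could look like; the operation name, types, and endpoint path are made up for illustration, not WunderGraph's actual API:

    // Operation written once in GraphiQL and persisted on the WunderNode:
    //   query UserDashboard($userId: ID!) { user(id: $userId) { id name email } }

    interface UserDashboardInput {
      userId: string;
    }

    interface UserDashboardResponse {
      user: { id: string; name: string; email: string };
    }

    // The generated client is plain RPC: it knows only the operation's name,
    // input type, and response type. No GraphQL document ships to the browser.
    async function userDashboard(
      input: UserDashboardInput
    ): Promise<UserDashboardResponse> {
      const res = await fetch("/operations/UserDashboard", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(input),
      });
      return res.json();
    }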
But I don't think it addresses why I'd want a "smart" GraphQL client: normalized caching on the client.
Say I have a dashboard where multiple panels on a given page make their own requests, since the panels are shared between many pages. But they share some objects. If I get updated data in a request from one panel, I'd like to see that update in all panels, without triggering more requests.
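To sketch what I mean (the shapes and cache keys here are made up, not any particular client's internals):

    // Panel A runs: { project(id: "p1") { id name owner { id name } } }
    // Panel B runs: { tasks(assignee: "u1") { id title assignee { id name } } }
    // Both responses contain the same User "u1"; a normalized cache stores it once:
    const cache = {
      "User:u1": { id: "u1", name: "Ada" },
      "Project:p1": { id: "p1", name: "Rollout", owner: { __ref: "User:u1" } },
      "Task:t1": { id: "t1", title: "Ship it", assignee: { __ref: "User:u1" } },
    };
    // When a response from Panel B carries an updated User "u1", writing it to
    // "User:u1" updates Panel A's view too; no extra network request needed.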
As a side note, a magic layer that makes each of those components combine their requests into one would actually hurt performance, since it's better to load the requests in parallel. And manually merging them into one would be quite a chore.
Client-side caching with a normalized cache implementation is very hard to get right. I see why you would want that feature, and if it were simple to implement I'd always want to use it. However, I think we can get away with a solution that is a lot simpler than normalized caching. With persisted queries we can apply the "stale while revalidate" pattern: we invalidate each individual page and re-fetch it when something updates. This is some overhead, but the user experience is still very good.

Normalized caching in the client can get super hairy with nested data. In addition, it adds a lot of business logic to the client, which makes it hard to understand the actual source of truth. From a mental-model perspective it's a lot simpler if the server dictates the state and the client doesn't override it. If you allow the client to have a normalized cache, the source of truth is shared between client and server. This can lead to bugs and generally makes the code more complicated than it needs to be. Is it really that bad to re-fetch a table? I guess most of the time it's not. I've written a blog post on the topic if you want to explore it further: https://wundergraph.com/blog/2020/09/11/the-case-against-nor...
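To make the pattern concrete, a minimal stale-while-revalidate sketch against persisted queries (the endpoint path and names are illustrative):

    // Serve stale data immediately, then revalidate in the background.
    const memoryCache = new Map<string, unknown>();

    async function queryWithSWR<T>(
      operationId: string,
      onData: (data: T, stale: boolean) => void
    ): Promise<void> {
      const cached = memoryCache.get(operationId);
      if (cached !== undefined) onData(cached as T, true); // stale, shown instantly

      const res = await fetch(`/operations/${operationId}`);
      const fresh = (await res.json()) as T;
      memoryCache.set(operationId, fresh);
      onData(fresh, false); // fresh data replaces the stale view
    }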
> Client side caching with a normalized cache implementation is very hard to get right
Absolutely true! When I worked on this at $prevCo, it was tricky and sometimes caused bugs, unexpected behavior, and confused colleagues.
I will say that a proper implementation of a normalized cache on the client must have an ~inherent (thus generated) understanding of the object graph. It also must provide simple control of "whether to cache" at each request. Most of the problems we experienced were a result of the first constraint not being fully satisfied.
My impression is that Apollo does a good job on both of these but I haven't used it so I can't say.
I'll also note that the approach of "when one component makes an update, tell all other components on the page to refetch" sounds like a recipe for problems too – excess server/db load, unrelated data changing in front of users' eyes (and weird hacks to prevent this), etc.
Of course, with the WunderGraph architecture, it sounds like the answer to these questions would simply be to load a given table only once per page – which means no more defining queries on the "panel" components in the dashboard, for example.
All tricky tradeoffs! The right answer depends on what you're building. The WunderGraph approach sounds pretty cool for a lot of cases!
For many use cases, adding an avoidable server round-trip between a user interaction and a view update is an absolute non-starter. Milliseconds matter.
Does it lead to greater complexity somewhere, and all the issues around making that complexity bulletproof? Sure. But the user experience is so viscerally different that some will demand it. I think it’s admirable to work on getting that complexity correct and properly abstracted so that it can be re-used easily.
You can avoid this problem by using ETags, the stale-while-revalidate pattern, and prefetching. This keeps the architecture simple without any major drawbacks.
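Roughly, the ETag part looks like this; a hedged sketch, not tied to any specific server:

    // Re-fetching is cheap when nothing changed: the server answers 304
    // with an empty body and we reuse the cached response.
    const etagCache = new Map<string, { etag: string; body: unknown }>();

    async function fetchWithETag(url: string): Promise<unknown> {
      const entry = etagCache.get(url);
      const res = await fetch(url, {
        headers: entry ? { "If-None-Match": entry.etag } : {},
      });
      if (res.status === 304 && entry) return entry.body; // unchanged
      const body = await res.json();
      const etag = res.headers.get("ETag");
      if (etag) etagCache.set(url, { etag, body });
      return body;
    }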
Aside: why is there not an RSS feed for the WunderGraph blog?
I think Jens Neuse is making two important observations about GraphQL:
1. GraphQL's single URL/endpoint [1] is possibly an anti-pattern
2. ETags are important for Cache-Control and Concurrency-Control on REST endpoints
The concept of prepared statements is useful for my SQL-centric brain. WunderGraph effectively creates a REST endpoint for each prepared statement (GraphQL DML). Like prepared statements in SQL, WunderGraph uses query metadata to determine the types of input parameters and the shape of the JSON response.
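Sketched out (the endpoint path and shapes are my own invention, not WunderGraph's actual URL scheme):

    // The persisted operation:
    //   query OrdersByStatus($status: OrderStatus!) {
    //     orders(status: $status) { id total }
    //   }
    // becomes, in effect, a parameterized REST endpoint. The "statement" stays
    // on the server; only the parameters travel over the wire.
    async function loadShippedOrders() {
      const res = await fetch("/operations/OrdersByStatus?status=SHIPPED");
      const { orders } = await res.json(); // shaped like { id: string; total: number }[]
      return orders;
    }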
Kyle Schrade makes an important point about canonical GraphQL queries: response payloads can be reduced by filtering JSON fields, similar to SQL projection (i.e. the columns specified in the SELECT clause). It seems that WunderGraph could support both approaches by allowing an optional GraphQL query on each REST endpoint to filter the endpoint-specific JSON response.
I don't see a problem with allowing a generic GraphQL handler. It's just that I don't like the approach of allowing arbitrary queries from clients you cannot control. If this use case has a lot of demand, I don't see why I wouldn't support it. I'd just rather implement such a seamless developer experience for code generation that you wouldn't want to skip it and lose the benefits.
I think we are on the same page. From my perspective, arbitrary queries are a vector for a denial-of-service event (both intentional and accidental). This has long been one of the use cases for stored procedures in SQL: restrict the public interface to guard against expensive queries (large scans and sorts). Faceted Search [1] may be a counter-example, but I suspect that these interfaces are implemented at least partially with full-text search indexes rather than purely dynamic GraphQL/SQL.
It might be a useful exercise to prototype an online shopping site using WunderGraph.
That sounds interesting! But isn't this like persisted queries (the Relay kind, not the automatic kind) without the benefit of prototyping your queries as a front-end dev as you're working on the front end?
I’d say that’s completely fair still, just wondering. I’d also say I understand the carefulness and stance on “smart clients,” i.e. normalized caching, which is why this isn’t a default in urql, but without it I think the discussion here is much more nuanced.
It's much easier, so to speak, to make the argument for a smarter client and the Apollo ecosystem than for the rest. Anyway, I like your approach with WunderGraph, so I'll definitely check it out!
I was asking myself an important question: when you write a query, what activity are you actually involved in? You're trying to understand the API so you can query it. What's the easiest way to understand an API? Read the documentation? Where is the documentation? It's the schema, hence GraphiQL/Playground. So why would you want to switch back and forth between documentation and code when you want to understand an API?

On the other hand, if you already use GraphiQL in your workflow, what does that look like? You write a query in GraphiQL, then copy-paste it into your code. If you want to add something else, you go back to GraphiQL, search for another field, and copy-paste again. Compare that to WunderGraph: you keep going back to GraphiQL and extend your queries. You hit save and the code generator re-generates the client. You don't even have to change the code if you just extended a query; the function call in the frontend simply returns more data. I wrote a feature page about this: https://wundergraph.com/features/generated_clients I'd really appreciate your feedback on it!
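To illustrate the "extend the query, keep the code" point (all names here are hypothetical):

    // Suppose the persisted query originally selected { id name } and you
    // add `email` in GraphiQL. After regeneration the response type grows:
    interface UserQueryResponse {
      user: { id: string; name: string; email: string }; // email is newly added
    }

    async function userQuery(): Promise<UserQueryResponse> {
      const res = await fetch("/operations/UserQuery");
      return res.json();
    }

    // The call site is unchanged; it simply receives more data now:
    async function render() {
      const { user } = await userQuery();
      console.log(user.email); // available without touching the client code
    }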
It all seems very interesting; I may try experimenting with putting it in front of my current prod GraphQL schema and making a few queries, once I get the auth stuff figured out. One question though: is any of this going to be open source? The on-prem-first focus you have is certainly a selling point for me, as I already run my entire backend in Amazon's ECS, so adding another service for WunderGraph would be very simple. However, I'm always wary of using non-open-source software that I can't fork and patch, as I've had to do that many a time due to not being able to wait for patches to be upstreamed.
Regardless, I think the points you make in your blog posts are spot on, and I'm looking forward to watching this project evolve.
We'll open source all of it except the control plane and a component we're currently working on which lets you share, stitch, and combine APIs of any type across teams and organizations. All the other parts will be open source: the engine, the WunderNode, the code generator. We don't want to be locked into a vendor ourselves, and you can always opt out of our proprietary services. The core functionality described above will always work offline without using any of our cloud services. We will offer a dirt-cheap cloud service where we run WunderNodes on the edge for you, but if, for any reason, you don't want to use it, you're free to host your own nodes.

I'd love it if you could contact me so we can chat about your use case. I'd really like to get your take and build out the next steps as close to user expectations as possible. I don't want to build something that doesn't work for the community.
What I can't quite glean from the docs is how you can do row-based security, i.e. authZ on user ownership of a row when you're trying to filter by things other than the ID.
Another thing is mutations - does WunderGraph support mutations at all yet? Security for those is even more important, as you might want to restrict which entities you can attach to the entity you're creating, etc.
I guess the root of my question is: how much business logic can you achieve with WunderGraph itself? It's probably not something that's necessary if I really think about it. If it just handles the authN and then passes tokens with claims and user IDs to the data sources, Hasura/Postgraphile et al. can handle the row-specific authZ and business logic, and WunderGraph can just be the BFF for each app client. I'd still definitely use it in that setup, as the generated clients and federation subscriptions would be a marked improvement over Apollo for me.
WunderGraph can inject variables or claims into a query. If you want to implement ownership-based authorization, e.g. with Hasura, Postgraphile, Fauna, Dgraph, etc., the value used to determine ownership needs to be part of the schema, e.g. an owner field on a type or a permission table/type. Then you supply an owner ID from the claim and that's it. This works because you don't allow this value to be submitted by the client; it always gets injected from a claim in the JWT.

This gives you a big advantage over using one of the auth implementations from said vendors, such as row-level security: you decouple auth from the storage. You can always move to another database and are not stuck with a specific auth implementation. You could also delegate auth to a completely different service like Open Policy Agent. If you don't want to use WunderGraph anymore, you can re-implement the logic in a backend for frontend. This way you evade vendor lock-in for both the database and middleware layers.
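A rough sketch of the injection on the server side (the operation, claim names, and executor are illustrative):

    // The client can never supply `ownerId`; it always comes from the verified JWT.
    interface Claims {
      sub: string; // user ID claim
    }

    // Persisted operation, defined once:
    //   query MyDocuments($ownerId: ID!) {
    //     documents(where: { ownerId: { _eq: $ownerId } }) { id title }
    //   }
    async function executeMyDocuments(claims: Claims) {
      const variables = { ownerId: claims.sub }; // injected, not client-supplied
      return executePersistedQuery("MyDocuments", variables);
    }

    // Stand-in for the engine's executor.
    async function executePersistedQuery(id: string, variables: object) {
      const res = await fetch(`/internal/operations/${id}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ variables }),
      });
      return res.json();
    }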
Mutations are fully supported. When generating clients, all we do is treat mutations as POST requests, and queries and subscriptions as GET HTTP/2 streams, falling back to HTTP/1.1 chunked encoding.
WunderGraph doesn't want to contain business logic. We are the front door, making everything secure and establishing a stable contract between client and server. We mediate between protocols and we map responses so that every party gets data in the format and shape they expect. Other than that, if you want to add custom logic, just run a lambda with any of the supported protocols, e.g. GraphQL, REST, and in the future gRPC, SOAP, Kafka, RabbitMQ, etc., and we do the mediation. But as we're the middleware layer, I'd try not to put business logic into it.
That said I'd love to get in touch and discuss how WG can add value for you.
That does sound very interesting. I believe the issue lies in the fact that this is a workflow-based “sales pitch.” What I mean is that this is a difference that doesn't always apply, depending on what tools you use (client dev tools, type generation / hints, etc.).
But what it does do is constrain. Now, constraints are great. They’re always a great tool to introduce new innovations. What I’m ultimately thinking is, how much do you bring to the table compared to persisted queries and tools like GraphQL Code Generator and the added flexibility that comes with those tools?
First, with this approach you're able to add authentication rules to operations, not just the schema. That is, you can inject claims from the auth JWT into variables. This gives you a lot more flexibility than schema directives or a resolver middleware. This feature is unique to WunderGraph.
Next, we're able to execute persisted queries on the edge, using ETags for low latency.
WunderGraph adds the capability to use @stream & @defer on top of any existing GraphQL or REST API. You don't have to change anything on your existing GraphQL server. This works especially well with Apollo Federation; WunderGraph is a replacement for Apollo Gateway. We support federation with subscriptions, @defer, and @stream, another feature unique to WunderGraph. The generated code gives you simple-to-use hooks (in the case of React) to fetch data or streams.
Finally, the generated code is authentication-aware. WunderGraph has its own OIDC server. Generated clients know if authentication is required for a specific persisted query, so a query will wait until the user authenticates and then fire off.
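Conceptually it works like this (a sketch with invented names, not our actual generated code):

    // Whether an operation requires auth is baked in at generation time.
    const operationRequiresAuth: Record<string, boolean> = {
      PublicFeed: false,
      MyOrders: true, // marked as auth-required by the generator
    };

    async function call(operationId: string, waitForLogin: () => Promise<void>) {
      if (operationRequiresAuth[operationId]) {
        await waitForLogin(); // the query waits until the user has authenticated
      }
      const res = await fetch(`/operations/${operationId}`);
      return res.json();
    }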
I think this should be enough. I don't want to get too much into the details as there are a lot more benefits.
I didn't know WunderGraph, but this sounds similar to OneGraph [1]: you write your GraphQL query, identify its inputs if needed, then persist the query (once) on the server. This returns a unique query ID that can be used to execute that query server-side. In OneGraph, you can use plain HTTP for that; no need for a GraphQL client library. You can use any HTTP client to trigger a POST request with the persisted query ID and its input params in the request body. This way it seems a bit easier and simpler than your approach with RPC in WunderGraph. I need to read your docs to have a full picture though. ;)
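For reference, the flow looks roughly like this (the endpoint and field names are illustrative, based on my reading of the pattern described above):

    // Any HTTP client can execute a persisted query by its ID.
    async function runPersistedQuery() {
      const res = await fetch("https://example.com/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          doc_id: "a1b2c3d4", // ID returned when the query was persisted
          variables: { userId: "u1" },
        }),
      });
      const { data } = await res.json();
      return data;
    }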