
Client-side caching with a normalized cache implementation is very hard to get right. I see why you'd want that feature, and if it were simple to implement I'd always want to use it. However, I think we can get away with a solution that is a lot simpler than normalized caching.

With persisted queries we can apply the "stale while revalidate" pattern. This means we invalidate each individual page and re-fetch it whenever something might have updated. That's some overhead, but the user experience is still very good.

Normalized caching in the client can get super hairy with nested data. It also adds a lot of business logic to the client, which makes it hard to understand the actual source of truth. The mental model is a lot simpler if the server dictates the state and the client doesn't override it. If you allow the client to have a normalized cache, the source of truth is shared between client and server, which can lead to bugs and generally makes the code more complicated than it needs to be.

Is it really that bad to re-fetch a table? I guess most of the time it's not. I've written a blog post on the topic if you want to dig into it further: https://wundergraph.com/blog/2020/09/11/the-case-against-nor...
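
A rough sketch of what stale-while-revalidate looks like on the client, assuming an in-memory cache and a hypothetical fetchPersistedQuery helper (the names and endpoint shape are illustrative, not WunderGraph's actual API):

    // Minimal stale-while-revalidate for persisted queries (sketch).
    type CacheEntry = { data: unknown; fetchedAt: number };

    const cache = new Map<string, CacheEntry>();
    const MAX_AGE_MS = 5_000; // treat entries as stale after 5 seconds

    async function fetchPersistedQuery(
      operationId: string,
      variables: object,
    ): Promise<unknown> {
      // Persisted queries are referenced by id, so a plain GET works
      // and can benefit from standard HTTP caching along the way.
      const params = new URLSearchParams({ variables: JSON.stringify(variables) });
      const res = await fetch(`/operations/${operationId}?${params}`);
      return res.json();
    }

    async function queryWithSWR(
      operationId: string,
      variables: object,
      onData: (data: unknown) => void,
    ): Promise<void> {
      const key = operationId + JSON.stringify(variables);
      const entry = cache.get(key);

      // 1. Serve whatever we have immediately, even if stale.
      if (entry) onData(entry.data);

      // 2. Revalidate in the background when missing or stale.
      if (!entry || Date.now() - entry.fetchedAt > MAX_AGE_MS) {
        const data = await fetchPersistedQuery(operationId, variables);
        cache.set(key, { data, fetchedAt: Date.now() });
        onData(data); // push the fresh result to the view
      }
    }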



> Client side caching with a normalized cache implementation is very hard to get right

Absolutely true! When I worked on this at $prevCo, it was tricky and sometimes caused bugs, unexpected behavior, and confused colleagues.

I will say that a proper implementation of a normalized cache on the client must have an ~inherent (and therefore generated) understanding of the object graph. It must also provide simple, per-request control over whether to cache. Most of the problems we experienced came from the first constraint not being fully satisfied.

My impression is that Apollo does a good job on both of these, but I haven't used it, so I can't say for sure.
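
To make "understanding of the object graph" concrete: a normalized cache flattens nested responses into a map of entities keyed by something like __typename:id, so every query that touches the same object reads and writes one record. A bare-bones sketch (illustrative only, not Apollo's actual implementation):

    type Entity = { __typename: string; id: string; [field: string]: unknown };

    // Flat entity store: "User:1" -> { id: "1", name: ..., ... }
    const entities = new Map<string, Record<string, unknown>>();

    function isEntity(value: unknown): value is Entity {
      return typeof value === "object" && value !== null &&
        "__typename" in value && "id" in value;
    }

    // Recursively replace nested entities with refs and store them flat.
    function normalize(value: unknown): unknown {
      if (Array.isArray(value)) return value.map(normalize);
      if (isEntity(value)) {
        const key = `${value.__typename}:${value.id}`;
        const stored: Record<string, unknown> = entities.get(key) ?? {};
        for (const [field, fieldValue] of Object.entries(value)) {
          stored[field] = normalize(fieldValue);
        }
        entities.set(key, stored);
        return { __ref: key };
      }
      return value;
    }

The catch: turning refs back into query results on read requires the cache to know the schema, which is why that understanding has to be generated rather than hand-maintained.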

I'll also note that the approach of "when one component makes an update, tell all other components on the page to refetch" sounds like a recipe for problems too – excess server/db load, unrelated data changing in front of users' eyes (and weird hacks to prevent this), etc.
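
For context, the pattern being criticized looks roughly like this (hypothetical event-bus sketch):

    // Every mutation tells every subscribed component to refetch,
    // whether or not its data is related to the change.
    type Refetch = () => Promise<void>;
    const subscribers = new Set<Refetch>();

    function onAnyMutation(refetch: Refetch): () => void {
      subscribers.add(refetch);
      return () => subscribers.delete(refetch); // unsubscribe
    }

    async function broadcastMutation(): Promise<void> {
      // N subscribed components means N extra queries per user action,
      // and data may visibly change under components the user never touched.
      await Promise.all([...subscribers].map((refetch) => refetch()));
    }

Contrast with a normalized cache, where only the components reading the touched entities re-render.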

Of course, with the WunderGraph architecture, it sounds like the answer to these questions would simply be to load a given table only once per page – which means no more defining queries on the "panel" components in the dashboard, for example.

All tricky tradeoffs! The right answer depends on what you're building. The WunderGraph approach sounds pretty cool for a lot of cases!


For many use cases, adding an avoidable server round-trip between a user interaction and a view update is an absolute non-starter. Milliseconds matter.

Does it lead to greater complexity somewhere, and all the issues around making that complexity bulletproof? Sure. But the user experience is so viscerally different that some will demand it. I think it’s admirable to work on getting that complexity correct and properly abstracted so that it can be re-used easily.
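
To make the latency point concrete: with a client-side cache, an update can be applied optimistically and reflected in every view in the same frame, with the server round-trip happening in the background. A sketch, where cacheWrite is a hypothetical helper (not a real library API) that patches an entity and returns a rollback function:

    // Hypothetical cache helper: patch an entity, get back a rollback.
    declare function cacheWrite(
      key: string,
      patch: Record<string, unknown>,
    ): () => void;

    async function renameItem(id: string, name: string): Promise<void> {
      const rollback = cacheWrite(`Item:${id}`, { name }); // views update now

      try {
        await fetch(`/api/items/${id}`, {
          method: "PATCH",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ name }),
        });
      } catch (err) {
        rollback(); // server rejected the write: restore the previous state
        throw err;
      }
    }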


You can avoid this problem by using ETags, the stale-while-revalidate pattern, and prefetching. This keeps the architecture simple without any major drawbacks.
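
A sketch of the ETag half over plain HTTP, using standard If-None-Match semantics (the cache shape is illustrative):

    type Cached = { etag: string; body: unknown };
    const httpCache = new Map<string, Cached>();

    async function fetchWithEtag(url: string): Promise<unknown> {
      const cached = httpCache.get(url);
      const headers: Record<string, string> = {};
      if (cached) headers["If-None-Match"] = cached.etag;

      const res = await fetch(url, { headers });

      // 304 Not Modified: the cached copy is still valid, no body sent.
      if (res.status === 304 && cached) return cached.body;

      const body = await res.json();
      const etag = res.headers.get("ETag");
      if (etag) httpCache.set(url, { etag, body });
      return body;
    }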


Not really; implementing a GraphQL cache is a day or two of work.



