The GitHub page may not be of much interest unless you're trying to get started, so let me say a couple of words about our subscription implementation that's in the 1.4 release candidate.
The idea with GraphQL subscriptions is that a client can submit a document that is run in response to an event on the server, which pushes the result of that document to the client. So if you had a UI showing new orders, you might have a subscription that looked like:
subscription {
  orderPlaced {
    id
    customer { id tableNumber }
  }
}
When an order is placed you can publish an event to this subscription field either manually or by setting up pubsub relationships in the schema itself.
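As a sketch of the manual approach, here's roughly what this looks like with Absinthe 1.4 (the field, topic, and endpoint names are illustrative, not from the original post):

```elixir
# In the schema: a subscription field with a topic for clients to join.
subscription do
  field :order_placed, :order do
    # Every client lands on the same topic here; you could also derive
    # the topic from field arguments or the current user in the context.
    config fn _args, _info ->
      {:ok, topic: "new_orders"}
    end
  end
end

# Elsewhere, in the code that handles order placement, after the order
# is saved: publish `order` as the root value for every document
# subscribed to :order_placed on the "new_orders" topic.
Absinthe.Subscription.publish(
  MyAppWeb.Endpoint,
  order,
  order_placed: "new_orders"
)
```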
Here's the challenge: Suppose loading the customer of an order requires a DB lookup. If you have 100 people who all submit that subscription, and you run each of those subscriptions independently, you're going to end up with 100 individual database lookups for the same customer.
De-duping documents can help: if the documents are identical and the context you're executing them in is identical, you can just execute the document once and push the values out. The problem is that if you have authentication or authorization rules in the document, you can't really do that; each document needs to be executed individually within the context of whoever submitted it.
One of the main features of our subscription implementation, however, is that the batch data-loading mechanisms we use to avoid N+1 queries within a single document also work across sets of documents. When an order is placed we take the full set of documents triggered by that event and run them as a batch, so any redundant DB lookups can be coalesced into a single query.
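For a sense of what a batched resolver looks like, here's a sketch using Absinthe's `batch/3` resolution helper (the `MyApp.Accounts` module and `customers_by_id` function are hypothetical; the same idea applies whether the batch spans one document or many):

```elixir
# In the schema: instead of one query per order, accumulate all the
# customer ids seen during this execution and resolve them together.
field :customer, :customer do
  resolve fn order, _args, _resolution ->
    batch(
      {MyApp.Accounts, :customers_by_id},
      order.customer_id,
      fn results -> {:ok, Map.get(results, order.customer_id)} end
    )
  end
end

# In MyApp.Accounts: called once with the full list of accumulated ids,
# so 100 subscriptions asking for the same customer hit the DB once.
import Ecto.Query

def customers_by_id(_opts, customer_ids) do
  MyApp.Customer
  |> where([c], c.id in ^customer_ids)
  |> MyApp.Repo.all()
  |> Map.new(&{&1.id, &1})
end
```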
Elixir itself has been a huge help here. We can achieve this batched document storage and execution without any external dependencies like Redis, and because we integrate easily with Phoenix PubSub (other backends are possible) we get cluster support for free, also without external dependencies. Add in linear scaling with the number of CPUs and it's really been a fantastic platform to build this on.
We're in the final stretch of migrating our old node/mongo/REST stack to elixir/postgres/graphql (with Absinthe), and I'm absolutely loving it so far. Elixir is a great language, and Absinthe is very easy to use, even for a graphql noob like me. Even new/experimental features like absinthe_phoenix and absinthe_ecto have been virtually flawless.
I guess the real test will be when we deploy it to production on Tuesday!
Basically, our purpose for the API changed. The old stack was created first as an MVP, then evolved a lot over time. We decided to make our API useful not just for our front end, but also for developers.
The mongo "schema" was a mess, and our data actually has quite a lot of relations, so postgres was an obvious choice. That was probably the most important change.
None of our current engineers have much experience in node (the one who built the api originally no longer works for the company), and we all dislike it as a language. The code itself was a mess, and would need to be rewritten even just to switch to postgres. Ideally, we wanted a language that was typed, functional, and good for web programming. We built small prototypes in Nim, Rust, and Elixir, and Elixir just ran away with it, even though its typing is less than ideal.
The graphql choice (and the direct impetus for the rewrite) was basically because we decided to make our api useful not only for our front end, but also for our users to interact with directly from their code. For this to be reasonable, the endpoints needed to be completely restructured, and we needed to expose a lot more of our database. It seemed pretty clear that having a single graphql endpoint to expose all our data would buy us a lot in terms of being developer-friendly. Even just exploring in GraphiQL is so much better than reading docs for a REST API.
Overall, it's taken about two months of my time plus the last couple of weeks of one other engineer's time to recreate it all and migrate the data.
Hey folks! I’m one of the Absinthe co-authors, happy to answer any questions!
We’re hoping to get the 1.4 release out soon; it was delayed a bit to make sure some features were in place for the book. For now, though, the 1.4.0-rc is available and we’ve been using it in production.
We've been super busy getting the book finished but we want to make sure that we've also got everything someone needs to get going available in the docs.
A) Absinthe 1.4 provides first-class support for GraphQL Subscriptions, which give you the ability to push data to clients based on subscription documents they submit, triggered by events within the system. GraphQL Mutations are the way you push changes to the server, and Absinthe provides tools when building your schema to declare a mapping from mutation fields to interested subscription fields.
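That mutation-to-subscription mapping can be declared in the schema with the `trigger` macro; a minimal sketch (field and topic names are illustrative):

```elixir
subscription do
  field :order_placed, :order do
    config fn _args, _info ->
      {:ok, topic: "new_orders"}
    end

    # Whenever the :place_order mutation succeeds, re-run the
    # subscribed documents on the topic(s) returned here, using the
    # mutation's result as the root value of each document.
    trigger :place_order, topic: fn _order -> "new_orders" end
  end
end
```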
B) This significantly depends on your underlying datastore. GraphQL (and consequently Absinthe) is entirely agnostic about how, where, or in what you're storing and managing your data. This may seem like a non answer but the point really is that you're entirely free to pick whatever approach works best for your data and your means of storage.
C) A significant portion of this answer is the same as B, although there are some conventions within the GraphQL community that help here. The Relay connection pattern uses opaque cursor values alongside each item returned in a page, and you can encode in these cursors whatever information you need to provide a coherent pagination experience.
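One common sketch of such an opaque cursor is to base64-encode whatever stable sort keys you need; this encoding is purely illustrative, not something GraphQL or Absinthe prescribes:

```elixir
defmodule Cursor do
  # Encode the stable sort keys (here: a timestamp plus an id) into an
  # opaque string the client passes back as an `after`/`before` argument.
  def encode(%{inserted_at: ts, id: id}) do
    Base.encode64("#{ts}:#{id}")
  end

  # Decode a cursor back into its sort keys, rejecting malformed input.
  def decode(cursor) do
    with {:ok, raw} <- Base.decode64(cursor),
         [ts, id] <- String.split(raw, ":", parts: 2) do
      {:ok, ts, id}
    else
      _ -> {:error, :invalid_cursor}
    end
  end
end
```

Because the cursor is opaque to clients, you're free to change what's inside it (e.g. switching sort keys) without breaking the API contract.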
D) I'm a big fan. Phoenix has actually done some really amazing stuff with an ORSWOT CRDT in order to track channel presence within a cluster. We're in the middle of looking at how to integrate that with subscriptions. More generally, I think when you can work your problem into the feature set of a CRDT they can be immensely powerful.
All in all though, I think you may be ascribing to GraphQL or Absinthe a stronger role in the management of the data than it has. If you want to use CRDTs, then you just articulate one in your schema, push it to clients, and let them push diffs at you. The same could be said for much of B, C, and D. Absinthe doesn't manage the _state_ of your application or your clients. Rather, it provides a way for you to communicate that state in whatever way you choose.
Any hints on implementation details would be very welcome; it would also be very nice to find a chapter in the book about these important real-life issues and how to handle them!
(Bruce Williams, Absinthe's co-creator here). Subscriptions are part of our v1.4 release (currently in rc, stable release imminent). We support subscriptions both over Phoenix Channels and normal HTTP (Server-Sent Events)—it's built to be rather extensible. We did a talk at ElixirConf recently that illustrates some of the design decisions and motivations: https://www.youtube.com/watch?v=PEckzwggd78
Subscriptions are available through the ‘absinthe_phoenix’ (https://github.com/absinthe-graphql/absinthe_phoenix) library as an add-on to the Phoenix Framework’s channels. It’s a pretty robust solution and its maintainers Ben & Bruce are very helpful on Slack.
Sorry to hear it's a bother to you. The Scala implementation of GraphQL is called Sangria, so we thought we'd continue the theme. It also has the perk of providing some fun design opportunities.