To anyone considering GraphQL, let me tell you something that was unclear to me when I first tried it:
If you plan to use it with a schemaless (NoSQL/graph) backend, GraphQL will force you to write a schema anyway. If you can't (because the data is dynamic), you will just end up forcing GraphQL to treat your data as JSON blobs with no schema.
At that point GraphQL turns into a JSON-blob transmitter with none of its benefits.
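In SDL terms, the degenerate case looks something like this (type names made up): the whole payload hides behind a custom scalar, so GraphQL can neither validate it nor let clients select fields from it.

```graphql
scalar JSON

type Document {
  id: ID!
  payload: JSON  # opaque blob: no validation, no field selection, no benefits
}
```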
Also, if you have deeply nested/recursive data structures, the client performing the query needs to know just how deep it should query. This can lead to ridiculous queries if you're running on a graph db where the client doesn't know how many vertices it should traverse.
REST is a better fit if the above is true for you.
To anyone put off by this comment: don't be. With all due respect, the parent commenter has misunderstood how to shape their API responses.
GraphQL excels at nested data structures. When you have infinitely recursive child nodes of the same type (like ancestors in a family tree), the GraphQL list type should be used.
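A sketch of what that looks like in the schema (field names illustrative):

```graphql
type Person {
  name: String!
  mother: Person        # recursion through a single parent link
  children: [Person!]!  # the list type covers any number of children
}
```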
I probably was a bit unclear.
Let's say I've got a family tree stored in a db that goes back 1,000 years.
Through graphql I want to find my first ancestor following all mothers backwards. The query would be:
query familyTree {
  name
  mother {
    name
    mother {
      # ...and so on, for an unknown number of nestings
    }
  }
}
Dataloader solves batching of the nested query on the server, but it doesn't solve the problem of not knowing how many levels of nesting the query should have.
Of course it's possible to create a new GraphQL endpoint for this type of query, but then we've just recreated REST in GraphQL.
Absolutely. Personally I haven't made much use of NoSQL, I'm sure there are plenty of use cases, but for ancestry I would still use a relational db.
I do see how nested objects might look like a perfect fit for this, since families are literally "nested objects". Perhaps there are plenty of advantages to using NoSQL and shaping the data this way... but the thought of creating an API with that structure is terrifying to me, haha.
Question to OP: are you using this structure for a live API/website I can take a look at? Does each node have an absolute ID? Do you normalize your data? Maybe I'm thinking too much in relational terms here? I'm genuinely curious about this.
For the recursive issue we have a directive that lets us specify a recursion depth. The server validates that against a maximum value.
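Roughly like this (the directive name and argument are specific to our setup, so treat them as illustrative):

```graphql
query familyTree {
  name
  # the server expands the recursion up to `depth`,
  # and rejects the query if it exceeds the configured maximum
  mother @recurse(depth: 25) {
    name
  }
}
```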
As for schema-less: we generate a schema that includes all types and fields, but we filter out any that aren't available (according to the user's access) each time an introspection query is issued.
Yet another tutorial about using a GraphQL client. It's nice, but I think the hard part is implementing a GraphQL server. Are there any examples of a full-blown GraphQL server, interpreting complex queries as SQL/NoSQL queries in a performant way?
> Are there any examples of a full blown GraphQL server
Sure! But here's the thing to know: the meat of a GraphQL server is in the schema. Every server implementation you see will have you define a schema, and then will execute queries against it. I would do the setup for the implementation in the language of your choice (instructions are usually listed in the README of the git repo), and then take a look at an example schema, the most famous of which is the Star Wars schema.
> interpreting complex queries as SQL/NoSQL queries in a performant way
Something which is often confusing is that GraphQL is completely database agnostic. However you were fetching data from your database of choice before, you will continue to do so. GraphQL has you define types (i.e. a user type, a blog post type, etc.), and then you tell it how to fetch that data. It could be a library for SQL, NoSQL, or even another API.
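A minimal sketch of that idea (names and data are made up, and the resolver map is called by hand rather than through a GraphQL library):

```javascript
// Sketch: GraphQL resolvers are plain functions. Each one fetches the data
// for a single type/field from wherever it lives -- here a toy in-memory
// "database", but it could just as well be SQL, NoSQL, or another API.
const db = {
  users: { 1: { id: 1, name: "Ada", postIds: [10] } },
  posts: { 10: { id: 10, title: "Hello", authorId: 1 } },
};

const resolvers = {
  Query: {
    // resolves `user(id: ...)` at the query root
    user: (_parent, { id }) => db.users[id],
  },
  User: {
    // resolves the `posts` field on a User
    posts: (user) => user.postIds.map((pid) => db.posts[pid]),
  },
};

// A GraphQL library would call these while walking the query; by hand:
const user = resolvers.Query.user(null, { id: 1 });
console.log(resolvers.User.posts(user)[0].title); // prints: Hello
```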
For example, imagine I define a GraphQL schema which is pretty similar to another GraphQL schema (that of a remote API server).
Could I implement my GraphQL server's resolvers in such a way that they simply rewrite/reinterpret the incoming query by forwarding the (modified) query to the target remote GraphQL server? Or will it be very inefficient and very hard to write this kind of GraphQL-schema-to-similar-GraphQL-schema adapter?
To compare with REST: one can imagine (a part of) your REST API being similar to another, external REST API. It's relatively straightforward to have your HTTP handlers map to remote REST API endpoints and make the necessary conversions (assuming your REST endpoints map relatively 1:1 to the other API's endpoints).
I'm still fairly new to GraphQL, so take this with a grain of salt, but it's my understanding that an idiomatic GraphQL server would implement a single-purpose resolver function for _every single field_ in the schema, and these field resolvers would then compose together to resolve larger query fragments. Often a request caching & batching layer like Facebook's own DataLoader library is necessary to make data fetching efficient and performant under this model: http://graphql.org/learn/best-practices/#server-side-batchin...
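To make the batching idea concrete, here is a toy version of what a loader does (this is not the real DataLoader API, just a sketch of the mechanism: collect the keys requested during one tick, then issue a single batched fetch):

```javascript
// Toy batching loader: sibling field resolvers each call load(key)
// synchronously; the batch function runs once on the next microtask.
function makeLoader(batchFn) {
  let keys = [];
  let resolvePending = [];
  return function load(key) {
    keys.push(key);
    const p = new Promise((resolve) => resolvePending.push(resolve));
    if (keys.length === 1) {
      // schedule a single flush after all sibling loads have queued their keys
      queueMicrotask(async () => {
        const batchKeys = keys, batchResolvers = resolvePending;
        keys = []; resolvePending = [];
        const results = await batchFn(batchKeys);
        batchResolvers.forEach((resolve, i) => resolve(results[i]));
      });
    }
    return p;
  };
}

// usage: three "resolvers" each ask for a user, but only one batch runs
const calls = [];
const loadUser = makeLoader(async (ids) => {
  calls.push(ids.slice()); // record what the batched query asked for
  return ids.map((id) => ({ id, name: "user" + id }));
});

const demo = Promise.all([loadUser(1), loadUser(2), loadUser(3)]).then((users) => {
  console.log(calls.length);           // one batched call, not three
  console.log(users.map((u) => u.id)); // all three users resolved
  return users;
});
```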
So in other words, for parts of your schema that are similar to each other, you'd simply include and compose together the relevant resolver functions for the fields they share. The conceptual model is similar to reducer composition in Redux, where top-level reducers (analogous to the root query resolver in GraphQL) can delegate to child reducers each responsible for only a part of the application state (or child resolvers each responsible for resolving a single fragment of the whole query), and this delegation can continue to arbitrary depths.
EDIT: I see your question is actually about composing with third party GraphQL APIs, so I haven't really answered it. GraphQL resolvers are just functions that return data. So you can certainly just implement an async resolver that forwards the received GraphQL query to a remote API of interest, and take the response returned and merge/override it with additional data to form the response to your own query.
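A sketch of such a forwarding resolver (everything here is hypothetical; `executeRemote` stands in for an HTTP POST to the remote server's /graphql endpoint):

```javascript
// Stub: pretend this POSTed `query` to the remote server and parsed the JSON
// response. In a real server this would be a fetch() to the remote endpoint.
async function executeRemote(query) {
  return { data: { user: { id: "1", name: "Ada" } } };
}

const resolvers = {
  Query: {
    user: async (_parent, args) => {
      // forward a (possibly rewritten) query to the remote server...
      const remote = await executeRemote(`{ user(id: "${args.id}") { id name } }`);
      // ...then merge/override the remote data with local additions
      return { ...remote.data.user, localFlag: true };
    },
  },
};

resolvers.Query.user(null, { id: "1" }).then((u) =>
  console.log(u.name, u.localFlag) // prints: Ada true
);
```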
Depending on what you're looking for, it might not be straightforward with the normal high level utilities, but most libraries also provide lower level functions for parsing and such.
If you just want to grab an object for a certain edge, then that's easy. Routing the entire query differently based on something at the root would also be easy; routing it based on something deeply nested might be trickier, but still not too tricky.
I used the server tutorials in howtographql.com, they were very informative.
Scroll down to the hands on tutorials, the ones on the right are server-side tutorials. I still recommend you read through the beginner material as well!
Seconded - Tons of GraphQL + Apollo Client tutorials out there but not much on the server. I've been using Graph.cool which does the work for me but if I have to make a server it'll be a lot harder.
I created a boilerplate example that uses Hapi, Apollo Server, Knex (SQLite) & Webpack to give you an idea of how to get started. Apollo Server makes it easy to understand how to write a GraphQL server by breaking it down into the typeDefs and resolvers needed to create your schema.
If anyone is interested in jumping straight in to GraphQL, I recommend graph.cool. It was posted a while back when the service was first released. Their free developer tier is awesome, and their project tier is completely reasonable.
There is also the graph.cool slack channel if you're looking for an active GraphQL community. You can usually find someone to help you out with questions there.
I actually have stopped using Apollo on the server side (outside of the middleware) to build the GraphQL definitions.
I use the vanilla graphql-js lib instead with the join-monster library for queries + batching + paging and objection.js for modeling + mutations.
join-monster is built for using your database with graphql (I use postgres). Objection.js is great for mutations because of its insertGraph / upsertGraph functionality, where you can feed in your entire mutation input as a nested structure and it will perform the right insertion queries to multiple tables based on your objection.js models that you've defined.
I've learned much more since then (eg authorization / authentication / mutations / implementing paging via relay connections) that I'll probably start up another project in the future talking about how to build a full-scale GraphQL server.
Probably not. My tip is to sign up, but then wait for the recording to be available (they usually are); once a recording is posted you can see where they posted it and download it from there.
I do this from time to time
Huge thanks to the GitHub team for putting together this webcast!
For everybody interested who already wants to get familiar with GraphQL, check out this getting started tutorial for GraphQL: https://www.howtographql.com/