GraphQL has been around for a little while now, in web terms. It was released publicly in 2015 and had been used internally at Facebook since 2012.

GraphQL as a technology is great for solving several pain points of REST APIs.

Benefits of GraphQL

Combining Data Sources

GraphQL works excellently as a middle-man between your frontends and the downstream services owned by backend developers. This is detailed nicely in this article by Sam Newman. The pattern works well in larger-scale companies, but would likely be a waste of time for smaller companies and start-up projects.

This pattern benefits the backend developers of downstream services too, who are freed up to write maintainable, performant services instead of worrying about each specific use case a frontend flow might have.
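As a rough sketch of this middle-man pattern, imagine a resolver-style function that fans out to two downstream services and merges the results into the shape the frontend asked for. The service functions and field names here are invented stand-ins, not a real API:

```python
# Sketch of the aggregation pattern: one resolver-style function fans out
# to two downstream services and merges the results. The fetch_* functions
# are stand-ins (assumptions) for real HTTP calls to backend services.

def fetch_user(user_id):
    # Stand-in for something like GET /users/{id} on a user service
    return {"id": user_id, "name": "Ada"}

def fetch_orders(user_id):
    # Stand-in for something like GET /orders?userId={id} on an order service
    return [{"id": 101, "total": 25.0}, {"id": 102, "total": 40.0}]

def resolve_user_with_orders(user_id):
    # The GraphQL layer combines both sources into a single response,
    # hiding two network calls behind one query from the frontend.
    user = fetch_user(user_id)
    user["orders"] = fetch_orders(user_id)
    return user

print(resolve_user_with_orders(1))
```

The downstream services stay simple and generic; only the aggregation layer knows about the frontend's specific needs.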

Simplifying data fetching for the frontend

You typically end up having to create several very specific API endpoints that only get used in one or two places in your application, catering to specific user flows. You might also have to call multiple APIs to retrieve all the data that's needed. This is the traditional over-fetching/under-fetching problem: you usually retrieve far too much data, or not enough.

GraphQL solves this by letting the frontend developer specify precisely which fields are needed. You no longer need to make one API call to fetch a model, immediately followed by another to fetch a related model, and you no longer end up using only a small proportion of the data you retrieve.
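For example, a single query can ask for just a user's name and the titles of their posts, in one round trip (the type and field names here are illustrative, not from a real schema):

```graphql
query {
  user(id: 1) {
    name
    posts {
      title
    }
  }
}
```

The response contains exactly these fields and nothing more.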

Static typing

At the core of GraphQL is its type system, which strictly enforces the contract you see when exploring a GraphQL schema.

GraphQL is to JSON APIs what TypeScript is to JavaScript. You know ahead of time whether the data you're requesting from an API actually exists and is being requested in the right format.
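That contract lives in the schema definition. A small example schema (types invented for illustration) makes the typing explicit:

```graphql
type Post {
  id: ID!
  title: String!
}

type Query {
  # The ! marks a field or argument as non-nullable,
  # and the server enforces this contract.
  post(id: ID!): Post
}
```

Any query asking for a field that isn't in the schema, or passing an argument of the wrong type, is rejected before any resolver runs.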


Drawbacks of GraphQL

Learning curve

As with any new technology, there is a cost associated with learning it and maintaining code written with yet another tool.

That said, GraphQL has been around for a while now and seems to be accepted as an industry standard, meaning more and more developers have experience with it.

Backend complexity

The frontend complexity of over- and under-fetching hasn't truly gone away; it has just moved to the backend. While a frontend developer might write a nice, simple-looking query including a model's relations, the developer of the GraphQL API has to actually write the code that fetches those relations. We end up with the N+1 problem: the extra network calls from before become extra database/service calls on the backend. Strategies such as dataloaders aim to remedy this, but it is a problem that needs solving nonetheless.
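The core idea of a dataloader is to collect the keys requested by individual resolvers and resolve them in one batched call. A minimal sketch, where the batch function and its in-memory data stand in for a single database query:

```python
# Minimal dataloader sketch: instead of one database call per tag
# (the N+1 problem), keys are queued and resolved in a single batch.

def batch_load_posts(tag_ids):
    # Stand-in for a single query like:
    #   SELECT * FROM posts WHERE tag_id IN (...)
    db = {
        1: ["intro to graphql", "graphql vs rest"],
        2: ["schema design"],
    }
    return [db.get(tag_id, []) for tag_id in tag_ids]

class DataLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.queue = []

    def load(self, key):
        # Queue the key instead of fetching immediately.
        self.queue.append(key)

    def dispatch(self):
        # One batched call resolves everything queued so far.
        results = self.batch_fn(self.queue)
        resolved = dict(zip(self.queue, results))
        self.queue = []
        return resolved

loader = DataLoader(batch_load_posts)
loader.load(1)
loader.load(2)
print(loader.dispatch())  # one batch call instead of two separate ones
```

Real dataloader libraries also deduplicate and cache keys per request, but the batching above is the part that removes the N+1 behaviour.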

Backend security

A related problem is query complexity: backend developers now also need to be careful about how relations are fetched for models, and must limit the amount of data that can be fetched in one go.

For instance, if a model has a list of associated records that can potentially be large, the backend needs to limit the length of that list, to prevent a query for all the associated models from bringing the GraphQL server down.

If we have a model Tag, with associated Post models, then a query for this data might look like so:

query {
  tag(id: 1) {
    posts {
      title
    }
  }
}
In this example, imagine there can be thousands of posts associated with a single tag. With no query complexity protection in place, the request above could bring our GraphQL server to a standstill as it fetches every single post. So we need to either require pagination/limit parameters in queries, or have some other mechanism for limiting how much data the backend will fetch, such as rejecting overly complex queries with an error.
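One common mitigation is to make list fields require a pagination argument, so the query for a tag's posts becomes something like this (the first argument name is assumed, in the style of Relay-style connections):

```graphql
query {
  tag(id: 1) {
    # first caps how many posts one request can pull back
    posts(first: 10) {
      title
    }
  }
}
```

The server can then reject any query on posts that omits the limit, or clamp it to a maximum.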


Conclusion

You will have to decide for yourself and your team(s) whether GraphQL is right for your use case. At the very least, I highly recommend trying it out.