
3 Methods to Resolve GraphQL Endpoints


GraphQL is a specification that defines how to fetch data from a backend system. In many ways it is similar to REST and often uses the same HTTP(S) transport. However, rather than using various path-based URIs and HTTP verbs, it uses a single endpoint with a defined schema that specifies not only how to fetch data but also how to mutate, or change, it. Schemas are the heart of GraphQL and provide a much richer interaction with the data. GraphQL is at times seen as a competitor to REST-based frameworks, but it can also go hand-in-hand with them.

The main purpose of GraphQL is to provide more flexible access to the underlying data through composition, selection, and mutation. Rather than fetching multiple documents via REST only to use a handful of fields from each response, GraphQL allows specifying precisely which fields to select and composing them together. This lets clients reduce network cost and latency by avoiding multiple round trips. However, GraphQL can introduce more complexity than a simple REST application. For this reason, it is best suited to systems with multiple types of clients, each with its own requirements, working against rich, deeply nested data sets.

GraphQL Schemas

At the heart of every GraphQL service is the schema. The schema is the contract between the server and the client, specifying not only what data is available but also what the types of that data are and how they relate. Every field has either a scalar type (such as Int, String, Float, or Boolean) or a complex type. This makes type checking a first-class citizen within client applications, rather than a purely documentation- or validation-based concern as with JSON Schema. Schemas are composed of types made up of one or more fields, and clients may query those types, choosing precisely the relevant fields they need. The following is a simple example of a movie-based schema using interfaces, types, and enumerations.

enum Genre {
  ACTION,
  COMEDY,
  DRAMA,
  DOCUMENTARY
}

interface Person {
  # ! marks firstName as non-null
  firstName: String!
  lastName: String
}

type Actor implements Person {
  firstName: String!
  lastName: String

  # [ ] brackets denote an array of Movie
  movies: [Movie]
}

type Director implements Person {
  firstName: String!
  lastName: String
  movies: [Movie]
}

type Character {
  actor: Actor!
  name: String!
}

type Movie {
  name: String!
  genre: Genre!
  actors: [Character]
  director: [Director]
}

type Query {
  movies: [Movie]
}

Example of querying the schema:

query {
  movies {
    name
    genre
    actors {
      name
      actor {
        firstName
        lastName
      }
    }
  }
}

Schemas also support arguments on field selections, giving clients further customization. For example, a numerical metric field may provide a unit argument that specifies which unit of measure to return the value in. This is in contrast to typical systems that output a value in a single standard unit and rely on documentation to express what unit it is, putting an unnecessary onus on each client to manage the conversions. With GraphQL, the client specifies the desired unit as an argument to the field selection, and the resolver manages the conversion and returns the appropriate value. Ultimately, this keeps the logic and control server side, which is often easier and more effective, and removes that burden from each client application.

The following is an example of using arguments, specifically units of measurement (UoM) for lengths.

UoM (Unit of Measurement) for Length

enum UoM_Length {
  MILLIMETER,
  CENTIMETER,
  KILOMETERS,
  METERS,
  INCHES,
  FEET,
  YARDS,
  MILES
}

type Metric {
  value(unit: UoM_Length): Float
  displayValue(unit: UoM_Length, format: String): String
}

type Query {
  metrics: [Metric]
  metric(id: ID!): Metric
}

Below is an example of querying this particular schema.

query {
  metrics {
    value(unit: FEET)
    displayValue(unit: FEET, format: "#,###.#")
  }
}
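The conversion itself would live in the resolvers behind these fields. The following is a minimal sketch of what that could look like in JavaScript, assuming the metric value is stored internally in meters; the valueInMeters field and the conversion table are illustrative assumptions, not part of the schema above:

// Illustrative only: assumes each metric is stored internally in meters.
const METERS_PER_UNIT = {
  MILLIMETER: 0.001,
  CENTIMETER: 0.01,
  KILOMETERS: 1000,
  METERS: 1,
  INCHES: 0.0254,
  FEET: 0.3048,
  YARDS: 0.9144,
  MILES: 1609.344,
};

const metricResolvers = {
  Metric: {
    // `metric` is the parent object returned by the `metrics`/`metric` resolvers
    value(metric, args) {
      const unit = args.unit || 'METERS';
      return metric.valueInMeters / METERS_PER_UNIT[unit];
    },
    displayValue(metric, args) {
      const unit = args.unit || 'METERS';
      const converted = metric.valueInMeters / METERS_PER_UNIT[unit];
      // A real implementation would honor args.format; this simply rounds.
      return `${converted.toFixed(1)} ${unit.toLowerCase()}`;
    },
  },
};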

The schemas are ultimately the capability that separates GraphQL from other REST-based frameworks. They are, however, purely a specification; the implementation behind them is provided by data resolvers. GraphQL schemas are also incredibly expressive, with far more features than this article can cover, including directives, which add conditional support to selections.
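As a small illustration, the built-in @include and @skip directives (standard GraphQL features) let a client select fields conditionally based on variables, for example against the movie schema above:

query Movies($withActors: Boolean!) {
  movies {
    name
    genre
    # the characters are only selected when $withActors is true
    actors @include(if: $withActors) {
      name
    }
  }
}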

Resolvers

Resolvers are the key to GraphQL implementations, since every object, field, and argument is backed by a resolver. A resolver contains the instructions for how to resolve a particular field given the active context. Resolvers are only invoked for the fields a client actually requests, rather than for every field on every request, which makes data processing highly efficient.

Using the previous movie schema and query, we may end up with a movies query resolver such as:

class QueryResolver {
  fetchMovies(data, args, context, info) {
    // fetch the content from respective source
    return movies
  }
}

This class and method will be assigned to the movies field of the Query type. This assignment happens as part of the bootstrapping process or configuration in the GraphQL server. The request handler in GraphQL maps the query node to the root Query resolver, and then maps the movies field to this fetchMovies resolver. This process continues until all fields have been resolved. For example, GraphQL would next map the actors field selection to a fetchActorsByMovie method declaration.
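What this wiring looks like depends on the server library. As one hedged example, in a JavaScript server using the graphql-tools library, the mapping is commonly expressed as a resolver map passed to makeExecutableSchema; the schema here is trimmed down and the data-access functions are placeholders:

const { makeExecutableSchema } = require('@graphql-tools/schema');

// A trimmed-down version of the movie schema shown earlier.
const typeDefs = `
  type Character {
    name: String!
  }
  type Movie {
    name: String!
    actors: [Character]
  }
  type Query {
    movies: [Movie]
  }
`;

// Placeholder data-access functions standing in for the real backend.
const fetchMoviesFromSource = () => [{ id: 1, name: 'Example Movie' }];
const fetchActorsForMovie = (movieId) => [{ name: 'Example Character' }];

const resolvers = {
  Query: {
    // assigned to the `movies` field of the Query type
    movies: (root, args, context, info) => fetchMoviesFromSource(),
  },
  Movie: {
    // assigned to the `actors` field of each resolved Movie
    actors: (movie, args, context, info) => fetchActorsForMovie(movie.id),
  },
};

const schema = makeExecutableSchema({ typeDefs, resolvers });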

The basic signature of a resolver is: fetchData(data, args, context, info)

data: the parent object returned by the enclosing resolver (the root value for top-level fields)
args: the arguments passed to the field in the query
context: shared, per-request state such as authentication details, loaders, or database connections
info: metadata about the execution, including the field name, path, and schema

Resolvers are responsible for using the context and active state to fetch the underlying data and return it to the server. The server then maps the returned data to the requested fields while invoking any child resolvers. Once all resolvers have completed, the entire document is returned to the client in the requested structure.

Methodologies

Choosing how to implement the resolvers and what to back them with is often the most critical decision in the design of a GraphQL server. Often it depends heavily on existing systems and how to interoperate with them; other times it depends on organizational boundaries and ownership. There are endless methodologies for resolving data access; the three described here are common variants: REST, direct data access via DAOs, and composition.

REST

REST is a common method to back GraphQL. Rather than rewriting entire REST stacks to convert to GraphQL, organizations often just stack GraphQL on top and resolve schemas through RESTful API calls. This is a good strategy that allows bootstrapping a GraphQL schema quickly and effectively. It essentially provides the customization and data selection process through GraphQL, enabling more effective clients, while maintaining the integrity of the RESTful system.

It also allows the architecture to respect organizational boundaries. API services are typically owned by data or backend engineering teams, which may not wish to build and support GraphQL, whereas the frontend teams may want to leverage GraphQL and its flexibility. By using the APIs already established by those teams, the frontend teams can easily build resolvers and establish their own GraphQL framework. This also allows the GraphQL instance to be backed by multiple distinct APIs, each managed by a different team, while presenting a single interface to the entire organization.

The example below uses pseudo-code to map to the movies schema above in order to resolve movies, the characters, and the actors. In this example, there are two distinct backends, which can also be completely separated and managed individually without impacting the GraphQL service:

movies-backend
actors-backend

moviesResolver(query, args, context, info) {
  http.get("http://movies-backend/v1/movies/list").then { movies ->
    return movies
  }
}

movieCharactersResolver(movie, args, context, info) {
  id = movie.id
  http.get("http://movies-backend/v1/movies/${id}/characters").then { characters ->
    return characters
  }
}

actorResolver(character, args, context, info) {
  actorId = character.actorId
  http.get("http://actors-backend/v2/actors/${actorId}").then { actor ->
    return actor
  }
}

However, REST can also be detrimental in several respects. One of the main driving forces of GraphQL is the ability to select precisely the data that is needed, allowing highly efficient data resolution. When the resolvers are backed by REST, however, the entire resource must be fetched via REST and only certain fields selected from the response. This causes the backend REST system to fetch data that may never be used, introducing inefficiencies into the stack. In this particular example, the movie catalog may provide expanded data for the distribution company, musical tracks, and so on; this data would be fetched over REST but left unused by GraphQL.

Another way REST becomes a hindrance to GraphQL is the N+1 problem. To avoid the inefficiency of fetching large JSON documents, APIs may fragment themselves into smaller data sets, allowing resolvers to fetch less data and become more efficient. However, this requires an API call for every resolver and can lead to hundreds or thousands of API calls, which even under high parallelism quickly becomes problematic. Essentially, this turns the N+1 database selection anti-pattern into a GraphQL anti-pattern.

Using the above example, the characters for each movie are fetched separately, and then the actor for each character is also fetched separately. If the main movies query returned 5 movies, each with 10 characters, we would make 56 REST calls in total: 1 for the movie list, 5 for the characters, and 50 for the actors. Given the overhead of REST and HTTP, this has the potential to create much higher latencies. The primary solution to this issue is batching.

Overall, the hardest part of any GraphQL implementation is choosing the most efficient data resolution handlers. When using REST, requests should be batched together as much as possible by using the active context and state to determine what types of data need to be fetched and resolving them all at once. Batching also automatically collapses requests to the same endpoint in order to avoid making the same call twice. This leads to more complex situations, yet more efficient implementations. In this particular example, we could batch each actor into the active context state and then fetch all 50 actors in one query. This also avoids making the same calls twice in the same request such as when the same actor appears in multiple movies.
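In JavaScript servers, this kind of batching is commonly implemented with the dataloader library, which collects the individual actor lookups made during a request and resolves them with a single batched call while de-duplicating repeated ids. The sketch below assumes the same placeholder http client as the pseudo-code above and a hypothetical batch endpoint on the actors API:

const DataLoader = require('dataloader');

// One loader per request, typically placed on the GraphQL context.
function createActorLoader(http) {
  return new DataLoader(async (actorIds) => {
    // One batched call instead of one call per character; the batch
    // endpoint and its `ids` parameter are assumptions.
    const actors = await http.get(
      `http://actors-backend/v2/actors?ids=${actorIds.join(',')}`
    );
    // DataLoader requires results in the same order as the requested keys.
    const byId = new Map(actors.map((a) => [a.id, a]));
    return actorIds.map((id) => byId.get(id));
  });
}

// The actor resolver now defers to the loader; duplicate ids within the
// same request are resolved once and cached.
function actorResolver(character, args, context, info) {
  return context.actorLoader.load(character.actorId);
}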

DAO

If REST is one end of the spectrum for resolving data queries, then direct data access is the other end. It involves placing the GraphQL implementation as close to the data source as possible: the closer to the data the logic lives, the more efficient it tends to be. If logic is needed to aggregate different types of data, selecting and aggregating within the database will be far more efficient than doing so at the client tier, which may have to make several requests just to aggregate specific fields together. In other words, querying a database is faster than querying an API, and the effect compounds with every additional tier. The same holds for GraphQL resolvers, which is why direct data access is more efficient than REST: the number of tiers is reduced and the logic moves closer to the data.

To use direct data access within GraphQL data resolvers, you attach DAO-based calls to the resolvers. For example, the application may have a MovieDAO that knows how to fetch movies by various criteria, such as getMoviesByActor, getMoviesByGenre, etc. The GraphQL schema may then provide data selection within those contexts, such as the following:

actor(id: "foo") {
  movies {
    id
    title
  }
}

genre(type: "action") {
  movies {
    id
    title
  }
}

The data resolver will wire up the appropriate DAO to fetch the data. The DAOs themselves may communicate to varying data stores, independent of each other.

Direct data access can still suffer from the N+1 problem. However, N+1 queries against a database tier are far cheaper than N+1 calls against an API tier. Even so, an implementation must be cautious about invoking this behavior, and it is still preferable to group queries together where possible. For example, rather than issuing one select statement for movies and another select per row for actors, the context can be used to build a single statement that selects movies and actors together. The big advantage of direct data access is that it is more forgiving of a poor implementation than an API tier, because querying the data store directly is so much more efficient.
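The following is a hedged sketch of such a grouped selection in JavaScript, assuming purely for illustration that the movie, character, and actor tables live in one database and that db.select returns rows as plain objects; the table and column names are hypothetical:

// Sketch only: one statement selecting movies together with their
// characters and actors, instead of one query per character or actor.
async function fetchMoviesWithActors(db) {
  const rows = await db.select(`
    select m.id as movieId, m.name as movieName,
           c.name as characterName,
           a.firstName, a.lastName
      from movie m
      join character c on c.movie = m.id
      join actor a on a.id = c.actorId
  `);
  // Fold the flat rows back into the nested movie -> characters shape.
  const movies = new Map();
  for (const row of rows) {
    if (!movies.has(row.movieId)) {
      movies.set(row.movieId, { name: row.movieName, actors: [] });
    }
    movies.get(row.movieId).actors.push({
      name: row.characterName,
      actor: { firstName: row.firstName, lastName: row.lastName },
    });
  }
  return [...movies.values()];
}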

The pseudo-code below demonstrates using DAOs wired up to database objects to query a data store. These resolvers could be backed by any database, relational or non-relational. This example is similar to the REST example but is typically more performant and more capable because it can query the data stores directly. For example, we could easily add batching or compositional support by selecting precisely how the queries are mapped.

class MovieDAO {

  List<Movie> fetchMovies(query, args, context, info) {
    // easily add any other queries or where clauses based on arguments/context
    return movieDb.select("select * from movie");
  }
}

class MovieCharactersDAO {

  List<MovieCharacters> fetchCharactersByMovie(movie, args, context, info) {
    Integer movieId = movie.getId();
    return movieDb.select("select * from character where movie = ?", movieId);
  }
}

class ActorDAO {

  Actor fetchActorByCharacter(character, args, context, info) {
    Integer actorId = character.getActorId();
    return actorDb.select("select * from actor where actor = ?", actorId);
  }
}

The biggest issues with direct data access are organizational boundaries and ownership. Where REST-based architectures allow multiple teams to sit behind a single GraphQL server, doing the same with direct data access is not as straightforward. GraphQL can be backed by multiple data sources and works very well that way, but when those data sources cross organizational boundaries, ownership of the server becomes an issue and managing the relationships between the backend sources gets more difficult. For example, one team may own personalization and recommendation data while a separate team owns the movie data itself. In this particular example, one team may own the movieDb and another the actorDb, and these teams may not want applications querying their data stores directly, instead preferring access through REST, an SDK, or a binary transport such as gRPC. As each tier is added to respect these boundaries, the server becomes less flexible and less performant.

Composition

The final methodology is composition, which can help resolve organizational boundaries. Composition is the process of stitching together multiple distinct GraphQL servers by defining relationships between them. This allows each organization to define its own GraphQL instance for its specific data sets; the composition tier then maps the relationships and data sets together. For example, the recommendation server may expose a movie identifier from its GraphQL schema, while the movie server provides movie data for a given identifier. The composition tier creates the relationship from movie identifier to movie data. The resulting GraphQL schema allows selecting the recommendations and movie data together, automatically fetching the backend data from each GraphQL server. This selection process is also highly efficient, selecting precisely the required fields from each data set.

The Apollo GraphQL server provides the best-known example of implementing schema stitching. The server resolves each backend schema and then applies rules supplied to it to stitch the schemas together with relationships. The following example demonstrates how we could stitch together the movie schema and recommendation schema if they were provided separately.

extend type Recommendation {
  movie: Movie
}

Recommendation: {
  movie: {
    fragment: `... on Movie { id }`,
    resolve(recommendation, args, context, info) {
      return info.mergeInfo.delegateToSchema({
        schema: movieSchema,
        operation: 'query',
        fieldName: 'movieById',
        args: {
          movieId: recommendation.movieId,
        },
        context,
        info,
      });
    },
  },
}

Composition still requires multiple hops to each backend microservice, which can lead to complex data distribution. It is more ideal to fetch data directly from the data source itself to minimize the hops, but for organizations built on microservices with several distinct teams, using composition helps to solve those boundaries.

The other place composition breaks down is when not every system uses GraphQL. In these situations you cannot directly stitch the GraphQL schemas together. The best approach instead is to stitch the relationships together manually and use binary protocols or REST to fetch each data set. Binary protocols such as gRPC allow defining these relationships and stitching the data together. The GraphQL server then provides the frontend schema for selecting the data, while the transport tier fetches from each distinct microservice. This form of composition allows a three-tier architecture to exist.
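As a hedged sketch of that manual stitching, a product-tier resolver can delegate the movie relationship to whatever transport the owning team exposes. The gRPC client factory and its getMovieById method below are hypothetical stand-ins for a generated client, not a real API:

// Hypothetical generated gRPC client for the movie microservice; the
// factory, method name, and message shape are assumptions for illustration.
const movieClient = createMovieServiceClient('movies-backend:50051');

const resolvers = {
  Recommendation: {
    // Manually stitched relationship: recommendation.movieId -> Movie.
    movie(recommendation, args, context, info) {
      return movieClient.getMovieById({ id: recommendation.movieId });
    },
    // When only REST is available, the same relationship can be resolved
    // over HTTP instead (the endpoint path is likewise an assumption):
    // movie: (rec) => http.get(`http://movies-backend/v1/movies/${rec.movieId}`)
  },
};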

Three-Tier Architecture

In a three-tier architecture, data is separated into a core data access tier, a business or product-focused tier, and a presentation or view-focused tier. This provides a loosely coupled system with high flexibility, allowing applications to select the data they need without coupling every data system to every other dependency.

The core data access tier allows one or more groups to expose their backend data systems through a data-focused representation, using either GraphQL or a binary transport such as gRPC, via microservices. This tier merely provides the data, along with identifiers into other data sets managed by separate teams or microservices, and each core should use the architecture best suited to its needs. One schema may be backed by SQL, such as the movie data; another by NoSQL, such as personalization or recommendations; and others may be fronted by REST or gRPC to better abstract the backend systems. The more complex data systems may choose GraphQL and rely directly on schema stitching at the product tier.

The business tier uses GraphQL to create a common, product-focused schema and to define the relationships between the data sets. It converts the data-focused sets into product-focused sets while applying the product's business logic rules. This allows the core data to remain agnostic of any specific product, while allowing products to be shared across multiple applications or views, and it creates common alignment across all views of a particular product. The GraphQL server at this tier may use any of the methodologies above, depending on the architecture and backend systems: when both the product tier and the cores use GraphQL, schema stitching is the best fit; for cores that expose only REST as an abstraction over the data, REST can map each relationship; and where the same team owns both the data stores and the product tier, using the appropriate DAO for each data store is more efficient. Typically, however, the end result is a mixture of all three as systems grow and evolve over time.

The presentation tier represents each individual application or view of a product, for example a mobile application, a web application, and a TV application. These applications use the product-focused GraphQL schemas, which provide the common data and relationships, and map that data to their specific views, adding any additional view-centric logic.

Ultimately, this type of architecture allows each tier to grow and evolve independently while ensuring flexibility for each product.

Conclusion

GraphQL is incredibly powerful and flexible and offers a wide assortment of possibilities when it comes to designing the most appropriate architecture. Deciding which architecture to choose is often the hardest, most critical decision. The best recommendation is to first understand the organizational boundaries and ownership. Who will ultimately own the implementation and architecture? Who owns each of the data sets? How are or how will those data sets be exposed? These types of questions can help decide how to formulate each tier of the architecture.

For small organizations, or organizations that own their data and products end-to-end, it is recommended to stay simple and use direct data access to ensure high efficiency across products. For larger organizations built on many microservices, it is recommended to follow a three-tier architecture that allows the microservices to grow independently, either as their own distinct GraphQL servers or behind a binary transport and schema. Product teams can then own the GraphQL tier, connecting the relationships and data sets together. It is best to place the resolvers as near the data stores as possible without crossing organizational boundaries: prefer direct data access, then GraphQL stitching/composition, and finally REST. In general, REST should only be used when required by backend teams or legacy systems.

Regardless of which architecture is chosen, allow GraphQL to grow and remain as flexible as possible. Resolvers, field arguments, and more advanced capabilities such as directives allow a GraphQL schema to stay highly flexible while remaining loosely coupled to its users. The more logic that can move to the server while remaining agnostic to clients, the more efficient and maintainable the end-to-end system will be. The resolvers and the schema are ultimately the most critical components of the implementation; choosing how to implement and manage those resolvers will make or break not only the server itself but the entire end-to-end architecture.

