Feb 16, 2022

Caching REST APIs vs. GraphQL APIs


You're sitting there, waiting for the page to load. How long is it going to take? After a second or two, you give up. This experience gets repeated across apps, platforms, and sites again and again. Why? As web applications have grown, the number of data points has grown too, which in turn has multiplied the number of network calls. The result is slow performance for users and high costs for the companies footing the bill, and users simply give up on the product. Let’s face it: we live in a world of "now," with the expectation of immediacy. Which is why we need to talk about caching.

For many uses, caching, a method of storing data for speedier access, is a critical feature. If done correctly, it can make your application load up to 100x faster and reduce the traffic the origin infrastructure has to handle by a similar factor.

In modern-day apps, REST and GraphQL are the primary approaches used to communicate over the network, and they work very differently.

In this article, you will learn how caching plays a critical role in developing scalable applications, along with its various implications. You’ll also discover how caching in REST fundamentally differs from caching in GraphQL and you’ll explore the trade-offs between the two.

What Is Caching?

Caching is a way of storing frequently accessed data in a request-response path, either in local memory or in temporary storage, where it can be accessed more quickly rather than fetched from a server again and again.

When a user requests a resource located on the server, the request goes through a series of caches. If the request matches already-cached data, the response is returned without needing to re-fetch the data from the server.

If none of the cached data matches the requested resource, then a fresh copy is fetched from the server, and on its way back, the newly fetched response is cached in temporary memory.

Having the ability to cache data on network request-response not only improves the user experience but also does the following:

  • Reduces bandwidth usage by interacting with locally cached data first, before the server.

  • Reduces network call errors, in case there is a connection issue at the time of server interaction, by querying requests against a cache table first.

  • Reduces the overall load on the server and, in the long run, makes an application scalable by ensuring the request interacts effectively with the memory cache.

  • Reduces network calls by ensuring that the many data points, which are not always dynamic, are cached properly.
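
The check-then-fetch-then-store flow described above can be sketched in a few lines of JavaScript. This is a deliberately minimal in-memory cache; `fetchFromServer` is a hypothetical stand-in for a real network call.

```javascript
const cache = new Map();

// Hypothetical stand-in for a real network request to the origin server.
async function fetchFromServer(url) {
  return `response for ${url}`;
}

async function cachedFetch(url) {
  // Check the cache first and skip the network entirely on a hit.
  if (cache.has(url)) {
    return cache.get(url);
  }
  // Cache miss: fetch a fresh copy and store it on the way back.
  const response = await fetchFromServer(url);
  cache.set(url, response);
  return response;
}
```

Real caches also track expiry and eviction, but the overall shape stays the same.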

Caching with REST APIs

When you read about caching in the REST protocol, it is more about caching in HTTP. REST, when first introduced, was strictly based on the specs standardized by HTTP. Thus, the network calls, cache handling, and so forth in REST are all based on the HTTP specification.

Query operations under a REST API are cacheable by default, but note the following examples:

  1. GET request: This is cacheable by default. Browsers treat all GET request-response paths as cacheable.

  2. POST request: This is not cacheable by default, but it can be made so by including freshness information, such as Expires or Cache-Control headers (you’ll read more about this below).

  3. HEAD request: This is cacheable, similar to GET.

  4. PATCH request: This is cacheable as well, but only if explicit freshness information, such as Expires or Cache-Control headers, is included.

To control caching behavior in a REST-based network call, you specify response headers (for example, Expires and Cache-Control). Let’s take a closer look at two important headers:

  1. Expires: The Expires response header specifies the point in time after which a cached item becomes invalid, or stale.

Expires: Fri, 20 May 2022 23:59:59 GMT

After the specified time, the resource needs to be fetched from the server again, and a fresh copy is cached.
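
A client-side freshness check against an Expires header can be sketched like this (the helper name is illustrative):

```javascript
// Returns true while the cached copy is still fresh according to Expires.
function isFresh(expiresHeader, now = new Date()) {
  const expires = new Date(expiresHeader);
  return now < expires;
}
```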

  2. Cache-Control: The Cache-Control response header may include one or more values called directives. These directives can indicate whether the response should be cached in the first place, describe fetching policies, and more. One of these directives is max-age. If both the max-age directive and the Expires header are defined, max-age takes precedence.

Cache-Control: max-age=30000000

Note that if an already-cached item changes at the origin, the server cannot directly notify the cache storage or the client/browser agent. Instead, freshness is determined by the expiration time: if the current time is within the expiration time, the cache is considered fresh; otherwise, it is stale. This is how the client-side browser determines whether it needs to fetch and cache the data again.

This expiration time is calculated as follows:

Expiration Time = (Response Time + Freshness Lifetime) - Current Age
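
Expressed directly in code (all values in seconds; the variable names are illustrative):

```javascript
// responseTime: when the cache received the response,
// freshnessLifetime: from max-age (or Expires minus the Date header),
// currentAge: the response's current age (Age header plus time spent cached).
function expirationTime(responseTime, freshnessLifetime, currentAge) {
  return (responseTime + freshnessLifetime) - currentAge;
}
```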

Removal of Items

Removing cached data in the REST protocol is handled through headers such as Cache-Control.

You can set caching headers at the time of a network call against a server. When an Expires or max-age value is specified, data is removed from the cache based on the headers provided.

The Cache-Control header, part of HTTP/1.1, allows you to specify values, such as no-store, no-cache, or must-revalidate, for different kinds of caching behavior.

cache-control: no-cache

The browser also provides the cache.delete() method of the Cache Web API, which can be used to delete cached items. These items are stored as key-value pairs, meaning that they can be deleted using the appropriate key.

cache.delete(request, options).then(function (found) {
  // found is true if a matching cache entry was deleted
});

cache.delete() is a Promise-based asynchronous API, so deletion takes a moment to complete. The returned promise resolves to true if a matching entry was found and deleted, and false otherwise.

Challenges in REST

While caching is a versatile problem-solver, if it is not done correctly, you can run into trouble. GET requests in a REST API are cached by the browser by default. If network calls are made carelessly or are not optimized for their use cases, there’s a real possibility of running into stale data issues.

Suppose a list of products has been requested, rendered on the screen, and cached properly. Now a user deletes a product from that list. If the delete operation doesn’t refetch the list from the server, the user will continue to see stale data rendered from the cache.

As you can imagine, this trade-off between making the user experience snappy and getting correct data every time can be tricky.
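
One common way out is to invalidate the cached list whenever a mutation touches it, so the next read refetches. A sketch, with hypothetical `fetchList` and `deleteOnServer` callbacks standing in for real network calls:

```javascript
const listCache = new Map();

async function getProducts(fetchList) {
  // Serve from the cache when possible; fetch and store otherwise.
  if (!listCache.has("products")) {
    listCache.set("products", await fetchList());
  }
  return listCache.get("products");
}

async function deleteProduct(id, deleteOnServer) {
  await deleteOnServer(id);
  // Drop the cached list so the next getProducts() refetches fresh data.
  listCache.delete("products");
}
```

This trades a little extra network traffic after each mutation for the guarantee that the user never sees a deleted item.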

Caching in GraphQL

GraphQL handles caching in a completely different manner than a REST API, primarily because GraphQL requests are not strictly based on the HTTP specification. Typically, every GraphQL request is a POST request, so the HTTP method semantics that drive REST caching (GET, POST, PUT, and DELETE) don’t apply. Instead, GraphQL relies on uniquely identifying objects on the client side for caching behavior.

That being said, GraphQL is not able to correctly leverage the error codes used in the HTTP specification, as it does not follow those specifications the way REST does. For example, there could be an error in the response sent by the server, but you might still get a 200 OK status code on the client side. This is one of the major shortcomings of GraphQL.

Yet GraphQL shines at explicitly fetching only the data that is needed from the server, without the over-fetching of unnecessary data you face when dealing with REST endpoints.

In URL-based requests, such as REST, the caching is done by identifying unique URL endpoints. However, in GraphQL and its associated libraries, there is no such concept; instead, it relies on exposing a unique identifier on the client side (see below).

Global IDs

One solution for caching data on the client side could be to use an ID as a global identifier for getting responses back from the API server. That unique ID is used to handle caching on the client side.

If the backend already uses an identifier scheme, such as universally unique identifiers (UUIDs), then the API can leverage that for identifying data objects sent from the server. This further helps identify the unique objects (distinguished by their IDs) that need to be cached again.

__typename

One problem with using an ID as a global identifier arises when you incorporate an already existing API, which uses a local ID, into your codebase. Maintaining both the existing local ID and the newly created global ID as identifiers might create conflicts.

However, various GraphQL client libraries, such as Apollo Client and urql, provide ways to handle identification on the client side.

Apollo Client, for instance, relies on __typename when caching data on the client side. It automatically adds a __typename field to every query it sends, which helps it identify unique data objects.
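
Normalized caches built on this idea derive a cache key by combining `__typename` with the object's `id`; Apollo Client's default key is essentially `Typename:id`. A minimal sketch of the technique:

```javascript
const normalized = new Map();

// Build a cache key from __typename and id, as normalized caches do.
function cacheKey(obj) {
  return `${obj.__typename}:${obj.id}`;
}

function writeToCache(obj) {
  // Objects with the same __typename and id overwrite one entry, so every
  // query that references this object sees a single consistent copy.
  normalized.set(cacheKey(obj), obj);
}
```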

Strictly Typed Schema

Another feature that gives GraphQL an edge over REST’s more primitive network-call techniques is its strict schema definition for application development. GraphQL makes sure to return only the response data that is requested, based on the schema provided. This schema is strict on types, well written, and self-documenting in terms of the kind of data that’s expected to be consumed on the client side. This keeps the codebase free of a whole class of errors, making it robust and maintaining a good developer experience at consumption time.
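
As a sketch, a schema for the earlier product-list example might look like this (type and field names are illustrative, not from any particular API):

```graphql
type Product {
  id: ID!
  name: String!
  price: Float
}

type Query {
  products: [Product!]!
  product(id: ID!): Product
}
```

The `!` markers make nullability explicit, so the client knows exactly which fields are guaranteed to be present in a response.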

Introspectability

Finally, introspectability is a prominent feature in the GraphQL ecosystem that distinguishes its capabilities from those of REST. The term introspectability describes the ability to look up what information is present in the schema. It makes building and consuming APIs on the client side easier. By running an introspection query, you are asking the GraphQL schema what kinds of queries it supports.

The types defined in the introspection system include the following:

  1. __Schema

  2. __Type

  3. __TypeKind

  4. __Field

  5. __InputValue

  6. __EnumValue

  7. __Directive

Every server that exposes a GraphQL API can have that API introspected.

For example, if you want to introspect the types defined on schema, you can simply do the following:

{
  __schema {
    types {
      name
    }
  }
}

This fetches the following:

{
  "data": {
    "__schema": {
      "types": [
        {
          "name": "Pikachu"
        },
        {
          "name": "Actor"
        }
      ]
    }
  }
}

Similarly, you can run introspection queries for __EnumValue, __TypeKind, and all the other types mentioned above.

Conclusion

As you can see, there is no perfect solution for caching; both REST and GraphQL have trade-offs.

REST, a decades-old technology, is still prevalent today and relies strictly on HTTP specification. It caches data by default, and the response headers depict the correct status on the client side according to the HTTP standards.

GraphQL, developed internally at Facebook in 2012 and in public use since 2015, has given developers control of the cache mechanism, a strictly typed schema, and the ability to get exactly the desired data points instead of being bombarded with everything an endpoint returns. While GraphQL has its own issues with caching on the client side, tools such as Apollo Client, urql, and Stellate Edge Cache have simplified the process, enabling the GraphQL ecosystem to grow.

The Stellate Edge Cache is an advanced system for efficiently caching GraphQL queries, performing edge caching at over 70 worldwide locations. It reduces overall response time for the backend and provides fine-grained control over caching behavior, allowing you to cache content for long periods with confidence that your users will always receive the latest data.