Jan 8

Guide To Akamai Caching For GraphQL

Akamai offers a GraphQL caching feature, though we’ve found that it’s quite limited and still requires a lot of manual building and adjustment to make it a sufficient caching solution.

Unfortunately, building out the caching feature to sufficiently cache your data is resource-intensive and pulls developers away from improving the core value of your business. 

Even after building out Akamai’s caching feature, there are still limitations that make it difficult to optimize your caching strategy. For example, it doesn’t offer observability, change management, fine-grained cache configuration, or developer enablement for GraphQL caching.

We wanted a solution that makes GraphQL caching simple, so we built a GraphQL edge caching solution to automate caching and provide the tools and metrics necessary to easily maintain and optimize your caching strategy. 

In this post, we’ll discuss both the limitations of Akamai’s GraphQL caching options and how Stellate can solve these challenges to allow your team to focus its time and resources on improving the core value of your business.

How Does Akamai GraphQL Caching Work?

Akamai's GraphQL caching feature extends its existing API caching to work with POST requests.

It allows you to cache GraphQL requests with a simple configuration. It supports both GET and POST requests, lets you define whether to cache GraphQL responses that contain errors, and lets you set a default max-age for all your requests. 

Akamai also honors any cache-control header sent from your origin, which allows your GraphQL API to define if and how each response should be cached.

However, there are limitations with it, which we’ll discuss in more detail below.

The Limitations Of Akamai’s GraphQL Caching (And Solutions)

While Akamai's GraphQL caching solution offers some basic caching functionality, it isn't a sufficient caching solution out of the box. Even if you manually build out Akamai's caching feature, it still has inherent limitations that make it difficult to implement a truly efficient and effective caching strategy.

Here are a few of the specific limitations that we’ve encountered with Akamai’s caching feature:

  • It does not parse GraphQL query responses for cache tags

  • It does not purge the cache after mutations

  • It applies one cache configuration to the entire GraphQL API

  • It does not offer query normalization

  • It caches empty list responses

  • It can only handle queries up to 4096 bytes

It also lacks GraphQL caching-specific observability metrics to improve your caching strategy, and you won't have access to change management or developer enablement.

We designed Stellate, a GraphQL edge caching solution, to solve these specific challenges. Below we'll discuss each of the challenges with Akamai's caching feature in more detail and describe how you can solve each one.

Applying one cache configuration to the entire GraphQL API

Akamai applies one cache configuration to the entire GraphQL API, which is problematic as not all data can be cached the same way. 

For example, you can't cache a product query and a shopping cart query the same way: the former is static data that rarely changes (making it ideal for caching with a high max-age), whereas the latter is dynamic data that changes with each user interaction with the cart (and thus should not be cached for long periods of time). 

Since you can't differentiate between GraphQL operations, you're forced into a setting that's safe for the most dynamic data, which in practice means you can't effectively cache either query.

To work around this, you could set up your origin server to return a cache-control header that specifies how long each GraphQL response can be cached, which would allow you to cache individual GraphQL responses differently based on their contents. 

Nevertheless, it isn’t a great solution as it requires all backend/subgraph developers to think about caching when they make changes to the API. It also requires more technical expertise or introducing additional tooling to set up the cache-control header properly in the code. 

In addition, Akamai's cache-control support does not extend beyond the simple public and private scopes, so you won't be able to cache user-specific data. 
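As a sketch of that workaround (the handler, operation names, and TTLs below are illustrative assumptions, not Akamai- or framework-specific code), the origin could pick a cache-control value per operation:

```python
# Sketch: choose a Cache-Control header per GraphQL operation at the origin.
# Operation names and TTLs are illustrative assumptions.

CACHE_POLICIES = {
    "GetProduct": "public, max-age=900",     # mostly static: cache for 15 minutes
    "GetShoppingCart": "private, no-store",  # user-specific and highly dynamic
}

def cache_control_for(operation_name: str) -> str:
    """Return the Cache-Control header value for a GraphQL operation."""
    # Default to not caching anything we haven't explicitly allowed.
    return CACHE_POLICIES.get(operation_name, "private, no-store")
```

Every backend developer then has to keep a table like this in sync with the API, which is exactly the maintenance burden described above.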

Stellate solves this by offering fine-grained cache rules that allow you to define:

  • Which types and/or fields this rule applies to

  • Cache maxAge

  • Stale-while-revalidate

  • A scope (which defines if the presence of this type/field means the response should be cached per user)

By implementing fine-grained cache rules, you can significantly improve your cache hit rate. In the example above where you have a product and shopping cart query, you could implement different caching rules for each of the respective types to ensure both are appropriately cached. 

Lacks query normalization capabilities

Akamai's caching feature doesn't offer query normalization, which can hurt your cache hit rate if you have many queries that semantically fetch the same data but are represented as different strings.

For example, the queries { product { id name } } and {product{id name}} semantically fetch the same data. Yet because they are separate strings, cached data for one of those queries wouldn't be served for requests for the other query. 

This significantly increases cache misses.

Query normalization solves this problem by ensuring that different variations of the same query are recognized as identical, so the request retrieves the cached version of the data rather than going back to the origin server.

As a result, it helps maximize your cache hit rate.
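To illustrate the idea, here's a deliberately simplified normalizer (a spec-compliant implementation parses and re-prints the query per the GraphQL specification; this sketch only normalizes whitespace):

```python
import re

def normalize_query(query: str) -> str:
    """Collapse insignificant whitespace so equivalent queries compare equal.

    Simplified sketch: a real normalizer would parse the document and
    normalize every insignificant variation, not just whitespace.
    """
    collapsed = re.sub(r"\s+", " ", query.strip())
    # Whitespace around GraphQL punctuators is insignificant.
    return re.sub(r"\s*([{}():,])\s*", r"\1", collapsed)
```

With this, `normalize_query("{ product { id name } }")` and `normalize_query("{product{id name}}")` produce the same cache key, so both variants hit the same cache entry.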

You can technically work around this on the client side by normalizing queries before sending them, provided you fully control all the clients that fetch data from your API.

Yet this is time-consuming for your engineering team and pulls them away from improving the core value of your business.

In contrast, Stellate does offer query normalization as an out-of-the-box feature.

Specifically, it normalizes incoming GraphQL operation text based on the GraphQL specification to ensure fetching the same data will always hit the cache.

No option to parse GraphQL query responses for cache tags

Having a cache invalidation strategy is probably the most important topic when you implement any caching solution. Akamai allows users to build their own custom cache invalidation strategy, but it doesn't offer one out of the box.

For example, you could build a solution by always requesting `__typename` from the client, compiling a list of all `__typename` + key fields at the origin, and sending it back in the `Edge-Cache-Tag` header.
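A minimal sketch of that approach (assuming every object in the response carries `__typename` and an `id` key, which is an assumption about your schema):

```python
def collect_cache_tags(node, tags=None):
    """Walk a GraphQL response and collect deduplicated "Typename:id" cache tags."""
    if tags is None:
        tags = []
    if isinstance(node, dict):
        typename, obj_id = node.get("__typename"), node.get("id")
        if typename is not None and obj_id is not None:
            tag = f"{typename}:{obj_id}"
            if tag not in tags:  # deduplicate repeated entities
                tags.append(tag)
        for value in node.values():
            collect_cache_tags(value, tags)
    elif isinstance(node, list):
        for item in node:
            collect_cache_tags(item, tags)
    return tags

response = {"data": {"posts": [
    {"__typename": "Post", "id": "1", "author": {"__typename": "User", "id": "7"}},
    {"__typename": "Post", "id": "2", "author": {"__typename": "User", "id": "7"}},
]}}
edge_cache_tag = ",".join(collect_cache_tags(response))  # "Post:1,User:7,Post:2"
```

Even this tiny two-post response yields three tags; a 50-post query with authors would yield around a hundred before counting pagination metadata, which is how the 128-tag ceiling described below gets hit.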

However, even after building this out, it still isn't an ideal solution, as Akamai only supports up to 128 cache tags per object. 

This makes purging problematic. For example, if you have a query fetching 50 blog posts with their authors using Relay-style pagination (the standard pagination in GraphQL), the response would contain more than 128 objects.

Therefore, you would need over 128 cache tags to be able to purge it.

This becomes even more limiting when you use Relay-style fragment composition because operations fetching all the data for a single page quickly exceed 128 objects.

In contrast, Stellate offers cache invalidation strategies out-of-the-box.

Instead of limiting the number of cache tags, Stellate limits the total length of all cache tags combined. This allows a higher number of cache tags per response (up to two to three times as many as Akamai). 

Stellate also automatically deduplicates cache tags, making sure that no identical cache tag is applied more than once.

Caches empty list responses

Caching empty list responses is problematic as it ultimately causes the cache purge to fail.

The cache purge fails because when a GraphQL API responds with an empty list, the response does not include the `__typename` values needed to compute the cache tags. 

As a result, the stale response of an empty list would stay in the cache even after a purge.

So even when the user knows that the list has changed, they cannot purge the existing cache entry, so they are stuck showing stale data until the cache age runs out.

Stellate solves this by applying its cache rules when it finds the related types in the response. 

That means an empty list in the response will not trigger any cache rules, which in turn means that a response that just contains an empty list would not be cached.

Does not purge the cache after mutations

The fact that Akamai's caching feature doesn't purge the cache after mutations is problematic because it means you would have to implement cache purging for every mutation in the graph yourself. As a result, developers have to build their own cache invalidation strategy, which is a resource-intensive process.

Stellate provides a cache invalidation strategy for mutations out of the box.

So when it receives a mutation, it will purge all entities from the cache that it finds in the mutation response.

Queries are limited to 4096 bytes

The whole idea of GraphQL is to describe all your data requirements in one larger request rather than sending multiple smaller requests. 

Because of that, GraphQL queries are often large, and any query over 4096 bytes simply cannot be cached in Akamai. For bigger GraphQL APIs and clients, it's quite likely that you will hit this limit.

Stellate solves this problem by parsing GraphQL queries up to a size of 40,000 bytes. If you have a query larger than that, Stellate won't cache it, but it will still be proxied through to your origin. 

Lacks GraphQL caching-specific observability, change management, and developer enablement

The only observability metrics that Akamai's GraphQL caching feature offers are custom attributes. This makes it very difficult to improve your cache hit rate, as you won't have the data to identify the best opportunities for improvement.  

Akamai also has subpar documentation and lacks support, making it difficult for developers to navigate challenges. 

In contrast, because Stellate specializes in GraphQL caching, it offers GraphQL caching-specific observability metrics and change management.

This makes setup easy, and the detailed observability metrics help developers quickly identify and prioritize opportunities to improve their cache hit rate.

Developers also have access to detailed documentation and responsive support. 

Here's a more comprehensive overview of how Stellate compares to Akamai's GraphQL caching feature.

Caching

  • Fine-grained cache configuration (Akamai: ❌🛠️)

  • Query normalization (Akamai: ❌🛠️)

  • Fine-grained cache invalidation (Akamai: ❌🛠️)

  • Automatic mutation cache invalidation (Akamai: ❌🛠️)

  • Automatic persisted queries support

  • Query batching support

  • Per-user and per-group caching

  • Run at scale

  • Partial query caching (Stellate: 🔜)

  • Custom key field support (Akamai: ❌🛠️)

  • Multiple environment support

  • Debuggability

  • List invalidation (Akamai: ❌🛠️)

  • Alerts for HTTP & GraphQL errors

Observability

  • Metrics for every request

  • Per-operation aggregated metrics

  • Caching opportunity identification

  • Cache purge analytics

  • GraphQL query anonymization

  • GraphQL schema usage tracking

  • Schema breaking change prevention

  • Custom attributes

Change Management

  • Local development support

  • Cache configuration-as-code (Akamai: ❌🛠️)

  • CI/CD for cache configuration

  • Cache configuration change previews

  • 24/7/365 on-call rotation

Developer Enablement

  • Documentation (Akamai: 😕)

  • GraphQL Caching Support

  • GraphQL Caching Education & Enablement

Akamai GraphQL Caching Alternative

Akamai’s GraphQL caching feature is just that – a feature.

While it’s a better alternative than no caching at all, you’ll find that it’s still very limited: you’ll have to invest resources into building it out into a sufficient solution, and even then, managing it will be a very manual task.

This is why we created Stellate.

It’s designed specifically to solve GraphQL caching challenges and therefore comes with all of the solutions you need out-of-the-box to maximize your cache hit rate. 

You can learn more about how Stellate works by signing up or scheduling a demo today.