NOTE: This pattern expands on the FastAPI pattern, so check that out first to get up to speed.
In the FastAPI pattern, we built a Python web service that grabs a photo of Mars from the NASA Mars Curiosity rover API and resizes it for use as a wallpaper. One major issue with our implementation is that every time a user navigates to either the / or the /wallpaper route, the server has to query the NASA API. This is slow, and could potentially exhaust the NASA API rate limits if the server receives many requests.
To remedy this, we can incorporate caching. Caching simply stores the results of a function in memory, so that when the same arguments are passed to the function again, the result can be retrieved immediately from the cache rather than recomputed. In our case, responses from the NASA API can be saved in the server's memory and returned immediately upon request, without needing to query the NASA API every time.
We use a Python library called cachetools, which provides a suite of basic function-caching utilities. In this application we'll use the TTLCache, which specifies a maximum cache size and the "time to live" (TTL) of each item stored in the cache.
TTLCache works by caching function calls, up to a maximum size (the maxsize keyword argument). When a new function call is made and the cache is full, TTLCache removes the least recently used (LRU) entry to make room. TTLCache also tracks the age of each item: if an item exceeds its time to live (the ttl keyword argument), it is removed from the cache. This is useful for applications where function results change periodically, such as once per day. In our application, we'll be caching responses from the NASA Mars rover API, and TTLCache is an appropriate caching strategy since we know the NASA API updates once per day.
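To make those mechanics concrete, here is a simplified, pure-standard-library sketch of what a TTL-plus-LRU cache decorator does. This is not cachetools' actual implementation (which is more robust and thread-aware); the decorator name, sizes, and the square function below are purely illustrative:

```python
import time
from collections import OrderedDict
from functools import wraps

def ttl_cache(maxsize, ttl, timer=time.monotonic):
    """Sketch of TTLCache semantics: LRU eviction at maxsize,
    plus per-item expiry after ttl seconds."""
    def decorator(func):
        cache = OrderedDict()  # key -> (stored_at, value), oldest-used first

        @wraps(func)
        def wrapper(*args):
            now = timer()
            # Drop any entries older than ttl.
            for key in [k for k, (t, _) in cache.items() if now - t >= ttl]:
                del cache[key]
            if args in cache:
                cache.move_to_end(args)    # mark as most recently used
                return cache[args][1]
            value = func(*args)
            if len(cache) >= maxsize:
                cache.popitem(last=False)  # evict the least recently used
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(maxsize=2, ttl=60)
def square(x):
    calls.append(x)  # record each real (non-cached) computation
    return x * x

square(2); square(2); square(3); square(4)  # square(4) evicts the LRU entry
print(calls)  # [2, 3, 4] -- the second square(2) was a cache hit
```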
First, we'll add caching to the get_photo_info function on line #27. This function always returns the same value for a given day, so we set ttl to one day. Since there are no variations on this response for a given day, we can set maxsize to 1. Now, on my machine, an initial request to the / route takes ~750 milliseconds, while subsequent cached requests take only ~2 milliseconds. That's roughly a 375x speedup!
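The decorator usage looks roughly like the following sketch, assuming cachetools is installed (pip install cachetools). The body of get_photo_info here is a hypothetical stand-in for the real NASA API request, so we can see the cache working without network access:

```python
from cachetools import TTLCache, cached

ONE_DAY = 24 * 60 * 60  # ttl in seconds
nasa_calls = []          # tracks how often the (simulated) NASA request runs

@cached(cache=TTLCache(maxsize=1, ttl=ONE_DAY))
def get_photo_info():
    # Stand-in for the real requests.get(...) call to the NASA API.
    nasa_calls.append(1)
    return {"img_src": "https://example.com/mars.jpg"}

get_photo_info()
get_photo_info()
print(len(nasa_calls))  # 1 -- the second call was served from the cache
```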
We've moved the photo fetching and resizing logic from the make_wallpaper function to a separate get_photo function on line #50. We cache this function as well, but this time specify a maxsize of 64. Since the /wallpaper route is parameterized by a height argument, which changes the get_photo return value for a given day (i.e. which photo is produced), it's appropriate to cache multiple results here, as it is in most cases.
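A sketch of this second cache, again assuming cachetools is available. The body of get_photo is a hypothetical stand-in for the real fetch-and-resize logic; the point is that each distinct height gets its own cache entry, up to 64 in total:

```python
from cachetools import TTLCache, cached

ONE_DAY = 24 * 60 * 60
fetches = []  # records each real (non-cached) fetch+resize

@cached(cache=TTLCache(maxsize=64, ttl=ONE_DAY))
def get_photo(height):
    # Stand-in for fetching the NASA photo and resizing it to `height`.
    fetches.append(height)
    return f"photo-{height}px"

get_photo(1080)
get_photo(1080)  # same height -> cache hit
get_photo(1440)  # different height -> separate cache entry
print(fetches)   # [1080, 1440]
```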
With that, we've successfully added caching to our web service with a couple of decorators and a small refactor.
It's important to note that for a larger-scale production application, cachetools may not be an appropriate choice. For example, if multiple instances of our Mars wallpaper server are running, each instance will have its own cache rather than a shared one. A better approach would be to use a caching solution like Redis or memcached, which allow multiple processes to use a central cache. For our simple application, however, cachetools is sufficient, and it illustrates basic caching techniques that carry over to larger-scale implementations and alternative caching solutions.