Redis and Caching

Jessica Findley
4 min read · Aug 4, 2023

In my previous article, I talked about how Redis can be used as a database.

But today, I am going to tell you how Redis caching can be used to optimise the performance of your application.

Let’s say, for example, you have a web application backed by an SQL database. When your application makes a request to retrieve data, that request takes time — say between 75 ms and 100 ms per query, depending on the request. That doesn’t sound like much, but if your application went viral, a high volume of concurrent requests to your DB could increase it massively.

By using Redis caching, you can bring this down to under 1 ms per request, and as a result you gain two main benefits:

1 — Better user experience, because the application’s performance is much faster.

2 — Reduced load on your database servers, because Redis handles some of these application requests.
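To put rough numbers on the benefit, here is a back-of-envelope calculation using the illustrative latency figures above (100 ms vs 1 ms — these are examples, not benchmarks):

```python
# How many sequential requests a single connection can serve per second
# at each latency. Figures are illustrative, not measured benchmarks.
db_latency_s = 0.100     # ~100 ms per database round trip
cache_latency_s = 0.001  # ~1 ms per Redis cache round trip

db_rps = 1 / db_latency_s        # requests/second, database only
cache_rps = 1 / cache_latency_s  # requests/second, cache hit path

print(f"DB only:    {db_rps:.0f} req/s per connection")
print(f"With cache: {cache_rps:.0f} req/s per connection")
```

A single connection goes from roughly 10 requests per second to roughly 1,000 — a 100x difference on the hit path alone.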

So how does it work?

Well, for Redis caching you have a Redis instance that caches your data in memory (RAM), and this in-memory data can be retrieved MUCH faster than data on disk.

This Redis instance sits between your web server and your database. So let’s say a user makes a request to log in to their social media application. We need to query the database to make sure the login details are correct. But with Redis, we may not need to. Let me explain.

When the web server makes the request, it hits our Redis instance first. If Redis has the data needed for the request (known as a cache hit), it responds to the web server with that data. In this case, the database never even receives the request, because Redis knows what the web server wants and just deals with it.

In the scenario where Redis does not have the information needed for a given request (a cache miss), the request is passed on to the database to deal with.
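A minimal sketch of that hit/miss decision. A plain dict stands in for the Redis instance here so the example is self-contained; with the real client you would call something like `redis.Redis().get(key)` instead, and `query_database` is a hypothetical stand-in for your SQL lookup:

```python
cache = {}  # stands in for the Redis instance (real code: redis.Redis())

def query_database(key):
    # hypothetical stand-in for the actual SQL query
    return f"row-for-{key}"

def fetch(key):
    value = cache.get(key)           # ask Redis first
    if value is not None:
        return value, "cache hit"    # Redis answers; the DB never sees the request
    value = query_database(key)      # cache miss: fall through to the database
    return value, "cache miss"

print(fetch("user:42"))  # first request: cache miss, served by the database
```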

How does the Redis Cache instance get populated in the first place?

When you create your Redis instance, it’s initially empty. It gets its data as your application is used. If a user makes a request and Redis doesn’t have the data to respond, we know the request goes to the database instead. But when the database responds, the Redis cache is also populated with that data — meaning that if the same request is made again, Redis would have the data and would deal with the response, not the database.
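Putting the populate-on-miss step together with the lookup gives the classic cache-aside pattern. As before, a dict stands in for Redis and `query_database` is a hypothetical DB call; with redis-py you would use `r.get(key)` and `r.set(key, value)`:

```python
cache = {}      # stands in for the Redis instance
db_queries = 0  # count how often the database is actually hit

def query_database(key):
    global db_queries
    db_queries += 1
    return f"row-for-{key}"

def get(key):
    value = cache.get(key)
    if value is None:                # miss: go to the database...
        value = query_database(key)
        cache[key] = value           # ...and populate the cache on the way back
    return value

get("user:42")     # miss: hits the database and fills the cache
get("user:42")     # hit: served from the cache, DB untouched
print(db_queries)  # 1 — the database was only queried once
```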

We know cache instances store data in memory, and memory can’t hold ALL your data — so what happens then?

Well… there are a few ways to deal with this.

The first, and the most expensive, is to scale up your Redis instance with more memory.

Or you can configure the instance to evict data in a way of your choosing. These are the eviction policies you can set in the instance configuration to control how and what data is removed when memory capacity has been reached:

  • noeviction: New values aren’t saved when the memory limit is reached. When a database uses replication, this applies to the primary database.
  • allkeys-lru: Keeps most recently used keys; removes least recently used (LRU) keys.
  • allkeys-lfu: Keeps frequently used keys; removes least frequently used (LFU) keys.
  • volatile-lru: Removes least recently used keys with the expire field set to true.
  • volatile-lfu: Removes least frequently used keys with the expire field set to true.
  • allkeys-random: Randomly removes keys to make space for new data.
  • volatile-random: Randomly removes keys with the expire field set to true.
  • volatile-ttl: Removes keys with the expire field set to true and the shortest remaining time-to-live (TTL) value.
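These policies are set in your Redis configuration file (or applied to a running instance with the `CONFIG SET` command). For example, to cap memory at 256 MB and evict the least recently used key across the whole keyspace:

```
# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
```

The same policy can be applied to a live instance with `CONFIG SET maxmemory-policy allkeys-lru`.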

What if the cache is storing stale data?

If you have this issue, you can set a TTL (Time To Live) on the data stored in the Redis instance. You set a time limit in seconds, minutes or hours, and when that time is up, Redis removes the data from its cache.
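With redis-py, a TTL is a one-liner — `r.setex("session:42", 3600, token)` stores a value for one hour. Here is a self-contained sketch of the same idea, using a dict with expiry timestamps in place of Redis:

```python
import time

cache = {}  # key -> (value, expires_at); stands in for Redis with TTLs

def set_with_ttl(key, value, ttl_seconds):
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:  # TTL elapsed: treat the entry as gone
        del cache[key]
        return None
    return value

set_with_ttl("user:42", "cached-row", ttl_seconds=0.05)
print(get("user:42"))  # within the TTL: the value is returned
time.sleep(0.06)
print(get("user:42"))  # after the TTL: None, the entry has been evicted
```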

You can also use a cache worker that monitors the data in Redis against the data in your database; if anything changes, the cache worker updates the data in Redis.
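The simplest version of that idea is to invalidate (or overwrite) the cached key whenever the application itself writes to the database, so the next read repopulates the cache with fresh data. A sketch, with dicts standing in for both Redis and the database:

```python
cache = {}                                      # stands in for Redis
database = {"user:42": "old-email@example.com"} # stands in for the SQL database

def read(key):
    value = cache.get(key)
    if value is None:
        value = database[key]  # cache miss: load from the "database"
        cache[key] = value     # and populate the cache
    return value

def write(key, value):
    database[key] = value
    cache.pop(key, None)  # invalidate so the next read sees fresh data
                          # (real code: r.delete(key), or r.set(key, value))

read("user:42")                              # populates the cache
write("user:42", "new-email@example.com")    # DB updated, cache invalidated
print(read("user:42"))                       # fresh value, not the stale one
```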

Downsides of Redis Caching

Because the data is stored in memory, it can be a lot more expensive than just using your database alone, without Redis caching.

Another downside of storing data in memory is that if the server goes down, you will lose all the data in your Redis cache instance.

Redis is (mostly) single-threaded, so it executes commands one at a time, which can bottleneck performance under very heavy load. (Although it will still be much, much faster than not using a cache at all.)