From Redis to KeyDB with Nginx as a load balancer and .NET Core

Part 2 - Switching from Redis to KeyDB and running multiple KeyDB instances in Active Replica mode

Ivaylo Gluhchev
9 min read · Jun 16, 2021

This is a short series of articles describing what the research led to. Before continuing further, let me first share what this series will be and what it will not be.

It will be a quick walkthrough of a simple setup: an ASP.NET Core web API application that uses a single Redis instance for caching, which we are going to replace with two instances of KeyDB running in an Active Replica setup behind an Nginx load balancer. Along the way I will share a hint or two that you might find useful.

I would like to start with active replication.

Active replication

Let me say a few words about what active replication is, as it will be a big part of this post. Replication in a distributed system, as you know, is a way to provide consistency across redundant resources. Active replication basically means that two master nodes are replicas of one another (they have the same setup and hold copies of the same information) and sync with each other. That allows synchronous read and write operations on both of them. KeyDB introduced active replication in version 5.0. Such a setup allows load balancing of operations across the two master nodes. We will see how to set up our KeyDB nodes behind an Nginx load balancer in Part 3 of this series.

The connection between the two master nodes is of the active/active type. This allows for high availability.
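To make this more concrete, here is a minimal sketch of what enabling active replication looks like in each node's keydb.conf; these are the same settings we will pass as command-line flags later in this post:

# keydb.conf on each node
# allow this master to also act as a replica of the other node
active-replica yes
# point each node at its counterpart (use the other node's IP)
replicaof <other_node_ip> 6379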

https://docs.keydb.dev/blog/2019/08/05/blog-post/

In a failover scenario, if one of the nodes fails, the other will continue working.

If we wanted to achieve something like this in Redis, the setup would be more complicated: most likely it would have a master node, one or more replica nodes and a couple of sentinel nodes monitoring the master. In case the master node fails, the sentinels will promote a replica to master.
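For comparison, a minimal Sentinel configuration for each sentinel node might look something like this sketch (the master name, quorum and timeouts are illustrative):

# sentinel.conf - watch a master named "mymaster" at 127.0.0.1:6379;
# a quorum of 2 sentinels must agree the master is down before failover
sentinel monitor mymaster 127.0.0.1 6379 2
# consider the master down after 5 seconds without a valid reply
sentinel down-after-milliseconds mymaster 5000
# give up on a failover attempt after 60 seconds
sentinel failover-timeout mymaster 60000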

Detailed information about active replication can be found in the KeyDB documentation.

Switching from Redis to KeyDB

In Part 1 of this series we saw how to run a Redis image in a container and how to set it up to communicate with our application and store the cached data. Let's move on and see how we can switch to KeyDB. First, let's get the KeyDB image and run it in a container.

A quick note about the port that I will use - 46379. If you followed Part 1 of this series, you probably know that we currently have a Redis container (we named it redis-container) running on this port. If we want to reuse the port, we must stop that container first. If we choose to do that, we will see that moving from Redis to KeyDB requires almost no effort at all. Of course, in most production scenarios you cannot just shut down your current cache database, so another approach would be to change the configuration. For our purposes I will stop our Redis instance and then execute the command below.
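Assuming the container name from Part 1, stopping the Redis instance looks like this:

docker ps                      # confirm that redis-container is running on port 46379
docker stop redis-container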

The command that I will use is

docker run --name keydb-container -p 46379:6379 -d eqalpha/keydb

This runs the image in a container in detached mode, with host port 46379 forwarded to the container's port 6379.

If this is the first time you are running the command above, the image will be downloaded automatically since it does not exist locally.

If you have already done the step above, you can simply start your existing container using the following command: docker start <your-container-name>

We will open keydb-cli and see what it shows us.

docker exec -it keydb-container /bin/bash
keydb-cli
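Inside the CLI the familiar Redis commands work exactly as you would expect. A quick session might look something like this (the key and value are just examples):

127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set greeting "hello keydb"
OK
127.0.0.1:6379> get greeting
"hello keydb"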

As we can see, the commands are pretty much the same as the Redis commands. As we mentioned at the beginning, KeyDB is a Redis fork, so the transition from Redis to KeyDB is meant to be seamless by design. But don't just take my word for it. Let's go to our application and try to run it. We will get a response indicating that the data is returned from the app, not from the cache.

That tells us that the data also had to be added to some cache, but which one? We will not waste any more time and will check the KeyDB instance.
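Back in keydb-cli we can simply list the keys. Assuming the application caches its response under a key such as WeatherForecast (the exact key name depends on what you set in Part 1), we would see something like:

127.0.0.1:6379> keys *
1) "WeatherForecast"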

Wow, isn't that nice? We made no changes to the code, just restarted the app, and now our caching database is not Redis but KeyDB. Switching between the two is as easy as it gets. Of course, you can run KeyDB forwarded on any port you like; you will only have to change it in appsettings.json and you will be good to go.

KeyDB Active Replicas

Now that we have a new caching database running, let's move forward towards our goal of creating a simple redundant setup of two caching databases.

We will create two instances of KeyDB running in Docker containers. We will also have a dedicated Docker network to which we will add the containers. Then we will make the two instances communicate with each other by making them active replicas of one another. I prefer to start fresh, so I will stop and remove my current KeyDB container.
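If you want to start fresh as well, stopping and removing the container from the previous section looks like this:

docker stop keydb-container
docker rm keydb-container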

First we will create a Docker network and include both our KeyDB instances in it so that they will be able to communicate with one another. The command for doing that is

docker network create --subnet=<my_network_subnet> <my_network_name>
example:
docker network create --subnet=172.24.0.0/16 mynet

We create it this way so that it allows static IPs and we can assign IPs within the network.

Since we can assign IPs to our instances, let's execute the following command for each of the two instances that we want to create.

docker run --net <my_network_name> --ip <my_container_static_ip> --name <my_container_name> -p <exposed_port>:6379 -d eqalpha/keydb keydb-server --bind 0.0.0.0 --active-replica yes

example:
docker run --net mynet --ip 172.24.0.2 --name keydb-1 -p 46380:6379 -d eqalpha/keydb keydb-server --bind 0.0.0.0 --active-replica yes
docker run --net mynet --ip 172.24.0.3 --name keydb-2 -p 36380:6379 -d eqalpha/keydb keydb-server --bind 0.0.0.0 --active-replica yes

Now keydb-1 and keydb-2 are part of mynet and they have active replication enabled.

The next step is to make sure that the two nodes are aware of each other.

docker run -it --rm --net <my_network_name> eqalpha/keydb keydb-cli -h <container_A_ip> -p 6379 REPLICAOF <container_B_ip> 6379

example:
docker run -it --rm --net mynet eqalpha/keydb keydb-cli -h 172.24.0.2 -p 6379 REPLICAOF 172.24.0.3 6379
docker run -it --rm --net mynet eqalpha/keydb keydb-cli -h 172.24.0.3 -p 6379 REPLICAOF 172.24.0.2 6379

Let's explore what this active replication actually does. We will use the CLIs of the two nodes to monitor the interactions and the changes. We will verify that neither of the instances has any keys at the beginning, then set a key in keydb-1 and watch it immediately become available in keydb-2.
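Here is a sketch of what that check looks like from the host, using docker exec to reach each node's CLI (the key name is just an example):

# verify both nodes start empty
docker exec -it keydb-1 keydb-cli keys '*'
docker exec -it keydb-2 keydb-cli keys '*'
# set a key on keydb-1
docker exec -it keydb-1 keydb-cli set mykey "replicated value"
# read the same key from keydb-2 - it is already there
docker exec -it keydb-2 keydb-cli get mykey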

Now we have two KeyDB instances that replicate each other. If for some reason one of the nodes fails, we will be able to bring it back up with the commands described below. (In our case keydb-2, with the IP 172.24.0.3 that we assigned earlier, is down; to start it again and make it a replica of keydb-1, which has IP 172.24.0.2, we run:)

docker start keydb-2
docker run -it --rm --net mynet eqalpha/keydb keydb-cli -h 172.24.0.3 -p 6379 REPLICAOF 172.24.0.2 6379

Let's see how the nodes interact with our weather application. We will change our appsettings.json file and replace port 46379 with 46380, which is the port of keydb-1. When we start our application, the data will be cached in the keydb-1 node, but it will also be available in the second node (keydb-2).
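For reference, the relevant part of appsettings.json might look something like the sketch below; the exact section and key names depend on how the connection string was wired up in Part 1:

{
  "Redis": {
    "ConnectionString": "localhost:46380"
  }
}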

Running the application after changing the port will make a call to the cache server, in this case keydb-1, and try to get the data from it. Since the data does not yet exist in the cache, we will receive a response from the application indicating that the cached property is false.

Since the data does not exist in the cache, it will be added, and on the second call we will see that the data comes from the cache.

What happens is that the data is cached on keydb-1, and because there is active replication between keydb-1 and keydb-2, we are able to see it on both nodes.

Cache Expiration

Currently the data that we have cached will be stored on both nodes for quite a long time, as we have not defined a cache expiration time. It is rarely the case that you will need the same data forever. In order to manage the cache expiration time we will add two settings to our cache extensions class.

First we will add an absolute expiration time. What this means is that, no matter what, the cache entry will be invalidated after the time we set expires. In our case I will set this to 1 minute.

The other cache expiration option is to set a sliding expiration. Sliding expiration defines what happens if the cached resource is not used for a certain period. In our case I will set it to 30 seconds, which will remove the resource from the cache if it is not used within that period of time. As one can imagine, this is useful in many cases, such as not keeping a huge amount of rarely used data in an expensive cache.

So now our extension class will look something like this:


using Microsoft.Extensions.Caching.Distributed;
using System;
using System.Text.Json;
using System.Threading.Tasks;

namespace RedisVsKeyDB.Extensions
{
    public static class CacheExtensions
    {
        public static async Task<T> GetCacheValueByKeyAsync<T>(this IDistributedCache cache, string key) where T : class
        {
            // Returns null when the key is not present in the cache
            string result = await cache.GetStringAsync(key);
            if (string.IsNullOrEmpty(result))
            {
                return null;
            }

            return JsonSerializer.Deserialize<T>(result);
        }

        public static async Task SetCacheValueAsync<T>(this IDistributedCache cache, string key, T value) where T : class
        {
            var cacheEntryOptions = new DistributedCacheEntryOptions
            {
                // Remove the item from the cache after the specified duration, no matter what
                AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60),
                // Remove the item from the cache if it is not used for the specified duration
                SlidingExpiration = TimeSpan.FromSeconds(30)
            };

            string serializedValue = JsonSerializer.Serialize(value);
            await cache.SetStringAsync(key, serializedValue, cacheEntryOptions);
        }
    }
}
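To show how these extensions are used, here is a hypothetical sketch of a controller action from our weather application; the cache key and the GetForecasts helper are assumptions, and your names from Part 1 may differ:

[HttpGet]
public async Task<IActionResult> Get()
{
    const string cacheKey = "WeatherForecast"; // hypothetical cache key
    // Try the distributed cache (KeyDB) first
    var cached = await _cache.GetCacheValueByKeyAsync<WeatherForecast[]>(cacheKey);
    if (cached != null)
    {
        return Ok(new { cached = true, data = cached });
    }

    // Cache miss: fetch the data and store it with the expiration options set above
    var data = GetForecasts(); // hypothetical data source
    await _cache.SetCacheValueAsync(cacheKey, data);
    return Ok(new { cached = false, data });
}

Here _cache is an IDistributedCache injected through the controller's constructor.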

Part 3 - In the third part we will put the two KeyDB instances behind a load balancer, for which Nginx will come to our aid.
