Implementing Segmented LRU with .NET & Redis

Segmented LRU (Least Recently Used) is a hybrid cache eviction technique that combines the strengths of LRU and LFU (Least Frequently Used): items are evicted based on both how recently and how frequently they are accessed. Redis, popular as it is, doesn’t natively support Segmented LRU, so it has to be implemented on the application side. In this guide, I’ll demonstrate how to implement Segmented LRU using .NET Core and Redis.

💡 Recommendation: Before diving into this post, I recommend reading my previous blog on cache eviction techniques to understand the underlying concepts of LRU, LFU, and other strategies.


🚀 Demo First

Let’s dive into a demo to understand how the implementation works.

📋 Pre-requisites:

  • Docker Engine
  • Visual Studio / VS Code

⚙️ Steps to Run:

  • Start a Redis instance with Docker Engine (for example, docker run -p 6379:6379 redis).
  • Open the solution in Visual Studio / VS Code and run the API project.
  • Exercise the endpoints below with any HTTP client (curl, Postman, or the Swagger UI if enabled).

📚 Available Endpoints

  • POST /User: Create a new entry in the cache.
  • GET /User/{id}: Retrieve a user from the cache by its key.
  • GET /Cache: Check the current items in the cache (helpful for tracking what’s in the hot and cold segments).
  • DELETE /Cache: Clear the cache (useful for resetting during testing).

🛠️ Assumptions:

  • Hot Cache Capacity: 2 items
  • Cold Cache Capacity: 3 items
  • Promotion Threshold: 1 access before an item moves from the cold to the hot cache (see the sketch after this list)
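
In the sample service these limits are plain configuration. A minimal sketch of how they might be expressed (the type and property names here are mine, for illustration, not necessarily the repo’s):

// Illustrative options type mirroring the assumptions above.
public class SegmentedLruCacheOptions
{
    public int HotCacheCapacity { get; set; } = 2;   // max items in the hot segment
    public int ColdCacheCapacity { get; set; } = 3;  // max items in the cold segment
    public int PromotionThreshold { get; set; } = 1; // accesses before cold-to-hot promotion
}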

🧪 Simulate Segmented LRU

Step 1: Create Initial Users

Use the POST /User endpoint to create 3 users. Here’s a sample request:

{
  "id": 1,
  "name": "Haaland"
}

Repeat this request for three users with unique IDs (1, 2, 3) to populate the cold cache, then verify the entries with the GET /Cache endpoint. You should see:

{
  "coldCacheKeys": [
    { "key": "1", "frequency": 0 },
    { "key": "2", "frequency": 0 },
    { "key": "3", "frequency": 0 }
  ],
  "hotCacheKeys": []
}

Step 2: Access Users and Promote Them to Hot Cache 🔥

Access users “1” and “3” using GET /User/{id}. Each access raises the item’s frequency to the promotion threshold of 1, so both are promoted to the hot cache (the LFU part of the policy). The updated cache will now look like this:

{
  "coldCacheKeys": [
    { "key": "2", "frequency": 0 }
  ],
  "hotCacheKeys": [
    { "key": "1", "frequency": 1 },
    { "key": "3", "frequency": 1 }
  ]
}

Step 3: Fill Up Both Caches 💯

Add two more users using the POST /User endpoint. The cache will be full at this point, with the following state:

{
  "coldCacheKeys": [
    { "key": "2", "frequency": 0 },
    { "key": "4", "frequency": 0 },
    { "key": "5", "frequency": 0 }
  ],
  "hotCacheKeys": [
    { "key": "1", "frequency": 1 },
    { "key": "3", "frequency": 1 }
  ]
}

Step 4: Trigger Demotion Based on LRU

Now access user “2” using the GET /User/{id} endpoint. This promotes “2” to the hot cache, but since the hot cache is full, user “1” (the least recently used item in the hot segment) is demoted back to the cold cache:

{
  "coldCacheKeys": [
    { "key": "1", "frequency": 1 },
    { "key": "4", "frequency": 0 },
    { "key": "5", "frequency": 0 }
  ],
  "hotCacheKeys": [
    { "key": "3", "frequency": 1 },
    { "key": "2", "frequency": 1 }
  ]
}

Step 5: Evict Cold Cache Items 🥶

Finally, add another user. The cold cache is already full, so its least recently used item is evicted. That item is user “1”: although it was only just demoted, its last access (in Step 2) predates the creation of users “4” and “5”:

{
  "coldCacheKeys": [
    { "key": "4", "frequency": 0 },
    { "key": "5", "frequency": 0 },
    { "key": "6", "frequency": 0 }
  ],
  "hotCacheKeys": [
    { "key": "3", "frequency": 1 },
    { "key": "2", "frequency": 1 }
  ]
}

🪧 Code Walkthrough

The full code is available here. The Redis library used is StackExchange.Redis. The code is structured into two main layers:

1. API Layer 🧩

  • User Controller: Handles user-related API calls (a minimal sketch follows below).
  • Cache Controller: Manages cache operations such as inspecting and clearing the segments.
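
As a rough sketch, the user controller just delegates to the cache service. The interface name, the User record, and the action shapes below are my assumptions for illustration, not the repo’s exact code:

using Microsoft.AspNetCore.Mvc;

public record User(int Id, string Name);

// Hypothetical service contract the controller depends on.
public interface ISegmentedLruCache<T>
{
    Task SetAsync(string key, T value);
    Task<T?> GetAsync(string key);
}

[ApiController]
[Route("[controller]")]
public class UserController : ControllerBase
{
    private readonly ISegmentedLruCache<User> _cache;

    public UserController(ISegmentedLruCache<User> cache) => _cache = cache;

    [HttpPost]
    public async Task<IActionResult> Create(User user)
    {
        await _cache.SetAsync(user.Id.ToString(), user); // new entries land in the cold segment
        return Ok(user);
    }

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(string id)
    {
        var user = await _cache.GetAsync(id); // bumps frequency; may trigger promotion
        return user is null ? NotFound() : Ok(user);
    }
}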

2. Service Layer ⚙️

  • Promotion Logic: Promotes cache items to the hot cache if they meet the frequency threshold.
  • Demotion Logic: Demotes items from the hot cache to the cold cache when capacity is exceeded.
  • Eviction Logic: Evicts the least recently used items when the cache exceeds capacity.
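
Every snippet in the rest of this walkthrough lives in a single cache service class. For context, here is a sketch of the state it holds and the assumed shape of a cache entry; the field names match the snippets below, while the Redis hash names and the constructor wiring are my assumptions:

using Microsoft.Extensions.Logging;
using StackExchange.Redis;

// Assumed shape of a cache entry (the repo's actual model may differ).
public class CacheItem<T>
{
    public string Key { get; set; } = "";
    public T? Value { get; set; }
    public int Frequency { get; set; }         // LFU signal: access count
    public DateTime LastAccessed { get; set; } // LRU signal: recency
}

public class SegmentedLruCacheService<T>
{
    private readonly IDatabase _redis;                  // StackExchange.Redis database handle
    private readonly ILogger _logger;
    private readonly string _hotCache = "cache:hot";    // Redis hash holding the hot segment
    private readonly string _coldCache = "cache:cold";  // Redis hash holding the cold segment
    private readonly int _hotCacheCapacity = 2;
    private readonly int _coldCacheCapacity = 3;
    private readonly int _promotionThreshold = 1;

    public SegmentedLruCacheService(IConnectionMultiplexer mux, ILogger<SegmentedLruCacheService<T>> logger)
    {
        _redis = mux.GetDatabase();
        _logger = logger;
    }

    // ...promotion, demotion, and eviction methods shown below
}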

At each access, the cache checks if an item needs promotion to the hot cache:

private async Task PromoteIfRequired(CacheItem<T> cacheItem)
{
    // Promote once the item's access count reaches the configured threshold.
    if (cacheItem.Frequency >= _promotionThreshold)
    {
        await PromoteToHotCacheAsync(cacheItem);
    }
}
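
Where does Frequency come from? The read path isn’t shown in this walkthrough, so the following is a sketch under the assumptions above: on a cold-segment hit, bump the LFU and LRU signals, persist the item, then run the promotion check.

public async Task<T?> GetAsync(string key)
{
    var raw = await _redis.HashGetAsync(_coldCache, key);
    if (raw.HasValue)
    {
        var item = JsonConvert.DeserializeObject<CacheItem<T>>(raw.ToString())!;
        item.Frequency++;                     // LFU signal
        item.LastAccessed = DateTime.UtcNow;  // LRU signal
        await _redis.HashSetAsync(_coldCache, key, JsonConvert.SerializeObject(item));
        await PromoteIfRequired(item);
        return item.Value;
    }

    // A hot-segment hit is handled the same way (frequency and recency are
    // still updated), just without the promotion check.
    return default;
}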

The promotion logic moves the item into the hot segment and then rebalances capacity, demoting or evicting items if needed:

private async Task PromoteToHotCacheAsync(CacheItem<T> cacheItem)
{
    _logger.LogInformation($"Promoting {cacheItem.Key} to hot segment");

    // Write to the hot hash first, then remove from cold, so the item
    // is never absent from both segments.
    await _redis.HashSetAsync(_hotCache, cacheItem.Key, JsonConvert.SerializeObject(cacheItem));
    await _redis.HashDeleteAsync(_coldCache, cacheItem.Key);

    // The hot segment may now exceed capacity; rebalance if needed.
    await CheckCapacityAsync();
}

The eviction logic checks both segments against their capacities: hot-cache overflow demotes the least recently used item into the cold cache, while cold-cache overflow evicts it outright. Checking the hot segment first ensures that a freshly demoted item is counted when the cold segment is checked:

private async Task CheckCapacityAsync()
{
    // Check the hot cache first: an overflowing hot segment demotes its
    // least recently used item into the cold segment.
    var hotCacheCount = await _redis.HashLengthAsync(_hotCache);
    if (hotCacheCount > _hotCacheCapacity)
    {
        var oldestItem = await GetOldestItem(_hotCache);
        if (oldestItem != null)
        {
            _logger.LogInformation($"Evicting {oldestItem.Key} from hot cache");
            await _redis.HashDeleteAsync(_hotCache, oldestItem.Key);
            await _redis.HashSetAsync(_coldCache, oldestItem.Key, JsonConvert.SerializeObject(oldestItem));
        }
    }

    // Check the cold cache afterwards, so any item just demoted from hot is
    // counted; an overflowing cold segment evicts its LRU item entirely.
    var coldCacheCount = await _redis.HashLengthAsync(_coldCache);
    if (coldCacheCount > _coldCacheCapacity)
    {
        var oldestItem = await GetOldestItem(_coldCache);
        if (oldestItem != null)
        {
            _logger.LogInformation($"Evicting {oldestItem.Key} from cold cache");
            await _redis.HashDeleteAsync(_coldCache, oldestItem.Key);
        }
    }
}
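
GetOldestItem isn’t shown in the walkthrough; here is a minimal sketch of what it could look like (my assumption, not the repo’s exact code), given that LastAccessed is refreshed on every read and write:

private async Task<CacheItem<T>?> GetOldestItem(string cacheKey)
{
    var entries = await _redis.HashGetAllAsync(cacheKey);

    // O(n) scan per eviction, which is fine at these tiny capacities; a Redis
    // sorted set scored by access time would scale better.
    return entries
        .Select(e => JsonConvert.DeserializeObject<CacheItem<T>>(e.Value.ToString()))
        .OrderBy(item => item!.LastAccessed)
        .FirstOrDefault();
}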

📈 Summary

In this demo, you saw how to implement a Segmented LRU mechanism in .NET Core using Redis. By dividing the cache into hot and cold segments, we can balance both recency and frequency efficiently.