In modern systems, caching is one of the most powerful techniques to improve performance. Systems like Redis, Memcached, browsers, CDNs, and databases heavily rely on caching to reduce latency and database load.
But caching introduces an important problem.
Memory is limited.
When the cache becomes full, the system must decide:
Which cached item should be removed to make space for new data?

This decision-making strategy is called a cache eviction policy.
A cache eviction policy determines which item gets evicted when the cache reaches its maximum capacity.
Why Cache Eviction Is Necessary
Suppose we have a cache with capacity:

Cache size = 3

Now imagine the following requests:

A
B
C

Cache becomes:

[A, B, C]

Now a new request arrives:

D

To store D, one of the existing entries must be removed.

[A, B, C] -> ? -> insert D

The rule used to choose the victim entry is the eviction policy.
What Is Cache Eviction?
Cache eviction = the process of removing items from cache when it becomes full.
Since cache memory is finite:
- New data must replace old data
- The eviction policy decides what to remove
Wrong eviction leads to:
- low hit rate
- high DB load
- latency spikes
- cache pollution
What Makes a Good Eviction Policy?
An ideal eviction policy should keep in cache the data that is:
- Frequently used
- Recently accessed
- Expensive to recompute
- Likely to be requested again
The goal is to maximize cache hit rate.
Cache Hit → data found in cache
Cache Miss → data fetched from database/disk

Higher hit rate = faster system
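This hit/miss flow is commonly implemented with the cache-aside pattern: check the cache, and only go to the database on a miss. A minimal sketch, where `get_user`, `db`, and the plain-dict `cache` are hypothetical stand-ins for illustration:

```python
def get_user(user_id, cache, db):
    """Cache-aside lookup: try the cache first, fall back to the database."""
    value = cache.get(user_id)
    if value is not None:
        return value          # cache hit: no database round trip
    value = db[user_id]       # cache miss: fetch from the slow store
    cache[user_id] = value    # populate the cache for future requests
    return value

db = {"u1": "Alice"}
cache = {}
get_user("u1", cache, db)  # miss: reads the database
get_user("u1", cache, db)  # hit: served from the cache
```

The second call never touches `db`, which is exactly what a higher hit rate buys you.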
Core Cache Eviction Policies
Below are the main strategies for deciding what to keep and what to remove as data comes in. These policies decide which items stay in the cache and which ones get replaced when it fills up.
1 Least Recently Used (LRU)
Evict items that haven't been used recently; the item that was accessed longest ago is removed first.
It is often implemented with a hash map plus a linked list (or a priority queue) that tracks the access order.
“Recent past predicts near future”
Example:
Cache size = 3
Access sequence:
A -> B -> C
Cache: [A, B, C]

Now access:

A

This makes A the most recently used.

Cache order (recent -> old):

[A, C, B]

Now insert:

D

The least recently used item is B.

Evict B
Cache -> [A, C, D]

Used In:
- Redis
- In-process caches
- CDN edge caches
Advantages:
- Works great for:
  - Web traffic
  - API responses
  - Hot user data
Disadvantages:
- Slightly complex implementation
- Requires tracking usage order
- Performs poorly on:
  - One-time sequential scans
  - Looping over large datasets
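The walkthrough above can be sketched in a few lines of Python using the standard library's `OrderedDict`, which remembers insertion order and lets us move a key to the end on access. The `LRUCache` name and its API are illustrative, not from any particular library:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: most recently used keys sit at the end."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None              # cache miss
        self.data.move_to_end(key)   # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(3)
for k in "ABC":
    cache.put(k, k)
cache.get("A")       # A becomes most recently used
cache.put("D", "D")  # evicts B, the least recently used
print(list(cache.data))  # ['C', 'A', 'D']
```

This reproduces the example above: after accessing A, inserting D evicts B.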
2 MRU (Most Recently Used)
MRU evicts the most recently accessed item, i.e., the entry with the newest access timestamp.
This might sound counterintuitive, but it works well in specific scenarios.
Example:
We are scanning a large dataset.
In such workloads, recently accessed items are unlikely to be used again.
So evicting the most recent item improves performance.
MRU is rarely used alone but appears in specialized systems.
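A minimal MRU sketch: track only the most recently touched key and evict it when a new item needs space. The `MRUCache` name and single `most_recent` pointer are illustrative simplifications, not a production design:

```python
class MRUCache:
    """Minimal MRU cache: evicts the most recently accessed key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.most_recent = None  # key touched by the last get/put

    def get(self, key):
        if key not in self.data:
            return None
        self.most_recent = key
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            del self.data[self.most_recent]  # evict most recently used
        self.data[key] = value
        self.most_recent = key
```

During a sequential scan, every insert evicts the item just touched, leaving the older (and possibly still useful) entries in place.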
3 Least Frequently Used (LFU)
Evict items that are used least often, even if they were accessed recently.
“Popularity over time matters more than recency”
Example:
Access sequence:
A A A
B B
C

Frequency table:

A -> 3
B -> 2
C -> 1

If the cache becomes full and a new item arrives:

Evict C

Because it was used the least.
Used In:
- ML feature caches
- Recommendation systems
- Search query caches
Advantages:
- Keeps frequently used items longer
- Works well when access patterns are stable
Disadvantages:
- Slow to adapt to sudden traffic changes
- Requires counters (more memory)
- Can keep stale “formerly popular” items forever
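The frequency-table idea above can be sketched as follows. `LFUCache` and its linear-scan eviction are illustrative simplifications; real LFU implementations use more efficient structures to find the minimum count:

```python
from collections import Counter

class LFUCache:
    """Minimal LFU cache: evicts the key with the lowest access count."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = Counter()  # the frequency table

    def get(self, key):
        if key not in self.data:
            return None
        self.counts[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the least frequently used key (linear scan for clarity)
            victim, _ = min(self.counts.items(), key=lambda kv: kv[1])
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] += 1
```

With the access pattern from the example (A three times, B twice, C once), inserting D evicts C.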
4 First in First Out (FIFO)
Evict the oldest item first. Simple, but rarely the right choice.
The item that was inserted first will be evicted first.
Example:
Cache size = 3
Insert A -> [A]
Insert B -> [A, B]
Insert C -> [A, B, C]
Insert D

Since A was inserted first:

Evict A
Cache -> [B, C, D]

Used In:
- Very simple systems
- Streaming buffers
Advantages:
- Easiest to implement
- Constant time operations
Disadvantages:
- Does not consider usage frequency
- Very poor hit rate in practice
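FIFO maps naturally onto a queue of insertion order, which is why it is so easy to implement. A minimal sketch (`FIFOCache` is a hypothetical name):

```python
from collections import deque

class FIFOCache:
    """Minimal FIFO cache: evicts in insertion order, ignoring accesses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.order = deque()  # insertion order, oldest on the left

    def get(self, key):
        return self.data.get(key)  # lookups never change eviction order

    def put(self, key, value):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                oldest = self.order.popleft()
                del self.data[oldest]  # evict the oldest insertion
            self.order.append(key)
        self.data[key] = value
```

Note that `get` does no bookkeeping at all; that is what gives FIFO its constant-time simplicity and also its poor hit rate.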
5 Random Eviction
Random eviction simply removes a random item from the cache.
Example:
[A, B, C]
Insert D
Evict → random(A/B/C)

Advantages:
- Very simple
- Extremely fast
- No tracking overhead
Disadvantages:
- Poor hit rate compared to smarter policies
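Random eviction needs almost no bookkeeping, as this short sketch shows (`RandomCache` is an illustrative name):

```python
import random

class RandomCache:
    """Minimal random-eviction cache: removes an arbitrary key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = random.choice(list(self.data))  # pick any key to evict
            del self.data[victim]
        self.data[key] = value
```

No access order, no counters: the only cost of eviction is picking a key.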
6 Time-To-Live (TTL)
Each entry carries an expiration time, so the item expires after a set interval (e.g., 5 minutes). TTL is great for data that can go stale, like API responses.
Example:
Cache entry:
UserProfile → expires in 5 minutes

After the TTL expires:

Entry is automatically removed

Used Everywhere:
- Redis
- CDNs
- Browser caching
- API caching
- DNS caching
TTL ensures that cached data does not become stale.
Advantages:
- Prevents stale data
- Guarantees eventual cleanup
Disadvantages:
- Alone, it does not handle memory pressure
- Can cause cache avalanche
Best Practice:
Use TTL + LRU together: TTL handles freshness, while LRU handles memory pressure.
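One way to combine the two, as a rough sketch: LRU ordering bounds memory, while a per-entry expiry timestamp enforces TTL, checked lazily on read. The class and parameter names are assumptions for illustration:

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """LRU cache whose entries also expire after ttl_seconds."""
    def __init__(self, capacity, ttl_seconds):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.data = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self.data[key]          # TTL: lazily drop expired entries
            return None
        self.data.move_to_end(key)      # LRU: mark as most recently used
        return value

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = (value, time.monotonic() + self.ttl)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # LRU eviction under memory pressure
```

An entry can leave the cache either way: LRU eviction when capacity is hit, or TTL expiry when it has simply sat too long.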