Overview
Redis is an in-memory database, so careful memory management is essential. This guide covers memory allocation, the eviction policies applied when the memory limit is reached, and the related configuration options.
Memory Allocation
Allocator
Redis uses a custom memory allocator wrapper (zmalloc.h) that tracks total memory usage:
// Returns total allocated memory
size_t zmalloc_used_memory();
// Allocate with usable size tracking
void *zmalloc_usable(size_t size, size_t *usable);
The allocator can be:
- jemalloc (default, recommended)
- tcmalloc
- libc malloc
jemalloc provides better fragmentation characteristics and performance for Redis workloads.
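To make the tracking idea concrete, here is a minimal sketch (not the actual zmalloc implementation) that prefixes each allocation with its size and keeps a global counter, which is essentially what zmalloc_used_memory() reports:

```c
#include <stdlib.h>

/* Minimal sketch, NOT the real zmalloc: prefix each allocation with its
 * size and keep a global counter (single-threaded here; Redis updates
 * the counter atomically). */
static size_t used_memory = 0;

static void *zmalloc_sketch(size_t size) {
    size_t *ptr = malloc(sizeof(size_t) + size);   /* header + payload */
    if (ptr == NULL) return NULL;
    *ptr = size;                                   /* remember the size */
    used_memory += size;
    return ptr + 1;                                /* hand back the payload */
}

static void zfree_sketch(void *p) {
    if (p == NULL) return;
    size_t *ptr = (size_t *)p - 1;                 /* step back to the header */
    used_memory -= *ptr;
    free(ptr);
}

static size_t zmalloc_used_memory_sketch(void) { return used_memory; }
```

When built against jemalloc, the real allocator asks the library for each allocation's usable size instead of storing a header, so accounting reflects what was actually reserved.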
Memory Tracking
Redis tracks memory for:
- Dataset: Keys and values
- Overhead: Server structures, client buffers
- Replication Buffers: Backlog and replica output
- AOF Buffer: Pending writes
- Module Memory: Redis module allocations
Memory Not Counted for Eviction
From evict.c:310-358:
size_t freeMemoryGetNotCountedMemory(void) {
    size_t overhead = 0;
    // Replication buffer exceeding backlog size
    if ((long long)server.repl_buffer_mem > server.repl_backlog_size) {
        size_t extra_approx_size =
            (server.repl_backlog_size/PROTO_REPLY_CHUNK_BYTES + 1) *
            (sizeof(replBufBlock)+sizeof(listNode));
        size_t counted_mem = server.repl_backlog_size + extra_approx_size;
        if (server.repl_buffer_mem > counted_mem) {
            overhead += (server.repl_buffer_mem - counted_mem);
        }
    }
    // AOF buffer
    if (server.aof_state != AOF_OFF) {
        overhead += sdsAllocSize(server.aof_buf);
    }
    return overhead;
}
Replication and AOF buffers are excluded from eviction calculations to prevent feedback loops where eviction creates more replication/AOF data, triggering more eviction.
Maxmemory Configuration
Setting Memory Limit
From redis.conf:
# Set maximum memory (examples):
maxmemory 4gb
maxmemory 4096mb
maxmemory 4294967296 # bytes
# No limit (default):
# maxmemory 0
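The suffix semantics follow the conventions documented in redis.conf: kb/mb/gb are powers of 1024, while k/m/g are powers of 1000. A hypothetical parser (parse_memory_size is not a Redis function) to illustrate the mapping:

```c
#include <stdlib.h>
#include <strings.h>

/* Hypothetical parser for the size suffixes documented in redis.conf:
 * kb/mb/gb are powers of 1024, k/m/g powers of 1000, and a bare number
 * is bytes. Returns -1 on an unknown suffix. */
static long long parse_memory_size(const char *s) {
    char *end;
    long long val = strtoll(s, &end, 10);
    if (*end == '\0')               return val;                       /* bytes */
    if (strcasecmp(end, "k") == 0)  return val * 1000LL;
    if (strcasecmp(end, "kb") == 0) return val * 1024LL;
    if (strcasecmp(end, "m") == 0)  return val * 1000LL * 1000;
    if (strcasecmp(end, "mb") == 0) return val * 1024LL * 1024;
    if (strcasecmp(end, "g") == 0)  return val * 1000LL * 1000 * 1000;
    if (strcasecmp(end, "gb") == 0) return val * 1024LL * 1024 * 1024;
    return -1;
}
```

This is why the three maxmemory examples above are equivalent: 4gb = 4096mb = 4294967296 bytes.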
Memory Limit Check:
From evict.c:384-420:
int getMaxmemoryState(size_t *total, size_t *logical,
                      size_t *tofree, float *level) {
    size_t mem_reported, mem_used, mem_tofree;
    mem_reported = zmalloc_used_memory();
    if (!server.maxmemory) {
        if (level) *level = 0;
        return C_OK; // No limit
    }
    // Subtract non-counted memory
    mem_used = mem_reported;
    size_t overhead = freeMemoryGetNotCountedMemory();
    mem_used = (mem_used > overhead) ? mem_used-overhead : 0;
    // Compute ratio
    if (level) *level = (float)mem_used / (float)server.maxmemory;
    if (mem_used <= server.maxmemory) return C_OK;
    // Over limit: compute amount to free
    mem_tofree = mem_used - server.maxmemory;
    if (tofree) *tofree = mem_tofree;
    return C_ERR;
}
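A worked example of the same arithmetic with hypothetical numbers (compute_tofree is an illustrative helper, not Redis code): a server reporting 120 MB used with 5 MB of non-counted buffers against a 100 MB limit is at level 1.15 and must free 15 MB.

```c
#include <stddef.h>

/* Illustrative helper mirroring the logic above; the numbers used to
 * exercise it are hypothetical, not from a real server. */
static size_t compute_tofree(size_t mem_reported, size_t overhead,
                             size_t maxmemory, float *level) {
    size_t mem_used = (mem_reported > overhead) ? mem_reported - overhead : 0;
    if (level) *level = (float)mem_used / (float)maxmemory;
    if (mem_used <= maxmemory) return 0;    /* under the limit */
    return mem_used - maxmemory;            /* bytes that must be freed */
}
```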
Eviction Policies
When maxmemory is reached, Redis can evict keys based on configured policy.
Available Policies
From server.h:673-692:
// Policy flags
#define MAXMEMORY_FLAG_LRU (1<<0) // Least Recently Used
#define MAXMEMORY_FLAG_LFU (1<<1) // Least Frequently Used
#define MAXMEMORY_FLAG_ALLKEYS (1<<2) // All keys eligible
#define MAXMEMORY_FLAG_LRM (1<<3) // Least Recently Modified
// Policies
#define MAXMEMORY_VOLATILE_LRU ((0<<8)|MAXMEMORY_FLAG_LRU)
#define MAXMEMORY_VOLATILE_LFU ((1<<8)|MAXMEMORY_FLAG_LFU)
#define MAXMEMORY_VOLATILE_TTL (2<<8)
#define MAXMEMORY_VOLATILE_RANDOM (3<<8)
#define MAXMEMORY_ALLKEYS_LRU ((4<<8)|MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_ALLKEYS)
#define MAXMEMORY_ALLKEYS_LFU ((5<<8)|MAXMEMORY_FLAG_LFU|MAXMEMORY_FLAG_ALLKEYS)
#define MAXMEMORY_ALLKEYS_RANDOM ((6<<8)|MAXMEMORY_FLAG_ALLKEYS)
#define MAXMEMORY_NO_EVICTION (7<<8)
#define MAXMEMORY_VOLATILE_LRM ((8<<8)|MAXMEMORY_FLAG_LRM)
#define MAXMEMORY_ALLKEYS_LRM ((9<<8)|MAXMEMORY_FLAG_LRM|MAXMEMORY_FLAG_ALLKEYS)
Configuration:
# Policy options:
maxmemory-policy noeviction # Return errors, don't evict
maxmemory-policy allkeys-lru # Evict least recently used
maxmemory-policy allkeys-lfu # Evict least frequently used
maxmemory-policy volatile-lru # LRU among keys with TTL
maxmemory-policy volatile-lfu # LFU among keys with TTL
maxmemory-policy allkeys-random # Random eviction
maxmemory-policy volatile-random # Random among keys with TTL
maxmemory-policy volatile-ttl # Evict keys with shortest TTL
Policy Selection Guide
| Use Case | Recommended Policy | Reason |
|---|---|---|
| Cache with general access | allkeys-lru | Classic LRU caching |
| Cache with frequency bias | allkeys-lfu | Protects popular items |
| Explicit TTLs only | volatile-lru | Respect application logic |
| Database mode | noeviction | Prevent data loss |
The noeviction policy returns errors for write commands once the memory limit is reached. Use it only when your application can handle write failures.
LRU (Least Recently Used)
LRU Clock
From evict.c:54-71:
// LRU clock with reduced precision
unsigned int getLRUClock(void) {
    return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}

unsigned int LRU_CLOCK(void) {
    unsigned int lruclock;
    if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {
        lruclock = server.lruclock; // Use cached value
    } else {
        lruclock = getLRUClock(); // Compute fresh
    }
    return lruclock;
}
Idle Time Calculation:
unsigned long long estimateObjectIdleTime(robj *o) {
    unsigned long long lruclock = LRU_CLOCK();
    if (lruclock >= o->lru) {
        return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
    } else {
        // Handle wrap-around
        return (lruclock + (LRU_CLOCK_MAX - o->lru)) *
               LRU_CLOCK_RESOLUTION;
    }
}
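A standalone version of the wrap-around arithmetic, assuming the standard values LRU_BITS = 24 and LRU_CLOCK_RESOLUTION = 1000 ms (idle_time is an illustrative helper that takes the clock as a parameter so both branches are easy to exercise):

```c
#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1 << LRU_BITS) - 1)   /* clock wraps past this value */
#define LRU_CLOCK_RESOLUTION 1000             /* one tick per second (in ms) */

/* Same arithmetic as estimateObjectIdleTime(), with the current clock
 * passed in explicitly. Result is in milliseconds. */
static unsigned long long idle_time(unsigned int lruclock, unsigned int obj_lru) {
    if (lruclock >= obj_lru)
        return (unsigned long long)(lruclock - obj_lru) * LRU_CLOCK_RESOLUTION;
    /* The 24-bit clock wrapped past zero since the object was touched. */
    return (unsigned long long)(lruclock + (LRU_CLOCK_MAX - obj_lru)) *
           LRU_CLOCK_RESOLUTION;
}
```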
Approximated LRU
Redis uses an approximated LRU algorithm:
- Sample N random keys (default: 5)
- Select key with oldest access time
- Evict selected key
- Repeat until under memory limit
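The steps above can be sketched as a toy model in which each "key" is reduced to its idle time, zero marks an evicted slot, and every eviction frees one unit of memory (approx_lru_evict is illustrative; real Redis frees the key's actual footprint and keeps a pool of candidates, described below):

```c
#include <stdlib.h>

/* Toy model of approximated LRU: sample `samples` random slots, evict
 * the one with the largest idle time, repeat until `used` <= `limit`. */
static int approx_lru_evict(unsigned int *idle, int nkeys,
                            int samples, int used, int limit) {
    int evictions = 0;
    while (used > limit) {
        int best = -1;
        for (int i = 0; i < samples; i++) {
            int j = rand() % nkeys;                  /* random sampling */
            if (idle[j] == 0) continue;              /* already evicted */
            if (best == -1 || idle[j] > idle[best]) best = j;
        }
        if (best == -1) break;                       /* every sample was gone */
        idle[best] = 0;                              /* evict the oldest sample */
        used--;
        evictions++;
    }
    return evictions;
}
```

Because only a sample is inspected per round, the evicted key is usually, but not always, the globally oldest one; that is the accuracy/CPU trade-off that maxmemory-samples controls.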
Configuration:
# Sample size for LRU/LFU algorithms
maxmemory-samples 5
Increasing maxmemory-samples improves eviction accuracy but costs more CPU per eviction. Values of 5-10 provide a good balance.
LFU (Least Frequently Used)
LFU Implementation
From evict.c:228-260, LFU uses 24 bits split into:
     16 bits        8 bits
+----------------+----------+
|  Access Time   | Counter  |
+----------------+----------+
- Access Time: Reduced precision timestamp (minutes)
- Counter: Logarithmic access frequency (0-255)
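The split can be expressed as plain shift/mask helpers over the 24-bit field (the helper names below are hypothetical):

```c
#include <stdint.h>

/* Hypothetical shift/mask helpers for the 24-bit LFU layout: the high
 * 16 bits hold the last-decrement time in minutes, the low 8 bits hold
 * the logarithmic counter. */
static uint32_t lfu_pack(uint16_t minutes, uint8_t counter) {
    return ((uint32_t)minutes << 8) | counter;
}
static uint16_t lfu_ldt(uint32_t lru)     { return (uint16_t)(lru >> 8); }
static uint8_t  lfu_counter(uint32_t lru) { return (uint8_t)(lru & 255); }
```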
LFU Counter
Logarithmic Increment:
From evict.c:281-289:
uint8_t LFULogIncr(uint8_t counter) {
    if (counter == 255) return 255; // Saturated
    double r = (double)rand()/RAND_MAX;
    double baseval = counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    double p = 1.0/(baseval*server.lfu_log_factor+1);
    if (r < p) counter++;
    return counter;
}
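The probability that a hit advances the counter is p = 1/((counter - LFU_INIT_VAL) * lfu_log_factor + 1), so higher counters advance exponentially more slowly. A sketch assuming the standard LFU_INIT_VAL of 5:

```c
#include <stdint.h>

#define LFU_INIT_VAL 5   /* counter value given to new objects */

/* The increment probability from LFULogIncr above. With the default
 * lfu_log_factor of 10, a counter at 100 advances roughly once every
 * 951 hits; that is what makes the 8-bit counter logarithmic. */
static double lfu_incr_probability(uint8_t counter, int lfu_log_factor) {
    double baseval = (double)counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    return 1.0 / (baseval * lfu_log_factor + 1);
}
```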
Configuration:
# LFU logarithmic factor (default 10)
# Higher = more sensitive to access patterns
lfu-log-factor 10
# Decay time in minutes (default 1)
lfu-decay-time 1
LFU Decay
From evict.c:301-308:
unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;
    unsigned long counter = o->lru & 255;
    unsigned long num_periods = server.lfu_decay_time ?
        LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}
The counter decreases by one for every elapsed lfu-decay-time period (in minutes), saturating at zero.
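A standalone version of the decay rule (lfu_decay is an illustrative helper that takes the elapsed minutes as a parameter):

```c
/* Standalone version of the decay rule: the counter drops by one per
 * elapsed decay period, saturating at zero; decay_time 0 disables it. */
static unsigned long lfu_decay(unsigned long counter,
                               unsigned long elapsed_minutes,
                               unsigned long decay_time) {
    unsigned long num_periods = decay_time ? elapsed_minutes / decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}
```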
Eviction Process
Eviction Pool
From evict.c:36-44:
#define EVPOOL_SIZE 16
#define EVPOOL_CACHED_SDS_SIZE 255
struct evictionPoolEntry {
    unsigned long long idle; // Idle time or inverse frequency
    sds key;                 // Key name
    sds cached;              // Cached SDS for efficiency
    int dbid;                // Database ID
    int slot;                // Hash slot
};
Redis maintains a pool of best eviction candidates:
- Sample keys from database
- Calculate score (idle time or frequency)
- Insert into pool (sorted by score)
- Evict highest-scoring key
- Repeat until memory OK
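The sorted-insert step can be sketched with idle scores alone: keep the pool ascending so the best candidate (largest idle time) sits at the tail, and displace the weakest entry when the pool is full (pool_insert is illustrative; the real code also manages key names, cached SDS strings, and database IDs):

```c
#include <string.h>

#define POOL_SIZE 16

/* Illustrative pool of idle scores kept sorted ascending. Returns 1 if
 * the sample entered the pool, 0 if it scored too low for a full pool. */
static int pool_insert(unsigned long long *pool, int *count,
                       unsigned long long idle) {
    if (*count == POOL_SIZE && idle <= pool[0]) return 0;
    int i = 0;
    while (i < *count && pool[i] < idle) i++;       /* insertion point */
    if (*count == POOL_SIZE) {
        /* Full: drop the weakest entry (head) to make room. */
        memmove(pool, pool + 1, (size_t)(i - 1) * sizeof(*pool));
        pool[i - 1] = idle;
    } else {
        memmove(pool + i + 1, pool + i, (size_t)(*count - i) * sizeof(*pool));
        pool[i] = idle;
        (*count)++;
    }
    return 1;
}
```

Keeping the pool across eviction rounds is what lets the approximated algorithm remember good candidates found in earlier samples.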
Eviction Tenacity
Configuration:
# Time limit for eviction cycle (0-100)
maxmemory-eviction-tenacity 10
From evict.c:491-506:
static unsigned long evictionTimeLimitUs(void) {
    if (server.maxmemory_eviction_tenacity <= 10) {
        // Linear: 0..500us
        return 50uL * server.maxmemory_eviction_tenacity;
    }
    if (server.maxmemory_eviction_tenacity < 100) {
        // Geometric: up to ~2 min at 99
        return (unsigned long)(500.0 *
            pow(1.15, server.maxmemory_eviction_tenacity - 10.0));
    }
    return ULONG_MAX; // No limit
}
- Low values (0-10): Quick eviction cycles, may not reach memory target
- High values (90-100): Persistent eviction until memory OK
- Default (10): Balanced approach
Eviction Results
From evict.c:526-531:
// performEvictions() return values:
EVICT_OK // Memory OK or eviction not needed
EVICT_RUNNING // Still evicting (time proc running)
EVICT_FAIL // Nothing left to evict
When EVICT_FAIL is returned, commands that increase memory will be rejected with OOM errors.
Memory Optimization
Data Structure Encoding
Redis automatically optimizes encodings:
Strings:
- Small integers: Stored as encoded integers (no allocation)
- Short strings: Embedded in object structure
- Long strings: Separate allocation
Lists:
- Small lists: Listpack (continuous memory)
- Large lists: Quicklist (listpack nodes)
Hashes:
- Small hashes: Listpack
- Large hashes: Hash table
Sets:
- All integers: Intset (sorted array)
- Small sets: Listpack
- Large sets: Hash table
Sorted Sets:
- Small: Listpack
- Large: Skiplist + hash table
Configuration Thresholds
# Encoding thresholds (example values)
hash-max-listpack-entries 512
hash-max-listpack-value 64
list-max-listpack-size -2
set-max-intset-entries 512
set-max-listpack-entries 128
zset-max-listpack-entries 128
zset-max-listpack-value 64
Raising these thresholds keeps larger collections in the compact listpack/intset encodings, which saves memory but slows operations on those collections. Benchmark your workload when tuning.
Shared Objects
From server.h:125-126:
#define OBJ_SHARED_INTEGERS 10000
Redis shares objects for:
- Integers 0-9999
- Common response strings (OK, ERR, PONG, etc.)
- Small database SELECT commands
Memory Savings:
Shared integer “42” saves:
- 16 bytes (robj struct) per reference
- Integer encoding overhead
Monitoring Memory
INFO Memory
Key Metrics:
used_memory: Total allocated by Redis
used_memory_rss: Resident set size (OS perspective)
used_memory_peak: Historical peak
mem_fragmentation_ratio: RSS / allocated (ideal: ~1.0)
maxmemory: Configured limit
maxmemory_policy: Active eviction policy
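The fragmentation ratio is simply RSS divided by the allocator-reported bytes; values well above 1 (a commonly cited threshold is ~1.5) indicate fragmentation, while values below 1 mean the OS has swapped out part of the Redis address space. As a formula (frag_ratio is an illustrative helper):

```c
#include <stddef.h>

/* mem_fragmentation_ratio = used_memory_rss / used_memory. */
static double frag_ratio(size_t used_memory_rss, size_t used_memory) {
    return (double)used_memory_rss / (double)used_memory;
}
```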
MEMORY Commands
# Analyze specific key
MEMORY USAGE key
# Memory usage report
MEMORY STATS
# Trigger defragmentation
MEMORY PURGE
# Memory doctor analysis
MEMORY DOCTOR
MEMORY DOCTOR provides recommendations based on your memory usage patterns and fragmentation.
Active Defragmentation
Redis can defragment memory online:
# Enable active defragmentation
activedefrag yes
# Minimum fragmentation waste (bytes) to start defrag
active-defrag-ignore-bytes 100mb
# Minimum fragmentation percentage to start defrag
active-defrag-threshold-lower 10
# Fragmentation percentage at which maximum effort is used
active-defrag-threshold-upper 100
# CPU effort limits (percent) for defragmentation
active-defrag-cycle-min 5
active-defrag-cycle-max 75
How it works:
- Scan allocations during idle time
- Move fragmented allocations
- Update all references
- Free old memory
Defragmentation has a CPU cost; monitor CPU usage when enabling it.