How to Monitor Redis Memory
Redis is one of the most widely used in-memory data stores in modern application architectures. Its speed, flexibility, and support for advanced data structures make it ideal for caching, session storage, real-time analytics, and message brokering. However, because Redis stores all data in RAM, memory usage becomes a critical operational concern. Unmonitored Redis memory consumption can lead to performance degradation, out-of-memory (OOM) crashes, and costly infrastructure overprovisioning.
Monitoring Redis memory is not merely about tracking usage numbers; it's about understanding patterns, identifying memory leaks, optimizing data structures, and ensuring system reliability. Without proper visibility, even a well-designed Redis deployment can become a bottleneck. This guide provides a comprehensive, step-by-step approach to monitoring Redis memory effectively, from basic commands to advanced tooling and real-world strategies.
Step-by-Step Guide
1. Understand Redis Memory Metrics
Before you begin monitoring, you must understand the key memory-related metrics Redis exposes. These metrics are accessible via the INFO memory command and include:
- used_memory: Total number of bytes allocated by Redis through its allocator (typically jemalloc or libc malloc).
- used_memory_human: Human-readable version of used_memory (e.g., 1.23G).
- used_memory_rss: Resident Set Size, the amount of physical memory (RAM) consumed by the Redis process, including operating system overhead.
- used_memory_peak: Peak memory usage since Redis started.
- used_memory_peak_human: Human-readable peak memory usage.
- mem_fragmentation_ratio: Ratio of used_memory_rss to used_memory. A value significantly above 1 indicates memory fragmentation; below 1 suggests memory swapping.
- mem_allocator: The memory allocator in use (e.g., jemalloc, libc).
- active_defrag_running: Indicates whether active memory defragmentation is currently in progress.
Understanding the difference between used_memory and used_memory_rss is essential. used_memory reflects what Redis believes it's using; used_memory_rss reflects what the OS reports. A large gap between them often signals fragmentation or memory not being returned to the OS after deletions.
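To make the relationship concrete, the fragmentation ratio can be computed directly from the raw INFO fields with a few lines of shell. This is a minimal sketch: the sample_info function below is a hypothetical stand-in for live `redis-cli INFO memory` output (its numbers are invented) so the parsing logic runs without a server.

```shell
#!/bin/sh
# Hypothetical stand-in for `redis-cli INFO memory`; swap in the real
# command when running against a live instance.
sample_info() {
cat <<'EOF'
used_memory:1048576
used_memory_rss:2097152
EOF
}

# Extract the two fields; tr strips the carriage returns redis-cli appends.
used=$(sample_info | tr -d '\r' | awk -F: '$1 == "used_memory" {print $2}')
rss=$(sample_info | tr -d '\r' | awk -F: '$1 == "used_memory_rss" {print $2}')

# Shell arithmetic is integer-only, so do the division in awk.
ratio=$(awk -v r="$rss" -v u="$used" 'BEGIN {printf "%.2f", r / u}')
echo "fragmentation ratio: $ratio"
```

With the sample numbers the ratio comes out to 2.00, meaning the OS holds twice the memory Redis thinks it is using.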
2. Connect to Your Redis Instance
To begin monitoring, you need access to your Redis instance. This can be done via the Redis CLI or through a remote connection.
If Redis is running locally:
redis-cli
If Redis is remote, use:
redis-cli -h your-redis-host.com -p 6379 -a yourpassword
Always ensure secure access. Avoid using plaintext passwords in scripts. Instead, use Redis ACLs with strong credentials and TLS encryption where possible.
3. Run INFO Memory Command
Once connected, execute:
INFO memory
This returns a block of memory-related statistics. For a cleaner output, use:
redis-cli INFO memory
Sample output:
# Memory
used_memory:1048576
used_memory_human:1.00M
used_memory_rss:21434368
used_memory_peak:12582912
used_memory_peak_human:12.00M
used_memory_overhead:819200
used_memory_startup:786432
used_memory_dataset:229376
used_memory_dataset_perc:21.88%
total_system_memory:16777216000
total_system_memory_human:15.62G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_policy:noeviction
mem_fragmentation_ratio:20.44
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
Key observations from this output:
- Redis is using 1MB of logical memory but about 20.4MB of physical memory, a fragmentation ratio of 20.44, which is very high.
- Peak memory usage was 12MB, suggesting recent spikes or memory accumulation.
- No maxmemory limit is set, meaning Redis can grow until the system runs out of RAM.
4. Set a Memory Limit (maxmemory)
By default, Redis has no memory limit. This is dangerous in production. Always configure maxmemory to prevent Redis from consuming all system memory.
Edit your Redis configuration file (redis.conf):
maxmemory 2gb
maxmemory-policy allkeys-lru
Restart Redis or reload the configuration dynamically:
CONFIG SET maxmemory 2147483648
CONFIG SET maxmemory-policy allkeys-lru
Available eviction policies:
- noeviction: Return errors on write commands when memory is full.
- allkeys-lru: Evict least recently used keys (recommended for general caching).
- volatile-lru: Evict least recently used keys with an expire set.
- allkeys-random: Evict random keys.
- volatile-random: Evict random keys with an expire set.
- volatile-ttl: Evict keys with the shortest TTL.
For most use cases, allkeys-lru is optimal. It ensures frequently accessed data stays in memory while less-used data is removed automatically.
5. Monitor Memory Usage Over Time
Memory usage is not static. To detect trends, leaks, or anomalies, you must monitor over time. Use scripting to collect and log metrics.
Example Bash script to log memory every 5 minutes:
#!/bin/bash
REDIS_HOST="localhost"
REDIS_PORT="6379"
LOG_FILE="/var/log/redis-memory.log"
while true; do
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
    # Query INFO once per cycle; strip the carriage returns redis-cli emits.
    INFO=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" INFO memory | tr -d '\r')
    MEMORY=$(echo "$INFO" | grep '^used_memory_human:' | cut -d: -f2)
    RSS=$(echo "$INFO" | grep '^used_memory_rss:' | cut -d: -f2)
    FRAG_RATIO=$(echo "$INFO" | grep '^mem_fragmentation_ratio:' | cut -d: -f2)
    echo "$TIMESTAMP | Used: $MEMORY | RSS: $((RSS / 1048576))MB | Fragmentation: $FRAG_RATIO" >> "$LOG_FILE"
    sleep 300
done
Run this script in the background with nohup ./redis-memory-monitor.sh &. Log files help identify memory growth patterns, such as daily spikes or slow leaks.
6. Identify Memory-Intensive Keys
Not all keys consume equal memory. Some large strings, hashes, or lists can dominate memory usage. Use the MEMORY USAGE command to inspect individual keys:
MEMORY USAGE my_large_hash
This returns the number of bytes used by that key. To find the top memory-consuming keys across your dataset:
redis-cli --bigkeys
Example output:
Scanning the entire keyspace to find biggest keys as well as
average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
per 100 SCAN commands (not usually needed).
[00.00%] Biggest string found so far 'session:123456789' with 1048576 bytes
[00.00%] Biggest hash found so far 'user:profile:98765' with 2097152 bytes
[00.00%] Biggest list found so far 'queue:notifications' with 8388608 bytes
-------- summary -------
Sampled 123456 keys in the keyspace!
Total key length in bytes is 1234567 (avg len 9.99)
Biggest string found 'session:123456789' has 1048576 bytes
Biggest hash found 'user:profile:98765' has 2097152 bytes
Biggest list found 'queue:notifications' has 8388608 bytes
123456 strings with 1234567 bytes (100.00% of keys, avg size 10.00)
123 hashes with 256789 bytes (0.10% of keys, avg size 2087.72)
45 lists with 12345678 bytes (0.04% of keys, avg size 274348.40)
This reveals that a single list, queue:notifications, is consuming over 8MB. This could be a sign of a producer that doesn't consume items fast enough, or a misconfigured TTL. Investigate and optimize such keys immediately.
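For a repeatable audit, SCAN and MEMORY USAGE can be combined into a small script that ranks keys by size. In the sketch below, rcli is a hypothetical stub with invented keys and sizes so the pipeline runs standalone; for live use, replace its body with `redis-cli "$@"`.

```shell
#!/bin/sh
# rcli is a stub standing in for redis-cli, with made-up keys and sizes.
rcli() {
  case "$1" in
    --scan) printf 'session:1\nqueue:notifications\nuser:42\n' ;;
    MEMORY) case "$3" in
              session:1)           echo 1048576 ;;
              queue:notifications) echo 8388608 ;;
              user:42)             echo 512 ;;
            esac ;;
  esac
}

# Rank keys by byte size, largest first; the argument limits the output.
top_keys() {
  rcli --scan | while read -r key; do
    printf '%s\t%s\n' "$(rcli MEMORY USAGE "$key")" "$key"
  done | sort -rn | head -n "${1:-10}"
}

top_keys 2
```

Unlike KEYS, SCAN iterates incrementally and does not block the server. Note that MEMORY USAGE samples nested values by default, so sizes reported for large aggregates are estimates.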
7. Use Redis Memory Analyzer Tools
While CLI tools are powerful, visual analyzers simplify deep analysis. Tools like RedisInsight (official GUI from Redis Labs) provide real-time memory heatmaps, key size distributions, and memory trend graphs.
Install RedisInsight via Docker:
docker run -d -p 8001:8001 --name redisinsight redislabs/redisinsight:latest
Access it at http://localhost:8001, connect to your Redis instance, and navigate to the Memory tab. You'll see:
- A graph of memory usage over time.
- A breakdown of memory by key type (strings, hashes, sets, etc.).
- A list of top 100 largest keys with size and TTL.
- Fragmentation trends and memory allocator stats.
RedisInsight also allows you to export key data, delete keys in bulk, and set TTLs visually, making it indispensable for memory optimization.
8. Enable and Monitor Redis Slow Log
Memory issues can sometimes be caused by slow commands that block the Redis thread. Use the slow log to detect operations that may be indirectly affecting memory pressure.
Configure slow log thresholds:
CONFIG SET slowlog-log-slower-than 1000
CONFIG SET slowlog-max-len 1000
The threshold is measured in microseconds, so this logs any command taking longer than 1 millisecond. View slow logs with:
SLOWLOG GET 10
Look for commands like KEYS *, FLUSHALL, or large HGETALL operations. These can cause temporary memory spikes or delays that affect eviction behavior.
9. Monitor OS-Level Memory and Swap
Redis performance is directly tied to system memory. Use OS tools to monitor overall memory pressure:
- Linux: Use free -h, top, or htop to check available RAM and swap usage.
- Check for swapping: If used_memory_rss is high but free -h shows low available memory, Redis may be swapping. Swapping is catastrophic for Redis performance.
- Use vmstat 1 to monitor swap-in/out activity.
- Check for OOM killer activity: Run dmesg | grep -i "oom\|kill" to look for Redis process terminations.
Prevent swapping by:
- Setting vm.overcommit_memory=1 in /etc/sysctl.conf.
- Reducing the swappiness value: echo 1 > /proc/sys/vm/swappiness.
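To make both settings survive a reboot, persist them as a sysctl fragment rather than echoing into /proc. The file name below is an arbitrary choice:

```
# Hypothetical /etc/sysctl.d/99-redis.conf
vm.overcommit_memory = 1
vm.swappiness = 1
```

Apply it immediately with sudo sysctl -p /etc/sysctl.d/99-redis.conf.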
10. Set Up Alerts for Critical Thresholds
Manual monitoring isn't scalable. Automate alerts based on thresholds:
- Alert if used_memory exceeds 80% of maxmemory.
- Alert if mem_fragmentation_ratio > 3.0 (indicates severe fragmentation).
- Alert if used_memory_rss > 90% of total system memory.
- Alert if the eviction rate increases suddenly (check expired_keys and evicted_keys in INFO stats).
Use monitoring platforms like Prometheus + Grafana or Datadog to create dashboards and alerts. Example Prometheus metric:
redis_memory_used_bytes{instance="redis-01"} > 1610612736  # 1.5GB
Combine with alerting rules in Alertmanager to notify via email, Slack, or PagerDuty.
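As a concrete starting point, here is a sketch of a Prometheus alerting rule for the 80%-of-maxmemory threshold. It assumes the metric names exposed by the oliver006/redis_exporter covered below (redis_memory_used_bytes, redis_memory_max_bytes); the group name, duration, and labels are examples to adapt.

```yaml
groups:
  - name: redis-memory
    rules:
      - alert: RedisMemoryHigh
        # redis_memory_max_bytes is 0 when no maxmemory is configured, so
        # the "> 0" filter drops those series instead of dividing by zero.
        expr: redis_memory_used_bytes / (redis_memory_max_bytes > 0) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Redis {{ $labels.instance }} above 80% of maxmemory"
```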
Best Practices
1. Always Set maxmemory and a Policy
Never run Redis without a memory limit. Even if your server has 64GB of RAM, Redis should be constrained to avoid destabilizing the entire system. Use allkeys-lru unless you have a specific reason to use another policy.
2. Avoid Large Keys
Storing 10MB strings or lists in a single key is a performance and memory anti-pattern. Split large datasets into smaller keys using prefixes or sharding. For example, instead of storing all user data in user:123:profile, split into user:123:basic, user:123:preferences, user:123:activity.
3. Use Appropriate Data Structures
Choose the right structure for your data:
- Use hashes for objects with multiple fields (e.g., user profiles).
- Use sorted sets for ranked data (e.g., leaderboards).
- Use streams for message queues instead of lists when possible.
- Avoid storing JSON strings as values; deserialize and use native Redis types instead.
Hashes are memory-efficient for small objects. For example, storing a user profile as a hash with 10 fields uses less memory than 10 separate string keys.
4. Set TTLs on All Cache Keys
Every cached key should have an expiration. Even if you plan to refresh it, set a TTL to prevent stale data from accumulating. Use the EXPIRE command, or the EX/PX options when setting keys:
SET user:123:token abc123 EX 3600
Without TTLs, keys live forever, leading to memory bloat.
5. Regularly Review and Clean Up
Perform weekly audits using redis-cli --bigkeys and MEMORY USAGE. Delete unused keys manually or automate cleanup with scripts. For example, remove all keys matching a pattern:
redis-cli --scan --pattern "temp:*" | xargs -r -n 100 redis-cli DEL
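A related audit is finding keys that were written without any TTL at all. In this sketch the rcli function is a hypothetical stand-in for redis-cli with invented keys, so the logic runs standalone; point it at the real binary for live use.

```shell
#!/bin/sh
# rcli stubs redis-cli with made-up keys; TTL returns -1 for keys that
# have no expiration set.
rcli() {
  case "$1" in
    --scan) printf 'user:1:token\nuser:2:token\nconfig:flags\n' ;;
    TTL) case "$2" in
           user:1:token) echo 3600 ;;
           *)            echo -1 ;;
         esac ;;
  esac
}

# Print every key whose TTL is -1 (persistent, never expires).
missing_ttl() {
  rcli --scan | while read -r key; do
    [ "$(rcli TTL "$key")" -eq -1 ] && echo "$key"
  done
}

missing_ttl
```

Keys that legitimately never expire (configuration, counters) can then be allow-listed, and the rest fixed at the application layer.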
6. Enable Active Memory Defragmentation
Redis 4.0+ includes active defragmentation to reclaim fragmented memory. Enable it in redis.conf:
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
active-defrag-cycle-min 5
active-defrag-cycle-max 75
This automatically reclaims memory once the fragmentation ratio exceeds the lower threshold (10%) and at least 100MB of memory is fragmented.
7. Monitor Eviction Rates
High eviction rates indicate your memory limit is too low. Track evicted_keys in INFO stats. If this number is consistently rising, increase maxmemory or optimize key usage.
8. Use Redis Cluster for Large Deployments
For memory-heavy workloads, consider Redis Cluster. It shards data across multiple nodes, distributing memory load and improving resilience. Each node can have its own maxmemory limit, allowing better control.
9. Avoid Using KEYS Command
KEYS * blocks Redis and scans the entire dataset. Use SCAN instead for non-blocking iteration. Never use KEYS in production.
10. Document Memory Usage Patterns
Create a memory usage playbook: what's normal, what's alarming, and what actions to take. Share this with your team to ensure a consistent response to memory alerts.
Tools and Resources
RedisInsight
Official GUI from Redis. Provides real-time memory monitoring, key analysis, performance graphs, and configuration management. Available as a desktop app or Docker container. Free for all use cases.
Prometheus + Grafana
Open-source monitoring stack. Use the redis_exporter to scrape Redis metrics and visualize them in Grafana dashboards. Ideal for Kubernetes and cloud environments.
Redis Exporter
Go-based exporter that exposes Redis metrics in Prometheus format. Install via Docker:
docker run -d -p 9121:9121 -e REDIS_ADDR=redis://your-redis-host:6379 oliver006/redis_exporter
Access metrics at http://localhost:9121/metrics.
Datadog
Commercial monitoring platform with built-in Redis integration. Offers automatic dashboards, anomaly detection, and alerting. Best for enterprises with existing Datadog infrastructure.
New Relic
Provides deep Redis performance insights, including memory trends, command latency, and topology views. Integrates with application performance monitoring (APM) for end-to-end tracing.
Netdata
Real-time performance monitoring with zero configuration. Includes a Redis dashboard out of the box. Lightweight and ideal for small to medium deployments.
Command-Line Tools
- redis-cli: Essential for manual inspection.
- redis-benchmark: Test performance under load to simulate memory pressure.
- htop / top: Monitor system-level memory usage.
- awk / grep / sed: Parse and filter Redis output for automation.
Documentation and References
- Redis Official Monitoring Guide
- Redis Persistence and Memory
- Redis Documentation Repository
- Memory Optimization Best Practices
Real Examples
Example 1: Memory Leak Due to Missing TTL
A team deployed a Redis-backed session store but forgot to set TTLs on session keys. After two weeks, Redis memory usage grew from 500MB to 8GB. The redis-cli --bigkeys command revealed over 500,000 session keys with no expiration.
Resolution:
- Set maxmemory 4gb and allkeys-lru to prevent a crash.
- Deployed a script to scan and add TTLs to all session keys.
- Updated application code to set a TTL on every session write.
- Result: Memory stabilized at 1.2GB with 20% fragmentation.
Example 2: High Fragmentation from Frequent Updates
A real-time analytics system stored user activity as a single large list. Every user action appended to the list, and old entries were removed with LTRIM. Over time, mem_fragmentation_ratio reached 35.
Resolution:
- Switched from list to stream data structure for better memory efficiency.
- Enabled active defragmentation.
- Used MEMORY PURGE to force memory reclaim.
- Result: Fragmentation dropped to 1.8, and memory usage decreased by 40%.
Example 3: OOM Crash on Shared Server
Redis was running on a VM with 8GB RAM alongside other services. No maxmemory was set. A spike in traffic caused Redis to consume 7.8GB of RAM, triggering the Linux OOM killer, which terminated the Redis process.
Resolution:
- Moved Redis to a dedicated VM with 16GB RAM.
- Set maxmemory 12gb and maxmemory-policy allkeys-lru.
- Added monitoring with Prometheus and alerts at 80% usage.
- Result: No more crashes. System now handles 3x the traffic without incident.
Example 4: Memory Optimization with Hashes
An e-commerce platform stored product metadata as individual string keys:
product:123:name = "Wireless Headphones"
product:123:price = "99.99"
product:123:category = "Electronics"
...
With 1 million products, this used 24GB of memory.
Optimization:
- Converted to hashes: HSET product:123 name "Wireless Headphones" price "99.99" category "Electronics"
- Used hash-max-ziplist-entries 512 and hash-max-ziplist-value 64 for memory efficiency.
- Result: Memory usage dropped to 8GB, a 67% reduction.
FAQs
Why is used_memory_rss higher than used_memory?
This is normal and indicates memory fragmentation. Redis allocates memory in chunks, and when keys are deleted, the allocator may not return memory to the OS immediately. A ratio above 1.5 suggests fragmentation. Enable active defragmentation to mitigate.
Should I use maxmemory with noeviction?
Only if you want Redis to return errors on writes when full. This is useful for critical data stores where accidental evictions are unacceptable. For caching, use allkeys-lru to allow automatic cleanup.
How often should I check Redis memory usage?
For production systems, monitor continuously. Use automated tools to collect metrics every 15-60 seconds. Set alerts on thresholds rather than relying on manual checks.
Can Redis release memory back to the OS?
Yes, but only under certain conditions. Redis uses allocators like jemalloc that may retain memory for performance. Use MEMORY PURGE (Redis 5.0+) to force release. Also, restarting Redis will reset memory usage.
What causes memory to keep growing even after deleting keys?
Memory fragmentation and allocator behavior. Deleted keys leave gaps in memory, and the allocator doesn't always compact them. Enable active defragmentation and consider restarting Redis periodically if fragmentation remains high.
Is Redis memory usage affected by replication?
Yes. Replication buffers and replication backlog consume additional memory. Monitor repl_backlog_active and repl_backlog_size in INFO replication. Large backlogs can consume hundreds of MBs.
How do I know if Redis is swapping?
Check free -h and vmstat 1. If swap usage is increasing while Redis memory usage is high, it's swapping. Swapping causes severe latency spikes. Prevent it by ensuring sufficient RAM and setting vm.swappiness=1.
Can I monitor Redis memory in Kubernetes?
Yes. Use the Redis exporter with Prometheus and Grafana. Deploy the exporter as a sidecar or separate pod. Use Kubernetes metrics server to correlate Redis memory with pod resource limits.
Whats the difference between eviction and expiration?
Expiration is when a key's TTL reaches zero and it's automatically deleted. Eviction is when Redis removes a key because maxmemory is reached and it needs space. Expiration is predictable; eviction is reactive.
How do I find memory leaks in Redis?
There are no true memory leaks in Redis itself (it doesn't leak heap memory the way a buggy application might), but memory bloat occurs due to:
- Missing TTLs on keys.
- Large, unbounded data structures.
- Client-side bugs (e.g., infinite pipelines).
- Replication backlog growth.
Use redis-cli --bigkeys, INFO stats, and INFO replication to diagnose.
Conclusion
Monitoring Redis memory is not a one-time task; it's an ongoing discipline essential for system stability, performance, and cost-efficiency. Redis's in-memory nature makes it fast, but also vulnerable to runaway memory usage if left unmanaged. By understanding key metrics, setting appropriate limits, identifying memory-heavy keys, enabling defragmentation, and automating alerts, you transform Redis from a potential liability into a reliable, high-performance component of your infrastructure.
The tools and practices outlined in this guide, ranging from basic INFO memory commands to advanced dashboards in RedisInsight and Prometheus, provide a complete framework for proactive memory management. Real-world examples demonstrate how simple oversights, like forgetting TTLs or ignoring fragmentation, can lead to system-wide failures. Conversely, applying best practices results in predictable performance, reduced operational overhead, and optimized resource utilization.
As your applications scale and Redis usage grows, your monitoring strategy must evolve. Regular audits, team education, and automated alerting ensure that memory health remains a top priority, not an afterthought. With the right approach, Redis continues to deliver its legendary speed without compromising stability.