# Redis Caching
ReliaPulse uses Redis for API response caching to improve performance under high load.
## Overview
Public-facing endpoints like the status page API and RSS feeds can receive high traffic, especially during incidents when users frequently refresh. Redis caching reduces database load and improves response times.
## Cache Configuration

### TTL Guidelines
| Endpoint | TTL | Rationale |
|---|---|---|
| Public Status Page | 30 seconds | Frequently viewed, needs fresh data |
| Public Feeds (RSS/Atom/JSON) | 60 seconds | Less critical freshness |
| Organization Settings | 5 minutes | Rarely changes |
| Component Aggregations | 10 seconds | Moderate freshness needed |
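In code, these TTLs map naturally onto a constants object. A sketch of what a `cacheTTL` export could look like — the property names besides `PUBLIC_STATUS` (which appears in the usage example below) are assumptions, not the actual `@/lib/cache` source:

```typescript
// Hypothetical TTL constants mirroring the table above, in seconds.
// Only PUBLIC_STATUS is confirmed by the documented usage; the other
// names are illustrative.
const cacheTTL = {
  PUBLIC_STATUS: 30,  // public status page
  PUBLIC_FEED: 60,    // RSS/Atom/JSON feeds
  ORG_SETTINGS: 300,  // organization settings (5 minutes)
  COMPONENT_AGG: 10,  // component aggregations
} as const;

console.log(cacheTTL.PUBLIC_STATUS); // prints 30
```

Keeping TTLs in one module makes it easy to tune freshness per endpoint without hunting through handlers.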
### Cache Keys

```
// Public status page data
cache:public:status:{slug}

// Public feeds by format
cache:public:feed:{slug}:rss
cache:public:feed:{slug}:atom
cache:public:feed:{slug}:json

// Organization settings
cache:org:settings:{organizationId}

// Component aggregations
cache:org:components:{organizationId}
```
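These patterns suggest small key-builder helpers. A sketch of what the `cacheKeys` object used in the usage examples might look like — only `publicStatus` is confirmed by the documented usage; the other builders are assumptions following the same naming scheme:

```typescript
// Hypothetical key builders matching the patterns above.
const cacheKeys = {
  publicStatus: (slug: string) => `cache:public:status:${slug}`,
  publicFeed: (slug: string, format: "rss" | "atom" | "json") =>
    `cache:public:feed:${slug}:${format}`,
  orgSettings: (organizationId: string) => `cache:org:settings:${organizationId}`,
  orgComponents: (organizationId: string) => `cache:org:components:${organizationId}`,
};

console.log(cacheKeys.publicStatus("acme-corp")); // prints cache:public:status:acme-corp
```

Centralizing key construction avoids typo'd ad-hoc strings and keeps invalidation code in sync with the keys being written.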
## Usage

### Basic Caching
```typescript
import { withCache, cacheKeys, cacheTTL } from "@/lib/cache";

// Wrap expensive operations with caching
const data = await withCache(
  cacheKeys.publicStatus(slug),
  cacheTTL.PUBLIC_STATUS,
  async () => {
    // This only runs on a cache miss
    return await fetchStatusData(slug);
  }
);
```
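Conceptually, `withCache` is a get-or-set wrapper: return the cached value if present, otherwise run the loader and store its result. A runnable sketch of that pattern — this is an assumed implementation, not the actual `@/lib/cache` source, and Redis is modeled by an in-memory store so the example runs without a server:

```typescript
// Minimal async key-value interface standing in for Redis.
type Store = {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
};

// Get-or-set: hit returns the cached value; miss runs the loader and caches it.
async function withCache<T>(
  store: Store,
  key: string,
  ttlSeconds: number,
  loader: () => Promise<T>,
): Promise<T> {
  const cached = await store.get(key);
  if (cached !== null) return JSON.parse(cached) as T;       // cache hit
  const fresh = await loader();                              // cache miss
  await store.set(key, JSON.stringify(fresh), ttlSeconds);   // store for next time
  return fresh;
}

// In-memory stand-in for Redis (TTL expiry omitted for brevity).
const mem = new Map<string, string>();
const fakeStore: Store = {
  async get(k) { return mem.get(k) ?? null; },
  async set(k, v) { mem.set(k, v); },
};

let dbCalls = 0;
const load = async () => { dbCalls++; return { status: "operational" }; };

(async () => {
  await withCache(fakeStore, "cache:public:status:demo", 30, load);
  await withCache(fakeStore, "cache:public:status:demo", 30, load);
  console.log(dbCalls); // prints 1 — the second call is a hit, so the loader ran once
})();
```

A real Redis-backed version would use `SET key value EX ttl` so entries expire automatically.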
### Manual Cache Operations

```typescript
import { cacheGet, cacheSet, cacheDel } from "@/lib/cache";

// Get cached value
const cached = await cacheGet<StatusData>(key);

// Set with TTL
await cacheSet(key, data, 30); // 30-second TTL

// Delete cache entry
await cacheDel(key);
```
## Cache Invalidation

The cache is automatically invalidated when data changes:
```typescript
import { invalidatePublicStatusCacheByOrgId } from "@/lib/cache";

// Call after creating/updating/deleting data.
// Fire-and-forget: does not block the response.
invalidatePublicStatusCacheByOrgId(organizationId).catch(console.error);
```
### Automatic Invalidation

The following actions automatically invalidate the public status cache:
- Components: Create, update, delete, bulk actions
- Incidents: Create, update, delete
- Maintenances: Create, update, delete
This ensures users see fresh data after changes while still benefiting from caching during stable periods.
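An invalidation helper along these lines would delete the status and feed keys derived from the organization's public slug. This is a hypothetical sketch, not the actual `invalidatePublicStatusCacheByOrgId` implementation — in particular, the real helper presumably resolves the slug from the organization ID first (a database lookup, omitted here):

```typescript
// Minimal interface for the delete side of a Redis client (ioredis-style del).
type Deleter = { del(...keys: string[]): Promise<number> };

// All public-facing keys for one organization's slug, matching the
// patterns in the Cache Keys section.
function keysForSlug(slug: string): string[] {
  return [
    `cache:public:status:${slug}`,
    `cache:public:feed:${slug}:rss`,
    `cache:public:feed:${slug}:atom`,
    `cache:public:feed:${slug}:json`,
  ];
}

// Delete every public key for the slug in one round trip.
async function invalidatePublicStatusCache(client: Deleter, slug: string): Promise<void> {
  await client.del(...keysForSlug(slug));
}
```

Deleting all formats at once keeps the status page and its feeds consistent with each other after a change.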
## HTTP Caching
In addition to Redis caching, the API sets HTTP cache headers:
```
Cache-Control: public, max-age=30
```

This allows:
- Browser caching for individual users
- CDN edge caching if deployed behind a CDN
- Proper cache busting via cache keys
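Setting the header is a one-liner in any framework that returns standard `Response` objects. A sketch, assuming the handler already has the response body in hand (`withCacheControl` is an illustrative helper, not part of the documented API):

```typescript
// Attach the HTTP cache header described above to a public endpoint response.
function withCacheControl(body: string, maxAgeSeconds: number): Response {
  return new Response(body, {
    headers: { "Cache-Control": `public, max-age=${maxAgeSeconds}` },
  });
}

const res = withCacheControl('{"status":"operational"}', 30);
console.log(res.headers.get("Cache-Control")); // prints public, max-age=30
```

The `max-age` value should match the Redis TTL for the same endpoint so browser, CDN, and server caches expire together.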
## Performance Impact
With caching enabled:
| Metric | Without Cache | With Cache |
|---|---|---|
| Average response time | 50-200ms | 5-15ms |
| Database queries per request | 3-5 | 0 (on cache hit) |
| Requests before throttling | ~100/min | ~1000/min |
## Configuration
Caching uses the same Redis instance as the job queue. No additional configuration is needed beyond setting `REDIS_URL`:

```
REDIS_URL=redis://localhost:6379
```
## Monitoring

Monitor cache performance via Redis commands:
```bash
# Connect to Redis
redis-cli

# View cache keys (note: KEYS is O(N) and blocks Redis; prefer SCAN in production)
KEYS cache:*

# Check TTL on a key
TTL cache:public:status:acme-corp

# View memory usage
INFO memory
```
## Best Practices

- Short TTLs for critical data: Use 30 seconds or less for status data
- Fire-and-forget invalidation: Don't block API responses waiting for cache clearing
- Graceful degradation: If Redis is unavailable, bypass cache and hit database directly
- Monitor hit rates: Track cache hits vs misses to optimize TTLs
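The graceful-degradation practice can be sketched as a read wrapper that treats any cache-backend error as a miss, so a Redis outage slows requests but never fails them. This is an illustrative pattern, not the documented `cacheGet` implementation:

```typescript
// Minimal interface for the read side of a Redis client.
type Getter = { get(key: string): Promise<string | null> };

// Returns the parsed cached value, or null on miss OR backend failure.
// Callers treat null as a miss and fall through to the database.
async function safeCacheGet<T>(client: Getter, key: string): Promise<T | null> {
  try {
    const raw = await client.get(key);
    return raw === null ? null : (JSON.parse(raw) as T);
  } catch (err) {
    console.error("cache unavailable, falling back to database", err);
    return null;
  }
}
```

Pairing this with the fire-and-forget invalidation above means neither the read path nor the write path ever blocks on Redis being healthy.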