Redis vs PostgreSQL for Database Caching
When backend performance starts to slow down, Redis is often the first tool people reach for. But before adding a new service to your stack, it’s worth asking: is PostgreSQL already caching what you think Redis will?
This post breaks down what Postgres already does under the hood, when Redis actually helps, and how to tune Postgres for better caching before reaching for external tools.
How PostgreSQL Caches by Default
PostgreSQL uses a shared buffer pool to cache data blocks in memory, reducing disk access for queries. For read-heavy workloads with data that fits in memory, Postgres acts like a cache without extra tools.
What Postgres Caches
Frequently accessed rows and indexes stay in memory.
Repeated reads skip disk access entirely.
Caching works across connections and processes.
Checking Cached Data
Use this query to inspect table usage and size:
```sql
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS table_size
FROM pg_class c
JOIN pg_stat_user_tables stat ON stat.relid = c.oid
ORDER BY stat.seq_scan DESC
LIMIT 10;
```

If your working set (hot data) fits within shared_buffers, Postgres delivers cache-like performance.
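To gauge how effective that caching actually is, the pg_statio_user_tables view tracks buffer hits versus disk reads per table. A sketch (a hit ratio close to 1.0 means reads are being served from shared_buffers):

```sql
-- Cache hit ratio per table: heap_blks_hit / (heap_blks_hit + heap_blks_read).
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric /
             NULLIF(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_hit + heap_blks_read DESC
LIMIT 10;
```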
Does Postgres Cache Queries?
Postgres does not cache query results like a key-value store (e.g., Redis). However, it caches data blocks and indexes, making repeated queries fast if:
You use connection pooling to reduce overhead.
Your working set is smaller than shared_buffers.
Queries avoid unnecessary cache invalidation (e.g., optimize updates).
For simple lookups (e.g., SELECT * FROM users WHERE id = ?), Postgres is often fast enough without external caching.
When to Use Redis
Redis shines in specific scenarios. Consider it when:
Storing Computed Results: Expensive queries (e.g., joins, aggregations for dashboards) that change infrequently (e.g., hourly).
Needing TTLs or Eviction: Session tokens, rate limits, or temporary data with expiration.
Caching Non-Database Data: External API responses or pre-rendered content (e.g., JSON, HTML).
Postgres Can’t Keep Up: Even after tuning, your database struggles with load.
Example Use Case
A user dashboard with complex joins takes 500ms to render but updates every hour. Cache the result in Redis:
```javascript
const cached = await redis.get(`dashboard:${userId}`);
if (cached) return JSON.parse(cached);
const dashboard = await db.query(complexDashboardQuery, { userId });
await redis.set(`dashboard:${userId}`, JSON.stringify(dashboard), 'EX', 3600);
return dashboard;
```

This avoids hitting Postgres on every request.
Redis Trade-Offs
Adding Redis introduces complexity:
Cache Synchronization: You must keep Redis in sync with Postgres or tolerate stale data.
Invalidation Challenges: Deciding when to clear cached data is tricky.
Infrastructure Overhead: Redis requires monitoring, scaling, and maintenance.
Over-Caching Risks: Hides underlying database issues that tuning could fix.
For simple queries (e.g., single-table lookups), Postgres is often sufficient.
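One way to limit staleness is to invalidate the cache explicitly on writes. A minimal sketch, assuming `db` and `redis` are your existing Prisma-style and Redis clients (the function name and shape are illustrative, not from the original post):

```javascript
// Write-through invalidation: update the row, then drop the stale cache entry.
// The next read misses the cache and repopulates it from Postgres.
async function updateUser(db, redis, id, changes) {
  const user = await db.user.update({ where: { id }, data: changes });
  await redis.del(`user:${id}`);
  return user;
}
```

This keeps reads simple at the cost of one extra round trip per write; it still races with concurrent readers, which is why a short TTL is a useful backstop.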
Optimize Postgres First
Before adopting Redis, maximize Postgres performance with these steps.
1. Connection Pooling
Unpooled connections exhaust resources, especially in serverless setups. Use:
PgBouncer: Lightweight, transaction-level pooling.
Pgpool-II: Advanced pooling with load balancing and query caching.
Hosted Solutions: Neon, Supabase, or AWS RDS Proxy include built-in pooling.
Example PgBouncer config:
```ini
[databases]
app_db = host=localhost port=5432 dbname=app

[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

2. Memory Tuning
Postgres defaults are conservative. Adjust these settings:
shared_buffers: Set to 25-40% of RAM (e.g., 2GB for 8GB system).
work_mem: 4-16MB per query, but monitor to avoid memory overuse.
effective_cache_size: 50-75% of RAM to guide query planning.
Example for a 16GB server:
```sql
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM SET work_mem = '8MB';
```

Restart Postgres after changes:

```shell
sudo systemctl restart postgresql
```

3. Analyze Query Performance
Use pg_stat_statements to identify slow or frequent queries:
```sql
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```

Optimize the most expensive queries with indexes, or rewrite them, before caching.
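Note that pg_stat_statements ships with Postgres but is not enabled by default; it must be preloaded and registered first. A sketch (assumes superuser access and a server restart):

```sql
-- pg_stat_statements must be in shared_preload_libraries; requires a restart.
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- After restarting, register the extension in your database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```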
4. Materialized Views for Heavy Queries
For expensive, stable queries (e.g., daily reports), use materialized views:
```sql
CREATE MATERIALIZED VIEW daily_summary AS
SELECT date, COUNT(*) AS user_count
FROM user_actions
GROUP BY date
WITH DATA;

REFRESH MATERIALIZED VIEW daily_summary;
```

This caches results within Postgres and can be refreshed on a schedule.
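A plain REFRESH locks the view against reads while it rebuilds. If the view must stay readable during refresh, REFRESH ... CONCURRENTLY avoids the lock, provided the view has a unique index. A sketch (the index name is illustrative):

```sql
-- CONCURRENTLY requires a unique index covering at least one column set.
CREATE UNIQUE INDEX daily_summary_date_idx ON daily_summary (date);
-- Readers are not blocked while the view is rebuilt.
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_summary;
```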
Redis vs Postgres: A Code Comparison
Postgres Only (No Cache)
```javascript
const user = await db.user.findUnique({ where: { id } });
```

Fast for simple lookups if the data is already in shared_buffers.
With Redis
```javascript
const cached = await redis.get(`user:${id}`);
if (cached) return JSON.parse(cached);
const user = await db.user.findUnique({ where: { id } });
await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 60);
return user;
```

Redis adds code and infrastructure overhead; it's only worth it for complex or non-database data.
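The pattern above generalizes into a small cache-aside helper, so the check/miss/store logic isn't repeated for every query. A sketch, assuming `cache` exposes Redis-style get/set and `loader` is any async function that hits Postgres (names are illustrative):

```javascript
// Cache-aside: return the cached value if present, otherwise run the loader,
// store the JSON-serialized result with a TTL, and return it.
async function cacheAside(cache, key, ttlSeconds, loader) {
  const hit = await cache.get(key);
  if (hit !== null && hit !== undefined) return JSON.parse(hit);
  const value = await loader();
  await cache.set(key, JSON.stringify(value), 'EX', ttlSeconds);
  return value;
}
```

Usage would look like `cacheAside(redis, `user:${id}`, 60, () => db.user.findUnique({ where: { id } }))`, keeping the caching decision in one place.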
Considerations
Serverless Trends: Serverless platforms (e.g., Vercel, Netlify) can exhaust database connection limits quickly, making pooling critical.
Managed Postgres: Providers like Neon and Supabase offer auto-scaling and built-in connection pooling, reducing the need for external infrastructure.
Redis Alternatives: Tools like Upstash (serverless Redis) or AWS ElastiCache reduce management overhead but still add cost.
Security: Secure Redis with TLS and authentication; unauthenticated Redis instances exposed to the internet are a well-known source of breaches.
TL;DR
Postgres already caches data at the block level
For simple reads, it’s often fast enough
Use Redis for computed results, rate limits, session stores, etc.
Connection pooling and memory tuning go a long way
Don’t default to Redis just because you saw it in a blog post
Let your database work a little harder before offloading to something else.
You might be surprised how much you can get without running docker run redis ever again.

