
Redis Battle Demo
A live distributed systems demo: two Node.js instances compete for a Redlock distributed lock every 2 seconds, with each winner broadcast to all clients via @socket.io/redis-adapter.
Performance & Impact
48 tests: coverage across config, Redlock, Socket events, the Redis adapter, and HTTP endpoints
Lock TTL ≤1.5s: Redlock TTL shorter than the 2s tick interval, guaranteeing the lock is free before the next race
5 Prometheus metrics: connected clients, active rooms, attacks, ticks acquired/skipped
The Problem
Demonstrating distributed systems patterns (leader election, exactly-once processing, cross-process pub/sub) is hard without infrastructure overhead. Most demos fake it — they run a single process and claim horizontal scaling.
The Solution
Two genuinely separate Node.js processes sharing one Redis. The Redlock race is real: one instance wins, one loses. The browser shows which one won each tick. The Prometheus metrics show the counts accumulate correctly across both instances.
System Architecture
A standalone, 5-minute-review demo extracting three distributed-systems patterns from EduScale into a single runnable app: (1) @socket.io/redis-adapter publishes Socket.io events through Redis Pub/Sub so multiple Node.js instances share the same room state, (2) Redlock (Redis's distributed-lock algorithm) ensures only one instance wins each tick interval, preventing the duplicate score updates that would corrupt battle state, (3) prom-client exposes a Prometheus /metrics endpoint with 5 counters/gauges: connected clients, active rooms, attacks, ticks acquired, and ticks skipped. Deployed on Render (free tier, single instance) with Upstash Redis.
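A minimal wiring sketch of the first and third patterns, assuming Redis reachable at REDIS_URL and the documented APIs of socket.io v4, @socket.io/redis-adapter, redis v4, and prom-client; this is illustrative, not the demo's actual source:

```javascript
// Wiring sketch (config fragment, assumptions noted above):
// Socket.io fanned out over Redis Pub/Sub, plus one example metric.
const http = require("http");
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");
const promClient = require("prom-client");

async function main() {
  const pubClient = createClient({ url: process.env.REDIS_URL });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  const httpServer = http.createServer();
  const io = new Server(httpServer);
  // Every emit now travels through Redis Pub/Sub, so separate Node.js
  // processes share the same rooms and see the same events.
  io.adapter(createAdapter(pubClient, subClient));

  // One of the five metrics as an example (metric name is illustrative).
  const connectedClients = new promClient.Gauge({
    name: "connected_clients",
    help: "Currently connected Socket.io clients",
  });
  io.on("connection", (socket) => {
    connectedClients.inc();
    socket.on("disconnect", () => connectedClients.dec());
  });

  httpServer.listen(process.env.PORT || 3000);
}

main();
```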
Core Engineering Achievements:
Transport: Socket.io events fanned out across instances via the Redis adapter
Distributed Coordination: per-tick leader election with Redlock
Observability: Prometheus metrics exposed with prom-client
"Each tick interval, both instances race to acquire the Redlock mutex. The winner emits server_tick (with its instance ID) to all clients via Redis Pub/Sub, then releases the lock. Clients visualize which instance won — purple for this instance, yellow for the other. This directly mirrors how distributed cron jobs and leader election work in production."
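The race described above can be illustrated without Redis by standing in an in-memory compare-and-set lock for Redlock; everything here (the lock, `runTick`, the instance IDs) is an illustrative sketch, not the demo's code:

```javascript
// Illustrative stand-in for the per-tick Redlock race: an in-memory
// lock where at most one caller's tryAcquire succeeds.
function makeLock() {
  let holder = null;
  return {
    tryAcquire(id) {
      if (holder !== null) return false; // lock taken: this instance skips the tick
      holder = id;
      return true;
    },
    release(id) {
      if (holder === id) holder = null;
    },
  };
}

// Both instances attempt the lock each tick; exactly one succeeds,
// emits server_tick with its instance ID, then releases.
function runTick(lock, instanceIds, emit) {
  const winners = instanceIds.filter((id) => lock.tryAcquire(id));
  for (const id of winners) {
    emit({ event: "server_tick", winner: id });
    lock.release(id); // released well before the next tick fires
  }
  return winners;
}

const events = [];
const winners = runTick(makeLock(), ["instance-A", "instance-B"], (e) => events.push(e));
console.log(winners); // exactly one instance emits per tick
```

However many instances contend, the mutual exclusion guarantees one `server_tick` per interval, which is the exactly-once property the demo visualizes.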
The Engineering Challenge
The key insight was using retryCount: 0 on Redlock. With retries enabled, both instances queue up for the lock, and when the tick interval fires again before the queue drains, you get multiple ticks per interval — exactly the race condition you're trying to prevent. Zero retries means: if you didn't win this tick, you skip it. Clean, exactly-once semantics at the cost of occasional missed ticks under high contention.
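A sketch of the zero-retry idea: the Redlock option name follows that library's documented API, while the tick handler below is written against a generic `tryAcquire` so the skip-on-failure logic stands alone; `onTick`, `doWork`, and the metrics object are illustrative names:

```javascript
// Zero-retry Redlock configuration (sketch):
//
//   const redlock = new Redlock([redisClient], {
//     retryCount: 0, // do not queue: losing the race means skipping this tick
//   });
//
// The handler itself, independent of Redlock: tryAcquire() resolves to a
// lock object for the winner and rejects when another instance holds it.
async function onTick(tryAcquire, doWork, metrics) {
  let lock;
  try {
    lock = await tryAcquire();
  } catch {
    metrics.ticksSkipped += 1; // lost the race: skip, preserving exactly-once
    return false;
  }
  try {
    metrics.ticksAcquired += 1;
    await doWork(); // emit server_tick, update scores, etc.
  } finally {
    await lock.release(); // release before the next tick (TTL ≤1.5s as backstop)
  }
  return true;
}
```

Because acquisition either succeeds immediately or fails immediately, no instance ever fires a stale tick after the interval has moved on.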
User Journey
Interested in the full engineering breakdown?
I'm always open to discussing technical implementations, from state management strategies to infrastructure scaling.