# Supabase Realtime: Building Multiplayer Features Without Running a Single WebSocket Server
## The Three Primitives
Supabase Realtime gives you three distinct primitives, and understanding when to use each is the difference between a smooth experience and a laggy mess:
1. Broadcast — Client-to-client messaging through the server. No persistence. Sub-20ms latency. Think: cursor positions, typing indicators, ephemeral reactions.
2. Presence — Shared state that tracks who's online and their current status. Automatically synced when users join, leave, or disconnect. Think: "5 users viewing this document," online indicators.
3. Postgres Changes — CDC (Change Data Capture) from your Postgres WAL. Any INSERT, UPDATE, DELETE on a subscribed table pushes to connected clients. Think: live feeds, collaborative editing persistence, real-time dashboards.
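All three primitives attach to a single channel, which can be sketched as one wiring function. This is a sketch under assumptions: `wireDocumentChannel` and the handler names are my own placeholders, and only `.channel()`, `.on()`, and `.subscribe()` are the actual SDK surface.

```js
// Sketch: attach all three Realtime primitives to one channel.
// `handlers` is a hypothetical bag of callbacks supplied by the caller.
function wireDocumentChannel(client, documentId, handlers) {
  const channel = client.channel(`document:${documentId}`);

  // Broadcast: ephemeral client-to-client events (e.g. cursors)
  channel.on('broadcast', { event: 'cursor' }, handlers.onCursor);

  // Presence: who is currently connected
  channel.on('presence', { event: 'sync' }, handlers.onPresenceSync);

  // Postgres Changes: persisted rows pushed from the WAL
  channel.on(
    'postgres_changes',
    { event: '*', schema: 'public', table: 'document_blocks' },
    handlers.onRowChange
  );

  return channel.subscribe();
}
```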
## Building a Collaborative Document Editor
Let me walk through the real implementation. The document editor needs:
- Live cursor positions (Broadcast)
- Who's currently editing (Presence)
- Persistent document changes (Postgres Changes)
### Setting Up the Channel

```js
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY, {
  realtime: {
    params: {
      eventsPerSecond: 20, // Rate limit outgoing events
    },
  },
});

// One channel per document
const channel = supabase.channel(`document:${documentId}`, {
  config: {
    broadcast: { self: false }, // Don't echo my own broadcasts back
    presence: { key: userId }, // Stable presence key
  },
});
```
### Presence: Who's Here?

```js
// Register presence listeners before subscribing, then track this
// user's presence once the channel is live
channel.on('presence', { event: 'sync' }, () => {
  const presenceState = channel.presenceState();
  // Convert to array of active users
  const activeUsers = Object.entries(presenceState).map(([key, values]) => ({
    ...values[0], // Most recent presence for this key
    presenceKey: key,
  }));
  setActiveUsers(activeUsers);
});

channel.on('presence', { event: 'join' }, ({ key, newPresences }) => {
  console.log(`${newPresences[0].name} joined the document`);
  showToast(`${newPresences[0].name} is now viewing this document`);
});

channel.on('presence', { event: 'leave' }, ({ key, leftPresences }) => {
  console.log(`${leftPresences[0].name} left the document`);
  removeCursor(leftPresences[0].userId);
});

// Track this user's presence
channel.subscribe(async (status) => {
  if (status === 'SUBSCRIBED') {
    await channel.track({
      userId: user.id,
      name: user.name,
      avatar: user.avatar_url,
      color: assignedColor,
      cursor: null,
      lastActive: new Date().toISOString(),
    });
  }
});
```
The `presence: { key: userId }` config is critical. Without it, Supabase generates a random key per connection. When a user's network blips and they reconnect, they get a new key — so the UI shows them leaving and rejoining. With a stable user ID as the key, reconnections update the existing presence entry instead of creating a new one.
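The effect of a stable key can be shown with a tiny reducer over presence state. This is a simplified model of my own, not the SDK's internal logic: state maps a presence key to an array of metas, newest first.

```js
// Simplified presence-state model (not the SDK's internals).
// With a stable key (userId), a reconnect overwrites the old entry;
// with a random per-connection key, it would add a second entry instead.
function applyJoin(state, key, meta) {
  return { ...state, [key]: [meta, ...(state[key] ?? [])] };
}

function applyLeave(state, key) {
  const { [key]: _gone, ...rest } = state;
  return rest;
}

function activeUserCount(state) {
  return Object.keys(state).length;
}
```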
### Broadcast: Live Cursors

```js
// Send cursor position (throttled to 20fps by eventsPerSecond config)
function handleMouseMove(e) {
  const position = editorToDocumentCoords(e.clientX, e.clientY);
  channel.send({
    type: 'broadcast',
    event: 'cursor',
    payload: {
      userId: user.id,
      x: position.x,
      y: position.y,
      selection: getCurrentSelection(), // Text selection range, if any
    },
  });
}

// Receive other users' cursors
channel.on('broadcast', { event: 'cursor' }, ({ payload }) => {
  updateRemoteCursor(payload.userId, payload.x, payload.y, payload.selection);
});

// Typing indicator via broadcast
let typingTimeout;
function handleKeyPress() {
  channel.send({
    type: 'broadcast',
    event: 'typing',
    payload: { userId: user.id, name: user.name },
  });

  // Clear typing indicator after 2 seconds of inactivity
  clearTimeout(typingTimeout);
  typingTimeout = setTimeout(() => {
    channel.send({
      type: 'broadcast',
      event: 'stopped_typing',
      payload: { userId: user.id },
    });
  }, 2000);
}
```
Broadcast messages go through the Supabase Realtime server but never touch the database. Latency is typically 8-15ms between clients in the same region. For cursor positions at 20fps, that's imperceptible.
### Postgres Changes: Persistent Document State

```js
// Listen for document changes from the database
channel.on(
  'postgres_changes',
  {
    event: '*', // INSERT, UPDATE, DELETE
    schema: 'public',
    table: 'document_blocks',
    filter: `document_id=eq.${documentId}`,
  },
  (payload) => {
    switch (payload.eventType) {
      case 'INSERT':
        insertBlock(payload.new);
        break;
      case 'UPDATE':
        updateBlock(payload.new, payload.old);
        break;
      case 'DELETE':
        deleteBlock(payload.old);
        break;
    }
  }
);
```
The filter is essential. Without ``filter: `document_id=eq.${documentId}` ``, you'd receive changes for ALL documents in the `document_blocks` table. On a busy app, that's a firehose of irrelevant data.
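Because the filter string is built by interpolation, it's worth validating the ID before it goes into the filter. The helper below is my own addition, not part of the SDK; it assumes document IDs are UUIDs.

```js
// Build the Postgres Changes filter string, rejecting anything that
// isn't a UUID so arbitrary input can't alter the filter expression.
function documentFilter(documentId) {
  const uuidRe =
    /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  if (!uuidRe.test(documentId)) {
    throw new Error(`invalid document id: ${documentId}`);
  }
  return `document_id=eq.${documentId}`;
}
```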
## Conflict Resolution: The Hard Part
When two users edit the same block simultaneously, you need a merge strategy. I use operational transformation (OT) — not because it's the best approach (CRDTs are theoretically superior), but because it's simpler to implement correctly for text editing:
```js
// When saving a local change
async function saveBlockEdit(blockId, newContent, baseVersion) {
  const { data, error } = await supabase
    .from('document_blocks')
    .update({
      content: newContent,
      version: baseVersion + 1,
      last_edited_by: user.id,
      last_edited_at: new Date().toISOString(),
    })
    .eq('id', blockId)
    .eq('version', baseVersion) // Optimistic concurrency control
    .select()
    .single();

  if (error) {
    // Version mismatch — someone else edited this block
    const { data: current } = await supabase
      .from('document_blocks')
      .select('*')
      .eq('id', blockId)
      .single();

    // Merge: transform our edit against the remote edit, then retry
    const mergedContent = transformEdit(newContent, current.content, baseVersion);
    return saveBlockEdit(blockId, mergedContent, current.version);
  }

  return data;
}
```
The optimistic concurrency control via `.eq('version', baseVersion)` ensures that if someone else updated the block between our read and write, the update matches zero rows and fails, so we can merge properly. Because a single UPDATE statement evaluates its WHERE clause and applies the write atomically, there are no race conditions and no lost updates.
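The `transformEdit` call above is where the OT happens. A drastically simplified sketch of the idea, for insert-only edits, looks like this; real OT (and the actual `transformEdit`) must also handle deletes, selections, and tie-breaking by site ID, so treat this as illustration only.

```js
// An op is { pos, text }: insert `text` at offset `pos`.
// transformOp shifts a local insert past a concurrent remote insert so
// the two ops converge to the same document in either application order.
function transformOp(local, remote) {
  if (remote.pos <= local.pos) {
    return { pos: local.pos + remote.text.length, text: local.text };
  }
  return local;
}

function applyOp(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}
```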
## Scaling Considerations

### Connection Management
Each browser tab opens one WebSocket connection to Supabase Realtime. If a user has your app open in 3 tabs, that's 3 connections. At scale, this adds up fast.
```js
// Detect duplicate tabs and share a single connection
const broadcastChannel = new BroadcastChannel('supabase-realtime');

// Leader election — only one tab maintains the WebSocket
let isLeader = false;
let heartbeatInterval = null;

broadcastChannel.onmessage = (event) => {
  if (event.data.type === 'leader-heartbeat') {
    // Another tab is leading — stay (or become) a follower.
    // A full implementation would also tear down this tab's
    // Realtime connection here if it had one.
    isLeader = false;
    clearInterval(heartbeatInterval);
  }
  if (event.data.type === 'realtime-event') {
    // Forward events from leader tab to follower tabs
    handleRealtimeEvent(event.data.payload);
  }
};

// Try to become leader
setTimeout(() => {
  if (!isLeader) {
    isLeader = true;
    heartbeatInterval = setInterval(() => {
      broadcastChannel.postMessage({ type: 'leader-heartbeat' });
    }, 1000);
    initializeSupabaseRealtime(); // Only the leader tab connects
  }
}, Math.random() * 1000); // Random delay to avoid conflicts
```
### Database-Level Optimization

Postgres CDC works by reading the WAL (Write-Ahead Log). Every change to a subscribed table generates a WAL event that Supabase Realtime processes. On a busy table with frequent writes, this can lag:

```sql
-- Create a publication for only the tables you need CDC on
-- This prevents Realtime from processing WAL events for unrelated tables
CREATE PUBLICATION supabase_realtime FOR TABLE
  document_blocks,
  comments,
  activity_feed;

-- Add an index for the filter column to speed up server-side filtering
CREATE INDEX idx_document_blocks_document_id
  ON document_blocks (document_id);

-- DEFAULT keeps WAL lean: old-row images carry only the primary key
ALTER TABLE document_blocks REPLICA IDENTITY DEFAULT;
-- FULL logs the entire old row (richer `old` payloads, more WAL volume):
-- ALTER TABLE document_blocks REPLICA IDENTITY FULL;
```
### Rate Limiting Outgoing Events

```js
// Custom throttle that batches cursor updates to at most `fps` sends/sec
function createCursorThrottle(channel, userId, fps = 15) {
  let lastPosition = null;
  let scheduled = false;

  return function throttledCursorUpdate(x, y, selection) {
    lastPosition = { x, y, selection };
    if (!scheduled) {
      scheduled = true;
      setTimeout(() => {
        if (lastPosition) {
          channel.send({
            type: 'broadcast',
            event: 'cursor',
            payload: { userId, ...lastPosition },
          });
          lastPosition = null;
        }
        scheduled = false;
      }, 1000 / fps); // setTimeout honors `fps`; requestAnimationFrame would fire at display refresh (~60fps)
    }
  };
}
```
## Real-World Performance Numbers
I measured these on a Supabase Pro instance with a Next.js frontend deployed on Vercel:
| Feature | Latency (p50) | Latency (p99) | Messages/sec |
|---------|---------------|---------------|--------------|
| Broadcast (same region) | 12ms | 28ms | 500+ per client |
| Broadcast (cross-region) | 65ms | 120ms | 500+ per client |
| Presence sync | 15ms | 40ms | N/A (state-based) |
| Postgres Changes (filtered) | 35ms | 85ms | Limited by DB writes |
| Postgres Changes (unfiltered) | 50ms | 150ms | Limited by DB writes |
The cross-region broadcast latency (65ms p50) is worth noting. If your users are globally distributed and need sub-30ms latency, Supabase Realtime might not be sufficient — consider Cloudflare Durable Objects or a dedicated WebSocket infrastructure. But for most collaborative tools where 65ms is imperceptible to humans (document editing, project management boards, chat), it's more than adequate.
## Edge Functions for Server-Side Validation
Don't trust client-side events. Anyone can send arbitrary broadcast messages. For events that affect game state or business logic, validate through Supabase Edge Functions:
```ts
// supabase/functions/validate-move/index.ts
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

Deno.serve(async (req) => {
  const { gameId, playerId, move } = await req.json();

  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  );

  // Fetch current game state
  const { data: game } = await supabase
    .from('games')
    .select('*')
    .eq('id', gameId)
    .single();

  if (!game) {
    return new Response(JSON.stringify({ error: 'Game not found' }), { status: 404 });
  }

  // Validate the move server-side
  if (game.current_turn !== playerId) {
    return new Response(JSON.stringify({ error: 'Not your turn' }), { status: 400 });
  }
  if (!isValidMove(game.board, move)) {
    return new Response(JSON.stringify({ error: 'Invalid move' }), { status: 400 });
  }

  // Apply the move — this triggers Postgres Changes for all subscribers
  const { data: updated } = await supabase
    .from('games')
    .update({
      board: applyMove(game.board, move),
      current_turn: nextPlayer(game),
      move_history: [...game.move_history, { playerId, move, timestamp: new Date() }],
    })
    .eq('id', gameId)
    .select()
    .single();

  return new Response(JSON.stringify(updated));
});
```
## The Architecture That Emerged

```
[Browser] ←→ [Supabase Realtime (WebSocket)]
                ├── Broadcast: cursors, typing, ephemeral events
                ├── Presence: online users, status
                └── Postgres Changes: document content, game state
                        ↑
[Vercel Edge] → [Supabase Edge Functions] → [Postgres]
                (validation, auth, business logic)
```
The mental model: Broadcast is a WebSocket relay, Presence is a distributed hash table, and Postgres Changes is a push notification system for your database. Layer them together and you get a surprisingly capable real-time platform without managing any infrastructure.
After six months in production with ~300 daily active users, the collaboration features have been solid. We've had two Supabase Realtime outages (both under 5 minutes), zero data loss (because the persistent state is in Postgres, not in the WebSocket layer), and our Supabase bill is $25/month for the Pro tier. The equivalent custom infrastructure — a WebSocket server, Redis for presence, a change stream consumer — would cost 10x more in both money and maintenance time.