LiveQuery & observers
How Meteor's reactive layer works, why it falls back to polling, and how UptimeClarity surfaces it.
LiveQuery is Meteor's reactive data layer — the engine that powers
Meteor.publish / Meteor.subscribe and reactive Mongo cursors. When a
client subscribes to a publication, LiveQuery watches the underlying MongoDB
query and pushes added / changed / removed messages over DDP to keep
the client's Minimongo cache in sync.
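Conceptually, those messages are a diff between successive query-result snapshots. A minimal sketch in plain Node (not Meteor's implementation; the snapshot shape of id → doc is an assumption for illustration):

```javascript
// Diff two query-result snapshots (maps of id → doc) into the
// DDP-style messages LiveQuery would emit: added / changed / removed.
function diffSnapshots(prev, next) {
  const messages = [];
  for (const [id, doc] of Object.entries(next)) {
    if (!(id in prev)) {
      messages.push({ msg: 'added', id, fields: doc });
    } else if (JSON.stringify(prev[id]) !== JSON.stringify(doc)) {
      messages.push({ msg: 'changed', id, fields: doc });
    }
  }
  for (const id of Object.keys(prev)) {
    if (!(id in next)) messages.push({ msg: 'removed', id });
  }
  return messages;
}

const before = { a: { title: 'Hi' }, b: { title: 'Old' } };
const after  = { a: { title: 'Hi!' }, c: { title: 'New' } };
console.log(diffSnapshots(before, after).map((m) => m.msg));
// → [ 'changed', 'added', 'removed' ]
```

Each driver below differs only in *how* it learns that a new snapshot exists, not in the message format it emits.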
It's also the source of most "my Meteor app got slow at scale" stories.
The three drivers
LiveQuery uses one of three drivers to detect changes in MongoDB. Which one runs is decided per publication, at runtime, based on what the database supports.
change_stream (default on replica sets)
Meteor 2.8+ uses Mongo change streams when available. Efficient, scalable, and the right answer for almost every modern deployment.
oplog
The classic driver. Tails Mongo's replication oplog. Fast, scales to many
observers, but requires the app's Mongo user to have access to local.oplog.rs.
polling
The fallback. Re-runs the query every ~10 seconds for every observer. Cheap to set up, but each active observer re-runs its full query every interval, so results lag by up to ~10 seconds and Mongo load grows linearly with observer count. At six re-runs per observer per minute, a busy app with many polling observers can saturate Mongo.
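The arithmetic behind that load is worth making explicit. A back-of-envelope sketch, using the ~10-second default interval mentioned above (observer counts are illustrative):

```javascript
// Every polling observer re-runs its full query once per interval,
// so Mongo query load grows linearly with the number of observers.
function pollingQueriesPerMinute(observers, intervalMs = 10_000) {
  return observers * (60_000 / intervalMs);
}

console.log(pollingQueriesPerMinute(1));   // → 6 full query re-runs/min
console.log(pollingQueriesPerMinute(500)); // → 3000 full query re-runs/min
```

Note that this is per server process; a horizontally scaled deployment multiplies it again by pod count.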
How a publication picks a driver
LiveQuery walks down this list and uses the first driver that's eligible for the query:
1. Change streams — used if the cluster is a replica set, the query is supported by $changeStream (no $where, no geo, no text), and Mongo ≥ 3.6.
2. Oplog — used if the app has oplog access and the query is "oplog-safe" (no $where, no Mongo.Cursor transforms, etc.).
3. Polling — used otherwise. Always.
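The cascade can be sketched as a first-match-wins function. This is an illustration of the selection order described above, not Meteor's actual code, and the fields on q are assumptions:

```javascript
// First-match-wins driver selection, mirroring the list above.
// Missing fields default to falsy, so the sparsest query still resolves.
function pickDriver(q) {
  const changeStreamOk =
    q.isReplicaSet && q.mongoVersion >= 3.6 &&
    !q.usesWhere && !q.usesGeo && !q.usesTextSearch;
  if (changeStreamOk) return 'change_stream';

  const oplogOk = q.hasOplogAccess && !q.usesWhere && !q.hasCursorTransform;
  if (oplogOk) return 'oplog';

  return 'polling'; // always eligible
}

console.log(pickDriver({ isReplicaSet: true, mongoVersion: 6.0 }));
// → 'change_stream'
console.log(pickDriver({ isReplicaSet: false, hasOplogAccess: true }));
// → 'oplog'
console.log(pickDriver({ isReplicaSet: true, mongoVersion: 6.0, usesWhere: true }));
// → 'polling' — one $where clause silently drops the query to the bottom
```

The last call is the "subtle query change" trap: adding a single disqualifying operator moves a publication from change streams straight to polling.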
A single app often runs all three drivers at once across different publications. And — crucially — a publication that was on change streams can silently fall back to polling if its query is changed in a subtle way.
Observers
Each Meteor.subscribe on the client creates an observer on the server
that holds onto the LiveQuery cursor. Observers are deduplicated when their
arguments match exactly, so 1000 clients subscribed to posts.byUser with
the same userId create 1 observer, not 1000.
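The exact-match dedup can be sketched with a keyed map (illustrative, not Meteor's internals; the key format is an assumption):

```javascript
// Observers are shared when (publication name, args) match exactly.
const observers = new Map();

function subscribe(name, args) {
  const key = name + ':' + JSON.stringify(args); // exact-match key
  if (!observers.has(key)) {
    observers.set(key, { refs: 0 }); // first subscriber creates the observer
  }
  observers.get(key).refs += 1;      // later subscribers just attach to it
  return observers.get(key);
}

for (let i = 0; i < 1000; i++) subscribe('posts.byUser', { userId: 'u42' });
console.log(observers.size); // → 1

subscribe('posts.byUser', { userId: 'u43' }); // different args → new observer
console.log(observers.size); // → 2
```

The flip side of exact matching: any variation in args, even field order in some serializations, defeats the sharing entirely.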
But:
- Observers are not deduplicated across servers in a horizontally-scaled deployment — a 6-pod cluster can hold 6 observers for the same args.
- Observers are not automatically stopped when a client disconnects mid-flight.
- A common Meteor footgun is starting observers in Meteor.publish callbacks without registering this.onStop(), leading to observer leaks: the observer count grows unbounded over time.
Spotting an observer leak
The pattern is:
- Steady or decreasing connection count.
- Steadily increasing observer count.
- RAM creeping upward; eventually OOM.
- MongoDB load growing in the background.
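That signature can be checked mechanically. A minimal sketch of a Δ/min-style heuristic (the threshold and sample shape are assumptions for illustration, not UptimeClarity's actual algorithm):

```javascript
// Flag a leak when observer count climbs while connections stay flat.
function leakSuspected(samples, threshold = 5) {
  const first = samples[0];
  const last = samples[samples.length - 1];
  const minutes = (last.t - first.t) / 60_000;
  const observerSlope = (last.observers - first.observers) / minutes;
  const connectionSlope = (last.connections - first.connections) / minutes;
  return observerSlope > threshold && connectionSlope <= 0;
}

const samples = [
  { t: 0,       observers: 300, connections: 120 },
  { t: 300_000, observers: 342, connections: 118 }, // 5 minutes later
];
console.log(leakSuspected(samples));
// → true: +8.4 observers/min while connections are flat
```

The key point is the *pair* of slopes: observer growth alone can just mean traffic; observer growth against flat connections means something isn't being stopped.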
UptimeClarity's Reactive view surfaces this directly:
Publication     Driver          Observers   Δ/min
posts.byUser    polling         342         +12   ⚠ leak detected
inbox.unread    change_stream   18          ±0
team.members    oplog           7           ±0

The diagnosis copy on the homepage —

1 leaking observer in posts.byUser — switch to the oplog driver.

— is exactly this view, condensed.
Common fixes
1. Always register this.onStop()
Meteor.publish('posts.byUser', function (userId) {
check(userId, String);
const handle = Posts.find({ userId }).observeChanges({ /* … */ });
this.onStop(() => handle.stop()); // ← critical
this.ready();
});

2. Eliminate polling where you can
If a publication is using polling because of a $where clause or text search,
restructure the query to avoid them. The win is usually 10–100× in
throughput.
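For example, a $where predicate can often be restated as a structured selector. A sketch of the equivalence, filtering in-memory to show both forms select the same documents (plain Node; the matcher helpers are illustrative, not Mongo's):

```javascript
// Before: a $where clause forces the polling driver — the predicate
// is opaque JavaScript that change streams and oplog can't interpret.
const pollingQuery = { $where: 'this.score > 10' };

// After: the same predicate as a structured selector, eligible for
// change streams / oplog.
const reactiveQuery = { score: { $gt: 10 } };

// Tiny in-memory matchers to demonstrate the two are equivalent.
const matchWhere = (doc) => Function('return ' + pollingQuery.$where).call(doc);
const matchGt = (doc) => doc.score > reactiveQuery.score.$gt;

const docs = [{ score: 4 }, { score: 12 }, { score: 30 }];
console.log(docs.filter(matchWhere).length === docs.filter(matchGt).length);
// → true: both select the two docs with score > 10
```

The structured form is also indexable, so the win compounds: a better driver *and* a cheaper query.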
3. Cap subscription cardinality
A publication that takes a freeform user input should validate it. A user
subscribing to posts.byUser('§§§') shouldn't create a new observer — return
a known-empty cursor for invalid args.
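Such a guard can be sketched as plain validation that runs before the publication creates a cursor. The 17-character alphanumeric id format below is an assumption for illustration:

```javascript
// Reject freeform input before it can mint a new observer.
// The id format (17-char alphanumeric) is an assumption, not a Meteor rule.
function isValidUserId(userId) {
  return typeof userId === 'string' && /^[A-Za-z0-9]{17}$/.test(userId);
}

console.log(isValidUserId('aB3dE5fG7hJ9kL1mN')); // → true (well-formed id)
console.log(isValidUserId('§§§'));               // → false
console.log(isValidUserId(42));                  // → false (wrong type entirely)
```

On a false result the publication should return a known-empty cursor (or call this.ready() with nothing), so invalid args all collapse onto zero extra observers instead of one each.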