Node.js Event Loop Internals — Phases, Microtasks, and Performance
Most Node.js developers know the event loop exists. Fewer understand why their setTimeout(fn, 0) callback fires after Promise.resolve().then(fn), or why a single CPU-bound function can freeze an entire HTTP server. This guide breaks open the event loop, phase by phase, so you can write non-blocking code with confidence.
Why the event loop matters#
Node.js runs JavaScript on a single thread. Every incoming HTTP request, every database callback, every file read — they all funnel through one event loop. If you block it, nothing else runs. Understanding its internals is the difference between a server that handles 10,000 concurrent connections and one that chokes at 100.
The engine under the hood: libuv#
Node.js delegates asynchronous I/O to libuv, a C library that provides:
- A cross-platform event loop
- Thread pool for file system operations and DNS lookups
- Abstractions over epoll (Linux), kqueue (macOS), and IOCP (Windows)
When you call fs.readFile(), Node hands the work to libuv's thread pool. When the read completes, libuv queues a callback for the event loop to pick up.
The six phases of the event loop#
Each iteration (called a tick) passes through these phases in order:
1. Timers#
Executes callbacks scheduled by setTimeout() and setInterval(). The delay you specify is a minimum, not a guarantee — the callback runs only after the event loop reaches this phase.
// Scheduled for "0ms" but runs after the current tick completes
setTimeout(() => console.log("timer"), 0);
2. Pending callbacks#
Handles I/O callbacks deferred from the previous loop iteration — for example, TCP error callbacks like ECONNREFUSED. Most developers never interact with this phase directly.
3. Idle / Prepare#
Internal-only phase used by libuv for housekeeping. Node uses it to gather metrics and prepare for polling. You cannot schedule work here from JavaScript.
4. Poll#
The most important phase. The poll phase does two things:
- Calculates how long to block waiting for I/O
- Processes events in the poll queue (file reads, network data, etc.)
If the poll queue is empty, the loop checks for setImmediate() callbacks and moves to the check phase. If no immediates are scheduled, it waits for new I/O events.
5. Check#
Executes setImmediate() callbacks. This phase runs immediately after the poll phase completes, which is why setImmediate() always fires before setTimeout(fn, 0) when called from within an I/O callback.
const fs = require("fs");
fs.readFile(__filename, () => {
  setImmediate(() => console.log("immediate")); // Always first
  setTimeout(() => console.log("timeout"), 0); // Always second
});
6. Close callbacks#
Handles close events like socket.on('close', ...). This is cleanup territory — resources being torn down after their work is done.
Microtasks vs macrotasks#
This is where most confusion lives. The event loop has two additional queues that run between phases:
Microtask queue (high priority)#
- Promise.then() / Promise.catch() / Promise.finally()
- queueMicrotask()
- Drained completely between every phase transition
Next tick queue (even higher than microtasks)#
- process.nextTick()
- Drained before microtasks, after the current operation completes
setTimeout(() => console.log("1: timeout"), 0);
Promise.resolve().then(() => console.log("2: promise"));
process.nextTick(() => console.log("3: nextTick"));
console.log("4: sync");
// Output: 4: sync -> 3: nextTick -> 2: promise -> 1: timeout
The execution order rule#
After each phase of the event loop completes:
- Drain the process.nextTick() queue
- Drain the microtask (Promise) queue
- Move to the next phase
This means a recursive process.nextTick() call can starve the event loop — it will never advance to the next phase.
// WARNING: This starves the event loop forever
function recurse() {
  process.nextTick(recurse);
}
recurse();
// setTimeout callbacks will NEVER fire
Blocking the event loop#
Common blockers#
| Blocker | Why it hurts |
|---|---|
| JSON.parse() on large payloads | Synchronous CPU work |
| crypto.pbkdf2Sync() | Intentionally slow hashing |
| Complex regex on user input | Catastrophic backtracking (ReDoS) |
| Large Array.sort() | O(n log n) on the main thread |
| fs.readFileSync() | Blocks until disk I/O completes |
How to detect blocking#
// Simple event loop lag detector
let lastCheck = Date.now();
setInterval(() => {
  const now = Date.now();
  const lag = now - lastCheck - 1000;
  if (lag > 50) console.warn(`Event loop lag: ${lag}ms`);
  lastCheck = now;
}, 1000);
In production, use tools like clinic.js or the built-in perf_hooks module:
const { monitorEventLoopDelay } = require("perf_hooks");
const h = monitorEventLoopDelay({ resolution: 20 });
h.enable();
// Check p99 event loop delay
setInterval(() => {
  console.log(`p99 loop delay: ${h.percentile(99) / 1e6}ms`);
  h.reset();
}, 5000);
Worker threads: escaping the single thread#
When you genuinely need CPU-intensive work, worker threads let you run JavaScript in parallel without blocking the event loop.
const { Worker, isMainThread, parentPort } = require("worker_threads");

// Naive recursive Fibonacci -- deliberately CPU-heavy
function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

if (isMainThread) {
  const worker = new Worker(__filename);
  worker.on("message", (result) => {
    console.log(`Fibonacci result: ${result}`);
    worker.terminate(); // Let the process exit cleanly
  });
  worker.postMessage(42);
} else {
  parentPort.on("message", (n) => {
    // CPU-heavy work runs off the main thread
    parentPort.postMessage(fibonacci(n));
  });
}
When to use worker threads#
- Image processing or video transcoding
- Heavy JSON transformation
- Cryptographic operations
- Machine learning inference
- Any computation that takes more than ~50ms
When NOT to use worker threads#
- I/O-bound work (use async APIs instead)
- Simple request handling (the overhead of thread creation is not worth it)
- Work that can be offloaded to a separate service
The libuv thread pool#
Libuv maintains a default thread pool of 4 threads (configurable via UV_THREADPOOL_SIZE, max 1024). These threads handle:
- File system operations (fs.readFile, fs.stat, etc.)
- DNS lookups (dns.lookup, NOT dns.resolve)
- Some crypto operations (crypto.pbkdf2, crypto.randomBytes)
- zlib compression
If all 4 threads are busy, new operations queue up. For I/O-heavy applications, increasing the pool size can dramatically reduce latency:
UV_THREADPOOL_SIZE=16 node server.js
Common patterns for non-blocking code#
Break up CPU work with setImmediate#
function processLargeArray(items, callback) {
  let index = 0;
  function chunk() {
    const end = Math.min(index + 1000, items.length);
    for (; index < end; index++) {
      // Process one item
      transform(items[index]);
    }
    if (index < items.length) {
      setImmediate(chunk); // Yield to the event loop
    } else {
      callback();
    }
  }
  chunk();
}
Use streams instead of buffering#
// BAD: Loads entire file into memory, blocks during parse
const data = JSON.parse(fs.readFileSync("large.json", "utf8"));
// GOOD: Stream and process incrementally
const JSONStream = require("JSONStream");
fs.createReadStream("large.json")
  .pipe(JSONStream.parse("*"))
  .on("data", (item) => handleItem(item)); // handleItem: your per-item logic
Event loop in production: key metrics#
Monitor these to keep your Node.js service healthy:
| Metric | Target | Tool |
|---|---|---|
| Event loop lag (p99) | Under 50ms | perf_hooks, Clinic.js |
| Active handles | Stable, no leaks | process._getActiveHandles() |
| Active requests | Proportional to load | process._getActiveRequests() |
| Thread pool utilization | Under 80% | Custom UV metrics |
| Heap used | Under 70% of limit | process.memoryUsage() |
Conclusion#
The Node.js event loop is not magic — it is a deterministic state machine with well-defined phases and priority queues. Timers, I/O polling, immediates, and close callbacks each get their turn, with microtasks and nextTick draining between every transition. Understanding this machinery lets you avoid blocking, choose the right async primitive, and know exactly when to reach for worker threads.
This is article #416 on Codelit.io — your deep-dive resource for system design, backend engineering, and infrastructure patterns. Explore more at codelit.io.