Database Triggers and Events — Reacting to Data Changes at the Source
Why react to data changes at the database level?#
Application-level events are convenient but unreliable. If your app writes a row and then publishes an event, a crash between those two steps means the event is lost. Database-level mechanisms tie event emission to the transaction itself, giving you stronger guarantees.
There are several approaches — each with different tradeoffs in coupling, performance, and complexity.
Database triggers#
A trigger is a stored procedure that fires automatically when a row is inserted, updated, or deleted.
PostgreSQL trigger example#
```sql
CREATE OR REPLACE FUNCTION notify_order_created()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify('order_events', json_build_object(
    'action', TG_OP,
    'order_id', NEW.id,
    'total', NEW.total
  )::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER order_after_insert
AFTER INSERT ON orders
FOR EACH ROW
EXECUTE FUNCTION notify_order_created();
```
This trigger fires after every insert on the orders table, sending a JSON payload through PostgreSQL's NOTIFY channel.
MySQL trigger example#
```sql
-- The DELIMITER change is needed in the mysql client so the
-- semicolon inside BEGIN...END does not end the statement early.
DELIMITER $$

CREATE TRIGGER order_after_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  INSERT INTO order_events (order_id, action, payload, created_at)
  VALUES (NEW.id, 'INSERT', JSON_OBJECT('total', NEW.total), NOW());
END$$

DELIMITER ;
```
MySQL lacks a built-in pub/sub mechanism, so triggers typically write to an events table that a separate process polls.
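That polling process might run something like the following. Note the `processed` flag is an assumption, not part of the table definition above — a real events table would need such a column (or a high-water-mark id) for the poller to track its progress:

```sql
-- Hypothetical poller pass: fetch unpublished events in insert order,
-- publish them, then mark them done. Assumes an added `processed` column.
SELECT id, order_id, action, payload
FROM order_events
WHERE processed = 0
ORDER BY id
LIMIT 100;

-- After each batch is delivered to the downstream system:
UPDATE order_events SET processed = 1 WHERE id IN (/* published ids */);
```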
When triggers make sense#
- Enforcing business rules that must never be bypassed (audit logs, cascading updates)
- Lightweight notifications where the trigger body is simple
- Legacy systems where modifying application code is impractical
When triggers become a problem#
- Hidden logic — triggers execute invisibly, making debugging harder
- Performance — every row operation pays the trigger cost
- Testing — triggers are difficult to mock or disable in test environments
- Coupling — schema changes can silently break trigger logic
PostgreSQL LISTEN/NOTIFY#
PostgreSQL provides a built-in pub/sub system that lets any connected client subscribe to named channels.
```sql
-- Publisher (inside a transaction or trigger)
NOTIFY order_events, '{"order_id": 42, "status": "paid"}';

-- Subscriber (separate connection)
LISTEN order_events;
```
In application code (Node.js example):
```javascript
const { Client } = require('pg');

const client = new Client();
await client.connect();
await client.query('LISTEN order_events');

client.on('notification', (msg) => {
  const payload = JSON.parse(msg.payload);
  console.log('Order event:', payload);
});
```
LISTEN/NOTIFY limitations#
- Payload size — limited to 8000 bytes per notification
- No persistence — if no listener is connected, the message is lost
- No acknowledgment — no built-in retry or dead-letter queue
- Single database — does not work across database clusters without custom plumbing
For lightweight, low-latency notifications within a single PostgreSQL instance, LISTEN/NOTIFY is excellent. For anything requiring durability, you need something more.
MySQL binary log (binlog)#
MySQL's binlog records every data modification as a stream of events. Originally designed for replication, it has become the foundation for change data capture.
The binlog contains:
- Row-based events — the before and after image of each modified row
- Statement-based events — the SQL statements themselves
- Mixed mode — a combination chosen by the server
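Log-based CDC tools generally require row-based logging, because statement-based events do not carry the before/after row images that change events are built from. Checking and switching the format (where you have the privilege; it can also be set in `my.cnf`):

```sql
-- CDC tools such as Debezium require row-based binlog events.
SHOW VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'ROW';
```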
Tools like Debezium, Maxwell, and Canal read the binlog and publish changes to Kafka, RabbitMQ, or other message brokers.
```
binlog event: INSERT INTO orders (id, total) VALUES (42, 99.00)
  -> Debezium -> Kafka topic: db.public.orders
  -> Consumer: update search index, send notification
```
Binlog advantages over triggers#
- Non-invasive — no stored procedures to maintain
- Complete stream — captures all changes, including those from migrations or admin queries
- Decoupled consumers — downstream systems subscribe to Kafka, not the database
Change data capture (CDC) as an alternative#
CDC captures row-level changes from the database transaction log and streams them to external systems. It is the modern alternative to both triggers and polling.
How CDC works#
- The CDC connector reads the database's write-ahead log (PostgreSQL) or binlog (MySQL)
- Each change is converted to a structured event with before/after state
- Events are published to a message broker (Kafka, Pulsar, etc.)
- Consumers process events asynchronously
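On the consumer side, a change event typically arrives as an envelope with `before`/`after` state and an operation code — the shape below follows Debezium's convention (`op` values `c`/`u`/`d`), but the routing logic is a hypothetical sketch, not any tool's API:

```javascript
// Route a Debezium-style change event to a downstream action.
// The envelope fields (op, before, after) follow Debezium's convention;
// the returned action objects are illustrative.
function handleChangeEvent(event) {
  switch (event.op) {
    case 'c': // create: index the new row
      return { action: 'index', doc: event.after };
    case 'u': // update: reindex with the after-image
      return { action: 'reindex', doc: event.after };
    case 'd': // delete: remove by the before-image's key
      return { action: 'remove', id: event.before.id };
    default:
      throw new Error(`Unknown op: ${event.op}`);
  }
}

// Example: an insert captured from the orders table
const insertEvent = {
  op: 'c',
  before: null,
  after: { id: 42, customer_id: 7, total: 99.0 },
};
const result = handleChangeEvent(insertEvent);
// result → { action: 'index', doc: { id: 42, customer_id: 7, total: 99 } }
```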
Popular CDC tools#
| Tool | Databases | Output | Model |
|---|---|---|---|
| Debezium | PostgreSQL, MySQL, MongoDB, SQL Server | Kafka | Log-based |
| Maxwell | MySQL | Kafka, RabbitMQ, Redis | Binlog |
| pglogical | PostgreSQL | Custom consumers | Logical replication |
| AWS DMS | Most major databases | Kinesis, S3 | Managed service |
CDC gives you the real-time reactivity of triggers without the coupling or performance overhead.
The transactional outbox pattern#
The outbox pattern solves the dual-write problem: how do you atomically update your database and publish an event?
How it works#
- Within the same database transaction, write your business data AND an event row to an outbox table
- A separate process (the relay) polls the outbox table or reads it via CDC
- The relay publishes each event to the message broker
- After successful publication, the relay marks the event as processed
```sql
BEGIN;

INSERT INTO orders (id, customer_id, total)
VALUES (42, 7, 99.00);

INSERT INTO outbox (aggregate_type, aggregate_id, event_type, payload)
VALUES ('Order', '42', 'OrderCreated', '{"total": 99.00}');

COMMIT;
```
Because both writes happen in the same transaction, either both succeed or both roll back. The relay guarantees at-least-once delivery to the broker.
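At-least-once delivery means the relay may publish the same event more than once, so consumers must deduplicate. A minimal sketch, assuming each outbox row carries a unique event id; in production the seen-set would live in a durable store (e.g. a processed_events table), not in memory:

```javascript
// Idempotent consumer sketch: an event id is handled at most once,
// even if the relay delivers it multiple times.
const seen = new Set();

function processOnce(event, handler) {
  if (seen.has(event.id)) return false; // duplicate delivery, skip
  handler(event);
  seen.add(event.id);
  return true;
}

// The relay redelivers the same event; the handler still runs once.
let applied = 0;
const event = { id: 'evt-42', type: 'OrderCreated', payload: { total: 99.0 } };
processOnce(event, () => applied++); // → true (handled)
processOnce(event, () => applied++); // → false (deduplicated)
// applied === 1
```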
Outbox relay strategies#
- Polling — query the outbox table every N milliseconds for unprocessed events
- CDC on outbox — use Debezium to tail the outbox table (no polling overhead)
- LISTEN/NOTIFY on outbox — PostgreSQL trigger on the outbox table notifies the relay
The CDC-based approach is preferred for high-throughput systems because it avoids polling overhead and captures events in transaction order.
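The polling strategy above can be sketched as a small drain loop. The `fetchBatch`, `publish`, and `markProcessed` functions are hypothetical stand-ins for a database query, a broker client, and an UPDATE statement; real implementations would be async, but the stand-ins here are synchronous so the control flow is easy to follow:

```javascript
// One relay pass: fetch unprocessed outbox rows in id order, publish
// each to the broker, then mark it processed. Publishing before marking
// is what yields at-least-once delivery: a crash between the two steps
// causes a redelivery, never a lost event.
function relayOnce(fetchBatch, publish, markProcessed) {
  const batch = fetchBatch(100); // e.g. SELECT ... WHERE processed = false ORDER BY id LIMIT 100
  for (const row of batch) {
    publish(row.event_type, row.payload);
    markProcessed(row.id);
  }
  return batch.length;
}

// Demo with in-memory stand-ins for the outbox table and the broker.
const outbox = [
  { id: 1, event_type: 'OrderCreated', payload: '{"total": 99.00}', processed: false },
  { id: 2, event_type: 'OrderPaid', payload: '{"total": 99.00}', processed: false },
];
const published = [];
const drained = relayOnce(
  (n) => outbox.filter((r) => !r.processed).slice(0, n),
  (type, _payload) => published.push(type),
  (id) => { outbox.find((r) => r.id === id).processed = true; },
);
// drained === 2, published === ['OrderCreated', 'OrderPaid']
```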
Triggers vs application events vs CDC#
| Approach | Atomicity | Coupling | Performance | Debugging |
|---|---|---|---|---|
| Database triggers | Strong (same tx) | High (logic in DB) | Medium | Hard |
| Application events | Weak (dual write) | Low | High | Easy |
| Outbox + CDC | Strong (same tx) | Low | High | Medium |
| LISTEN/NOTIFY | Strong (same tx) | Medium | High | Medium |
For new systems, the outbox pattern with CDC offers the best balance. You get transactional guarantees without burying logic in stored procedures.
Visualize event-driven data flows#
On Codelit, generate a PostgreSQL CDC pipeline or an outbox pattern architecture to see how data changes propagate from the database through Kafka to downstream consumers.
This is article #408 in the Codelit engineering blog series.