# 5.1 The Process Family
When you start Postgres, you aren't launching a single program. You're launching a **supervisor** that immediately spawns a small family of specialized worker processes. Each one has a narrow, well-defined job. Understanding who they are—and what they do between your queries—is essential to understanding every chapter that follows.
You can see the family yourself:
```sql
SELECT pid, backend_type
FROM pg_stat_activity
WHERE backend_type != 'client backend';
```
```
pid | backend_type
-----+------------------------------
68 | autovacuum launcher
65 | background writer
64 | checkpointer
69 | logical replication launcher
67 | walwriter
```
These five processes (plus the **Postmaster** itself, which doesn't appear in `pg_stat_activity`) are running at all times, even when no client is connected. They are the engine's autonomic nervous system.
## The Postmaster
The **Postmaster** is the parent of every other Postgres process. It is the first process that starts and the last to die.
Its responsibilities are simple and strict:
1. **Listen** for incoming connections on the configured port.
2. **Fork** a new **backend process** for each accepted connection.
3. **Supervise** all child processes. If any child crashes, the Postmaster terminates all remaining children, cleans up shared memory, and restarts the background workers from a clean state.
The Postmaster itself never executes queries. It never touches your data. It exists solely to manage the lifecycle of the processes that do.
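The Postmaster never shows up in `pg_stat_activity`, but you can still observe it from SQL. A quick check of when the supervisor started, and therefore how long the whole family has been alive:

```sql
-- When did the Postmaster start? Every other process is younger than this.
SELECT pg_postmaster_start_time() AS started,
       now() - pg_postmaster_start_time() AS uptime;
```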
> [!IMPORTANT]
> **Why this matters**: The Postmaster's crash-recovery behavior is the reason Postgres uses processes instead of threads. A crashed backend takes down only itself. The Postmaster detects the failure, resets shared memory, and re-spawns the background workers—all without a full server restart.
## Backend Processes (One Per Connection)
When you connect via `psql` or your application's connection pool, the Postmaster forks a dedicated **backend process** for your session. This process:
- Parses your SQL
- Plans the query
- Executes it against **[[Chapter 5/5.2 - The Warming Rack (Shared Buffers)|Shared Buffers]]**
- Returns results to your client
Each backend is an independent OS process with its own memory space (for `work_mem` sorts, hash tables, etc.) but sharing the global `shared_buffers` pool. When you disconnect, the backend exits.
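You can watch the one-process-per-connection rule directly, because every session is served by a different backend:

```sql
-- The OS process ID of the backend serving *this* session.
-- Open a second psql window and run the same query: you'll get a different PID.
SELECT pg_backend_pid();
```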
## The Background Workers
The remaining processes run continuously in the background, performing maintenance that keeps the engine healthy between—and during—your queries.
### Checkpointer
Periodically flushes all **dirty pages** from Shared Buffers to disk and writes a **Checkpoint Record** to the WAL. After a checkpoint, Postgres knows it can recover from that point forward without replaying older WAL segments. Controlled by `checkpoint_timeout` (default: 5 minutes) and `max_wal_size`.
→ *Introduced in [[Chapter 4/4.1 - The Pocket Diary (WAL & fsync)|4.1 The Pocket Diary]]*
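You can see how often checkpoints happen and what triggers them. On PostgreSQL 16 and earlier these counters live in `pg_stat_bgwriter`; on 17 and later they moved to `pg_stat_checkpointer` (as `num_timed` and `num_requested`):

```sql
-- Timed checkpoints are healthy; a high "requested" count means
-- checkpoints are being forced by WAL volume, a hint that
-- max_wal_size is too small for the write load.
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;  -- pg_stat_checkpointer on v17+
```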
### Background Writer
Proactively scans Shared Buffers for dirty pages and flushes small batches to disk in quiet intervals. Its goal is to ensure that when a backend needs a free buffer, one is already available—preventing the backend from stalling to perform an expensive synchronous write. Controlled by `bgwriter_delay` (default: 200ms).
→ *Introduced in [[Chapter 4/4.1 - The Pocket Diary (WAL & fsync)|4.1 The Pocket Diary]]*
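The Background Writer's activity is visible in the same statistics view:

```sql
-- buffers_clean: pages flushed by the background writer.
-- maxwritten_clean: how often a round stopped early because it hit its
-- write limit; a steadily rising value hints bgwriter_lru_maxpages is low.
SELECT buffers_clean, maxwritten_clean
FROM pg_stat_bgwriter;
```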
### WAL Writer
Flushes the **WAL Buffers** (in-memory WAL records) to the WAL files on disk. When `synchronous_commit = on`, individual backends perform their own `fsync()` calls, but the WAL Writer handles the background flushing for asynchronous commits and group commit batching. Controlled by `wal_writer_delay` (default: 200ms).
→ *Introduced in [[Chapter 4/4.1.1 - The Loose Handshake (Commit Tuning)|4.1.1 The Loose Handshake]]*
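The division of labor is visible per session: with synchronous commit, your backend pays for the flush at `COMMIT`; turning it off hands the flush to the WAL Writer. A minimal sketch:

```sql
-- Default: COMMIT waits until the WAL record is flushed to disk.
SHOW synchronous_commit;

-- Per-session opt-out: COMMIT returns immediately and the WAL Writer
-- flushes within roughly 3x wal_writer_delay. A crash can lose the last
-- few transactions, but it can never corrupt the database.
SET synchronous_commit = off;
```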
### Autovacuum Launcher
The launcher is a scheduler. It monitors table statistics and, when a table accumulates enough dead tuples, spawns **Autovacuum Worker** processes to reclaim space and update visibility maps. The number of concurrent workers is limited by `autovacuum_max_workers` (default: 3).
→ *Covered in depth in [[Chapter 5/5.4 - The Housekeepers (Vacuum & Freezing)|5.4 The Housekeepers]]*
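The same statistics the launcher consults are visible to you:

```sql
-- Which tables are accumulating dead tuples, and when autovacuum
-- last got to each of them.
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 5;
```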
### Logical Replication Launcher
The launcher runs even when logical replication is not configured (it appears in the listing above). When a subscription exists, it manages the lifecycle of **Logical Replication Workers**—the processes responsible for applying changes received from a publisher to the local database.
→ *Referenced in [[Chapter 4/4.3 - The Town Crier (Logical Replication)|4.3 The Town Crier]]*
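A minimal sketch of what puts those workers to work. The publication name, subscription name, table, and connection string here are all placeholders:

```sql
-- On the publisher:
CREATE PUBLICATION my_pub FOR TABLE orders;

-- On the subscriber: creating the subscription is what makes the
-- launcher spawn a logical replication worker.
CREATE SUBSCRIPTION my_sub
  CONNECTION 'host=primary dbname=app'
  PUBLICATION my_pub;
```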
### Startup Process
The first child the Postmaster spawns on boot. It replays WAL records to bring the database to a consistent state after an unclean shutdown. On a **replica**, the startup process runs continuously, applying WAL streamed from the primary.
→ *Referenced in [[Chapter 4/4.2 - The Recovery Parade (Crash Recovery)|4.2 The Recovery Parade]]*
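One function tells you which mode the startup process is in:

```sql
-- true on a replica (the startup process is continuously replaying WAL),
-- false on a primary that has finished recovery.
SELECT pg_is_in_recovery();
```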
### Stats Collector
Aggregates runtime statistics from all backends—row counts, block hits, sequential scans, index usage—and exposes them through the `pg_stat_*` views. In PostgreSQL 14 and earlier this was a dedicated process; since PostgreSQL 15 the statistics are kept in shared memory and the separate process is gone, though the views are unchanged. This data drives the **Autovacuum Launcher's** threshold decisions; the **Query Planner's** cost estimates come from the separate `ANALYZE` statistics in `pg_statistic`, which autovacuum also keeps fresh. Without it, autovacuum is flying blind.
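The per-table view is often the first diagnostic you'll reach for:

```sql
-- Access patterns per table: a high seq_scan count on a large table
-- is often the first hint that an index is missing.
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 5;
```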
### Archiver
Active only when `archive_mode = on`. Each time a WAL segment (16MB by default) is completed, the Archiver runs `archive_command` to copy it to a designated backup location (local directory, S3, etc.). This is the foundation of **Point-in-Time Recovery (PITR)**—the ability to restore the database to any moment covered by the archived WAL.
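Its health is tracked in a dedicated view:

```sql
-- How archiving is going: totals and the most recent segment archived.
-- failed_count > 0 means archive_command is failing and WAL will pile up.
SELECT archived_count, last_archived_wal, failed_count
FROM pg_stat_archiver;
```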
### Logger
Active when `logging_collector = on`. Captures all server log output (errors, warnings, slow queries) and writes it to log files, handling rotation and retention. Without it, log output goes to `stderr` only.
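A minimal `postgresql.conf` fragment to enable it; the directory and rotation values below are illustrative, not recommendations:

```
logging_collector = on
log_directory = 'log'        # relative to the data directory
log_rotation_age = 1d        # start a new file daily
log_rotation_size = 100MB    # ...or when the current file hits 100MB
```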
### WAL Sender / WAL Receiver
Used in **streaming replication**. The **WAL Sender** runs on the primary and streams WAL records over the network to replicas. The **WAL Receiver** runs on the replica and feeds the received records to the **Startup Process** for replay.
→ *Referenced in [[Chapter 7/7.0 - The Elephant in the Clouds (Distributed Storage)|Chapter 7 - Distributed Storage]]*
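On the primary, each connected replica shows up as one WAL Sender:

```sql
-- Run on the primary: one row per connected WAL Sender.
-- replay_lag shows how far behind each replica is applying WAL.
SELECT pid, application_name, state, sent_lsn, replay_lsn, replay_lag
FROM pg_stat_replication;
```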
## The Full Picture
```
postmaster
├── startup (crash recovery / WAL replay)
├── checkpointer
├── background writer
├── walwriter
├── stats collector (PostgreSQL 14 and earlier)
├── autovacuum launcher
│   └── autovacuum worker (spawned on demand)
├── logical replication launcher
│   └── logical replication worker (spawned on demand)
├── archiver (if archive_mode = on)
├── logger (if logging_collector = on)
├── wal sender (if replicas are connected)
├── backend (connection 1)
├── backend (connection 2)
└── backend (connection N)
```
Every process in this tree shares the same `shared_buffers` segment. Every process writes to the same WAL. The Postmaster watches them all, and if any single one fails, it resets the shared state and rebuilds the family from scratch.
---
| ← Previous | ↑ Table of Contents | Next → |
| :--- | :---: | ---: |
| [[Chapter 5/5.0 - The Hunger of Resources (Memory & Disk)\|5.0 The Hunger of Resources]] | [[Learn You a Postgres for Great Good\|Home]] | [[Chapter 5/5.2 - The Warming Rack (Shared Buffers)\|5.2 The Warming Rack (Shared Buffers)]] |