![[res_disk.png|64]]
Postgres reads and writes data through the OS kernel. All page reads/writes go through the **shared buffer pool** first; only on a cache miss does Postgres issue actual disk I/O. WAL writes bypass the shared buffer pool: WAL records are staged in dedicated WAL buffers (`wal_buffers`) and flushed from there to disk.
- **Tuning**: [19.4.5 I/O](https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-IO), [19.4.2 Disk](https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-DISK)
- **Visibility**: `pg_stat_io`, wait events of type [[IO]]
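As a starting point for visibility, a sketch query against `pg_stat_io` (available in PostgreSQL 16+; column names assumed from that release) breaks down relation I/O by backend type:

```sql
-- Per-backend-type I/O on relation data files (PostgreSQL 16+)
SELECT backend_type, context, reads, hits, writes, evictions
FROM pg_stat_io
WHERE object = 'relation'
ORDER BY reads DESC;
```

High `reads` relative to `hits` for client backends points at buffer-pool misses; high `evictions` suggests the pool is churning.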
## Key Concepts
### shared_buffers
- **Description**: The in-memory buffer pool. Pages live here before being flushed to disk. Sized by `shared_buffers` (default 128 MB; the docs suggest starting around 25% of RAM on a dedicated server)
- **Impact**: Larger = fewer disk reads; under-sized = constant `DataFileRead` waits
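A rough way to judge whether the pool is under-sized is the cache hit ratio from `pg_stat_database` (a heuristic sketch; it also counts OS-cache hits as misses):

```sql
-- Buffer cache hit percentage for the current database
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE datname = current_database();
```

For OLTP workloads a sustained ratio well below ~99% often correlates with the `DataFileRead` waits mentioned above.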
### WAL (Write-Ahead Log)
- **Description**: All changes are written to WAL before the data file is modified. WAL writes are sequential and cheaper than random data file writes
- **Impact**: `WALWrite` / `WALSync` waits indicate WAL throughput is saturated
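WAL volume and buffer pressure are visible in `pg_stat_wal` (PostgreSQL 14+; only columns stable across recent releases are used here):

```sql
-- WAL generation stats since last reset
SELECT wal_records, wal_fpi, wal_bytes, wal_buffers_full
FROM pg_stat_wal;
```

A steadily growing `wal_buffers_full` means backends had to write WAL themselves because the buffers filled up, which is one source of `WALWrite` waits.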
### Temporary Files
- **Description**: Spill-to-disk buffers used when `Sort`, `Hash`, or `Aggregate` operations exceed `work_mem`
- **Impact**: `BufFileRead` / `BufFileWrite` waits; controlled by `temp_file_limit`
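Spill activity is counted per database in `pg_stat_database`, which makes a quick check possible without tracing individual queries:

```sql
-- Temp-file spill totals for the current database since last stats reset
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
WHERE datname = current_database();
```

Nonzero, growing `temp_files` is the cue to either raise `work_mem` for the offending queries or accept the spill and cap it with `temp_file_limit`.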
### Checkpoints
- **Description**: Periodic flush of all dirty shared buffers to disk. Generates bursts of `DataFileFlush` / `DataFileSync` writes
- **Impact**: Tune with `checkpoint_completion_target`, `max_wal_size`
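On PostgreSQL 17+ checkpoint behavior has its own view, `pg_stat_checkpointer` (older versions expose similar counters in `pg_stat_bgwriter`); a sketch query:

```sql
-- Checkpoint frequency and cost (PostgreSQL 17+)
SELECT num_timed, num_requested, buffers_written, write_time, sync_time
FROM pg_stat_checkpointer;
```

Many `num_requested` (forced) checkpoints relative to `num_timed` usually means `max_wal_size` is too small for the write rate.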
## Key Config Parameters
| Parameter | Default | Purpose |
|---|---|---|
| `shared_buffers` | 128 MB | Shared page cache size |
| `effective_io_concurrency` | 16 | Concurrent I/O ops for queries |
| `maintenance_io_concurrency` | 16 | Concurrent I/O ops for maintenance |
| `io_max_combine_limit` | 128 kB | Hard upper bound on `io_combine_limit` (set at server start) |
| `io_combine_limit` | 128 kB | Largest I/O size for combined operations (reloadable) |
| `io_method` | `worker` | Async I/O backend (`worker`, `io_uring`, `sync`) |
| `io_workers` | 3 | Worker processes for async I/O |
| `backend_flush_after` | 0 (disabled) | Force OS writeback after a backend writes this much data |
| `temp_file_limit` | -1 (unlimited) | Max disk space per process for temp files |
| `checkpoint_completion_target` | 0.9 | Spread checkpoint writes over this fraction of interval |
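Most of these can be changed with `ALTER SYSTEM`; the values below are illustrative only, not recommendations, and should be sized to the actual hardware:

```sql
-- Illustrative settings; adjust to your machine
ALTER SYSTEM SET shared_buffers = '4GB';               -- requires a restart
ALTER SYSTEM SET checkpoint_completion_target = 0.9;   -- reload is enough
ALTER SYSTEM SET temp_file_limit = '20GB';             -- reload is enough
SELECT pg_reload_conf();
```

Note that `shared_buffers`, `io_method`, and `io_max_combine_limit` only take effect after a server restart; `pg_reload_conf()` covers the reloadable ones.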
## Related Workloads
- [[IO]] — all disk read/write wait events
- [[LWLock]] (`CheckpointerComm`, `WALBufMapping`, `WALWrite`, `WALInsert`)
- [[Timeout]] (`CheckpointWriteDelay`, `RegisterSyncRequest`)