# 5.2 The Warming Rack (Shared Buffers)

In the Elephant Cafe, the **[[Chapter 1/1.4 - The Table (The Relation)|Filing Cabinet]]** is the ultimate destination for data, but it is too slow for frequent access. To hide the lethargy of the disk, Postgres uses a massive, shared area of RAM known as the **Shared Buffers**.
This is the **Warming Rack**: a high-speed staging area where the most popular 8KB **[[Chapter 1/1.3 - The Shipping Container (The Page)|Pages]]** are kept hot and ready for the staff.
## The Shared Memory Structure
In Postgres, every client connection is served by its own completely separate, isolated backend process. Because these workers are isolated for stability, they cannot naturally see each other's memory.
To solve this, Postgres allocates `shared_buffers` at server startup as one massive block of **Inter-Process Communication (IPC) memory**, which every backend maps into its own address space. All of the isolated workers can therefore work on the exact same 8KB **[[Chapter 1/1.3 - The Shipping Container (The Page)|Pages]]** without constantly copying data back and forth.
When a query needs a page, Postgres doesn't just wander over to the disk; it first performs a high-speed lookup in the **Buffer Mapping Hash Table**.
- **Buffer Hit**: The hash table points to a slot in RAM. The page is fetched in nanoseconds.
- **Buffer Miss**: The hash table returns nothing. The engine must issue a physical `pread()` to the OS, find a free slot on the rack, and load the page from disk.
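You can watch hits and misses happen with `EXPLAIN (ANALYZE, BUFFERS)`. A quick sketch, assuming a hypothetical `guests` table (the buffer counts shown are illustrative): `shared hit` counts pages found on the Warming Rack, while `read` counts physical fetches from disk.

```sql
-- Run the same query twice: the first pass pulls pages from disk,
-- the second finds them already on the Warming Rack.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM guests;

-- First run (cold):   Buffers: shared read=54
-- Second run (warm):  Buffers: shared hit=54
```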
### The Divided Registry (Mapping Locks)
If hundreds of backends all tried to check the Warming Rack at the same time, they would collide at the registry. To prevent this, Postgres divides the **Buffer Mapping Hash Table** into **128 separate partitions** (by default).
Each partition has its own **Lightweight Lock (LWLock)**. This allows many staff members to check for different pages simultaneously without waiting for a single global lock. It is the architectural difference between a single receptionist and a row of 128 self-check-in kiosks.
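If the kiosks ever do back up, the contention is visible in `pg_stat_activity` as an `LWLock` wait named `BufferMapping`:

```sql
-- Which backends are currently queued at the registry?
SELECT pid, query, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event = 'BufferMapping';
```

On a healthy system this query usually returns zero rows; seeing it non-empty under load is a sign of heavy churn on the mapping table.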
### The Bolt on the Plate (Buffer Pinning)
When a staff member finds a page and starts reading it, they don't just hold it in their hands. They **Pin** the buffer.
Think of this as **Bolting the plate to the rack**. In technical terms, a Pin is an **Atomic Reference Count**. When a worker pins a buffer, the count goes up by one. While a buffer is pinned (reference count > 0):
1. **Eviction Protection**: The Clock Sweep is physically incapable of kicking this page off the rack to make room for new data.
2. **Structural Safety**: Other workers can still *read* the page (a Shared Pin), but nobody can *move* the physical location of the page or *repurpose* the memory slot.
If you see a query waiting on a **`BufferPin`** wait event, it means an operation (like a `VACUUM` trying to physically clean up a page) has ground to a halt. It is standing next to the plate with a wrench, waiting for all other processes to finish reading and un-bolt their pins so the reference count can drop back to zero.
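You can spot the staff member standing there with the wrench by querying `pg_stat_activity` directly:

```sql
-- Find sessions stalled waiting for someone else's pin to be released
SELECT pid, query, state, wait_event
FROM pg_stat_activity
WHERE wait_event = 'BufferPin';
```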
> [!TIP]
> **Why not 100% RAM?**: We usually set `shared_buffers` to **25% of system RAM**. We leave the rest for the Building Manager (the OS Page Cache). This avoids "Double Buffering"—a situation where both Postgres and the OS are hoarding the exact same page, effectively halving the Cafe's usable memory.
## The Clock Sweep (Victim Buffer Selection)
Because the Warming Rack is finite, the database engine must eventually decide which pages to kick off the rack to make room for new ones. This is enforced by the **Clock Sweep Algorithm**.
Imagine a clock hand sweeping across the Warming Rack. Every time the hand passes a page descriptor, it performs a mechanical check of the `usage_count`:
1. **Check the Temperature**: If the `usage_count` is greater than 0, the hand decrements it by 1 and moves to the next page.
2. **Find a Victim**: If the `usage_count` is 0, the page is a candidate to be evicted (a **Victim Buffer**).
3. **The Flash Heat**: If a staff member accesses a page while it’s on the rack, the database engine instantly bumps its `usage_count` back up (to a maximum of 5).
This ensures that "Hot" pages (those hit frequently) stay on the rack indefinitely, while "Cold" pages gradually cool down to a `usage_count` of 0 and are eventually recycled.
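If the `pg_buffercache` extension is installed, you can take the rack's temperature yourself; its `usagecount` column is the very counter the clock hand decrements:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- How many buffers sit at each heat level?
-- (0 = next in line for eviction, 5 = hottest; NULL = empty slot)
SELECT usagecount, count(*) AS buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```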
> [!NOTE]
> **Strategy Overrides**: For massive operations like a `VACUUM` or a `Sequential Scan` of a multi-terabyte table, Postgres doesn't want to "pollute" the entire Warming Rack. Instead, it uses a **Buffer Access Strategy**—a private, tiny ring of buffers (typically just 32 pages, or 256KB, for bulk reads)—to avoid kicking out all the useful, hot data that other guests are using.
### The Size of the Rack
You can ask the database engine exactly how large its Warming Rack is with a simple command:
```sql
-- How many pages can we keep warm?
SHOW shared_buffers;
-- Result:
-- shared_buffers
-- ----------------
-- 128MB
```
If you want to see exactly *which* containers are currently warm, you can use the **`pg_buffercache`** extension. It’s like a thermal camera for the database!
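A common thermal-camera sweep, assuming `pg_buffercache` is installed, is to count buffers per relation in the current database:

```sql
-- Which tables and indexes occupy the most of the Warming Rack?
-- (shared_buffers in pg_settings is measured in 8KB pages)
SELECT c.relname,
       count(*) AS buffers,
       round(100.0 * count(*) / (SELECT setting::int FROM pg_settings
                                 WHERE name = 'shared_buffers'), 1) AS pct_of_rack
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```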
---
| ← Previous | ↑ Table of Contents | Next → |
| :--- | :---: | ---: |
| [[Chapter 5/5.1 - The Process Family\|5.1 The Process Family]] | [[Learn You a Postgres for Great Good\|Home]] | [[Chapter 5/5.3 - The Private Desk (Work Mem)\|5.3 The Private Desk (Work Mem)]] |