# 6.3.2 Microscopic Traffic (Latches & LockManager)

![The Shoulder Nudge](assets/arch_shoulder_nudge.png)

While Heavyweight Locks govern the logical consistency of your data, the engine relies on a much faster mechanism to protect its shared memory structures: **Latches** and **LWLocks (Lightweight Locks)**, the high-speed synchronization primitives that prevent memory corruption at the CPU level.

## LWLocks: Memory Synchronization

An **LWLock** exists to protect shared memory data structures (like the Buffer Mapping hash table or the WAL buffers) from concurrent access. Unlike heavyweight locks, LWLocks are not transactional; they are typically held for microseconds.

Postgres implements these using atomic hardware instructions like **Compare-and-Swap (CAS)**. A process checks a specific memory address: if it's free, the process flips it to "locked" in a single atomic instruction. If it's busy, the process may "spin" briefly (a Spinlock) or yield to the kernel.

- **`LWLock:BufferContent`**: The most common synchronization wait. Occurs when a process wants to read a page in the buffer pool while another process is currently modifying it. This is the hardware reality of "Hot Page" contention.
- **`LWLock:WALWrite`**: The lock serializing the write of WAL buffers out to disk. In high-write workloads, backends contend here as commits force WAL flushes; contention over appending records to the in-memory WAL buffers shows up as the related `LWLock:WALInsert` wait.

## Hardware Stability: BufferPins

What happens if a process is actively reading a page in `shared_buffers`, but the buffer eviction algorithm (the clock sweep) decides to reuse that slot to make room for new data? To prevent this, Postgres uses **BufferPins**. Pinning a buffer is a signal to the engine's eviction algorithm: "This memory address is currently in use; do not reuse this slot."

- **`BufferPin` Wait**: If a maintenance task (like `VACUUM`) needs exclusive access to a page, but a query process still has it pinned, the maintenance task must wait until the pin is released.
## Internal Coordination: The Lock Manager

Even the system that manages locks needs its own synchronization.

- **`LWLock:LockManager`**: A lock on one of the **LockManager's internal partitions** in shared memory. If thousands of processes are all requesting heavyweight locks simultaneously, the sheer overhead of updating this shared lock table can itself become the bottleneck. This is a sign of **meta-contention**: your system is spending more time coordinating than it is doing work.

## Parallel Coordination (DSM)

Finally, specialized latches are used to coordinate the lifecycle of parallel workers.

- **`LWLock:ParallelCoordination`**: Used during the startup and shutdown of parallel query teams as they map **Dynamic Shared Memory (DSM)** segments and synchronize their relative progress.

If you see these synchronization waits, you are hitting the limits of your CPU's ability to coordinate shared memory. This usually suggests a need for better connection pooling, or for more efficient query plans that reduce the total number of processes fighting over the same memory buffers.

> [!info] Diagnostics: Latches & Traffic Signals
> To explore the thousands of millisecond-long clicks that keep the kitchen safe, browse the **[[Workloads/LWLock/_LWLock|LWLock]]**, **[[Workloads/BufferPin/_BufferPin|BufferPin]]**, and **[[Workloads/LWLock/Locking/LockManager|LockManager]]** diagnostic libraries.

---

| ← Previous | ↑ Table of Contents | Next → |
| :--- | :---: | ---: |
| [[Chapter 6/6.3.1 - The Iron Padlock (Heavyweight Locks)\|6.3.1 The Iron Padlock (Heavyweight Locks)]] | [[Learn You a Postgres for Great Good\|Home]] | [[Chapter 6/6.4 - The Sweat (CPU-Bound Workloads)\|6.4 The Sweat (CPU-Bound Workloads)]] |