# 6.3.1 The Iron Padlock (Heavyweight Locks)

When reading the Maitre D's Service Receipt (`pg_stat_activity`), seeing an I/O wait simply means the database is slow. But seeing a **Lock Wait** means the database is actively paralyzed by its own citizens.
In the Elephant Cafe, these are known as **Heavyweight Locks**—the formal, transactional barricades that protect logical data consistency.
These locks exist to ensure **Isolation**. If one process is modifying a table's structure, another process cannot be allowed to read from it in an inconsistent state. Postgres enforces this using the Lock Manager.
## Heavyweight Lock Categories
When you see a `wait_event_type: Lock` in `pg_stat_activity`, a process is standing in a queue managed by the shared memory `LockManager`. The specific `wait_event` identifies the resource being contested:
### 1. `relation` (Table-Level)
This specifies that an entire table (a relation) is locked.
- **Cause**: DDL commands (`ALTER TABLE`, `TRUNCATE`, `VACUUM FULL`) or explicit `LOCK TABLE` commands.
- **Payoff**: Prevents a query from reading a table while its physical structure is being modified.
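A relation-level stall is easy to reproduce. In one psql session, start but don't commit a DDL statement; in a second, try to read the same table (a minimal sketch, using a hypothetical `orders` table):

```sql
-- Session 1: grabs an AccessExclusive lock on the (hypothetical) orders table
BEGIN;
ALTER TABLE orders ADD COLUMN note text;
-- ...transaction deliberately left open...

-- Session 2: even a plain SELECT now queues behind the DDL
SELECT count(*) FROM orders;  -- hangs until Session 1 commits or rolls back

-- Session 3: the waiter shows up with wait_event_type = 'Lock', wait_event = 'relation'
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```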
### 2. `tuple` (Row-Level Contention)
This specifies that multiple processes are fighting over a single, specific row.
- **Cause**: High-concurrency `UPDATE` or `SELECT FOR UPDATE` on the same primary key.
- **Payoff**: Ensures row-level integrity without locking the entire table.
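The same experiment at row granularity, again with a hypothetical `accounts` table (note: with only two contenders the waiter usually reports a `transactionid` wait; the `tuple` wait event appears when a third contender has to queue behind the first waiter):

```sql
-- Session 1: claims row 42 without releasing it
BEGIN;
SELECT * FROM accounts WHERE id = 42 FOR UPDATE;

-- Session 2: blocks on that single row
UPDATE accounts SET balance = balance - 10 WHERE id = 42;

-- Session 3: untouched rows in the same table remain freely writable
UPDATE accounts SET balance = balance + 10 WHERE id = 7;
```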
### 3. `transactionid` (The Transaction Handoff)
Every transaction that modifies data holds an exclusive lock on its own Transaction ID (XID). If Process B needs to modify a row currently held by Process A, it doesn't wait on the "row" itself; it requests a `ShareLock` on Process A's `transactionid`.
- **Payoff**: Process B can sleep efficiently on a single semaphore until the specific transaction holding the resource commits or aborts, instead of polling the row.
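You can watch this handoff in the lock table itself. While the row-level experiment above is blocked, `pg_locks` shows the waiter asking for a share lock on the holder's XID:

```sql
-- Every writing transaction holds an ExclusiveLock on its own transactionid;
-- a waiter appears as a second, ungranted ShareLock row on the same XID.
SELECT locktype, transactionid, mode, granted, pid
FROM pg_locks
WHERE locktype = 'transactionid'
ORDER BY transactionid;
-- A row with granted = false is a process sleeping on someone else's XID
```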
## Deep Dive: The Mechanics of Exclusivity
To understand *why* these three lock types exist, we have to look past the metaphors directly at the C source code. How does Postgres actually enforce Exclusivity?
### The Memory Matrix (LockManager)
Heavyweight relation locks do not live on the disk; they are allocated out of a **Shared Memory Hash Table** managed by the `LockManager`, whose size is fixed at server start (governed by `max_locks_per_transaction`).
When a process needs to access a relation, it must lodge a formal request in this hash table. But how does Postgres know if your request conflicts with another process?
It uses the **Exclusivity Matrix**. In Postgres, this isn't just a conceptual diagram; it is a literal array of 32-bit integers (`LockConflicts[]`).
| Requested ↓ / Held → | AS | RS | RE | SUE | S | SRE | E | AE |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Access Share (AS)** | | | | | | | | X |
| **Row Share (RS)** | | | | | | | X | X |
| **Row Exclusive (RE)** | | | | | X | X | X | X |
| **Share Update Excl.** | | | | X | X | X | X | X |
| **Share (S)** | | | X | X | | X | X | X |
| **Share Row Excl.** | | | X | X | X | X | X | X |
| **Exclusive (E)** | | X | X | X | X | X | X | X |
| **Access Exclusive** | X | X | X | X | X | X | X | X |
*(X = Conflict)*
Postgres takes the bitmask of all lock modes currently granted on that table and performs a **bitwise AND (`&`)** against the conflict mask of the mode you requested.
If the result is `0`, you are instantly granted the lock. If the result is non-zero (a conflict), the engine adds your process to a `waitProcs` queue and physically puts your OS process to sleep using a semaphore.
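The check itself is plain integer arithmetic, and you can re-enact it in SQL. A sketch, assuming the source's convention of one bit per lock mode (bit *n* for mode *n*, 1 = Access Share through 8 = Access Exclusive); the mask values here are illustrative, not copied from `lock.c`:

```sql
-- Hypothetical re-enactment of the conflict test.
-- A Share request conflicts with modes 3 (RE), 4 (SUE), 6 (SRE), 7 (E), 8 (AE).
WITH state AS (
    SELECT (1 << 1) | (1 << 3) AS granted_mask,  -- AccessShare + RowExclusive already held
           (1 << 3) | (1 << 4) | (1 << 6) | (1 << 7) | (1 << 8) AS share_conflict_mask
)
SELECT (granted_mask & share_conflict_mask) <> 0 AS must_wait FROM state;
-- must_wait = true: the granted RowExclusive trips the Share conflict mask,
-- so the requester is appended to the waitProcs queue
```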
### The Brilliant Handoff: Why Tuple Locks are Different
1. **The Relation Lock (In RAM)**: A process acquires a weak `RowExclusive` lock on the entire table in the Hash Table. This requires only one lock object and protects the table's schema (e.g., prevents a concurrent `DROP TABLE`).
2. **The Tuple Lock (On Page)**: The process navigates to the physical page and stamps its Transaction ID into the **`xmax`** field of the target tuples. This type of lock is physically written within the data pages, consuming zero memory in the `LockManager` hash table.
3. **The Transaction ID Lock (The Sleep Handoff)**: If a second process attempts to update one of those tuples, it reads the `xmax` value and discovers the first process's Transaction ID. Because OS processes cannot "wait" on a bit of data on a disk page, the second process goes back to the Shared Hash Table and requests a `ShareLock` on the first process's **Transaction ID**. Since every transaction holds an exclusive lock on its own ID, the second process triggers a conflict and is placed into a sleep state by the kernel.
When the first transaction finally issues a `COMMIT`, its Transaction ID lock is released in RAM. The kernel wakes the second process, which re-checks the physical tuple state and proceeds.
This is the architectural magic of Postgres locking: **Table locks live in RAM. Row locks live in Data Pages. Transaction ID locks bridge the gap so OS processes can sleep without exhausting memory.**
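The "row lock on the page" is not hidden from you: every table exposes the tuple header fields as hidden system columns. A sketch, reusing the hypothetical `accounts` table from earlier:

```sql
-- Run from a third session while another transaction holds the row.
-- xmax is the Transaction ID stamped into the tuple header.
SELECT xmin, xmax, id FROM accounts WHERE id = 42;
-- A non-zero xmax on a live row means some transaction has locked
-- (or is deleting) it; this consumes no LockManager memory at all
```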
## The Deadlock Detector
Locks are passive waiting mechanisms, which makes them susceptible to circular dependency. If Process A locks Resource 1 and waits for Resource 2, while Process B locks Resource 2 and waits for Resource 1, they will wait indefinitely.
This is a **Deadlock**. Postgres does not run a separate background watchdog for this; instead, any process that has waited on a lock for longer than `deadlock_timeout` (default: one second) runs the deadlock detector itself, searching the waits-for graph for a cycle. If it finds one, it forces an error on one of the transactions (aborting it) so the other can proceed. If your application frequently triggers deadlock errors, it is a sign that concurrent processes are acquiring locks in an inconsistent order.
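You can trigger the detector on purpose with two sessions and the hypothetical `accounts` table (a sketch; run the statements in the order shown):

```sql
-- Session 1
BEGIN;
UPDATE accounts SET balance = 0 WHERE id = 1;

-- Session 2
BEGIN;
UPDATE accounts SET balance = 0 WHERE id = 2;
UPDATE accounts SET balance = 0 WHERE id = 1;  -- queues behind Session 1

-- Session 1 again: closes the circle
UPDATE accounts SET balance = 0 WHERE id = 2;
-- After deadlock_timeout (default 1s), one session is aborted:
-- ERROR:  deadlock detected
```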
## Diagnostics: Checking the Waiter's Clipboard
If you suspect an elephant is stuck behind an Iron Padlock, you can inspect the Manager's live clipboard. This requires an "X-Ray" that joins the **Master List of Padlocks** (`pg_locks`) with the **List of Active Elephants** (`pg_stat_activity`):
```sql
-- Find the Symptom (the blocked) and the Disease (the blocker)
SELECT
    blocked_locks.pid       AS blocked_pid,
    blocking_locks.pid      AS blocking_pid,
    blocked_activity.query  AS blocked_statement,
    blocking_activity.query AS current_statement_in_blocking_process
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
    ON blocking_locks.locktype = blocked_locks.locktype
    -- Match them by the resource they are fighting over:
    AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
    AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation -- Cabinet
    AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
    AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple -- Suitcase
    AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
    AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid -- Pinky Swear
    AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
    AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
    AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
    ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted; -- Show only the ones who are actually waiting
```
This report tells you the **Symptom** (the blocked process) and the **Disease** (the blocking process). Once you find the blocker, you can decide whether to wait or politely ask them to leave (`pg_terminate_backend`).
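For quick triage, Postgres 9.6 and later also ships a shortcut that does the resource-matching for you:

```sql
-- pg_blocking_pids() returns the PIDs directly blocking each waiter
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

-- The polite eviction, once you have identified the blocker:
SELECT pg_terminate_backend(12345);  -- substitute the blocker's pid
```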
> [!info] Diagnostics: The Iron Padlock (Locks)
> Are your elephants stuck at a structural blockade? Dive into the **[[Workloads/Lock/_Lock|Heavyweight Lock Library]]** to further identify the exact queues of table, transaction, and relation-level stalls.
---
| ← Previous | ↑ Table of Contents | Next → |
| :--- | :---: | ---: |
| [[Chapter 6/6.3 - The Narrow Bridge (Locks & Latches)\|6.3 The Narrow Bridge (Locks & Latches)]] | [[Learn You a Postgres for Great Good\|Home]] | [[Chapter 6/6.3.2 - Microscopic Traffic (Latches & LockManager)\|6.3.2 Microscopic Traffic (Latches & LockManager)]] |