A "Deadlock detected" error occurs when two or more transactions are waiting for each other to release locks, creating a circular dependency that cannot be resolved. PostgreSQL automatically detects deadlocks and terminates one of the transactions to break the cycle, allowing the others to proceed.
Impact
Deadlocks cause transaction failures and require application retry logic. While PostgreSQL handles detection automatically, frequent deadlocks can significantly impact application performance and user experience, particularly in high-concurrency environments.
Common Causes
- Transactions locking rows in different orders
- Complex multi-table transactions
- Long-running transactions holding locks
- Concurrent UPDATE operations on same rows
- Foreign key constraint checks causing locks
- Index locks during concurrent updates
- Application logic with suboptimal locking patterns
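The first cause above can be demonstrated outside the database. A minimal, hypothetical Python sketch using `threading.Lock` objects as stand-ins for row locks: because both workers acquire locks in a globally consistent (sorted) order, no circular wait can form and both always finish. If each worker instead acquired its own "from" lock first, the same two calls could deadlock.

```python
import threading

# Stand-ins for row locks on accounts 1 and 2 (illustrative only,
# not a real database client)
row_locks = {1: threading.Lock(), 2: threading.Lock()}
results = []

def transfer(from_id, to_id):
    # Consistent global order: always acquire the lower id first
    first, second = sorted((from_id, to_id))
    with row_locks[first]:
        with row_locks[second]:
            results.append((from_id, to_id))

# Two concurrent "transactions" touching the same rows in opposite directions
t1 = threading.Thread(target=transfer, args=(1, 2))
t2 = threading.Thread(target=transfer, args=(2, 1))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))  # 2: both complete; unordered acquisition could hang
```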
Troubleshooting and Resolution Steps
Identify deadlock from error message:
```sql
-- Error example:
-- ERROR:  deadlock detected
-- DETAIL:  Process 12345 waits for ShareLock on transaction 67890;
--          blocked by process 12346.
--          Process 12346 waits for ShareLock on transaction 67891;
--          blocked by process 12345.
-- HINT:  See server log for query details.
```
Enable deadlock logging for analysis:
```sql
-- Set via ALTER SYSTEM (written to postgresql.auto.conf),
-- or edit the same settings directly in postgresql.conf
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET deadlock_timeout = '1s';
ALTER SYSTEM SET log_min_duration_statement = 1000;
SELECT pg_reload_conf();

-- Check logs
-- tail -f /var/log/postgresql/postgresql-15-main.log
```
Order operations consistently:
```sql
-- BAD: Transactions lock rows in different orders
-- Transaction 1:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Transaction 2 (runs concurrently):
BEGIN;
UPDATE accounts SET balance = balance - 50 WHERE id = 2;
UPDATE accounts SET balance = balance + 50 WHERE id = 1;
COMMIT;

-- GOOD: Always lock in the same order (by id)
-- Transaction 1:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- Lock id 1 first
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- Then id 2
COMMIT;

-- Transaction 2:
BEGIN;
UPDATE accounts SET balance = balance + 50 WHERE id = 1;  -- Lock id 1 first
UPDATE accounts SET balance = balance - 50 WHERE id = 2;  -- Then id 2
COMMIT;
```
Use explicit locking to control order:
```sql
-- Lock rows in a predictable order
BEGIN;
SELECT * FROM accounts
WHERE id IN (1, 2)
ORDER BY id          -- Critical: always the same order
FOR UPDATE;

-- Now perform the updates
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```
Reduce transaction scope and duration:
```sql
-- BAD: Long transaction holding locks
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- ... lots of application logic ...
-- ... external API calls ...
UPDATE other_table SET value = value + 1;
COMMIT;

-- GOOD: Minimize transaction scope
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;

-- Do non-database work here

BEGIN;
UPDATE other_table SET value = value + 1;
COMMIT;
```
Use advisory locks for custom locking:
```sql
-- Advisory locks for application-level coordination
BEGIN;
-- Lock in ascending order
SELECT pg_advisory_xact_lock(LEAST(1, 2));
SELECT pg_advisory_xact_lock(GREATEST(1, 2));

-- Perform operations
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- Transaction-scoped advisory locks are released automatically at COMMIT
```
Implement retry logic in the application:
```python
# Python with psycopg2 (get_connection() is a placeholder for your pool)
import time
from psycopg2 import OperationalError

def transfer_with_retry(from_id, to_id, amount, max_retries=3):
    for attempt in range(max_retries):
        conn = get_connection()
        try:
            cursor = conn.cursor()
            # Always lock rows in ascending id order
            min_id, max_id = min(from_id, to_id), max(from_id, to_id)
            cursor.execute("""
                SELECT * FROM accounts
                WHERE id IN (%s, %s)
                ORDER BY id
                FOR UPDATE
            """, (min_id, max_id))
            cursor.execute(
                "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, from_id),
            )
            cursor.execute(
                "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                (amount, to_id),
            )
            conn.commit()
            return True
        except OperationalError as e:
            conn.rollback()
            if 'deadlock detected' in str(e) and attempt < max_retries - 1:
                time.sleep(0.1 * (2 ** attempt))  # Exponential backoff
                continue
            raise
        finally:
            conn.close()
    return False
```
Use SELECT FOR UPDATE NOWAIT to fail fast:
```sql
-- Fail immediately if the row is locked
BEGIN;
SELECT * FROM accounts
WHERE id = 1
FOR UPDATE NOWAIT;  -- Raises an error immediately if the row is locked

-- Or with a timeout
SET lock_timeout = '2s';
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
```
Reduce foreign key lock contention:
```sql
-- Foreign keys acquire locks on referenced rows.
-- Consider ON DELETE CASCADE to reduce manual cleanup,
-- or batch operations to shorten lock hold time.
BEGIN;
DELETE FROM order_items
WHERE order_id IN (
    SELECT id FROM orders
    WHERE status = 'cancelled'
    LIMIT 100
);
COMMIT;
```
Monitor and analyze deadlocks:
```sql
-- Check for blocking queries
SELECT
    blocked_locks.pid AS blocked_pid,
    blocked_activity.usename AS blocked_user,
    blocking_locks.pid AS blocking_pid,
    blocking_activity.usename AS blocking_user,
    blocked_activity.query AS blocked_statement,
    blocking_activity.query AS blocking_statement
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
    ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
    AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
    AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
    AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
    AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
    AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
    AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
    AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
    AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
    ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;
```
Additional Information
- The deadlock check runs only after a backend has been waiting on a lock for deadlock_timeout (default 1 second); it is not a continuous background scan
- Which transaction is aborted as the victim is effectively unpredictable: the backend whose deadlock check detects the cycle aborts itself, so don't rely on a particular transaction being chosen
- Deadlocks are logged with full details if logging is enabled
- Shorter transactions reduce deadlock probability
- Read Committed isolation level (default) helps avoid some deadlocks
- Consider application-level queuing for highly contentious operations
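The last point, application-level queuing, can be sketched with a single worker thread that serializes all writes to a heavily contended row, so the database never sees two transactions competing for its lock. A minimal stdlib-only sketch (the dict stands in for the hot row; in production the worker would issue the UPDATE):

```python
import queue
import threading

jobs = queue.Queue()
balance = {"hot_account": 0}  # stands in for a heavily contended row

def worker():
    # Single consumer: updates to the hot row are applied one at a time,
    # so no two writers ever contend for its lock
    while True:
        delta = jobs.get()
        if delta is None:  # sentinel to stop
            break
        balance["hot_account"] += delta  # would be an UPDATE in production
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for d in [10, -3, 5]:
    jobs.put(d)
jobs.put(None)
t.join()
print(balance["hot_account"])  # 12
```

The trade-off is throughput on that one key in exchange for zero lock contention; sharding the queue by account id restores parallelism across different rows.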
Frequently Asked Questions
Q: How does PostgreSQL choose which transaction to abort?
A: The choice is effectively arbitrary: the backend whose deadlock check first detects the cycle aborts its own transaction. The PostgreSQL documentation warns that exactly which transaction is aborted is difficult to predict, so applications should not rely on it.
Q: Can deadlocks be completely prevented?
A: While you can't eliminate them entirely, proper design (consistent lock ordering, short transactions) can reduce them to near zero.
Q: What's the difference between a deadlock and a lock wait?
A: A lock wait is normal - one transaction waits for another to release a lock. A deadlock is circular: A waits for B, and B waits for A.
Q: Should I use Serializable isolation level to avoid deadlocks?
A: Serializable isolation does not prevent deadlocks, and it introduces serialization failures (SQLSTATE 40001) that also require retry logic. Fixing lock ordering and shortening transactions is usually the better first step.
Q: How long does PostgreSQL wait before detecting a deadlock?
A: The deadlock_timeout parameter controls this (default 1 second). Shorter timeouts detect deadlocks faster but increase overhead.
Q: Do READ operations cause deadlocks?
A: Regular SELECT operations don't acquire row locks, but SELECT FOR UPDATE does and can participate in deadlocks.
Q: What's the best way to handle deadlocks in production?
A: Implement automatic retry with exponential backoff in your application, log occurrences, and analyze patterns to fix root causes.
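A generic, driver-agnostic form of that retry pattern: retry only when the error indicates a deadlock, with exponential backoff and a bounded attempt count. Matching on the message text is a simplification for this sketch; with psycopg2 you would instead check the exception's pgcode against 40P01 (deadlock_detected).

```python
import time

def run_with_deadlock_retry(operation, max_retries=3, base_delay=0.1):
    """Call operation(); retry on 'deadlock detected' with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception as e:
            retryable = "deadlock detected" in str(e)
            if retryable and attempt < max_retries - 1:
                time.sleep(base_delay * (2 ** attempt))
                continue
            raise  # non-retryable error, or retries exhausted

# Demo: a hypothetical operation that deadlocks twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ERROR: deadlock detected")
    return "committed"

print(run_with_deadlock_retry(flaky))  # committed (after 2 retries)
```

Pair this with logging of every retry so that recurring deadlocks surface as a pattern to fix, not just silently retried noise.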