The "Could not serialize access due to concurrent update" error occurs when using Repeatable Read or Serializable isolation levels and two transactions attempt to modify the same data concurrently. PostgreSQL detects that allowing both transactions would violate the isolation guarantee and aborts one to maintain consistency.
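As an illustrative sketch of how the conflict arises (assuming a table `accounts` with a row `id = 1`; this is a two-session psql transcript, not a single runnable script):

```sql
-- Session A:
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session B (concurrently):
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance + 50 WHERE id = 1;  -- blocks on A's row lock

-- Session A:
COMMIT;

-- Session B immediately fails, because the row changed after B's snapshot:
-- ERROR:  could not serialize access due to concurrent update
```

Under Read Committed, session B would instead re-read the committed row and proceed; under Repeatable Read and Serializable, PostgreSQL must abort B to preserve its snapshot.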
Impact
This error causes transaction rollbacks in high-concurrency environments, requiring retry logic in applications. It's a tradeoff for stronger consistency guarantees provided by higher isolation levels.
Common Causes
- Using Repeatable Read or Serializable isolation levels
- Concurrent updates to the same rows
- Read-modify-write patterns without proper locking
- Long-running transactions increasing conflict window
- High transaction concurrency on popular rows
- Missing application retry logic
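The last cause is the easiest to address generically. A minimal, driver-agnostic retry decorator might look like this (a sketch: `SerializationError` stands in for the driver's real serialization-failure exception, and `flaky_transaction` is a fake workload for illustration):

```python
import functools
import time

class SerializationError(Exception):
    """Stand-in for the driver's serialization-failure exception."""

def retry_on(exc_type, max_retries=5, base_delay=0.01):
    """Retry the wrapped callable with exponential backoff on exc_type."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except exc_type:
                    if attempt == max_retries - 1:
                        raise  # retries exhausted
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

# Fake workload: fails twice, then succeeds.
attempts = {"n": 0}

@retry_on(SerializationError, max_retries=5)
def flaky_transaction():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError("could not serialize access due to concurrent update")
    return "committed"
```

In a real application the wrapped function would open a fresh transaction on every call, since an aborted transaction cannot be resumed.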
Troubleshooting and Resolution Steps
Understand isolation levels:
```sql
-- Check current isolation level
SHOW transaction_isolation;
-- Default is Read Committed (no serialization errors)
-- Repeatable Read and Serializable can have serialization errors
```

Lower isolation level if appropriate:
```sql
-- Use Read Committed (the default) to avoid serialization errors
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Your queries
COMMIT;
-- Read Committed allows concurrent updates;
-- the same transaction may see different data across statements
```

Implement retry logic:
```python
# Python with psycopg2
import time
from psycopg2 import OperationalError, extensions

def transfer_with_retry(from_id, to_id, amount, max_retries=5):
    for attempt in range(max_retries):
        conn = get_connection()  # obtained before try so rollback/close are safe
        try:
            conn.set_isolation_level(
                extensions.ISOLATION_LEVEL_REPEATABLE_READ
            )
            cursor = conn.cursor()
            cursor.execute(
                "SELECT balance FROM accounts WHERE id = %s", (from_id,)
            )
            balance = cursor.fetchone()[0]
            if balance < amount:
                raise ValueError("Insufficient funds")
            cursor.execute(
                "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, from_id)
            )
            cursor.execute(
                "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                (amount, to_id)
            )
            conn.commit()
            return True
        except OperationalError as e:
            conn.rollback()
            if 'could not serialize' in str(e).lower():
                if attempt < max_retries - 1:
                    time.sleep(0.01 * (2 ** attempt))  # Exponential backoff
                    continue
            raise
        finally:
            conn.close()
    return False
```

Use SELECT FOR UPDATE for explicit locking:
```sql
-- Prevents concurrent updates in Read Committed
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
-- Row is now locked; other transactions will wait
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
-- No serialization error because the lock prevents concurrency
```

Use ON CONFLICT for upserts:
```sql
-- Atomic upsert avoids the read-modify-write race
INSERT INTO counters (key, value)
VALUES ('page_views', 1)
ON CONFLICT (key) DO UPDATE SET value = counters.value + 1;
```

Reduce transaction scope:
```sql
-- BAD: Long transaction increases the conflict window
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;
-- ... lots of application logic ...
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;

-- GOOD: Minimize transaction time
-- Do processing outside the transaction
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```

Use advisory locks for coordination:
```sql
BEGIN;
-- Acquire a transaction-scoped advisory lock
SELECT pg_advisory_xact_lock(1);
-- No other transaction can acquire this lock until commit
-- Do work
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```

Optimize for Serializable isolation:
```sql
-- Serializable provides the strongest guarantees
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Read and modify data
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
-- Requires aggressive retry logic due to more conflicts
```

Monitor serialization failures:
```sql
-- Check transaction conflicts
-- Note: pg_stat_database.conflicts counts recovery conflicts on standbys,
-- not serialization failures; track those via application logs/retry counters
SELECT datname, conflicts
FROM pg_stat_database
WHERE datname = current_database();

-- Reset statistics
SELECT pg_stat_reset();
```

Consider application-level optimizations:
```python
# Use optimistic locking with a version column
class ConcurrentModificationError(Exception):
    pass

def update_with_version(cursor, account_id, amount):
    cursor.execute(
        "SELECT balance, version FROM accounts WHERE id = %s",
        (account_id,)
    )
    balance, version = cursor.fetchone()
    cursor.execute(
        """
        UPDATE accounts
        SET balance = balance - %s, version = version + 1
        WHERE id = %s AND version = %s
        """,
        (amount, account_id, version)
    )
    if cursor.rowcount == 0:
        # Another transaction bumped the version first
        raise ConcurrentModificationError("Version mismatch")
```
Additional Information
- Read Committed (default) doesn't have serialization errors
- Repeatable Read prevents non-repeatable reads but can have serialization failures
- Serializable provides full isolation but highest conflict rate
- Always implement retry logic when using Repeatable Read or Serializable
- Consider if you really need these isolation levels
- Short transactions reduce conflict probability
Frequently Asked Questions
Q: What isolation level should I use?
A: Read Committed (default) for most applications. Use Repeatable Read or Serializable only when you specifically need those guarantees and can handle retry logic.
Q: How is this different from a deadlock?
A: Deadlocks involve two or more transactions waiting on each other's locks in a cycle and can occur at any isolation level; PostgreSQL breaks the cycle by aborting one transaction. Serialization errors occur only at Repeatable Read or Serializable and are about preserving the isolation guarantee when concurrent updates occur.
Q: Can I prevent these errors entirely?
A: Use Read Committed isolation or explicit locking (SELECT FOR UPDATE). Both prevent serialization errors but with different trade-offs.
Q: How many retries should I implement?
A: Typically 3-5 retries with exponential backoff. Most conflicts resolve quickly.
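For example, a deterministic capped exponential-backoff schedule could be computed like this (a sketch; `backoff_delays` is an illustrative helper, and production code often adds random jitter to avoid retry storms):

```python
def backoff_delays(max_retries=5, base=0.01, cap=1.0):
    """Sleep durations (seconds) before each retry: base * 2^attempt, capped."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]
```

With the defaults this yields delays of 0.01s, 0.02s, 0.04s, 0.08s, and 0.16s, keeping total worst-case wait well under a second.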
Q: Does this error mean data was lost?
A: No, the transaction is rolled back cleanly and no partial changes are kept. A retry will usually succeed once the conflicting transaction has finished.
Q: When should I use Serializable isolation?
A: When you need to prevent all anomalies (phantom reads, write skew) and can implement robust retry logic. Financial applications are common use cases.
Q: Can indexes help reduce serialization errors?
A: Indexes don't directly reduce serialization errors, but they can speed up transactions, reducing the conflict window.