How to Fix PostgreSQL Error: Out of Shared Memory

The "out of shared memory" error occurs when PostgreSQL exhausts the pool of shared memory it allocates at server startup. Shared memory is used for caching data pages, managing the lock table, tracking connection state, and other critical database functions. The error often arrives with the hint "You might need to increase max_locks_per_transaction", which points at lock table exhaustion.

Impact

This error prevents new connections, blocks transactions, and can cause database operation failures. It indicates resource exhaustion that can lead to performance degradation and service unavailability.

Common Causes

  1. Too many concurrent connections
  2. max_connections set too high for available memory
  3. Shared memory parameters configured too low
  4. Memory leaks in extensions
  5. Many long-running transactions holding locks
  6. Large number of prepared transactions
  7. Insufficient system shared memory limits

Troubleshooting and Resolution Steps

  1. Check current memory usage:

    -- View memory settings
    SHOW shared_buffers;
    SHOW max_connections;
    
    -- Check current connections
    SELECT COUNT(*) FROM pg_stat_activity;
    
    -- View all memory-related settings
    SELECT name, setting, unit, context
    FROM pg_settings
    WHERE name LIKE '%mem%' OR name LIKE '%buffer%';
    
  2. Increase shared_buffers:

    -- In postgresql.conf or via ALTER SYSTEM
    -- Typical: 25% of system RAM, up to 40% for dedicated DB server
    ALTER SYSTEM SET shared_buffers = '4GB';
    
    -- Requires restart
    -- sudo systemctl restart postgresql
    
  3. Adjust max_connections:

    -- Reduce if set too high (requires a restart to take effect)
    ALTER SYSTEM SET max_connections = 100;
    
    -- Or use connection pooling instead of increasing:
    -- each connection consumes shared memory
    
  4. Configure system shared memory limits:

    # Linux: Check current System V shared memory limits
    sysctl kernel.shmmax
    sysctl kernel.shmall
    
    # Calculate required shared memory (bytes)
    # shared_buffers + max_connections * work_mem + maintenance_work_mem + ...
    
    # Note: PostgreSQL 9.3+ allocates most shared memory via mmap,
    # so raising these limits mainly matters on older versions
    
    # Increase limits (add to /etc/sysctl.conf)
    kernel.shmmax = 17179869184   # 16 GB in bytes
    kernel.shmall = 4194304       # 16 GB in 4 kB pages
    
    # Apply changes
    sudo sysctl -p
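The 16 GB values above can be sanity-checked with simple arithmetic; a minimal sketch, assuming the common 4 kB kernel page size (confirm with `getconf PAGE_SIZE`):

```python
# Verify that kernel.shmmax (bytes) and kernel.shmall (pages) describe
# the same amount of memory.
PAGE_SIZE = 4096  # assumption: 4 kB pages; confirm with `getconf PAGE_SIZE`

shmmax_bytes = 16 * 1024**3              # 16 GB expressed in bytes
shmall_pages = shmmax_bytes // PAGE_SIZE

print(shmmax_bytes)   # 17179869184, the kernel.shmmax value above
print(shmall_pages)   # 4194304, the kernel.shmall value above
```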
    
  5. Optimize work_mem and maintenance_work_mem:

    -- These contribute to total memory usage
    SHOW work_mem;
    SHOW maintenance_work_mem;
    
    -- Set reasonable limits
    ALTER SYSTEM SET work_mem = '4MB';  -- Per operation
    ALTER SYSTEM SET maintenance_work_mem = '64MB';
    
    -- Can set higher for specific sessions
    SET work_mem = '64MB';  -- For current session only
    
  6. Implement connection pooling:

    # Use PgBouncer to reduce actual database connections
    # Install PgBouncer
    sudo apt-get install pgbouncer
    
    # Configure /etc/pgbouncer/pgbouncer.ini
    [databases]
    mydb = host=localhost port=5432 dbname=mydb
    
    [pgbouncer]
    listen_port = 6432
    listen_addr = *
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 25
    
    # Application connects to PgBouncer on port 6432
    # PgBouncer maintains pool of 25 connections to PostgreSQL
    
  7. Clean up idle connections:

    -- Find idle connections
    SELECT pid, usename, application_name, state, state_change
    FROM pg_stat_activity
    WHERE state = 'idle'
    AND state_change < now() - interval '10 minutes';
    
    -- Terminate idle connections
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE state = 'idle'
    AND state_change < now() - interval '10 minutes';
    
    -- Automatically end sessions left idle inside an open transaction
    ALTER SYSTEM SET idle_in_transaction_session_timeout = '30min';
    
    -- PostgreSQL 14+ can also time out plain idle sessions
    -- ALTER SYSTEM SET idle_session_timeout = '30min';
    
  8. Monitor memory usage:

    -- Check shared memory usage
    SELECT
        name,
        setting,
        unit,
        (setting::bigint *
            CASE unit
                WHEN '8kB' THEN 8192
                WHEN 'kB' THEN 1024
                ELSE 1
            END
        ) AS bytes
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');
    
    -- View database size
    SELECT
        datname,
        pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database
    ORDER BY pg_database_size(datname) DESC;
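The CASE expression in the first query converts settings to bytes; the same conversion is handy when parsing pg_settings output in a script. A minimal sketch in Python (the '8kB' and 'kB' unit strings are the ones PostgreSQL reports for these settings; the 'MB' and 'B' entries are assumptions added for completeness):

```python
# Convert a pg_settings (setting, unit) pair into a byte count,
# mirroring the CASE expression in the SQL above.
UNIT_BYTES = {'8kB': 8192, 'kB': 1024, 'MB': 1024**2, 'B': 1}

def setting_to_bytes(setting: str, unit: str) -> int:
    """Multiply the raw setting by its unit size (1 if the unit is unknown)."""
    return int(setting) * UNIT_BYTES.get(unit, 1)

# The default shared_buffers of 16384 x 8 kB pages is 128 MB:
print(setting_to_bytes('16384', '8kB'))  # 134217728
```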
    
  9. Tune other memory parameters:

    -- Effective cache size (planner hint only; allocates no memory)
    ALTER SYSTEM SET effective_cache_size = '8GB';
    
    -- WAL buffers (requires restart)
    ALTER SYSTEM SET wal_buffers = '16MB';
    
    -- Autovacuum work memory
    ALTER SYSTEM SET autovacuum_work_mem = '64MB';
    
    -- Lock table size: a common fix for "out of shared memory" (requires restart)
    ALTER SYSTEM SET max_locks_per_transaction = 128;
    
    -- Reload configuration (restart-only settings take effect after restart)
    SELECT pg_reload_conf();
    
  10. Application-level memory management:

    # Python: Implement connection pooling
    from psycopg2 import pool
    
    # Create connection pool
    connection_pool = pool.SimpleConnectionPool(
        minconn=1,
        maxconn=20,  # Limit concurrent connections
        host='localhost',
        database='mydb',
        user='user',
        password='password'
    )
    
    # Get connection from pool
    conn = connection_pool.getconn()
    try:
        # Use the connection; the cursor closes automatically
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM users")
            rows = cursor.fetchall()
    finally:
        # Return to pool (does not close the underlying connection)
        connection_pool.putconn(conn)
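Most of the steps above come down to keeping connection usage below max_connections. A minimal monitoring sketch; the 80% threshold, the helper name, and the queries mentioned in the comments are illustrative assumptions:

```python
# Compute how many connection slots remain for regular clients.
def connection_headroom(current: int, max_conn: int, reserved: int = 3):
    """Return (free slots, utilization ratio).

    'reserved' mirrors superuser_reserved_connections (default 3),
    which ordinary clients cannot use.
    """
    usable = max_conn - reserved
    return usable - current, current / usable

# Example with values in the spirit of steps 1 and 3:
free, utilization = connection_headroom(current=78, max_conn=100)
print(free)               # 19 slots left for regular clients
print(utilization > 0.8)  # True, worth investigating

# In practice the inputs would come from the server, e.g.:
#   SELECT count(*) FROM pg_stat_activity;
#   SHOW max_connections;
```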
    

Additional Information

  • Shared memory is allocated at PostgreSQL startup and cannot grow dynamically
  • More connections = more shared memory needed
  • Use connection pooling to reduce connection overhead
  • Monitor memory usage regularly
  • Calculate total memory needs before configuring
  • Consider using smaller work_mem and letting OS cache handle more
  • Docker/container environments may need different tuning

Frequently Asked Questions

Q: How much shared_buffers should I allocate?
A: Start with 25% of system RAM. On a dedicated database server you can go up to 40%. Above 40% often doesn't help due to double buffering with the OS cache.

Q: What's the relationship between max_connections and shared memory?
A: Each connection uses shared memory for buffers, locks, and state. More connections = more memory needed. Use connection pooling to reduce actual connections.

Q: Can I change shared_buffers without restarting?
A: No, shared_buffers requires a full PostgreSQL restart. Plan changes during maintenance windows.

Q: Why do I get this error with low connection count?
A: Could be system shared memory limits (kernel.shmmax), memory leaks in extensions, or other PostgreSQL memory parameters consuming available space.

Q: Should I increase or decrease max_connections?
A: Usually decrease and use connection pooling. High max_connections increases memory overhead. Most applications work fine with 100-200 actual connections via pooling.

Q: How do I calculate total PostgreSQL memory needs?
A: Roughly: shared_buffers + (max_connections * (work_mem + temp_buffers)) + maintenance_work_mem + wal_buffers + OS overhead
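That formula turns into a quick worst-case estimate; a minimal sketch where every value is an illustrative default rather than a recommendation:

```python
# Worst-case memory estimate from the formula above (all values in MB).
shared_buffers       = 4096   # 4 GB
max_connections      = 100
work_mem             = 4      # per sort/hash operation
temp_buffers         = 8      # per session, for temporary tables
maintenance_work_mem = 64
wal_buffers          = 16

total_mb = (shared_buffers
            + max_connections * (work_mem + temp_buffers)
            + maintenance_work_mem
            + wal_buffers)
print(total_mb)  # 5376 MB, before OS overhead
```

In practice usage is usually lower, since most connections sit idle, though a single complex query can also use several work_mem allocations at once.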

Q: What's the difference between shared_buffers and effective_cache_size?
A: shared_buffers is actual allocated memory. effective_cache_size is a hint to the query planner about total available cache (PostgreSQL + OS cache combined).
