How to Fix PostgreSQL Error: No Space Left on Device

The "No space left on device" error (ENOSPC) occurs at the operating system level when attempting to write to a filesystem that has no available space. This is similar to "Disk full" but manifests as a system-level error rather than a PostgreSQL-specific message.

Impact

This critical error can crash the server, prevent new connections, force transactions to roll back, corrupt data, and make the database completely unavailable. Immediate action is required.

Common Causes

  1. Disk partition completely full
  2. Inode exhaustion (too many small files)
  3. WAL archive accumulation
  4. Transaction log growth
  5. Temp files not being cleaned
  6. Snapshot/backup files on same partition
  7. Log file explosion
  8. Large pg_dump files

Troubleshooting and Resolution Steps

  1. Immediate disk space assessment:

    # Check all filesystems
    df -h
    
    # Check specific PostgreSQL partition
    df -h /var/lib/postgresql
    
    # Check inode usage (can be full even with disk space)
    df -i
    
    # If inodes are full:
    # Find directories with most files
    for dir in /var/lib/postgresql/*; do
        echo -n "$dir: "
        find "$dir" -type f | wc -l
    done
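
    # If plain disk space (not inodes) is the problem, a quick du scan shows
    # which directories are heaviest (paths are examples; adjust to your layout)
    du -xh --max-depth=2 /var/lib/postgresql 2>/dev/null | sort -rh | head -20
    du -xh --max-depth=1 /var/log 2>/dev/null | sort -rh | head -10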
    
  2. Emergency cleanup - logs:

    # Immediately compress or remove old logs
    cd /var/log/postgresql
    gzip postgresql-*.log.1 postgresql-*.log.2
    
    # Or delete very old logs (CAREFUL!)
    find /var/log/postgresql -name "*.log" -mtime +7 -delete
    
    # Truncate current log (will lose recent log data)
    # Only if absolutely necessary
    truncate -s 0 /var/log/postgresql/postgresql-15-main.log
    
  3. Emergency cleanup - temporary files:

    # Stop PostgreSQL (safe cleanup)
    sudo systemctl stop postgresql
    
    # Remove temp files
    find /var/lib/postgresql/*/main/base/pgsql_tmp* -type f -delete
    find /tmp -name "pg_*" -mtime +1 -delete
    
    # Start PostgreSQL
    sudo systemctl start postgresql
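
    # Optionally, see which databases generate the most temporary-file traffic
    # (counters are cumulative since the last statistics reset)
    sudo -u postgres psql -c "SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size FROM pg_stat_database WHERE datname IS NOT NULL ORDER BY temp_bytes DESC;"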
    
  4. Clean WAL files (if safe):

    # Check WAL directory
    ls -lh /var/lib/postgresql/15/main/pg_wal/
    
    # If archive_mode is off and WAL files accumulating:
    # Check archive status
    sudo -u postgres psql -c "SELECT * FROM pg_stat_archiver;"
    
    # If archive_command is failing, fix it (a configuration reload is enough
    # for archive_command changes) or set archive_mode = off (requires a restart).
    # Old WAL segments are then removed automatically at later checkpoints.
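
    # Quick checks from inside PostgreSQL: total WAL volume (pg_ls_waldir()
    # is available in PostgreSQL 10+) and replication slots that may be
    # holding WAL back
    sudo -u postgres psql -c "SELECT count(*) AS wal_files, pg_size_pretty(sum(size)) AS total_size FROM pg_ls_waldir();"
    sudo -u postgres psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"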
    
  5. Emergency space recovery - database cleanup:

    -- Connect to database
    sudo -u postgres psql
    
    -- Find largest tables
    SELECT
        schemaname ||'.'|| tablename AS table_name,
        pg_size_pretty(pg_total_relation_size(quote_ident(schemaname)||'.'||quote_ident(tablename))) AS size
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY pg_total_relation_size(quote_ident(schemaname)||'.'||quote_ident(tablename)) DESC
    LIMIT 10;
    
    -- Emergency: Drop large temporary or staging tables
    DROP TABLE IF EXISTS temp_import_data;
    DROP TABLE IF EXISTS old_staging_table;
    
    -- Truncate log tables (if acceptable)
    TRUNCATE TABLE application_logs;
    TRUNCATE TABLE audit_trail;
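
    -- A per-database overview also helps decide where to focus
    -- (read-only catalog functions; safe to run)
    SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database
    ORDER BY pg_database_size(datname) DESC;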
    
  6. Vacuum to reclaim space:

    -- Quick VACUUM on largest tables
    VACUUM VERBOSE largest_table;
    
    -- VACUUM FULL returns space to the OS, but it takes an exclusive lock and
    -- rewrites the table, so it needs free space roughly equal to the table
    -- size and can fail on an already nearly-full disk
    VACUUM FULL VERBOSE largest_table;
    
    -- Check progress
    SELECT
        datname,
        pid,
        wait_event_type,
        wait_event,
        query
    FROM pg_stat_activity
    WHERE query LIKE '%VACUUM%';
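
    -- Plain VACUUM progress is also reported in pg_stat_progress_vacuum
    -- (PostgreSQL 9.6+); VACUUM FULL appears in pg_stat_progress_cluster
    -- (PostgreSQL 12+) instead
    SELECT pid, datname, relid::regclass AS table_name, phase,
           heap_blks_total, heap_blks_scanned, heap_blks_vacuumed
    FROM pg_stat_progress_vacuum;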
    
  7. Move or delete old backups:

    # Find old backup files
    find /var/lib/postgresql -name "*.dump" -o -name "*.sql"
    find /var/backups/postgresql -type f
    
    # Move to different partition
    mv /var/lib/postgresql/backup_*.dump /external/storage/
    
    # Or compress
    gzip /var/backups/postgresql/*.sql
    
    # Or delete old backups (CAREFUL!)
    find /var/backups/postgresql -name "*.dump" -mtime +30 -delete
    
  8. Expand storage (long-term fix):

    # For LVM (ext4 shown; on XFS use xfs_growfs instead of resize2fs)
    sudo lvextend -L +100G /dev/vg0/postgresql_lv
    sudo resize2fs /dev/vg0/postgresql_lv
    
    # For cloud (AWS example)
    # 1. Modify volume size in console
    # 2. Extend partition
    sudo growpart /dev/nvme0n1 1
    # 3. Resize filesystem
    sudo resize2fs /dev/nvme0n1p1
    
    # Verify new size
    df -h /var/lib/postgresql
    
  9. Setup proactive monitoring:

    # Monitoring script
    cat > /usr/local/bin/disk_space_alert.sh << 'EOF'
    #!/bin/bash
    
    THRESHOLD=85
    PARTITION="/var/lib/postgresql"
    EMAIL="admin@example.com"
    
    USAGE=$(df -hP "$PARTITION" | tail -1 | awk '{print $5}' | sed 's/%//')
    
    if [ "$USAGE" -gt "$THRESHOLD" ]; then
        SUBJECT="CRITICAL: PostgreSQL Disk Space Alert"
        MESSAGE="Disk usage on $PARTITION is at ${USAGE}%"
        echo "$MESSAGE" | mail -s "$SUBJECT" "$EMAIL"
    fi
    EOF
    
    chmod +x /usr/local/bin/disk_space_alert.sh
    
    # Run every 5 minutes (append to the existing crontab instead of replacing it)
    (crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/disk_space_alert.sh") | crontab -
    
  10. Implement retention policies:

    -- Create cleanup job for old data
    CREATE OR REPLACE FUNCTION cleanup_old_data()
    RETURNS void AS $$
    BEGIN
        -- Delete old log entries
        DELETE FROM application_logs
        WHERE created_at < NOW() - INTERVAL '90 days';
    
        -- Delete old audit records
        DELETE FROM audit_records
        WHERE timestamp < NOW() - INTERVAL '1 year';
    
        -- VACUUM cannot run inside a function or transaction block, and there
        -- is no pg_vacuum() function; run it separately after the deletes:
        --   VACUUM application_logs;
        --   VACUUM audit_records;
    
        RAISE NOTICE 'Cleanup completed';
    END;
    $$ LANGUAGE plpgsql;
    
    -- Schedule via pg_cron or external cron
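    -- Example with pg_cron, assuming the extension is installed and
    -- configured for this database (the schedule is an arbitrary example)
    SELECT cron.schedule('0 3 * * *', 'SELECT cleanup_old_data()');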
    

Additional Information

  • "No space left on device" is OS-level error, not PostgreSQL-specific
  • Can also occur due to inode exhaustion, not just disk space
  • Emergency: stop PostgreSQL, clean temp files, restart
  • Prevention: monitor disk usage, implement retention policies
  • Use separate partitions for data, logs, and WAL
  • Configure proper log rotation (see the logrotate example after this list)
  • Automate old data archiving
  • Regular VACUUM to prevent bloat
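
A minimal logrotate policy for the PostgreSQL log directory is sketched below. The path and retention values are assumptions, and many distributions already ship a rule in /etc/logrotate.d, so check for an existing one before adding your own:

    cat > /etc/logrotate.d/postgresql-custom << 'EOF'
    /var/log/postgresql/*.log {
        weekly
        rotate 8
        compress
        delaycompress
        missingok
        notifempty
        copytruncate
    }
    EOF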

Frequently Asked Questions

Q: Why do I get this error when df shows available space?
A: Check inode usage with df -i. Filesystems can run out of inodes (file pointers) even with free disk space, especially with many small files.

Q: Is it safe to delete files from pg_wal directory?
A: Never delete WAL files by hand; PostgreSQL removes them automatically once they have been archived and are no longer needed. If WAL files accumulate, fix the failing archive_command (a configuration reload is enough) or disable archiving (changing archive_mode requires a restart), then let checkpoints clean up. Also check for inactive replication slots holding WAL back.

Q: Can this error corrupt my database?
A: Yes, writes interrupted by disk full can cause corruption. Always maintain adequate free space and have backups.

Q: How do I recover if PostgreSQL won't start due to no space?
A: Free space by deleting or compressing logs, removing old backups, and cleaning temporary files. Never delete files from inside the data directory itself.

Q: What's the minimum free space I should maintain?
A: Keep at least 20-30% free. For production: 30-40% to handle growth and temporary spikes.

Q: Can I use tmpfs for PostgreSQL temp files?
A: Yes, configure temp_tablespaces to use tmpfs mount, but ensure sufficient RAM and monitor usage.
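
A rough sketch of that setup (mount size, path, and tablespace name are illustrative; tmpfs contents disappear on reboot, which is acceptable for temporary files, but the directory must exist and be owned by postgres before the server starts):

    # Create and mount a tmpfs for temporary files
    sudo mkdir -p /mnt/pgtmp
    sudo mount -t tmpfs -o size=4G,mode=0700 tmpfs /mnt/pgtmp
    sudo chown postgres:postgres /mnt/pgtmp

    # Create a tablespace on it and direct temp files there
    sudo -u postgres psql -c "CREATE TABLESPACE temp_space LOCATION '/mnt/pgtmp';"
    sudo -u postgres psql -c "ALTER SYSTEM SET temp_tablespaces = 'temp_space';"
    sudo -u postgres psql -c "SELECT pg_reload_conf();"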

Q: How do I move PostgreSQL to larger disk?
A: Stop PostgreSQL, rsync data directory to new location, update configuration, start PostgreSQL. Or use streaming replication for zero-downtime migration.
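
A rough outline of the offline rsync approach (paths are illustrative and assume a Debian/Ubuntu-style layout):

    # Stop PostgreSQL so the data directory is not changing during the copy
    sudo systemctl stop postgresql

    # Copy to the new disk, preserving ownership, permissions, ACLs, and xattrs
    sudo rsync -aAX /var/lib/postgresql/ /newdisk/postgresql/

    # Either mount the new disk at the old path or update data_directory
    # in postgresql.conf, then start and verify
    sudo systemctl start postgresql
    sudo -u postgres psql -c "SHOW data_directory;"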

Q: What should I do first when this error occurs?
A: 1) Identify what's consuming space 2) Clean logs and temp files 3) Free space via VACUUM or data deletion 4) Plan for storage expansion.
