
ClickHouse DB::Exception: Too many tables

The "DB::Exception: Too many tables" error means you have hit the configured limit on the number of tables that can exist on a ClickHouse server or within a database. The error code TOO_MANY_TABLES is a guardrail that prevents runaway table creation from exhausting server resources like memory and file descriptors.

Impact

No new tables can be created until existing tables are removed or the limit is raised. Existing tables continue to function normally -- only CREATE TABLE operations are blocked. Pipelines or applications that dynamically create tables (e.g., per-tenant or per-day table patterns) will stall.

Common Causes

  1. Dynamic table creation patterns -- creating a new table per tenant, per day, or per import batch without cleaning up old ones.
  2. Materialized view proliferation -- each materialized view created without an explicit TO target table gets a hidden .inner storage table, so every such view counts twice against the limit.
  3. Overly granular partitioning expressed as separate tables -- using many small tables instead of a single partitioned table.
  4. Forgotten test or temporary tables -- accumulation of tables from debugging sessions and experiments.
  5. The default limit being too low for your use case -- some architectures legitimately need hundreds or thousands of tables.
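The materialized-view effect in cause 2 can be sketched as a back-of-the-envelope count. The helper below and its input shape are illustrative assumptions: it models a catalog of user-defined objects, not rows of system.tables (where hidden .inner tables already appear as separate entries).

```python
def effective_table_count(objects):
    """Estimate how many table slots a set of user-defined objects consumes.

    `objects` is a list of (name, engine, has_to_target) tuples.  A
    materialized view created without an explicit TO target owns a hidden
    `.inner` storage table, so it consumes two slots against the limit.
    """
    count = 0
    for name, engine, has_to_target in objects:
        count += 1
        if engine == "MaterializedView" and not has_to_target:
            count += 1  # hidden .inner storage table
    return count


if __name__ == "__main__":
    catalog = [
        ("events", "MergeTree", False),
        ("events_daily_mv", "MaterializedView", False),  # implicit .inner table
        ("events_hourly_mv", "MaterializedView", True),  # writes TO an existing table
    ]
    print(effective_table_count(catalog))  # 4: base table + view + its .inner + second view
```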

Troubleshooting and Resolution Steps

  1. Check the current table count:

    SELECT database, count() AS table_count
    FROM system.tables
    GROUP BY database
    ORDER BY table_count DESC;
    
  2. Check the configured limit. Note that max_table_size_to_drop is unrelated (it caps how large a table can be and still be dropped); the server setting behind this error is max_table_num_to_throw. Query the server settings:

    SELECT name, value
    FROM system.server_settings
    WHERE name LIKE '%table%';
    

    Also check config.xml for <max_table_num_to_throw> and the related <max_database_num_to_throw>, which caps the number of databases.

  3. Identify and drop unnecessary tables:

    -- Find the oldest or least recently accessed tables
    SELECT database, name, engine, metadata_modification_time
    FROM system.tables
    WHERE database NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA')
    ORDER BY metadata_modification_time ASC
    LIMIT 50;
    
  4. Drop unused tables in bulk. Generate DROP statements:

    SELECT 'DROP TABLE IF EXISTS ' || database || '.' || name || ' SYNC;'
    FROM system.tables
    WHERE database = 'my_database' AND name LIKE 'temp_%';
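The same statements can be generated in application code before being run against the server. A minimal Python sketch; the temp_ prefix convention and helper name are assumptions, and actually executing the statements via a client library is left out:

```python
def build_drop_statements(tables, prefix="temp_"):
    """Build DROP TABLE statements for tables matching a cleanup prefix.

    `tables` holds (database, name) pairs, e.g. fetched from system.tables.
    Backticks guard the identifiers; SYNC waits for the drop to complete.
    """
    return [
        f"DROP TABLE IF EXISTS `{db}`.`{name}` SYNC;"
        for db, name in tables
        if name.startswith(prefix)
    ]


if __name__ == "__main__":
    catalog = [
        ("my_database", "temp_import_2024"),
        ("my_database", "events"),
    ]
    for stmt in build_drop_statements(catalog):
        print(stmt)  # only the temp_ table gets a DROP statement
```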
    
  5. Increase the limit if your use case genuinely requires many tables. In config.xml or a config override:

    <max_table_num_to_throw>10000</max_table_num_to_throw>
    

    Restart ClickHouse after changing the configuration.

  6. Consolidate tables. If you are creating per-day or per-tenant tables, consider refactoring to a single table with appropriate partition keys:

    -- Instead of events_2024_01_01, events_2024_01_02, ...
    CREATE TABLE events (
        event_date Date,
        tenant_id UInt32,
        data String
    ) ENGINE = MergeTree()
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (tenant_id, event_date);
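Migrating to the consolidated table means copying each legacy table once. A hedged Python sketch that generates the INSERT ... SELECT statements, assuming the hypothetical events_YYYY_MM_DD naming scheme above and that the legacy tables share the target schema:

```python
import re

# Hypothetical legacy scheme from the example above: events_2024_01_01, ...
LEGACY_NAME = re.compile(r"^events_(\d{4})_(\d{2})_(\d{2})$")


def build_migration_statements(table_names, target="events"):
    """Generate INSERT ... SELECT statements copying each legacy per-day
    table into the single partitioned target table."""
    statements = []
    for name in sorted(table_names):
        if LEGACY_NAME.match(name):
            statements.append(f"INSERT INTO {target} SELECT * FROM {name};")
    return statements


if __name__ == "__main__":
    for stmt in build_migration_statements(["events_2024_01_02", "events_2024_01_01", "users"]):
        print(stmt)
```

Once the copies are verified, dropping the legacy tables immediately frees slots against the limit.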
    

Best Practices

  • Prefer partitioned tables over many separate tables for time-series or multi-tenant data.
  • Regularly audit and clean up temporary, test, and unused tables.
  • Monitor the total table count as a system metric and alert before reaching the limit.
  • Remember that materialized views count toward the table limit -- factor them into your capacity planning.
  • Use naming conventions (e.g., tmp_ prefix) for temporary tables to make cleanup easier.
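The monitoring bullet reduces to a simple threshold check. The 80% warning level and function shape below are illustrative assumptions, with the current count and configured limit supplied by your metrics pipeline:

```python
def table_count_alert(current_count, limit, warn_fraction=0.8):
    """Classify the table count against the configured limit.

    `limit` mirrors max_table_num_to_throw; 0 is treated as "no limit".
    Returns an alert string, or None while there is headroom.
    """
    if limit <= 0:
        return None  # no explicit cap configured
    if current_count >= limit:
        return f"CRITICAL: {current_count}/{limit} tables -- CREATE TABLE will fail"
    if current_count >= warn_fraction * limit:
        return f"WARNING: {current_count}/{limit} tables, nearing the limit"
    return None


if __name__ == "__main__":
    print(table_count_alert(85, 100))  # WARNING: 85/100 tables, nearing the limit
```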

Frequently Asked Questions

Q: What is the default table limit in ClickHouse?
A: There is no hard-coded cap by default: max_table_num_to_throw defaults to 0 (disabled), so the practical limit is determined by available memory and file descriptors. Setting it to a positive value enforces an explicit cap.

Q: Do system tables count toward the limit?
A: Typically no. The limit applies to user-created tables. System tables, information_schema tables, and other internal tables are usually excluded.

Q: Is there a performance penalty for having many tables?
A: Yes. Each table consumes memory for its metadata, and having thousands of tables can slow down server startup, increase ZooKeeper load (for replicated tables), and consume more file descriptors.

Q: Can I use EXCHANGE TABLES to replace a table without temporarily increasing the count?
A: Yes. EXCHANGE TABLES t1 AND t2 atomically swaps two existing tables without creating a new one, so it does not increase the total table count.
