ClickHouse DB::Exception: Too large distributed depth

The "DB::Exception: Too large distributed depth" error in ClickHouse is raised when a distributed query exceeds the maximum allowed recursion depth. The error code is TOO_LARGE_DISTRIBUTED_DEPTH. This happens when a distributed table routes a query to another distributed table, which in turn routes to yet another, creating a chain of distributed hops. ClickHouse enforces a depth limit (controlled by the max_distributed_depth setting) to prevent infinite loops and unbounded resource consumption.

Impact

This error causes the query to fail immediately, and it typically points to a configuration problem:

  • Queries against the affected distributed table cannot execute
  • It may indicate a circular reference in your distributed table definitions
  • Left unresolved, it prevents any meaningful use of the involved distributed tables

Common Causes

  1. Circular distributed table references -- A distributed table on cluster A points to a distributed table on cluster B, which points back to cluster A, creating an infinite loop.
  2. Distributed table pointing to itself -- The distributed table's cluster configuration routes queries back to the same node, which re-enters the distributed table, recursing endlessly.
  3. Layered distributed tables without depth awareness -- Architectures with multiple layers of distributed tables (e.g., a global distributed table over regional distributed tables) can exceed the depth limit if not carefully designed.
  4. Misconfigured cluster definitions -- The remote_servers configuration includes the local node in the shard list for a table that is itself distributed, causing unintended recursion.
  5. max_distributed_depth set too low -- In legitimate multi-hop architectures, the default depth limit may be too restrictive.
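
As a sketch of cause 2, a minimal self-referencing setup that triggers the error might look like this (the cluster, database, and table names here are hypothetical, and the example assumes a local table events_local already exists):

```sql
-- Hypothetical misconfiguration: 'my_cluster' includes the local node in its
-- shard list, and the Distributed engine is pointed at the distributed table
-- itself ('dist_events') instead of the local storage table ('events_local').
CREATE TABLE dist_events AS events_local
ENGINE = Distributed('my_cluster', 'db', 'dist_events', rand());

-- Each hop re-enters the Distributed engine on the same table, so the query
-- fails once the depth limit is exceeded with TOO_LARGE_DISTRIBUTED_DEPTH.
SELECT count() FROM dist_events;
```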

Troubleshooting and Resolution Steps

  1. Identify the distributed table chain. Start with the table you are querying and trace its definition:

    SELECT
        database,
        name,
        engine,
        engine_full
    FROM system.tables
    WHERE engine = 'Distributed'
      AND database = 'your_db';
    

    The engine_full column shows the cluster name, database, table, and sharding key for each distributed table.

  2. Check for circular references. Follow the chain: if distributed table A references table B on a remote cluster, check whether table B references table A or itself:

    -- On the remote node, check what table B points to
    SELECT name, engine_full FROM system.tables WHERE name = 'table_b' AND engine = 'Distributed';
    
  3. Verify cluster definitions do not include the local node inappropriately:

    SELECT cluster, shard_num, replica_num, host_name, is_local
    FROM system.clusters;
    

    If is_local = 1 for a shard in a cluster used by a distributed table, the query will recurse back into the same distributed table on the local node.

  4. Break the recursion by pointing distributed tables to local (non-distributed) tables on remote shards:

    -- Instead of a distributed table pointing to another distributed table:
    -- CREATE TABLE dist_global AS dist_regional ENGINE = Distributed('global_cluster', 'db', 'dist_regional', rand())
    
    -- Point to the underlying local table:
    CREATE TABLE dist_global AS local_table ENGINE = Distributed('global_cluster', 'db', 'local_table', rand());
    
  5. If a multi-hop architecture is legitimate, increase the depth limit:

    SET max_distributed_depth = 10;  -- default is 5; raise only as far as your topology requires
    

    Use this cautiously -- a high depth limit can mask circular references and cause performance issues.

  6. Restructure your distributed table topology to minimize hops. Ideally, a distributed table should point directly to local tables on each shard, not to other distributed tables.
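
The flat topology recommended in step 6 can be sketched as follows (cluster, database, table, and column names are illustrative, not taken from any specific deployment):

```sql
-- Local storage table, created on every shard of the cluster
CREATE TABLE db.events_local ON CLUSTER global_cluster
(
    event_date Date,
    user_id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY (event_date, user_id);

-- A single distributed layer pointing directly at the local tables:
-- exactly one distributed hop, well within the depth limit
CREATE TABLE db.events_dist ON CLUSTER global_cluster AS db.events_local
ENGINE = Distributed('global_cluster', 'db', 'events_local', rand());
```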

Best Practices

  • Design distributed table hierarchies to be as flat as possible -- one distributed table pointing directly to local tables on each shard.
  • Never create distributed tables that reference other distributed tables on the same cluster unless you fully understand the recursion implications.
  • Document your distributed table topology so that circular references can be identified during code review.
  • Use is_local in system.clusters output to verify that distributed queries do not unintentionally recurse on the local node.
  • Keep max_distributed_depth at its default unless you have a validated multi-tier architecture that requires deeper nesting.

Frequently Asked Questions

Q: What is the default value of max_distributed_depth?
A: The default is 5. This allows up to 5 levels of distributed query forwarding before the error is raised.
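
To confirm the limit in effect on your server, you can query system.settings:

```sql
SELECT name, value, changed
FROM system.settings
WHERE name = 'max_distributed_depth';
```

The changed column indicates whether the value differs from the server default.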

Q: Can this error occur with a single distributed table?
A: Yes, if the distributed table's cluster configuration routes queries back to the same node and the same distributed table. This creates a self-referencing loop that quickly exceeds the depth limit.

Q: How do I design a multi-region distributed setup without hitting this error?
A: Create a top-level distributed table that references the local (non-distributed) MergeTree tables on each regional shard directly. Avoid layering distributed tables on top of other distributed tables. If regional distributed tables are needed for local queries, use separate table names so the global distributed table does not accidentally reference them.
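
One way to sketch that naming separation (all cluster and table names below are hypothetical):

```sql
-- Per-region local storage, present on every shard of the regional cluster
CREATE TABLE db.metrics_local
(
    region LowCardinality(String),
    ts DateTime,
    value Float64
)
ENGINE = MergeTree
ORDER BY ts;

-- Regional distributed table, used only for queries within the region
CREATE TABLE db.metrics_regional AS db.metrics_local
ENGINE = Distributed('eu_cluster', 'db', 'metrics_local', rand());

-- The global distributed table points at metrics_local, NOT metrics_regional,
-- so global queries take exactly one distributed hop
CREATE TABLE db.metrics_global AS db.metrics_local
ENGINE = Distributed('global_cluster', 'db', 'metrics_local', rand());
```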

Q: Is increasing max_distributed_depth a safe workaround?
A: Only if you are certain there are no circular references. Increasing the depth for a legitimate multi-hop architecture is fine. Increasing it to work around a circular reference will lead to exponential query multiplication and potential resource exhaustion.
