ClickHouse DB::Exception: Cluster doesn't exist

The "DB::Exception: Cluster doesn't exist" error in ClickHouse appears when a query or DDL statement references a cluster name that is not defined in the server's configuration. The error code is CLUSTER_DOESNT_EXIST. This commonly surfaces when creating distributed tables, running ON CLUSTER DDL statements, or querying existing distributed tables whose cluster definition has been removed from the config.
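For example, an ON CLUSTER DDL statement like the following fails immediately if the named cluster is not defined in the server's `remote_servers` section (the cluster and table names here are hypothetical):

```sql
-- Fails with DB::Exception: Cluster 'production_cluster' doesn't exist
-- when no such cluster is defined in the server configuration.
CREATE TABLE events_local ON CLUSTER production_cluster
(
    event_time DateTime,
    user_id    UInt64
)
ENGINE = MergeTree
ORDER BY event_time;
```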

Impact

This error prevents any operation that depends on the missing cluster from executing:

  • Distributed table creation fails
  • ON CLUSTER DDL statements cannot target any nodes
  • Existing distributed tables referencing the missing cluster become non-functional
  • Queries against affected distributed tables return errors instead of results

Common Causes

  1. Typo in the cluster name -- The cluster name in the query does not match the name defined in remote_servers.
  2. Cluster not configured on the current node -- The cluster exists in the config of other nodes but was not added to the node executing the query.
  3. Configuration file not loaded -- The file containing the cluster definition (often in config.d/) was not included or has a syntax error preventing it from being parsed.
  4. Cluster definition removed during config cleanup -- Someone removed or renamed the cluster without updating all references.
  5. Case sensitivity -- Cluster names in ClickHouse are case-sensitive. my_cluster and My_Cluster are different names.
  6. Using a config template with unresolved variables -- If the cluster definition relies on environment variables or macros that were not set, the definition may not load.
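Causes 1, 3, and 5 can be checked offline by reading the cluster names straight out of a configuration file. A minimal sketch using Python's standard library (the file contents below are an assumed example matching the XML shape shown later in this article):

```python
import xml.etree.ElementTree as ET

def defined_clusters(config_xml: str) -> list[str]:
    """Return cluster names found under <remote_servers> in a ClickHouse config."""
    root = ET.fromstring(config_xml)
    names = []
    # <remote_servers> sits under <clickhouse> (or legacy <yandex>);
    # each direct child element of it is one cluster definition.
    for remote in root.iter("remote_servers"):
        for cluster in remote:
            names.append(cluster.tag)
    return names

config = """
<clickhouse>
    <remote_servers>
        <production_cluster>
            <shard><replica><host>node1</host><port>9000</port></replica></shard>
        </production_cluster>
    </remote_servers>
</clickhouse>
"""

# Remember: names are case-sensitive, so compare exactly.
print(defined_clusters(config))
```

Comparing this list character-for-character against the name in the failing query catches both typos and case mismatches.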

Troubleshooting and Resolution Steps

  1. List all clusters defined on the current node:

    SELECT DISTINCT cluster FROM system.clusters;
    

    Verify that the cluster name you are trying to use appears in this output.

  2. Check for typos by comparing the cluster name in your query with what is configured:

    -- Your query references 'production_cluster'
    -- Check if it exists (case-sensitive):
    SELECT cluster FROM system.clusters WHERE cluster = 'production_cluster';
    
  3. Inspect the configuration files for the cluster definition:

    grep -r 'remote_servers' /etc/clickhouse-server/ --include='*.xml'
    

    Then examine the file that contains the cluster definition to verify the name and structure.

  4. If the cluster config is in a separate include file, verify it is loaded:

    ls -la /etc/clickhouse-server/config.d/
    

    Ensure the file has proper XML syntax:

    xmllint --noout /etc/clickhouse-server/config.d/cluster.xml
    
  5. Add the missing cluster definition if it does not exist. Create or update the config file:

    <!-- /etc/clickhouse-server/config.d/cluster.xml -->
    <clickhouse>
        <remote_servers>
            <production_cluster>
                <shard>
                    <replica>
                        <host>node1.example.com</host>
                        <port>9000</port>
                    </replica>
                    <replica>
                        <host>node2.example.com</host>
                        <port>9000</port>
                    </replica>
                </shard>
            </production_cluster>
        </remote_servers>
    </clickhouse>
    
  6. Reload the configuration:

    SYSTEM RELOAD CONFIG;
    

    Then verify:

    SELECT DISTINCT cluster FROM system.clusters;
    
  7. If a distributed table references a now-missing cluster, update or recreate the table:

    -- Check what cluster the distributed table expects
    SELECT database, name, engine_full FROM system.tables WHERE engine = 'Distributed';
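If the table has to be recreated against a renamed or restored cluster, a sketch of the recreate step (all identifiers here are hypothetical; take the column list, sharding key, and settings from the original `engine_full` output):

```sql
-- Hypothetical names: adjust database, tables, cluster, and sharding key.
DROP TABLE IF EXISTS default.events_distributed;
CREATE TABLE default.events_distributed AS default.events_local
ENGINE = Distributed(production_cluster, default, events_local, rand());
```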
    

Best Practices

  • Use consistent, descriptive cluster names and document them in your operational runbooks.
  • Deploy identical cluster configuration files to every node using configuration management tools.
  • Validate configuration file syntax before deploying changes with xmllint or similar tools.
  • Be aware that cluster names are case-sensitive -- establish a naming convention (e.g., all lowercase with underscores) and stick to it.
  • When renaming or removing clusters, search for all references in distributed table definitions and DDL scripts first.
  • Keep a record of all cluster names and their purposes to avoid accidental removal during config cleanup.
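The syntax-validation practice above can be wired into a pre-deploy check. A minimal sketch that rejects the first malformed file in a config directory, using Python's stdlib XML parser so it also works where xmllint is not installed (the directory path is an assumption; point it at your staged config tree):

```shell
#!/bin/sh
# Validate every XML file in a directory; non-zero exit on first malformed file.
validate_configs() {
    dir="$1"
    for f in "$dir"/*.xml; do
        [ -e "$f" ] || continue      # directory may contain no .xml files
        if python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])" "$f" 2>/dev/null; then
            echo "OK  $f"
        else
            echo "BAD $f" >&2
            return 1
        fi
    done
}

# Example: run against the staged tree before copying it to the servers.
# validate_configs /etc/clickhouse-server/config.d
```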

Frequently Asked Questions

Q: Can I define clusters in ClickHouse Keeper instead of XML configuration files?
A: ClickHouse supports automatic cluster discovery through Keeper, where nodes register themselves and clusters are formed dynamically. This approach reduces configuration drift but requires setting up the discovery mechanism in the config.

Q: Does the cluster need to be defined on every node, or just the node running the query?
A: For distributed queries and ON CLUSTER DDL, the cluster must be defined on the node initiating the query. For the query to succeed, the target nodes must also be reachable and properly configured.

Q: Will existing distributed tables break if I rename a cluster?
A: Yes. Distributed tables store the cluster name in their engine definition. If the cluster is renamed, you must recreate the distributed tables with the new cluster name.

Q: How do I check which distributed tables reference a specific cluster?
A: Query the system.tables table and filter on the engine definition:

SELECT database, name, engine_full
FROM system.tables
WHERE engine = 'Distributed' AND engine_full LIKE '%your_cluster_name%';

Q: Can I have multiple cluster definitions on the same node?
A: Absolutely. A single ClickHouse node can participate in multiple clusters simultaneously. Each cluster is defined as a separate entry under <remote_servers> in the configuration.
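A hypothetical example of one node carrying two cluster definitions (hostnames and cluster names are placeholders):

```xml
<clickhouse>
    <remote_servers>
        <analytics_cluster>
            <shard>
                <replica><host>node1.example.com</host><port>9000</port></replica>
            </shard>
        </analytics_cluster>
        <logs_cluster>
            <shard>
                <replica><host>node1.example.com</host><port>9000</port></replica>
                <replica><host>node3.example.com</host><port>9000</port></replica>
            </shard>
        </logs_cluster>
    </remote_servers>
</clickhouse>
```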
