The "DB::Exception: Table metadata already exists" error occurs when you try to create a table but ClickHouse finds that a metadata file for a table with the same name already exists on disk. This typically happens after an incomplete or interrupted DROP TABLE operation that removed the table from ClickHouse's in-memory catalog but left the metadata file behind.
Impact
You cannot create a new table with the intended name until the stale metadata is resolved. This can block deployments, migrations, and automated provisioning scripts. The issue is localized to the specific table name on the specific server — other tables and operations are unaffected.
Common Causes
- Interrupted DROP TABLE operation — if the server crashed or was killed during a DROP, the metadata file may not have been fully removed.
- Filesystem-level issues — permission problems or disk errors can prevent ClickHouse from deleting the metadata file during DROP.
- Manual manipulation of the metadata directory — copying metadata files from another server or backup without properly registering them.
- Race condition during concurrent DDL — multiple sessions issuing CREATE/DROP for the same table simultaneously.
- Replicated table cleanup failure — on ReplicatedMergeTree tables, the local metadata may remain after a failed ZooKeeper cleanup.
Troubleshooting and Resolution Steps
Check if the table actually exists in ClickHouse:
```sql
SELECT name FROM system.tables WHERE database = 'your_db' AND name = 'your_table';
```

If it doesn't appear here but the error persists, you have an orphaned metadata file.
Locate the orphaned metadata file. Metadata files are stored in the metadata directory, typically at:
```
/var/lib/clickhouse/metadata/your_db/your_table.sql
```

Check whether the file exists on the filesystem.
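Rather than eyeballing the directory by hand, the check in the two steps above can be scripted. The sketch below is a hypothetical helper (not part of ClickHouse): it compares the `.sql` files in a database's metadata directory against the table names reported by `system.tables` and returns the orphans.

```python
import os

def find_orphaned_metadata(metadata_dir, catalog_tables):
    """Return metadata .sql files with no matching table in the catalog.

    metadata_dir:   e.g. /var/lib/clickhouse/metadata/your_db
    catalog_tables: names from `SELECT name FROM system.tables
                    WHERE database = 'your_db'`
    """
    known = set(catalog_tables)
    orphans = []
    for entry in os.listdir(metadata_dir):
        # Metadata files are named <table>.sql; strip the extension to compare.
        if entry.endswith(".sql") and entry[: -len(".sql")] not in known:
            orphans.append(os.path.join(metadata_dir, entry))
    return sorted(orphans)
```

Anything this returns is only a candidate for cleanup — confirm each file really is orphaned before deleting it.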
Try DROP TABLE IF EXISTS first:
```sql
DROP TABLE IF EXISTS your_db.your_table;
```

This may clean up the stale metadata properly.
Manually remove the metadata file if the DROP command doesn't resolve it. Stop ClickHouse first, or proceed with extreme care:

```bash
# Verify the file is orphaned, then remove it
rm /var/lib/clickhouse/metadata/your_db/your_table.sql
```

After removing the file, restart ClickHouse (or, if the object was a dictionary, run SYSTEM RELOAD DICTIONARIES) and retry your CREATE TABLE.

For replicated tables, clean up ZooKeeper. The table's ZooKeeper path may also need to be removed:
```sql
SYSTEM DROP REPLICA 'replica_name' FROM ZKPATH '/clickhouse/tables/your_table';
```

Alternatively, use `clickhouse-keeper-client` or `zkCli.sh` to remove the orphaned ZooKeeper node manually.

Check filesystem permissions. Make sure the ClickHouse process has write access to the metadata directory so future DROP operations can complete successfully.
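If you script the ZooKeeper cleanup, guard against deleting the wrong subtree. A minimal sketch, assuming ZooKeeper 3.5+ (whose `zkCli.sh` supports the recursive `deleteall` command); the host and path values are placeholders:

```python
import shlex

def zk_cleanup_command(zk_host, table_zk_path):
    """Build a zkCli.sh command that recursively deletes an orphaned table node."""
    # Safety guard: refuse anything outside the conventional ClickHouse prefix.
    if not table_zk_path.startswith("/clickhouse/"):
        raise ValueError(f"refusing to delete outside /clickhouse/: {table_zk_path}")
    # `deleteall` removes the node and all of its children (ZooKeeper 3.5+).
    return ["zkCli.sh", "-server", zk_host, "deleteall", table_zk_path]

print(shlex.join(zk_cleanup_command("zk1:2181", "/clickhouse/tables/your_table")))
# zkCli.sh -server zk1:2181 deleteall /clickhouse/tables/your_table
```

The guard is deliberately conservative: a recursive delete at the wrong path can destroy replication metadata for healthy tables.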
Best Practices
- Avoid killing the ClickHouse process during DDL operations. Use `SYSTEM SHUTDOWN` for a graceful shutdown.
- Monitor the ClickHouse error log for warnings about failed metadata operations.
- Use `IF NOT EXISTS` and `IF EXISTS` in CREATE and DROP statements to make scripts idempotent.
- In automated deployment pipelines, include a retry/cleanup step that handles orphaned metadata.
- Ensure proper filesystem permissions for the ClickHouse data and metadata directories.
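For the pipeline retry/cleanup step suggested above, one possible shape is a wrapper that catches the stale-metadata error, issues an idempotent DROP, and retries the CREATE. This is a sketch, not a prescribed API: `execute` stands in for whatever DDL runner your pipeline already uses.

```python
def create_with_cleanup(execute, create_sql, drop_sql, retries=1):
    """Run CREATE; on a stale-metadata error, DROP IF EXISTS and retry.

    `execute` is any callable (hypothetical here) that runs one DDL
    statement and raises an exception containing the server error text.
    """
    for attempt in range(retries + 1):
        try:
            execute(create_sql)
            return
        except Exception as exc:
            # Only the stale-metadata error triggers cleanup; anything
            # else propagates unchanged.
            if "metadata already exists" in str(exc) and attempt < retries:
                execute(drop_sql)  # safe to repeat if drop_sql uses IF EXISTS
                continue
            raise
```

Pair this with `CREATE TABLE IF NOT EXISTS ...` and `DROP TABLE IF EXISTS ...` so that every statement in the pipeline stays idempotent even across reruns.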
Frequently Asked Questions
Q: Is it safe to manually delete the metadata file?
A: Yes, if you have confirmed the table does not exist in system.tables and there is no corresponding data directory. The file is simply a SQL text file that ClickHouse uses at startup to recreate the table object.
Q: Can CREATE TABLE IF NOT EXISTS bypass this error?
A: It depends. If the table exists in the catalog, IF NOT EXISTS skips creation silently. But if only the metadata file exists (orphaned), ClickHouse may still raise the error. Try DROP TABLE IF EXISTS first.
Q: Will this happen on every node in a cluster?
A: Not necessarily. The orphaned metadata file is a local filesystem issue. It typically affects only the node where the interrupted operation occurred.
Q: How do I prevent this during server crashes?
A: You can't completely prevent it, but reliable storage, a UPS for power protection, and ClickHouse's Atomic database engine (the default in recent versions) reduce the likelihood. The Atomic engine uses a more robust rename-based approach for metadata operations.