The "DB::Exception: Table was not dropped" error occurs when ClickHouse attempts to drop a table but cannot complete the operation. The error code TABLE_WAS_NOT_DROPPED indicates that something prevented the drop from finishing -- typically an active lock held by ongoing queries, an internal error during metadata cleanup, or a filesystem-level issue.
Impact
The table remains in place and fully functional, so there is no data loss. However, your workflow is blocked if it depends on the table being removed. Repeated attempts to drop the table may continue to fail until the underlying cause is resolved.
Common Causes
- Active queries holding locks on the table -- long-running SELECT, INSERT, or ALTER operations prevent the drop from acquiring an exclusive lock.
- Concurrent DDL operations -- another ALTER or OPTIMIZE running against the same table.
- Filesystem permission issues -- ClickHouse cannot delete the table's data files on disk.
- Corrupted table metadata -- internal inconsistencies that prevent clean removal.
- ZooKeeper issues with replicated tables -- the drop cannot proceed because ZooKeeper nodes cannot be modified or deleted.
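Several of these causes can be checked directly from SQL before retrying the drop. A sketch, assuming a table named my_table (the name is illustrative):

```sql
-- In-flight background merges can hold locks on the table
SELECT database, table, elapsed, progress
FROM system.merges
WHERE table = 'my_table';

-- Unfinished mutations (from ALTER ... UPDATE/DELETE) can do the same
SELECT mutation_id, command, is_done
FROM system.mutations
WHERE table = 'my_table' AND NOT is_done;
```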
Troubleshooting and Resolution Steps
Check for active queries on the table:
SELECT query_id, query, elapsed, read_rows
FROM system.processes
WHERE query LIKE '%my_table%';

Kill blocking queries if appropriate:
KILL QUERY WHERE query_id = 'blocking-query-id';

Use this carefully -- killing queries in production can have side effects.
Retry the drop with SYNC to wait for completion:
DROP TABLE my_database.my_table SYNC;

Check for filesystem issues. Verify that the ClickHouse user has write permissions to the table's data directory:
ls -la /var/lib/clickhouse/data/my_database/my_table/

For replicated tables, check ZooKeeper connectivity:
SELECT * FROM system.zookeeper WHERE path = '/clickhouse/tables/shard1/my_table';

If ZooKeeper is unreachable, resolve the connectivity issue first.
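Beyond the raw ZooKeeper path, the system.replicas table summarizes replica health in one place. A sketch (the table name is illustrative, and exact columns vary slightly by ClickHouse version):

```sql
-- A replica that is read-only or has an expired session cannot complete
-- a replicated DROP until its ZooKeeper connection recovers
SELECT database, table, is_readonly, is_session_expired
FROM system.replicas
WHERE table = 'my_table';
```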
Check ClickHouse server logs for more details:
grep -i "my_table" /var/log/clickhouse-server/clickhouse-server.err.log | tail -20

As a last resort, detach and then drop:
DETACH TABLE my_database.my_table;
DROP TABLE my_database.my_table;

If the DROP then reports an unknown table, ATTACH the table first and retry the DROP -- ClickHouse cannot drop a table while it is detached.
Best Practices
- Avoid dropping tables while they have active queries. Schedule drops during maintenance windows or low-traffic periods.
- Use SYSTEM FLUSH LOGS before dropping tables that appear in queries, to ensure all log writes are complete.
- Monitor long-running queries with system.processes and set appropriate max_execution_time limits.
- For replicated tables, ensure ZooKeeper is healthy before performing DDL operations.
- Use the SYNC keyword on DROP statements in scripts to ensure deterministic behavior.
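Putting the scripting practices together, a deterministic drop in an automation script might look like the following sketch (the database and table names are illustrative):

```sql
-- Flush pending log writes, then drop and wait for completion;
-- IF EXISTS keeps the script idempotent on reruns
SYSTEM FLUSH LOGS;
DROP TABLE IF EXISTS my_database.my_table SYNC;
```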
Frequently Asked Questions
Q: How long should I wait before retrying a failed DROP TABLE?
A: Check system.processes first. If there are active queries on the table, wait for them to complete or kill them. If no queries are running, retry immediately -- the issue may be transient.
Q: Can a failed DROP TABLE leave the table in a broken state?
A: Generally no. If the drop fails, the table should remain fully functional. However, if the failure was due to metadata corruption, the table may already be in an inconsistent state.
Q: What is the difference between DROP TABLE and DETACH TABLE?
A: DROP TABLE removes both the metadata and the data files. DETACH TABLE only removes the table from ClickHouse's catalog while leaving the data files on disk. You can later ATTACH a detached table.
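The round trip can be sketched as follows (names are illustrative):

```sql
-- The data files survive the DETACH on disk and are
-- picked up again by the ATTACH
DETACH TABLE my_database.my_table;
ATTACH TABLE my_database.my_table;
```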
Q: Will killing a query that is blocking the drop cause data loss?
A: Killing a SELECT query has no data impact. Killing an INSERT may lose the in-progress batch. Killing an ALTER could leave the table in an intermediate state, though ClickHouse usually handles this gracefully.
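If a killed ALTER does leave an unfinished mutation behind, it can be inspected and cancelled explicitly. A sketch (the table name and mutation_id value are illustrative):

```sql
-- Find mutations that never completed
SELECT mutation_id, command, is_done, latest_fail_reason
FROM system.mutations
WHERE table = 'my_table' AND NOT is_done;

-- Cancel a specific stuck mutation
KILL MUTATION WHERE table = 'my_table' AND mutation_id = 'mutation_42.txt';
```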