To drop a table in ClickHouse, run DROP TABLE [IF EXISTS] [db.]name. On the default Atomic database engine the drop is asynchronous and the underlying data files are kept around for a configurable window so the table can be recovered with UNDROP TABLE. Pass SYNC to make the operation block until the storage is actually reclaimed. On older Ordinary databases, drops are immediate. This guide covers every variant of the syntax, the recovery options on Atomic databases, the cluster-wide semantics, and the pitfalls that catch people in production.
The Quick Answer
DROP TABLE analytics.events_old;
That marks analytics.events_old for deletion. On Atomic databases (the default since ClickHouse 20.5), the table is hidden from system.tables immediately but the data parts are kept on disk for a grace period, so you can still UNDROP TABLE if you change your mind.
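On versions new enough to support UNDROP, you can see what is still pending deletion in system.dropped_tables; a quick check, assuming the analytics database from the example, might look like:
-- Tables dropped but whose parts are still on disk (recent builds only)
SELECT database, table, engine, table_dropped_time
FROM system.dropped_tables
WHERE database = 'analytics';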
If you need the disk reclaimed right now and have no plans to recover:
DROP TABLE analytics.events_old SYNC;
Full DROP TABLE Syntax
DROP [TEMPORARY] TABLE [IF EXISTS] [IF EMPTY]
[db1.]name1 [, [db2.]name2 ...]
[ON CLUSTER cluster_name]
[SYNC];
Multiple tables can be dropped in a single statement. TEMPORARY is required only for temporary tables (session-scoped). All other flags are optional.
SYNC: Atomic vs Ordinary Engine Semantics
The behavior depends on the database engine of the parent database:
| Engine | Without SYNC | With SYNC |
|---|---|---|
| Atomic (default) | Marks table for deletion. Underlying parts kept for database_atomic_delay_before_drop_table_sec seconds. UNDROP available. | Drops immediately, deletes parts, no recovery. |
| Ordinary (legacy) | Drops immediately. No recovery either way. | Same as without. |
| Replicated | DDL replicates via Keeper to every node. Underlying engine semantics apply. | Same, plus blocks until all replicas acknowledge. |
The docs are explicit: "If the SYNC modifier is specified, the entity is dropped without delay."
If you operate at any scale, leave the default behavior in place. The grace window has saved many on-call shifts.
UNDROP TABLE: Recovering an Atomic Drop
DROP TABLE analytics.events; -- oops
UNDROP TABLE analytics.events; -- restores it before the grace window expires
UNDROP TABLE works only on Atomic databases and only within the database_atomic_delay_before_drop_table_sec window (480 seconds, i.e. 8 minutes, by default on most builds). After that, the data parts are physically removed and the table cannot be recovered without a backup.
You can adjust the window in config.xml:
<database_atomic_delay_before_drop_table_sec>3600</database_atomic_delay_before_drop_table_sec>
An hour is a reasonable production value. Much longer means dropped tables keep occupying disk; much shorter and you lose the safety net.
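To confirm the value actually in effect on a running server, recent builds expose it through system.server_settings; a quick check:
-- Shows the effective grace window; 'changed' is 1 if it differs from the default
SELECT name, value, changed
FROM system.server_settings
WHERE name = 'database_atomic_delay_before_drop_table_sec';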
IF EXISTS and IF EMPTY
DROP TABLE IF EXISTS analytics.events_old;
DROP TABLE IF EMPTY analytics.events_v1;
IF EXISTS silently succeeds when the table is not there, which is what you want in idempotent cleanup scripts. IF EMPTY succeeds only if the table has no rows, which is useful in migration scripts where you do not want to lose data accidentally. The docs note: "the server checks the emptiness of the table only on the replica which received the query," which means on a replicated cluster you should target an up-to-date replica or run SYSTEM SYNC REPLICA on it first to make sure replication has converged.
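One defensive pattern before an IF EMPTY drop on a replicated setup is to count rows on every replica rather than trusting whichever one receives the query; a sketch, with 'my_cluster' and analytics.events_v1 as placeholder names:
-- Only proceed with DROP TABLE IF EMPTY if every host reports zero rows
SELECT hostName() AS host, count() AS rows
FROM clusterAllReplicas('my_cluster', analytics.events_v1)
GROUP BY host;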
ON CLUSTER
DROP TABLE analytics.events ON CLUSTER my_cluster SYNC;
ON CLUSTER runs the drop on every node, blocking until they all acknowledge. For Replicated database engines this is implicit; for plain Atomic databases used in a cluster setup you typically use ON CLUSTER. Combine with SYNC if you want to be sure storage is reclaimed across the whole cluster before moving on.
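If an ON CLUSTER drop appears to hang, the distributed DDL queue usually tells you which host has not acknowledged yet; a query along these lines (column names as on recent versions) narrows it down:
-- In-flight ON CLUSTER DDL; hosts with status <> 'Finished' are still pending
SELECT entry, host, status, query
FROM system.distributed_ddl_queue
WHERE status != 'Finished'
ORDER BY entry DESC
LIMIT 20;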
Temporary Tables
CREATE TEMPORARY TABLE staging (id UInt64, payload String);
DROP TEMPORARY TABLE staging;
Temporary tables live for the duration of the session and are not visible to other sessions. They are dropped automatically when the session closes, but you can drop them explicitly. The TEMPORARY keyword is required; without it, the server tries to find a permanent table with that name and fails.
Dropping Tables Defined by a Table Function
-- This works
CREATE TABLE s3_cache ENGINE = MergeTree ORDER BY id AS SELECT ... FROM s3(...);
DROP TABLE s3_cache;
-- This does NOT
SELECT * FROM s3('https://...', ...); -- you cannot DROP a table function reference
Table functions are referenced inline; they are not actual tables and have nothing to drop. Only persistent tables created via CREATE TABLE accept DROP.
Replicated Tables and Keeper
For ReplicatedMergeTree, the drop coordinates through ClickHouse Keeper. If Keeper is unreachable or a replica is offline, the drop may hang. Diagnose with:
SELECT * FROM system.replicas WHERE database = 'analytics' AND table = 'events';
SELECT * FROM system.zookeeper WHERE path = '/clickhouse/tables/...';
Add SYNC to force the wait, or, if a replica is permanently gone, remove its metadata with SYSTEM DROP REPLICA as a last resort. For routine drops on a healthy cluster, DROP TABLE ON CLUSTER is the right tool.
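A sketch of that last-resort path, with 'replica_host_01' standing in for the name of the replica that is gone for good:
-- Remove the dead replica's metadata from Keeper so the drop can complete
SYSTEM DROP REPLICA 'replica_host_01' FROM TABLE analytics.events;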
Common Errors
- Table doesn't exist: the table name is wrong or it was already dropped. Use IF EXISTS if the caller needs to be idempotent.
- Cannot drop table because it is dependent of another table: usually a materialized view is targeting it. Drop the view first (a dependency query is sketched after this list).
- READONLY: Table is in readonly mode: a replica is in read-only mode (most often because of Keeper connectivity). See the readonly error guide.
- Timeout waiting for drop on cluster: a replica is unreachable. Investigate system.clusters and system.replicas.
- Access denied: required privilege DROP on db.table: missing DROP TABLE privilege. See the grant guide.
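To find what is blocking a drop, the dependency columns in system.tables usually point straight at the offending materialized view; for the analytics.events example:
-- Objects that depend on analytics.events (typically materialized views)
SELECT dependencies_database, dependencies_table
FROM system.tables
WHERE database = 'analytics' AND name = 'events';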
How Pulse Helps With Table Cleanup
The most painful ClickHouse incidents we see at Pulse are not the ones where a drop went wrong; they are the ones where a drop was never run at all: hundreds of *_v1 and *_backup tables consuming disk, materialized view target tables that outlive the view that filled them, and dropped tables that never got cleaned up because the SYNC was skipped on a busy node. Pulse continuously inventories ClickHouse tables and surfaces candidates for retirement, materialized view orphans, tables marked for deletion that have been waiting suspiciously long, and tables whose growth makes them better candidates for a partition drop than a full table drop. Connect your ClickHouse cluster to Pulse and let it watch the long tail of table lifecycle.
Frequently Asked Questions
Q: Can I recover a dropped table in ClickHouse?
Yes, on Atomic databases, within the grace window, using UNDROP TABLE name. After the window expires (database_atomic_delay_before_drop_table_sec, 8 minutes by default), the data is physically gone and you need a backup to restore.
Q: What is the difference between DROP TABLE and TRUNCATE TABLE?
DROP TABLE removes the table itself: schema, data, and access entries. TRUNCATE TABLE removes only the rows and keeps the schema. For periodic clean-outs where the table will be repopulated, use TRUNCATE. For permanent removal, use DROP.
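A minimal contrast, using a hypothetical analytics.daily_rollup table:
TRUNCATE TABLE analytics.daily_rollup;  -- rows gone, schema and grants stay
DROP TABLE analytics.daily_rollup;      -- table gone entirely; recreate before reuse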
Q: Does DROP TABLE block other queries?
On Atomic databases the drop is metadata-only at the SQL layer and does not block running queries on other tables. The background cleanup of parts is asynchronous. With SYNC, the statement blocks until the cleanup finishes, which can take a while for very large tables.
Q: Can I drop a table that has a materialized view depending on it?
Drop the materialized view first. If you try to drop the source table while a view depends on it, ClickHouse returns an error. The reverse (dropping the view and keeping the source) is always safe.
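In order, with analytics.events_mv standing in for whatever the materialized view is called:
DROP TABLE analytics.events_mv;  -- the materialized view (DROP VIEW also works)
DROP TABLE analytics.events;     -- the source table it was reading from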
Q: How do I drop many tables at once?
DROP TABLE a, b, c [, ...] in a single statement, or generate DROP TABLE statements from system.tables and run them through clickhouse-client. For a whole database, DROP DATABASE db is faster than dropping each table individually.
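One way to generate the statements in bulk, assuming your retired tables follow a naming convention like the *_old suffix used here:
-- Emit DROP statements and feed the output back through clickhouse-client
SELECT concat('DROP TABLE IF EXISTS ', database, '.', name, ' SYNC;')
FROM system.tables
WHERE database = 'analytics' AND endsWith(name, '_old');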
Q: What happens to backups when I drop a table?
Backups are independent storage. Dropping a live table does not touch the backup repository; existing snapshots still contain the table and you can restore from them.
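If you do need a dropped table back after the grace window, restore it from a snapshot; on recent versions the BACKUP/RESTORE commands follow this shape, with the Disk('backups', ...) destination being an assumed example configuration:
-- Assumes a backup disk named 'backups' is configured on the server
BACKUP TABLE analytics.events TO Disk('backups', 'events_2024_06_01.zip');
RESTORE TABLE analytics.events FROM Disk('backups', 'events_2024_06_01.zip');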
Q: Can I drop a table on a single replica without affecting the others?
Not via DROP TABLE. The right tool is SYSTEM DROP REPLICA if you want to detach a single replica's data; that is a recovery operation, not a normal drop.