The "DB::Exception: Table size exceeds max_table_size_to_drop" error is a safety mechanism built into ClickHouse. Identified by the error code TABLE_SIZE_EXCEEDS_MAX_DROP_SIZE_LIMIT, it prevents you from accidentally dropping a table whose on-disk size exceeds the configured threshold. By default, this limit is set to 50 GB, meaning any table larger than 50 GB will trigger this error when you attempt to drop it.
Impact
The DROP TABLE statement is blocked, and the table remains intact. This is intentional -- ClickHouse is protecting you from accidentally destroying a large amount of data. No data is lost, but the operation stays blocked until you explicitly override the limit.
Common Causes
- Attempting to drop a large production table -- the table has grown beyond the safety threshold.
- Default threshold is too low for your use case -- if you routinely work with tables in the hundreds-of-gigabytes range, the 50 GB default may be too restrictive.
- Automated cleanup scripts hitting large tables -- scripts that worked fine when tables were small start failing as data grows.
- Not realizing how large the table has become -- accumulated data over time pushed the table past the limit.
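For automated cleanup scripts, the failure mode above can be avoided by mirroring ClickHouse's check before issuing the DROP. The sketch below is illustrative Python, not ClickHouse code; the function name is invented, and a real script would read the table's size from system.tables first:

```python
# Default limit described in this article: 50 GB in binary units.
DEFAULT_MAX_TABLE_SIZE_TO_DROP = 50 * 1024**3  # 53,687,091,200 bytes

def drop_would_be_blocked(table_bytes: int,
                          limit_bytes: int = DEFAULT_MAX_TABLE_SIZE_TO_DROP) -> bool:
    """Mirror the safety check: a limit of 0 disables it entirely."""
    return limit_bytes != 0 and table_bytes > limit_bytes

# A 60 GB table trips the default limit; a limit of 0 disables the check.
print(drop_would_be_blocked(60 * 1024**3))     # True
print(drop_would_be_blocked(60 * 1024**3, 0))  # False
```

A cleanup script that runs this check can log a warning and skip (or escalate) oversized tables instead of failing mid-run.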
Troubleshooting and Resolution Steps
Check the current table size:
SELECT database, name, formatReadableSize(total_bytes) AS size
FROM system.tables
WHERE name = 'my_table';

Check the current drop size limit:

SELECT value FROM system.settings WHERE name = 'max_table_size_to_drop';

Option A: Temporarily override the limit for a single session. Set the value to 0 to disable the check:
SET max_table_size_to_drop = 0;
DROP TABLE my_database.my_table;

Note: overriding this setting may require the SYSTEM DROP privilege in newer ClickHouse versions.

Option B: Create a flag file to allow the drop. ClickHouse checks for a special file that overrides the limit:

touch /var/lib/clickhouse/flags/force_drop_table

Then retry the DROP TABLE. ClickHouse deletes the flag file after one successful drop, so it is a one-time override.
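The one-time nature of the flag can be pictured with a short simulation. This is an illustrative Python sketch of the consume-on-use behavior, not ClickHouse's implementation:

```python
import tempfile
from pathlib import Path

def force_drop_allowed(flag_path: Path) -> bool:
    """Simulate the flag check: if the flag file exists, allow the
    oversized drop and consume (delete) the flag."""
    if flag_path.exists():
        flag_path.unlink()  # consumed: the next drop is protected again
        return True
    return False

flags_dir = Path(tempfile.mkdtemp())
flag = flags_dir / "force_drop_table"
flag.touch()                      # operator runs: touch .../flags/force_drop_table

print(force_drop_allowed(flag))   # True  -- first oversized drop goes through
print(force_drop_allowed(flag))   # False -- flag consumed, check is back in force
```

This is why the flag file approach leaves no lasting configuration change: the override removes itself.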
Option C: Increase the limit permanently in the server config. Edit config.xml or a config override:

<max_table_size_to_drop>0</max_table_size_to_drop>

Setting it to 0 disables the check entirely. You can also set it to a specific byte value.

If you want to reduce the table size before dropping, truncate it first (note that TRUNCATE on a table above the limit is subject to the same check, as described in the FAQ below):

TRUNCATE TABLE my_database.my_table;
DROP TABLE my_database.my_table;
Best Practices
- Do not permanently disable max_table_size_to_drop in production unless you have other safeguards in place. The safety check exists for good reason.
- Prefer the flag file approach for one-off large table drops -- it is self-cleaning and does not leave a permanent configuration change.
- Document your max_table_size_to_drop setting in your operational runbook.
- Set up monitoring for table sizes so you are aware when tables approach the threshold.
- Consider partitioning large tables so you can drop partitions instead of entire tables.
Frequently Asked Questions
Q: What is the default value of max_table_size_to_drop?
A: The default is 50 GB (53,687,091,200 bytes). Any table larger than this will trigger the safety check.
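The quoted byte value is 50 GB measured in binary units (50 × 1024³ bytes), which is easy to verify:

```python
# 50 "GB" here means binary gigabytes: 50 * 1024^3 bytes.
default_limit = 50 * 1024**3
print(default_limit)  # 53687091200
```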
Q: Does this limit apply to TRUNCATE TABLE as well?
A: Yes. The same max_table_size_to_drop setting also applies to TRUNCATE operations on large tables. Both operations remove significant amounts of data, so both are protected.
Q: Is the flag file method safe for production?
A: Yes. The flag file at /var/lib/clickhouse/flags/force_drop_table is consumed after one successful drop, so it does not leave your system in a permanently unprotected state.
Q: Can I set different limits for different users?
A: Yes, max_table_size_to_drop can be set in user profiles, so you can give certain admin users a higher or unlimited threshold while keeping the default for regular users.
Q: Does dropping partitions also trigger this limit?
A: No. ALTER TABLE ... DROP PARTITION is not subject to max_table_size_to_drop. This is one reason why partitioning is recommended for large tables -- it gives you fine-grained control over data removal.