The "DB::Exception: Cannot truncate file" error in ClickHouse appears when the server fails to truncate a file to a specified size. The error carries the CANNOT_TRUNCATE_FILE error code. ClickHouse truncates files internally during certain write and merge operations, as well as when managing temporary files, so a truncation failure is an OS-level problem that prevents ClickHouse from properly managing file sizes on disk.
Impact
When file truncation fails, you may see:
- Failed merge operations that were attempting to finalize output files
- Write operations that cannot complete, blocking inserts
- Temporary file management issues that degrade query performance
- Accumulation of oversized or improperly sized files in the data directory
Common Causes
- Insufficient permissions on the target file or directory
- Disk is full, preventing the filesystem from updating file metadata
- The filesystem is mounted read-only
- The file resides on a filesystem that does not support truncation (some FUSE or network filesystems)
- SELinux or AppArmor policy denying the truncate operation
- File locked by another process
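The causes above can be triaged in one pass with a short script. This is a sketch only: DATA_DIR defaults to /var/lib/clickhouse, the standard data path, which is an assumption about your installation.

```shell
#!/bin/sh
# Triage sketch for the common causes above. DATA_DIR is an assumption:
# /var/lib/clickhouse is the default ClickHouse data path -- adjust if yours differs.
DATA_DIR="${DATA_DIR:-/var/lib/clickhouse}"

# Disk full? (truncation can still need free space for metadata updates)
df -h "$DATA_DIR" 2>/dev/null || echo "path not found: $DATA_DIR"

# Wrong owner or mode on the directory?
ls -ld "$DATA_DIR" 2>/dev/null

# Read-only mounts, permission problems, and security-policy denials all surface here:
if [ -w "$DATA_DIR" ]; then
    echo "$DATA_DIR is writable by $(id -un)"
else
    echo "$DATA_DIR is NOT writable by $(id -un)"
fi
```

Run it as the ClickHouse service user (for example via sudo -u clickhouse) to test the permissions that matter, not those of your login account.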
Troubleshooting and Resolution Steps
Check the error log for the specific file:
grep "Cannot truncate" /var/log/clickhouse-server/clickhouse-server.err.log | tail -5

Verify disk space:
df -h /var/lib/clickhouse

Even though truncation reduces file size, some filesystems need free space to update metadata. Free up space if the disk is full.
Check file permissions:
ls -la /path/to/affected/file
stat /path/to/affected/file

The file must be writable by the ClickHouse process user:
sudo chown clickhouse:clickhouse /path/to/affected/file
sudo chmod 644 /path/to/affected/file

Verify filesystem mount mode:
mount | grep $(df /var/lib/clickhouse --output=source | tail -1)

If read-only, fix the underlying issue and remount read-write.
Check for security policy denials:
sudo ausearch -m avc -ts recent
sudo journalctl | grep -i apparmor | tail -10

Test truncation manually:
sudo -u clickhouse truncate -s 0 /tmp/test_truncate_file

This helps confirm whether the ClickHouse user can perform truncate operations at all.
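Note that /tmp often lives on a different mount than the data directory, so a probe on the data filesystem itself is more conclusive, especially for FUSE or NFS issues. A sketch, assuming the default data path and a clickhouse service user:

```shell
#!/bin/sh
# Sketch: probe truncation on the data filesystem itself, not /tmp, since
# FUSE/NFS quirks are mount-specific. DATA_DIR and the "clickhouse" user
# name are assumptions -- adjust to your installation.
DATA_DIR="${DATA_DIR:-/var/lib/clickhouse}"
probe="$DATA_DIR/.truncate_probe.$$"

if sudo -u clickhouse sh -c "echo probe > '$probe' && truncate -s 0 '$probe'"; then
    echo "truncate works on $DATA_DIR"
else
    echo "truncate FAILED on $DATA_DIR (see the error text above)"
fi
sudo -u clickhouse rm -f "$probe" 2>/dev/null || true
```

The error text printed on failure (Permission denied, Read-only file system, No space left on device) maps directly back to the causes listed earlier.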
Restart ClickHouse once the underlying cause is addressed.
Best Practices
- Ensure the ClickHouse user has full read/write ownership of all data directories
- Maintain adequate free disk space at all times; configure alerts at 80% utilization
- Use filesystems known to work well with ClickHouse (ext4, xfs) rather than exotic or FUSE-based options
- Review security policies (SELinux, AppArmor) during initial deployment to prevent unexpected denials
- Monitor for filesystem remounting to read-only, which often indicates deeper storage problems
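The 80% alert suggested above can be sketched as a cron-friendly check; the path and threshold are assumptions to adjust, and --output requires GNU df:

```shell
#!/bin/sh
# Sketch of the 80% disk-utilization alert recommended above, suitable for cron.
# DATA_DIR and THRESHOLD are assumptions; requires GNU df for --output.
DATA_DIR="${DATA_DIR:-/var/lib/clickhouse}"
THRESHOLD=80

# Extract the utilization percentage as a bare number, e.g. "42".
used=$(df --output=pcent "$DATA_DIR" 2>/dev/null | tail -1 | tr -dc '0-9')
if [ -n "$used" ] && [ "$used" -ge "$THRESHOLD" ]; then
    echo "ALERT: $DATA_DIR at ${used}% (threshold ${THRESHOLD}%)"
else
    echo "OK: $DATA_DIR at ${used:-unknown}%"
fi
```

In production you would typically wire the ALERT branch to your monitoring system rather than echo; the check itself is the same.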
Frequently Asked Questions
Q: When does ClickHouse need to truncate files?
A: ClickHouse may truncate files during merge operations, when finalizing written data parts, or when cleaning up temporary files. It is part of normal internal file management.
Q: Can this error lead to data corruption?
A: The error itself does not corrupt data -- it prevents an operation from completing. However, the underlying cause (such as a failing disk) could put your data at risk if not addressed promptly.
Q: I see this on an NFS mount. Is that expected?
A: NFS can sometimes behave unexpectedly with truncation due to caching or permission mapping issues. Using local storage or a well-tested distributed filesystem is recommended for ClickHouse data directories.
Q: Is CANNOT_TRUNCATE_FILE the same as CANNOT_WRITE_TO_FILE?
A: No, they are distinct error codes. CANNOT_TRUNCATE_FILE specifically refers to the ftruncate system call, while CANNOT_WRITE_TO_FILE relates to general write failures. The troubleshooting steps overlap, but the root causes can differ.