The "DB::Exception: File cache access denied" error in ClickHouse indicates that the server process cannot read from or write to the file cache directory. The file cache is used to locally cache data from remote storage backends like S3, GCS, or HDFS, improving read performance by avoiding repeated remote fetches. The error code is FILECACHE_ACCESS_DENIED.
Impact
Queries that rely on cached remote data will fail or fall back to reading directly from remote storage, significantly increasing latency and network costs. If the cache directory is completely inaccessible, operations on tables using remote storage engines may be blocked entirely.
Common Causes
- The file cache directory does not exist and ClickHouse cannot create it.
- The ClickHouse process user lacks read/write permissions on the cache directory.
- The filesystem where the cache resides is mounted read-only or has run out of space.
- SELinux or AppArmor policies are blocking access to the cache path.
- The cache directory path in the configuration points to a location outside the allowed filesystem scope.
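Several of these causes can be screened in one pass before digging deeper. The sketch below uses a hypothetical helper, check_cache_dir (not a ClickHouse tool); it performs a real write probe, which also catches read-only mounts and full disks that a plain permission-bit check would miss:

```shell
#!/bin/sh
# check_cache_dir: hypothetical pre-flight check for a cache directory.
# Covers: missing directory, denied write access, read-only mount, full disk.
check_cache_dir() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "MISSING: $dir does not exist"
        return 1
    fi
    # A real write probe, unlike stat/ls, also surfaces read-only
    # filesystems and out-of-space conditions.
    probe="$dir/.cache_probe.$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        echo "OK: $dir is writable"
    else
        echo "DENIED: cannot write to $dir"
        return 1
    fi
}

# Demonstrate on a throwaway directory; in production, run it against the
# configured cache path instead (e.g. /var/lib/clickhouse/disks/s3_cache/).
tmp=$(mktemp -d)
check_cache_dir "$tmp"
rmdir "$tmp"
```

Run it as the ClickHouse process user (e.g. via sudo -u clickhouse) so the probe reflects the server's actual access, not root's.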
Troubleshooting and Resolution Steps
Check the configured cache path: Look in your ClickHouse configuration for the <cache> or <local_cache> settings under the storage policy or disk configuration:

```xml
<disks>
    <s3_cache>
        <type>cache</type>
        <disk>s3_disk</disk>
        <path>/var/lib/clickhouse/disks/s3_cache/</path>
    </s3_cache>
</disks>
```

Verify the directory exists and has correct permissions:

```bash
ls -la /var/lib/clickhouse/disks/s3_cache/
sudo mkdir -p /var/lib/clickhouse/disks/s3_cache/
sudo chown -R clickhouse:clickhouse /var/lib/clickhouse/disks/s3_cache/
sudo chmod 750 /var/lib/clickhouse/disks/s3_cache/
```

Check available disk space:
```bash
df -h /var/lib/clickhouse/disks/s3_cache/
```

Review SELinux/AppArmor policies: If security modules are active, check for denials:

```bash
# SELinux
sudo ausearch -m avc -ts recent | grep clickhouse

# AppArmor
sudo dmesg | grep apparmor | grep clickhouse
```

Verify filesystem mount options: Ensure the mount is not read-only:

```bash
mount | grep "$(df --output=source /var/lib/clickhouse/disks/s3_cache/ | tail -1)"
```

Restart ClickHouse: After fixing permissions or creating the directory, restart the server:

```bash
sudo systemctl restart clickhouse-server
```
Best Practices
- Include cache directory creation and permission setup in your ClickHouse provisioning scripts.
- Monitor disk space on cache volumes with alerts to prevent filling up.
- Place the file cache on fast local storage (NVMe SSDs) for optimal performance.
- Set appropriate cache size limits in the configuration to prevent unbounded growth.
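For the last point, the cache size cap is set directly in the cache disk definition. This snippet extends the configuration example above with a max_size limit; the 100Gi value is an illustrative assumption, not a recommendation:

```xml
<disks>
    <s3_cache>
        <type>cache</type>
        <disk>s3_disk</disk>
        <path>/var/lib/clickhouse/disks/s3_cache/</path>
        <!-- 100Gi is a placeholder; size it to your working set -->
        <max_size>100Gi</max_size>
    </s3_cache>
</disks>
```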
Frequently Asked Questions
Q: Can I disable the file cache entirely?
A: Yes, you can configure your storage policy to use the remote disk directly without a cache layer. However, this will significantly increase query latency and remote storage costs for read-heavy workloads.
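As a minimal sketch of such a cache-free setup, the storage policy can reference the remote disk directly instead of a cache disk. The disk and policy names and the endpoint below are placeholder assumptions:

```xml
<storage_configuration>
    <disks>
        <s3_disk>
            <type>s3</type>
            <!-- placeholder endpoint; credentials (keys or IAM) omitted -->
            <endpoint>https://my-bucket.s3.amazonaws.com/data/</endpoint>
        </s3_disk>
    </disks>
    <policies>
        <s3_direct>
            <volumes>
                <main>
                    <disk>s3_disk</disk>
                </main>
            </volumes>
        </s3_direct>
    </policies>
</storage_configuration>
```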
Q: How large should the file cache be?
A: It depends on your working set size. A good starting point is to allocate enough space to hold the most frequently accessed data. Monitor cache hit rates using system.disks and system.filesystem_cache_settings to tune the size.
Q: Does ClickHouse automatically evict old cache entries?
A: Yes. When the cache reaches its configured size limit, ClickHouse evicts the least recently used entries to make room for new data.