ClickHouse DB::Exception: HDFS error

The "DB::Exception: HDFS error" is thrown when ClickHouse cannot complete a read or write operation against a Hadoop Distributed File System (HDFS) cluster. The HDFS_ERROR code covers a broad range of failures returned by the HDFS client library, including connectivity problems, authentication issues, and file-level permission denials.

Impact

This error prevents ClickHouse from accessing data stored in HDFS. Queries using the HDFS table engine or the hdfs() table function will fail, and any MergeTree table backed by an HDFS disk will become inaccessible. If HDFS is used as part of a tiered storage policy, background merges and mutations targeting HDFS will also stall.
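
For orientation, these are the two access patterns that surface the error. The syntax follows the documented hdfs() table function and HDFS table engine signatures; the URI, hostnames, and column list are illustrative placeholders, not values from a real cluster:

    -- Ad-hoc read via the hdfs() table function
    SELECT count()
    FROM hdfs('hdfs://namenode-host:8020/clickhouse/data/events.tsv',
              'TSV', 'event_date Date, user_id UInt64');

    -- Persistent table backed by the HDFS table engine
    CREATE TABLE hdfs_events (event_date Date, user_id UInt64)
    ENGINE = HDFS('hdfs://namenode-host:8020/clickhouse/data/events.tsv', 'TSV');

When the cluster is unreachable or authentication fails, both statements raise the same DB::Exception: HDFS error.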

Common Causes

  1. The HDFS NameNode is unreachable — wrong hostname, port, or the NameNode service is down.
  2. Kerberos authentication failure — expired ticket, missing keytab, or misconfigured principal.
  3. Insufficient HDFS file-system permissions for the ClickHouse user.
  4. HDFS high-availability (HA) failover in progress, causing transient connectivity issues.
  5. Misconfigured hdfs-site.xml or core-site.xml referenced by ClickHouse.
  6. Incompatible HDFS protocol version between the ClickHouse libhdfs3 client and the Hadoop cluster.
  7. DataNode failures causing file blocks to be unavailable.

Troubleshooting and Resolution Steps

  1. Review the full error message in the ClickHouse server log to find the underlying HDFS client error:

    grep -i "HDFS_ERROR\|hdfs\|libhdfs" /var/log/clickhouse-server/clickhouse-server.log | tail -30
    
  2. Verify the NameNode is reachable from the ClickHouse host (the default NameNode HTTP port is 9870 on Hadoop 3.x and 50070 on Hadoop 2.x):

    curl -s "http://namenode-host:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"
    
  3. If using Kerberos, check that the ticket is valid:

    klist -e
    kinit -kt /path/to/clickhouse.keytab clickhouse/hostname@REALM
    
  4. Test HDFS access independently:

    hdfs dfs -ls hdfs://namenode-host:8020/clickhouse/data/
    
  5. Verify the HDFS configuration files referenced by ClickHouse are correct. ClickHouse reads these from the path specified in the hdfs configuration section or via the libhdfs3_conf setting:

    <hdfs>
      <libhdfs3_conf>/etc/clickhouse-server/hdfs-client.xml</libhdfs3_conf>
    </hdfs>
    
  6. Check HDFS file permissions for the ClickHouse user:

    hdfs dfs -ls -d /clickhouse/data/
    
  7. If the Hadoop cluster uses HA with multiple NameNodes, confirm the failover proxy configuration is set up in the HDFS config files that ClickHouse references.
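
As a concrete reference for steps 5 and 7, a minimal hdfs-client.xml for an HA cluster might look like the sketch below. The nameservice ID, hostnames, and timeout value are placeholders; the dfs.* keys are standard Hadoop client configuration, while the rpc.client.connect.timeout key follows libhdfs3 naming conventions and should be verified against your build:

    <configuration>
      <!-- Logical nameservice; the ID and hostnames are placeholders -->
      <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
      </property>
      <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>namenode1-host:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>namenode2-host:8020</value>
      </property>
      <!-- Connection timeout (libhdfs3 key; confirm name and units for your build) -->
      <property>
        <name>rpc.client.connect.timeout</name>
        <value>5000</value>
      </property>
    </configuration>

With a nameservice defined this way, HDFS URIs in ClickHouse can reference hdfs://mycluster/ without a port, and the client fails over between the configured NameNodes.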

Best Practices

  • Keep the HDFS configuration files (core-site.xml, hdfs-site.xml) on ClickHouse nodes in sync with the Hadoop cluster configuration.
  • Use Kerberos keytabs with automatic renewal rather than manually obtained tickets.
  • Monitor HDFS NameNode and DataNode health alongside ClickHouse to correlate failures.
  • Test HDFS connectivity from the ClickHouse host during initial setup using standard HDFS CLI tools.
  • Consider using the hdfs() table function for ad-hoc queries and the HDFS table engine for persistent access patterns.
  • Set appropriate timeouts in the HDFS client configuration to prevent ClickHouse queries from hanging indefinitely.
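
To follow the keytab recommendation, ClickHouse can authenticate to Kerberos itself via its server configuration rather than relying on a shell-side kinit. The setting names below come from ClickHouse's HDFS integration; the keytab path and principal are placeholders for your deployment:

    <hdfs>
      <hadoop_security_authentication>kerberos</hadoop_security_authentication>
      <hadoop_kerberos_keytab>/etc/clickhouse-server/clickhouse.keytab</hadoop_kerberos_keytab>
      <hadoop_kerberos_principal>clickhouse/hostname@REALM</hadoop_kerberos_principal>
    </hdfs>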

Frequently Asked Questions

Q: Which HDFS client library does ClickHouse use?
A: ClickHouse uses libhdfs3, a native C/C++ HDFS client. It does not require a full Hadoop installation, but it does need proper HDFS configuration files and, if applicable, Kerberos libraries.

Q: Can ClickHouse work with HDFS high availability?
A: Yes, as long as the HA configuration (nameservices, failover proxies) is properly defined in the HDFS configuration files that ClickHouse references.

Q: Why does my HDFS query work from the command line but fail from ClickHouse?
A: The ClickHouse server process may be running as a different user with different Kerberos tickets or HDFS permissions. Confirm that the clickhouse system user has the same access rights as the user you test with on the command line.
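
One way to reproduce the server's view is to run the same checks as the ClickHouse service account. The user name and path below are assumptions about a typical deployment; note that klist reads the invoking user's ticket cache, so KRB5CCNAME may also need to match what the server uses:

    # Run the checks as the service account (commonly "clickhouse")
    sudo -u clickhouse klist
    sudo -u clickhouse hdfs dfs -ls hdfs://namenode-host:8020/clickhouse/data/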

Q: Does ClickHouse support writing to HDFS or is it read-only?
A: ClickHouse supports both reading and writing to HDFS. However, HDFS is best suited for append-only workloads. The HDFS table engine supports INSERT operations, and HDFS can also be used as a storage tier in MergeTree policies.
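
For illustration, a minimal write path might look like the following; the table name, URI, and schema are placeholders. By default the engine will not overwrite an existing file, so for repeated inserts settings such as hdfs_create_new_file_on_insert or hdfs_truncate_on_insert may be needed:

    CREATE TABLE hdfs_export (d Date, v UInt64)
    ENGINE = HDFS('hdfs://namenode-host:8020/clickhouse/export/data.parquet', 'Parquet');

    INSERT INTO hdfs_export SELECT today(), number FROM system.numbers LIMIT 100;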
