
ClickHouse DB::Exception: Number of dimensions mismatched in multi-dimensional array

The DB::Exception: NUMBER_OF_DIMENSIONS_MISMATCHED error in ClickHouse is raised when you attempt to insert or process multi-dimensional array data where the nesting depth of the arrays does not match the expected column type. For example, inserting a 2D array into a column defined as Array(Array(Array(UInt32))) would trigger this error because the dimensions do not align.
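As a minimal illustration (table and column names here are hypothetical), the mismatch can be reproduced like this:

```sql
-- Hypothetical table with a 2D array column
CREATE TABLE example_2d (tags Array(Array(UInt32)))
ENGINE = MergeTree ORDER BY tuple();

-- OK: the literal has exactly two levels of nesting
INSERT INTO example_2d VALUES ([[1, 2], [3]]);

-- Rejected: only one level of nesting for a 2D column. Depending on the
-- input format, this surfaces as NUMBER_OF_DIMENSIONS_MISMATCHED or a
-- related type/parse error.
INSERT INTO example_2d VALUES ([1, 2, 3]);
```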

Impact

When this error occurs, the affected INSERT or SELECT operation fails immediately. No partial data is written for the failing block, so data integrity is preserved. However, data pipelines that rely on batch inserts may stall until the dimension mismatch is corrected. Queries that attempt to cast or coerce arrays with mismatched dimensions will also be rejected.

Common Causes

  1. The source data contains arrays with inconsistent nesting depths across rows: for instance, one row has [[1,2]] while another has [[[1,2]]].
  2. A schema migration changed the array dimensionality of a column without updating the ingestion pipeline.
  3. JSON or CSV parsing inferred different nesting levels depending on the data, especially when using schema inference with formats like JSONEachRow.
  4. Manual construction of array literals in INSERT statements with the wrong number of nested brackets.
  5. Using arrayConcat or similar functions on arrays that have different dimensionalities.
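To illustrate the last cause, a hedged sketch of an arrayConcat call that mixes dimensionalities:

```sql
-- The first argument is a 2D array, the second is 1D; ClickHouse cannot
-- find a common element type for the two, so the query is rejected.
SELECT arrayConcat([[1, 2]], [3, 4]);
```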

Troubleshooting and Resolution Steps

  1. Verify the column definition to understand the expected dimensions:

    DESCRIBE TABLE your_database.your_table;
    

    Look at the type column: Array(Array(UInt32)) means exactly two levels of nesting.

  2. Inspect the problematic data before insertion. If loading from a file, preview a few rows and confirm the nesting depth is uniform:

    SELECT * FROM file('data.json', 'JSONEachRow') LIMIT 10;
    
  3. If the source data genuinely has variable nesting, flatten or pad the arrays in your transformation layer before inserting into ClickHouse. You can use arrayFlatten to reduce one level of nesting:

    SELECT arrayFlatten([[1, 2], [3, 4]]) AS flat;
    -- Result: [1, 2, 3, 4]
    
  4. When using schema inference and the inferred type does not match your target table, specify the schema explicitly in the INSERT:

    INSERT INTO your_table
    SELECT col1, CAST(col2 AS Array(Array(UInt32)))
    FROM file('data.json', 'JSONEachRow', 'col1 UInt64, col2 Array(Array(UInt32))');
    
  5. If a schema migration changed the array depth, update your ingestion scripts accordingly and backfill any data that was written with the old dimensionality by reading, transforming, and re-inserting it.
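Putting steps 3 and 4 together, here is a sketch of normalizing 1D source data into a 2D target column (the file name, table name, and column names are assumptions):

```sql
-- Source rows carry 1D arrays, but the target column is Array(Array(UInt32)).
-- array() wraps each row's value in one extra nesting level, so [1, 2, 3]
-- becomes [[1, 2, 3]] before it reaches the table.
INSERT INTO your_table (col1, col2)
SELECT col1, array(col2)
FROM file('data.json', 'JSONEachRow', 'col1 UInt64, col2 Array(UInt32)');
```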

Best Practices

  • Define array column types explicitly in your CREATE TABLE statements rather than relying on schema inference for production tables.
  • Validate incoming data dimensions in your ETL pipeline before it reaches ClickHouse.
  • Use consistent serialization in upstream producers so that empty arrays are still represented at the correct depth (e.g., [[]] rather than [] for a 2D array column).
  • Document the expected array dimensionality for each column in your data catalog.
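The second best practice above can be sketched in Python: a small pre-flight check (the function names are ours, not part of any ClickHouse client API) that flags rows whose nesting depth differs from the target column's dimensionality before they are sent to the database:

```python
def nesting_depth(value):
    """Return the nesting depth of a list structure.

    Scalars are depth 0; an empty list still counts as one level,
    so [] is 1 and [[]] is 2. Uses the maximum depth across elements,
    so ragged rows report their deepest branch.
    """
    if not isinstance(value, list):
        return 0
    if not value:
        return 1
    return 1 + max(nesting_depth(v) for v in value)


def mismatched_rows(rows, expected_depth):
    """Yield (row_index, actual_depth) for rows that would fail insertion."""
    for i, row in enumerate(rows):
        depth = nesting_depth(row)
        if depth != expected_depth:
            yield i, depth


# Example: a 2D column expects depth 2; the second row is only 1D.
bad = list(mismatched_rows([[[1, 2]], [1, 2], [[]]], expected_depth=2))
# bad == [(1, 1)]
```

Running this check in the ETL layer turns a hard insert failure into an actionable per-row report.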

Frequently Asked Questions

Q: Can ClickHouse automatically coerce a 1D array into a 2D array?
A: No. ClickHouse enforces strict type matching for array dimensions. You must explicitly wrap the array, for example using array(myArray) to add a nesting level.
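For example, assuming a 1D source value:

```sql
-- array() adds exactly one nesting level, so an Array value of
-- [1, 2, 3] becomes the 2D value [[1, 2, 3]].
SELECT array([1, 2, 3]) AS wrapped, toTypeName(wrapped) AS type;
```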

Q: Does this error affect already-stored data?
A: No. The error is raised at insert or query time. Data that was previously written with the correct dimensions remains unaffected.

Q: How do I find which rows in my source file have the wrong number of dimensions?
A: Query the file directly with ClickHouse's file table function and use toTypeName() to inspect inferred types per row, or filter rows where casting fails using isNull(CAST(...)) patterns.
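A sketch of the second pattern from the answer above: reading the array column as a raw String (the input_format_json_read_arrays_as_strings setting, file name, and column name are all assumptions) lets accurateCastOrNull flag rows that cannot be cast to the target depth:

```sql
SELECT rowNumberInAllBlocks() AS row_index, col2 AS raw_value
FROM file('data.json', 'JSONEachRow', 'col2 String')
WHERE accurateCastOrNull(col2, 'Array(Array(UInt32))') IS NULL
SETTINGS input_format_json_read_arrays_as_strings = 1;
```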
