The "DB::Exception: Too many rows in result set" error in ClickHouse is raised when a query attempts to return more rows than the max_result_rows setting permits. This is a protective measure: the limit, reported with error code TOO_MANY_ROWS, prevents queries from overwhelming clients or downstream systems with unexpectedly large result sets.
Impact
When this error fires, the query is aborted and no results are returned to the client. This can disrupt applications that depend on unbounded result sets, BI dashboards attempting to fetch large datasets, and ETL pipelines that move data out of ClickHouse without proper pagination.
Common Causes
- The `max_result_rows` setting is configured with a value lower than the number of rows the query produces
- A query lacks a `LIMIT` clause and scans a table with more rows than expected
- User profiles or quota settings impose row limits that the query exceeds
- Aggregation queries that produce a large number of groups, each becoming a result row
- Application code generating queries without proper result size constraints
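The first cause above is easy to reproduce in a session. The following sketch uses the built-in `numbers` table function to trigger the error deliberately:

```sql
-- Lower the limit for this session only
SET max_result_rows = 10;

-- Produces 100 result rows, exceeding the limit of 10:
-- fails with DB::Exception: Too many rows in result set (TOO_MANY_ROWS)
SELECT number FROM numbers(100);
```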
Troubleshooting and Resolution Steps
Check the current limit:

```sql
SELECT name, value FROM system.settings WHERE name = 'max_result_rows';
```

Add a LIMIT clause to constrain your result set:

```sql
SELECT * FROM my_table LIMIT 10000;
```

Increase the limit for a specific query if a larger result set is genuinely needed:

```sql
SET max_result_rows = 1000000;
SELECT * FROM my_table;
```

Change the overflow behavior instead of throwing an error. The `result_overflow_mode` setting controls what happens when the limit is exceeded:

```sql
SET max_result_rows = 100000;
SET result_overflow_mode = 'break';
SELECT * FROM my_table;
```

With `break`, ClickHouse stops sending rows once the limit is reached and returns a truncated result instead of raising an error. Note that the cutoff is applied per block, so the returned row count can slightly exceed the limit (it is rounded up to a multiple of the block size).

Check user profile settings if limits are being enforced at the profile level:

```sql
SELECT * FROM system.settings_profile_elements WHERE setting_name = 'max_result_rows';
```

Use pagination for large result sets by combining `LIMIT` with `OFFSET`, or prefer keyset pagination:

```sql
SELECT * FROM my_table WHERE id > last_seen_id ORDER BY id LIMIT 10000;
```

Review quotas that may impose row-level limits on specific users:

```sql
SELECT * FROM system.quotas;
SELECT * FROM system.quota_usage;
```
Best Practices
- Always use `LIMIT` clauses in ad-hoc queries and application code to prevent unexpectedly large results.
- Set `max_result_rows` at the user profile level for interactive users, while allowing higher limits for ETL processes.
- Use `result_overflow_mode = 'break'` in dashboards where partial results are acceptable.
- Implement keyset pagination rather than `OFFSET` for efficiently iterating over large datasets.
- Consider using `INTO OUTFILE` or streaming formats for bulk data exports rather than fetching large result sets through the standard query interface.
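For the bulk-export point above, a clickhouse-client session can stream results straight to a local file instead of materializing them in the client. A minimal sketch (the file path, table name, and output format are illustrative):

```sql
-- Runs in clickhouse-client; writes rows to a file on the client machine
-- in a streaming format rather than returning them through the result set
SELECT *
FROM my_table
INTO OUTFILE '/tmp/my_table_export.csv'
FORMAT CSVWithNames;
```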
Frequently Asked Questions
Q: What is the default value of max_result_rows?
A: By default, max_result_rows is set to 0, which means there is no limit. If you are seeing this error, the limit has been explicitly configured either in the server settings, a user profile, or the session.
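To confirm whether the limit has been overridden somewhere, the `changed` flag in system.settings shows whether the effective value differs from the server default:

```sql
SELECT name, value, changed
FROM system.settings
WHERE name = 'max_result_rows';
-- changed = 1 means the value was overridden
-- (at the profile, session, or query level)
```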
Q: Does max_result_rows count rows before or after aggregation?
A: It counts the rows in the final result set that would be sent to the client, meaning after all processing including aggregation, filtering, and sorting.
Q: Can I set different limits for different users?
A: Yes. You can configure max_result_rows in user profiles, allowing tighter limits for interactive users and more permissive settings for batch processing accounts.
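Per-user limits like those described above can be expressed with settings profiles. A sketch, assuming hypothetical profile and user names:

```sql
-- Tight limit for an interactive analyst account
CREATE SETTINGS PROFILE IF NOT EXISTS analyst_limits
    SETTINGS max_result_rows = 100000
    TO analyst_user;

-- Permissive profile for a batch/ETL account (0 = unlimited)
CREATE SETTINGS PROFILE IF NOT EXISTS etl_limits
    SETTINGS max_result_rows = 0
    TO etl_user;
```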
Q: What is the difference between max_result_rows and max_rows_to_read?
A: max_result_rows limits how many rows are returned in the final result, while max_rows_to_read limits how many rows ClickHouse reads from storage during query execution. They serve different purposes and can be used together.
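The two settings can be combined in one session to bound both the scan and the result independently. A sketch with illustrative values and a hypothetical table:

```sql
-- Cap how many rows the query may read from storage...
SET max_rows_to_read = 100000000;
-- ...and, separately, how many rows it may return to the client
SET max_result_rows = 10000;
SET result_overflow_mode = 'break';

-- Aggregation: max_rows_to_read applies to the scanned rows,
-- max_result_rows to the number of groups in the final result
SELECT user_id, count() AS events
FROM events_table  -- hypothetical table
GROUP BY user_id;
```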