The indices.queries.cache.size setting in Elasticsearch controls the maximum amount of memory allocated for the query cache across all indices in the cluster. The query cache stores the results of filter clauses, enabling faster retrieval when the same filters are executed in subsequent queries.
- Default Value: 10% (10% of the JVM heap)
- Possible Values: a percentage (e.g., 10%) or an absolute size (e.g., 512mb)
- Recommendations: Keep at 10% for most workloads. Increase to 15-20% for filter-heavy workloads. Decrease if you are experiencing heap pressure or low cache hit rates.
This is a static, node-level setting that must be configured on every node. Each node maintains a single query cache shared across all shards on that node, so the memory used on a node will not exceed this limit. When the cache reaches capacity, entries are evicted using a Least Recently Used (LRU) policy.
Example
To set the query cache size to 15% of heap in elasticsearch.yml:
indices.queries.cache.size: 15%
Or to set an absolute size of 1GB:
indices.queries.cache.size: 1gb
Note: This setting can only be configured in elasticsearch.yml and requires a node restart to take effect.
You might want to change this setting if:
- You're seeing high cache eviction rates indicating the cache is too small.
- Your workload is heavily filter-based with high filter reuse.
- You're experiencing heap pressure and the query cache hit rate is low.
Common Issues and Misuses
- Setting the value too high, causing heap pressure and increased GC activity.
- Setting it too low for filter-heavy workloads, leading to constant evictions and poor cache effectiveness.
- Not monitoring cache hit rates and eviction statistics before adjusting the size.
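The monitoring step above can be sketched in code. Assuming a payload shaped like the `_stats/query_cache` response (the numbers below are made up for illustration), a hit rate can be derived like this:

```python
# Sketch: derive a hit rate from query cache stats. The sample payload
# below is hypothetical, shaped like a _stats/query_cache response.
sample_stats = {
    "_all": {
        "total": {
            "query_cache": {
                "memory_size_in_bytes": 104857600,
                "hit_count": 80000,
                "miss_count": 20000,
                "evictions": 1500,
            }
        }
    }
}

def hit_rate(query_cache: dict) -> float:
    """Fraction of cache lookups served from the cache."""
    hits = query_cache["hit_count"]
    misses = query_cache["miss_count"]
    total = hits + misses
    return hits / total if total else 0.0

qc = sample_stats["_all"]["total"]["query_cache"]
print(f"hit rate: {hit_rate(qc):.2%}, evictions: {qc['evictions']}")
# A high hit rate combined with many evictions suggests the cache is
# useful but undersized; a low hit rate suggests shrinking it instead.
```

This is the basic arithmetic behind the "monitor before adjusting" advice: both numbers together, not either one alone, indicate whether resizing will help.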
Do's and Don'ts
Do's:
- Monitor cache statistics using the _stats/query_cache API before adjusting.
- Start with the default 10% and adjust based on observed hit rates and evictions.
- Consider your heap size when setting absolute values to ensure sufficient remaining heap.
Don'ts:
- Don't allocate more than 20% of heap without careful monitoring.
- Don't set this too low (below 5%) for production clusters with filter queries.
- Don't change this setting without analyzing cache effectiveness metrics first.
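The heap-awareness advice above can be made concrete with a small check. This sketch uses the 20%-of-heap ceiling suggested in the Don'ts; the unit parsing is a simplified assumption, not Elasticsearch's own size parser:

```python
# Sketch: sanity-check an absolute cache size against the JVM heap,
# using the 20%-of-heap ceiling from the guidance above. Unit parsing
# is simplified and hypothetical, not Elasticsearch's parser.
UNITS = {"kb": 1024, "mb": 1024**2, "gb": 1024**3}

def to_bytes(size: str) -> int:
    for suffix, factor in UNITS.items():
        if size.lower().endswith(suffix):
            return int(float(size[:-len(suffix)]) * factor)
    raise ValueError(f"unrecognized size: {size}")

def check_cache_size(cache_size: str, heap_size: str, ceiling: float = 0.20) -> bool:
    """True if an absolute cache size stays within the ceiling fraction of heap."""
    return to_bytes(cache_size) <= ceiling * to_bytes(heap_size)

print(check_cache_size("1gb", "8gb"))  # 1gb is 12.5% of an 8gb heap: within ceiling
print(check_cache_size("2gb", "8gb"))  # 2gb is 25% of an 8gb heap: over ceiling
```

A check like this is most useful when heap sizes differ across nodes, since a fixed absolute value consumes a different fraction of each node's heap.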
Frequently Asked Questions
Q: How do I know if I should increase the cache size?
A: Monitor the cache eviction rate using the stats API. If you see frequent evictions combined with high hit rates, increasing the cache size may improve performance.
Q: What's the difference between percentage and absolute size?
A: Percentage automatically scales with your heap size, while absolute sizes remain fixed. Percentage is generally recommended to ensure the cache scales appropriately with available heap.
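The scaling difference is easy to see with a few hypothetical heap sizes. This sketch resolves both styles of value against a heap (the parsing is simplified for illustration):

```python
# Sketch: a percentage setting scales with the heap while an absolute
# setting stays fixed. Heap sizes here are hypothetical examples, and
# the parsing is simplified, not Elasticsearch's own.
def cache_size_gib(heap_gib: float, setting: str) -> float:
    """Resolve a '10%'- or '1gb'-style setting against a heap size in GiB."""
    if setting.endswith("%"):
        return heap_gib * float(setting[:-1]) / 100.0
    return float(setting.rstrip("gb"))

for heap in (4, 8, 16):
    print(f"heap {heap:>2} GiB -> 10% = {cache_size_gib(heap, '10%'):.1f} GiB, "
          f"1gb = {cache_size_gib(heap, '1gb'):.1f} GiB")
```

With a 4 GiB heap, 10% yields a smaller cache than the fixed 1gb; with 16 GiB it yields a larger one, which is why the percentage form adapts better to heterogeneous nodes.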
Q: Can I set this per-index?
A: No, this is a node-level setting that applies to all indices on a node. Individual indices can only enable or disable their participation in the cache using the index-level setting index.queries.cache.enabled.
Q: How can I monitor query cache usage?
A: Use the GET /_stats/query_cache API to view hit rates, miss rates, evictions, memory usage, and cache entry counts across your cluster.
Q: Does this setting affect the request cache?
A: No, this setting only controls the query cache (filter clause results). The request cache has its own separate memory allocation controlled by indices.requests.cache.size.
Q: What happens when the cache is full?
A: When the cache reaches the size limit, Elasticsearch uses an LRU (Least Recently Used) eviction policy to remove the entries that have gone unused the longest and make room for new ones.
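The eviction behavior just described can be illustrated with a minimal size-bounded LRU cache. This is a sketch of the policy, not Elasticsearch's actual implementation:

```python
from collections import OrderedDict

# Sketch: a minimal size-bounded LRU cache illustrating the eviction
# policy described above (not Elasticsearch's actual implementation).
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: OrderedDict = OrderedDict()

    def get(self, key: str):
        if key not in self.entries:
            return None                       # cache miss
        self.entries.move_to_end(key)         # a hit makes the entry "recent"
        return self.entries[key]

    def put(self, key: str, value) -> None:
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used

cache = LRUCache(capacity=2)
cache.put("filter_a", "result_a")
cache.put("filter_b", "result_b")
cache.get("filter_a")               # touch filter_a so it is recent
cache.put("filter_c", "result_c")   # over capacity: filter_b is evicted
print(cache.get("filter_b"))        # None: evicted despite being newer than filter_a
print(cache.get("filter_a"))        # result_a: survived because it was reused
```

Note that filter_b, not filter_a, was evicted even though filter_a was inserted first: recency of use, not insertion order, decides what stays.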