Proper heap sizing is critical for Elasticsearch performance and stability. This guide provides recommendations for sizing the JVM heap based on your server resources and workload.
## The Golden Rule

**Heap should be about half of RAM, but never above 32 GB.**
This is the most important rule for Elasticsearch heap sizing. There are technical reasons for both parts of this rule.
### Why 50% of RAM?
Elasticsearch relies heavily on the filesystem cache for performance. The operating system uses available RAM (the portion not allocated to heap) to cache recently accessed data from disk.
Example allocation for 64 GB RAM server:
- 31 GB for Elasticsearch heap
- 33 GB available for OS and filesystem cache
If you allocate too much to heap, you starve the filesystem cache, and Lucene segment access becomes slow.
### Why Maximum 32 GB?
The JVM uses a technique called "compressed ordinary object pointers" (compressed oops) that lets it address the heap with 32-bit object references by exploiting 8-byte object alignment. This optimization is only available when the heap is below ~32 GB.
With compressed oops (< 32 GB heap):
- Smaller object headers
- More efficient memory usage
- Better CPU cache utilization
Without compressed oops (> 32 GB heap):
- Object pointers double in size
- Effective memory capacity may actually decrease
- Performance regression is common
## Recommended Heap Sizes

### By Server RAM
| Server RAM | Recommended Heap | Filesystem Cache |
|---|---|---|
| 8 GB | 4 GB | 4 GB |
| 16 GB | 8 GB | 8 GB |
| 32 GB | 16 GB | 16 GB |
| 64 GB | 31 GB | 33 GB |
| 128 GB | 31 GB | 97 GB |
| 256 GB | 31 GB (or 64 GB*) | 225 GB |
*Only use 64 GB heap if you have more than 128 GB RAM and your workload specifically benefits from larger heap.
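The rule behind the table above can be sketched as a small shell calculation. This is only a sketch: `recommended_heap_gb` is a hypothetical helper, the RAM size is passed in rather than detected, and the 31 GB cap is the conservative compressed-oops threshold.

```shell
# Recommended heap = min(half of RAM, 31 GB), matching the table above.
recommended_heap_gb() {
  ram_gb=$1
  half=$((ram_gb / 2))
  if [ "$half" -gt 31 ]; then
    echo 31   # cap at the compressed-oops threshold
  else
    echo "$half"
  fi
}

recommended_heap_gb 16   # -> 8
recommended_heap_gb 64   # -> 31 (half would be 32, which loses compressed oops)
recommended_heap_gb 128  # -> 31
```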
### By Workload Type
| Workload | Heap Recommendation |
|---|---|
| Logging/Time-series | 16-31 GB (depends on shard count) |
| Full-text search | 8-16 GB (index size dependent) |
| Analytics/Aggregations | 16-31 GB (aggregation complexity dependent) |
| Mixed workload | 16 GB starting point |
## Configuring Heap Size
### Modern Elasticsearch (7.x+)

Create a custom options file (don't modify `jvm.options` directly):

```
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms16g
-Xmx16g
```
### Important: Set Min = Max

Always set `-Xms` and `-Xmx` to the same value:

```
# Correct
-Xms16g
-Xmx16g

# Wrong - can cause GC pauses while the heap resizes
-Xms4g
-Xmx16g
```
### Using Environment Variables

For Docker or temporary testing:

```
export ES_JAVA_OPTS="-Xms16g -Xmx16g"
```
## Verifying Configuration

### Check Current Heap Size

```
GET /_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_max_in_bytes
```
### Verify Compressed Oops

```
# Check Elasticsearch logs at startup for:
# "heap size [X], compressed ordinary object pointers [true]"
grep "compressed ordinary object pointers" /var/log/elasticsearch/*.log

# Alternatively, ask the JVM directly whether a given heap keeps oops compressed:
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```
### Monitor Heap Usage

```
GET /_cat/nodes?v&h=name,heap.percent,heap.current,heap.max
```
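To alert on the 85% heap threshold discussed later in this guide, the `_cat/nodes` output can be filtered with a few lines of shell. A sketch, run here against hard-coded sample output rather than a live cluster; the node names and values are invented:

```shell
# Sample output in the _cat/nodes column order requested above:
# name heap.percent heap.current heap.max
sample='name heap.percent heap.current heap.max
node-1 62 19.2gb 31gb
node-2 91 28.2gb 31gb'

# Skip the header row; print any node whose heap.percent exceeds 85.
echo "$sample" | awk 'NR > 1 && $2 > 85 { print $1 " heap at " $2 "%" }'
# -> node-2 heap at 91%
```

Against a real cluster you would pipe `curl -s 'localhost:9200/_cat/nodes?h=name,heap.percent,heap.current,heap.max'` into the same `awk` filter (without the header-skip, since `?v` is omitted).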
## Special Considerations

### Elasticsearch 8.x on Large Memory Servers
Elasticsearch 8.x with 128+ GB RAM can potentially use larger heaps (up to 64 GB) with compressed class pointers. However, this requires careful testing.
### Container Environments

In Docker/Kubernetes:

```
# docker-compose.yml (service fragment)
environment:
  - "ES_JAVA_OPTS=-Xms16g -Xmx16g"

# Kubernetes container spec (fragment)
resources:
  limits:
    memory: 32Gi  # At least 2x heap
```
### Multi-Tenant Clusters
If running multiple Elasticsearch processes on one server (not recommended), divide resources accordingly and ensure each heap is set independently.
## Signs of Wrong Heap Size

### Heap Too Small
- Frequent GC pauses
- Heap usage consistently > 85%
- Circuit breaker exceptions
- OOM errors
### Heap Too Large
- Slow query responses (poor filesystem cache)
- Long GC pauses (stop-the-world collections)
- Memory not being fully utilized
- Compressed oops disabled warnings
## Sizing Checklist
- Heap is at most 50% of server RAM
- Heap does not exceed 31-32 GB
- `-Xms` equals `-Xmx`
- Configuration is in `jvm.options.d/` (not modifying `jvm.options` directly)
- Compressed oops is enabled (verified in logs)
- Heap usage in production stays below 85%
- GC pauses are acceptable (< 1 second)
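The first two checklist items can be verified mechanically. A sketch, assuming heap and RAM sizes are supplied in whole GB; `heap_size_ok` is a hypothetical helper, not an Elasticsearch tool:

```shell
# Returns success if heap <= 50% of RAM and heap <= 31 GB.
heap_size_ok() {
  heap_gb=$1
  ram_gb=$2
  [ $((heap_gb * 2)) -le "$ram_gb" ] && [ "$heap_gb" -le 31 ]
}

heap_size_ok 16 64 && echo "16g heap on 64g RAM: ok"
heap_size_ok 48 128 || echo "48g heap on 128g RAM: exceeds the 31 GB cap"
```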
## Adjusting Heap Size
When changing heap size:
- Update configuration in `jvm.options.d/`
- Perform rolling restart of nodes
- Monitor heap usage and GC behavior post-restart
- Verify compressed oops status in logs
```
# Rolling restart sequence

# 1. Disable allocation
PUT /_cluster/settings
{"transient": {"cluster.routing.allocation.enable": "none"}}

# 2. Stop node, update config, start node

# 3. Re-enable allocation
PUT /_cluster/settings
{"transient": {"cluster.routing.allocation.enable": "all"}}
```