The Elasticsearch rollover API creates a new write index when the current write target meets a size, age, or document-count condition. It is the primary mechanism for keeping time-series indices (logs, metrics, events) from growing without bound. You can call it manually with POST /<target>/_rollover, but in production it is almost always driven by ILM so it happens automatically. Rollover works against both data streams and write aliases on Elasticsearch 7.x, 8.x, and 9.x.
This guide covers the API, the conditions you can specify, the difference between data stream and alias rollover, the auto-naming convention, and the integration with ILM.
How Rollover Works
A rollover target (an alias with is_write_index or a data stream) points at the current write index. When you call rollover, Elasticsearch:
- Checks the rollover conditions you specified, if any.
- If they are met (or no conditions were given), creates a new index.
- Switches the write target to the new index.
- Leaves the previous index in place, still readable.
The application keeps writing to the same name (logs, metrics, the data stream name). Only Elasticsearch knows which underlying index is currently receiving writes.
The API
# Roll over now, no conditions
POST /<rollover-target>/_rollover
# Roll over only if conditions are met
POST /<rollover-target>/_rollover
{
"conditions": {
"max_age": "7d",
"max_docs": 100000000,
"max_primary_shard_size": "50gb"
}
}
# Roll over to a specific new index name (alias-managed rollover only)
POST /my-alias/_rollover/my-index-000002
The response includes rolled_over: true if a rollover actually happened, plus the conditions that were met.
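A successful conditional rollover response looks roughly like this (index names illustrative; the conditions map echoes each condition with whether it was met):

```json
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "old_index": "logs-000001",
  "new_index": "logs-000002",
  "rolled_over": true,
  "dry_run": false,
  "conditions": {
    "[max_age: 7d]": true,
    "[max_docs: 100000000]": false,
    "[max_primary_shard_size: 50gb]": false
  }
}
```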
The dry_run query parameter evaluates the conditions without performing the rollover, which is useful for monitoring and dashboards.
POST /<rollover-target>/_rollover?dry_run=true
Rollover Conditions
You can specify any combination of:
| Condition | Triggers when |
|---|---|
| max_age | The index has existed longer than this |
| max_docs | The index has more than N documents |
| max_size | The total size of the primary shards exceeds this |
| max_primary_shard_size | Any one primary shard exceeds this |
| max_primary_shard_docs | Any one primary shard has more than N documents |
| min_age / min_docs / other min_* variants | Must all be met before rollover can happen |
If you specify several max_* conditions, any one of them is enough to trigger rollover. The min_* conditions are ANDed: all must be met. Mixing both gives you "roll over only after X, but no later than Y."
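The combination logic can be sketched in a few lines of Python (a simplified model of the documented semantics, not Elasticsearch code):

```python
def should_roll_over(met):
    """Decide rollover from a dict of condition-name -> whether it was met.

    Mirrors the documented semantics: at least one max_* condition must
    be met (OR), and every min_* condition must also be met (AND).
    """
    max_met = [v for k, v in met.items() if k.startswith("max_")]
    min_met = [v for k, v in met.items() if k.startswith("min_")]
    # all([]) is True, so with no min_* conditions any satisfied max_* wins
    return any(max_met) and all(min_met)

print(should_roll_over({"max_age": True, "max_docs": False}))  # True
print(should_roll_over({"max_age": True, "min_docs": False}))  # False
```

The second call shows the "no earlier than X, no later than Y" pattern: max_age has elapsed, but the unmet min_docs condition still blocks the rollover.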
The most useful production combination is usually:
{
"conditions": {
"max_age": "7d",
"max_primary_shard_size": "50gb"
}
}
max_primary_shard_size is the more important one for shard sizing health. max_age is the safety net for low-traffic indices that would otherwise live forever.
Alias-Based Rollover
Set up an alias with is_write_index pointing at the first index:
PUT /logs-000001
{
"aliases": { "logs": { "is_write_index": true } }
}
Now POST /logs/_doc writes go to logs-000001. When you call:
POST /logs/_rollover
Elasticsearch creates logs-000002, switches is_write_index to the new index, and leaves logs-000001 readable as a member of the alias. Reads via logs still span both indices.
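You can confirm the switch with GET /_alias/logs. After the rollover, the response shows something like this (abbreviated, names illustrative):

```json
{
  "logs-000001": { "aliases": { "logs": { "is_write_index": false } } },
  "logs-000002": { "aliases": { "logs": { "is_write_index": true } } }
}
```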
The Auto-Naming Rule
When you do not specify a target name, Elasticsearch applies this rule:
If the original index name ends in a hyphen followed by a number, increment that number (zero-padded to at least six digits). Otherwise, the rollover request fails and you must pass the new index name explicitly (the POST /my-alias/_rollover/my-index-000002 form above).
So logs-000001 becomes logs-000002, and logs-000009 becomes logs-000010. An index simply named events cannot be auto-incremented. For predictable naming, start with the -000001 suffix.
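The increment rule is easy to model (a sketch of the documented behavior, not Elasticsearch's own code):

```python
import re

def next_rollover_name(index_name):
    """Apply the rollover auto-naming rule: the number after the last
    hyphen is incremented and zero-padded to at least six digits."""
    m = re.fullmatch(r"(.+-)(\d+)", index_name)
    if m is None:
        raise ValueError(
            f"{index_name!r} does not end in -<number>; "
            "specify the new index name explicitly"
        )
    prefix, num = m.groups()
    return f"{prefix}{int(num) + 1:0{max(len(num), 6)}d}"

print(next_rollover_name("logs-000001"))  # logs-000002
```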
You can also use date math in the original index name to get date-based naming:
# Create logs-2026.05.13-000001 (date-math URL-encoded)
PUT /%3Clogs-%7Bnow%2Fd%7D-000001%3E
This produces logs-2026.05.13-000001. A rollover the next day produces logs-2026.05.14-000002: the date math is resolved at rollover time and the generation number is incremented.
Data Stream Rollover
Data streams supersede the rollover-alias pattern and are the recommended approach for new time-series workloads. The data stream is the stable name; the backing indices are managed automatically.
# With a matching index template in place (see the templates guide), the first
# indexing request creates the data stream automatically. To create it explicitly:
PUT /_data_stream/logs-app
# Manual rollover
POST /logs-app/_rollover
Backing indices follow the pattern .ds-<stream>-<date>-<generation>. They are hidden by default in _cat/indices. The stream itself is the only name that application code or queries should refer to.
For data streams, no is_write_index setup is required: rollover creates the next backing index and makes it the write index automatically.
Driving Rollover from ILM
In production, you rarely call _rollover by hand. Attach an ILM policy with a rollover action and Elasticsearch handles the cadence:
PUT /_ilm/policy/logs-30d
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_age": "7d",
"max_primary_shard_size": "50gb"
}
}
},
"delete": {
"min_age": "30d",
"actions": { "delete": {} }
}
}
}
}
Attach the policy via the index template, and the cluster takes over: rollover when the conditions are met, deletion at 30 days. No human ever calls the API.
ILM evaluates rollover conditions on the indices.lifecycle.poll_interval cadence (default 10 minutes). Combined with cluster activity, that means a rollover triggered by max_primary_shard_size can lag the threshold by a few minutes. For tight cadence, lower the poll interval; for most workloads the default is fine.
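indices.lifecycle.poll_interval is a dynamic cluster setting, so it can be changed without a restart (here set to 1 minute as an example; pick a value that suits your cadence):

```
PUT /_cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "1m"
  }
}
```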
Common Pitfalls
- No is_write_index on the alias. Rollover fails with rollover target [<alias>] does not point to a write index. Set is_write_index: true on the initial member.
- Multiple indices behind one alias, none marked as the write index. Same failure mode. Exactly one underlying index must be the write target.
- max_size vs max_primary_shard_size confusion. max_size is the total size of the primary shards; max_primary_shard_size is the per-shard threshold. For shard health, use the latter.
- Naming pattern that breaks auto-increment. If your indices end with anything other than a hyphen-separated number, rollover cannot auto-increment and you must specify the new name explicitly.
- Forgetting to delete old indices. Rollover never deletes. Pair it with an ILM delete phase or a manual retention job.
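The first two pitfalls are detectable from the output of GET /_alias. A minimal sketch, written as a pure function over the response JSON (alias and index names illustrative):

```python
def aliases_without_write_index(alias_response):
    """Given the JSON body of GET /_alias (index -> {"aliases": {...}}),
    return alias names that have no member with is_write_index: true."""
    has_write = {}
    for index, body in alias_response.items():
        for alias, opts in body.get("aliases", {}).items():
            flag = opts.get("is_write_index", False)
            has_write[alias] = has_write.get(alias, False) or flag
    return sorted(a for a, ok in has_write.items() if not ok)

resp = {
    "logs-000001": {"aliases": {"logs": {"is_write_index": True}}},
    "metrics-000001": {"aliases": {"metrics": {}}},  # flagged: no write index
}
print(aliases_without_write_index(resp))  # ['metrics']
```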
How Pulse Helps With Rollover Health
Rollover misconfigurations rarely fail loudly; they fail silently as oversized shards, late rollovers, or indices that have not rolled in months. Pulse continuously monitors rollover cadence on Elasticsearch and OpenSearch clusters and surfaces aliases that have stopped rolling, data streams whose backing indices have grown past safe shard sizes, ILM policies stuck on the rollover step, and write aliases without is_write_index. Instead of waiting for a hot shard incident, teams running Pulse see the drift the moment it starts. Connect your cluster to Pulse and let it watch the rollover health for you.
Frequently Asked Questions
Q: Should I use a data stream or a rollover alias?
For new time-series workloads, use a data stream. For existing alias-based patterns, there is no urgency to migrate; both work fine on current Elasticsearch versions. Data streams are simpler and have first-class support in newer tooling.
Q: Does rollover delete the old index?
No. Rollover only creates the new write index and switches the pointer. Old indices stay readable until you delete them. Use an ILM delete phase for automatic cleanup.
Q: How big should I let each index get before rolling over?
A useful default is 50 GB per primary shard. For example, with 3 primary shards, set max_primary_shard_size: 50gb so each index tops out at 150 GB total. Going much above 50 GB per shard makes recovery and rebalancing slow.
Q: Why is my rollover not happening even though the conditions are met?
Three common causes: the is_write_index flag is missing on the alias; ILM is paused or polling slowly (indices.lifecycle.poll_interval); the index name does not end in -000001 style so auto-naming fails. Check GET /<index>/_ilm/explain for ILM-driven rollovers.
Q: Can I roll over manually while ILM is also managing the index?
Yes, but it is not usually necessary. A manual POST /<target>/_rollover advances the write index immediately, and ILM picks up from there. Use it for one-off corrections, not regular operation.
Q: How is rollover different from shrink, split, or reindex?
Rollover creates a new write index. Shrink reduces the shard count of an existing index. Split increases the shard count. Reindex copies documents to a new index, optionally transforming them. They solve different problems.