Elasticsearch vm.max_map_count and mmap Errors

Elasticsearch relies on Apache Lucene's MMapDirectory to read index data through memory-mapped files. The mmap() system call maps index segments directly into the process's virtual address space, letting the kernel's page cache handle I/O instead of explicit read/write syscalls. This is fast - the CPU's MMU and TLB resolve virtual-to-physical translations in hardware, so random access into multi-gigabyte index files costs roughly the same as reading from a byte array.

Each Lucene segment produces multiple underlying files (term dictionaries, postings lists, stored fields, doc values, and more). Every one of those files gets its own memory mapping. A busy node with hundreds of shards and thousands of segments can easily create tens of thousands of individual mappings. The Linux kernel tracks these through Virtual Memory Areas (VMAs), and the total count allowed per process is governed by vm.max_map_count.
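You can see this accounting directly: the kernel lists one mapping per line in /proc/&lt;pid&gt;/maps. A quick sketch, using the current shell's own PID for illustration (substitute the Elasticsearch PID to inspect a real node):

```shell
# Count the live memory mappings (VMAs) of a process.
# /proc/<pid>/maps lists one mapping per line.
pid=$$                      # illustration only; substitute the Elasticsearch PID
wc -l < "/proc/$pid/maps"

# The per-process ceiling the kernel enforces:
cat /proc/sys/vm/max_map_count
```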

The Bootstrap Check

When Elasticsearch starts, it runs a series of bootstrap checks that verify the host is configured for production use. One of these checks reads /proc/sys/vm/max_map_count and compares it against the required minimum. If the value is too low, Elasticsearch refuses to start and logs:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

The default on most Linux distributions is 65530. Elasticsearch versions 8.15 and earlier require at least 262144. Starting with Elasticsearch 8.16, the minimum was raised to 1048576 to accommodate larger indices and more aggressive use of memory-mapped I/O. This check only triggers in production mode - when Elasticsearch binds to a non-loopback address. In development mode (bound to localhost), it logs a warning but doesn't block startup.
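You can run the same comparison yourself before starting the service. A minimal sketch of the preflight check, assuming the 262144 minimum for 8.15 and earlier (substitute 1048576 for newer versions):

```shell
#!/bin/sh
# Mirror the bootstrap check: compare the live value against the minimum.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -lt "$required" ]; then
  echo "vm.max_map_count [$current] is too low, increase to at least [$required]"
else
  echo "vm.max_map_count [$current] is sufficient"
fi
```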

Fixing on Linux

Set the value at runtime with sysctl:

sudo sysctl -w vm.max_map_count=1048576

This takes effect immediately but won't survive a reboot. To make it persistent, add the setting to /etc/sysctl.conf or drop a file in /etc/sysctl.d/:

echo "vm.max_map_count=1048576" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system

Verify the active value:

cat /proc/sys/vm/max_map_count

If you're running Elasticsearch from a systemd unit, the sysctl change applies at the host level - no service-specific override is needed for this particular parameter. The setting is global to the kernel, not per-process.

Docker Containers

Docker containers share the host kernel, so vm.max_map_count must be set on the Docker host, not inside the container. Before starting an Elasticsearch container:

sudo sysctl -w vm.max_map_count=1048576

With Docker Compose, you cannot set this through the compose file: the sysctls key only covers namespaced kernel parameters, and vm.max_map_count is a host-wide setting. The host must be preconfigured. A common pattern for CI or ephemeral environments is to set the value in a startup script or cloud-init block before Docker starts.
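For example, a cloud-init fragment can persist the setting on the host at first boot, before the Docker daemon and any Compose stacks come up. A sketch, with an arbitrary file name:

```yaml
#cloud-config
# Persist vm.max_map_count on the Docker host before Docker starts.
write_files:
  - path: /etc/sysctl.d/99-elasticsearch.conf   # illustrative file name
    content: |
      vm.max_map_count=1048576
runcmd:
  - sysctl --system
```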

On Docker Desktop for Mac and Windows, containers run inside a Linux VM, and that VM may reset vm.max_map_count on restart. Docker Desktop 4.25+ lowered the default to 65530, which breaks Elasticsearch. You can work around this by running the sysctl inside the Docker VM from a privileged container:

docker run --rm --privileged alpine sysctl -w vm.max_map_count=1048576

This persists until the VM restarts.

Kubernetes Deployments

In Kubernetes, you can't set host-level sysctls from a regular container. The standard approach is a privileged init container that runs before Elasticsearch starts:

initContainers:
- name: sysctl
  image: busybox
  command: ["sysctl", "-w", "vm.max_map_count=1048576"]
  securityContext:
    privileged: true
    runAsUser: 0

This modifies the setting on the underlying node. Since vm.max_map_count is a host-level (not namespace-scoped) sysctl, it affects all pods on that node. Some managed Kubernetes providers (GKE, EKS, AKS) restrict privileged containers by default through Pod Security admission or other admission controllers (PodSecurityPolicies on older clusters). In those environments, you may need to pre-configure node pools with the correct sysctl using DaemonSets, node startup scripts, or custom node images.
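Where privileged init containers are allowed but you want the sysctl applied on every node rather than per Elasticsearch pod, a DaemonSet is one way to do it. A sketch with illustrative names (the pause image simply keeps the pod alive after the init container has run):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-max-map-count   # illustrative name
spec:
  selector:
    matchLabels:
      app: sysctl-max-map-count
  template:
    metadata:
      labels:
        app: sysctl-max-map-count
    spec:
      initContainers:
      - name: sysctl
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=1048576"]
        securityContext:
          privileged: true
          runAsUser: 0
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # no-op container to keep the pod running
```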

If you're using the Elastic Cloud on Kubernetes (ECK) operator, the operator's documentation recommends the init container approach and provides ready-made configuration examples. Note that the init container needs runAsUser: 0 since writing to /proc/sys requires root.

What Happens When vm.max_map_count Is Too Low

If you bypass the bootstrap check (by running in development mode or on an older version that warns instead of failing), Elasticsearch will start but degrade unpredictably. When Lucene exhausts the available VMAs, any attempt to open a new segment - during indexing, merging, or shard recovery - throws an IOException wrapping an ENOMEM from the kernel. Symptoms include failed shard allocations, stalled merges, and partial search results.

The process doesn't run out of physical RAM. It runs out of address space mappings. A node with 64 GB of heap and plenty of free memory can still hit this limit if it has enough segments. Monitoring tools won't flag memory pressure because the issue is in the kernel's VMA accounting, not in JVM or system memory. If you see map failed or mmap failed in Elasticsearch logs while memory looks healthy, vm.max_map_count is the first thing to check.
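A quick diagnostic is to compare the node's live VMA count against the limit. A sketch, assuming the Elasticsearch process matches the pgrep pattern below (it falls back to the current shell so the commands run anywhere):

```shell
# Compare a process's live mapping count to the kernel limit.
pid=$(pgrep -f org.elasticsearch | head -n 1)   # assumed process name pattern
pid=${pid:-$$}                                  # fall back to this shell for illustration
used=$(wc -l < "/proc/$pid/maps")
limit=$(cat /proc/sys/vm/max_map_count)
echo "VMAs in use: $used of $limit"
```

If the in-use count is within a few percent of the limit, raise vm.max_map_count before the node starts failing segment opens.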
