The error "failed to parse date field" is one of the most common mapping exceptions in Elasticsearch. It appears when the value in an incoming document does not match the date format defined in the index mapping. The fix depends on which specific mismatch you are dealing with - and there are several distinct failure modes that look similar on the surface but have different root causes.
Format Mismatches: The Usual Suspects
The default format for a date field is strict_date_optional_time||epoch_millis. This accepts ISO 8601 strings like 2024-11-15T08:30:00Z and integer epoch milliseconds like 1731657000000. It does not accept epoch seconds (10 digits instead of 13), floats like 1731657000.123, or date strings in non-ISO layouts like 15/11/2024.
A common scenario: one pipeline sends epoch milliseconds while another sends epoch seconds to the same index. The epoch_millis parser expects a 13-digit integer. A 10-digit epoch second value parses successfully but maps to a date in 1970 because it is interpreted as a millisecond timestamp. No error is thrown - the data is silently wrong.
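This silent misinterpretation is easy to reproduce. The sketch below mimics how an epoch_millis parser reads an integer (plain Python for illustration, not Elasticsearch code; `from_epoch_millis` is a hypothetical helper):

```python
from datetime import datetime, timezone

def from_epoch_millis(value: int) -> datetime:
    """Interpret an integer the way an epoch_millis parser would."""
    return datetime.fromtimestamp(value / 1000, tz=timezone.utc)

# A proper 13-digit epoch-millisecond timestamp lands where expected:
print(from_epoch_millis(1731657000000).year)  # 2024

# The same instant sent as 10-digit epoch seconds is read as millis:
print(from_epoch_millis(1731657000).year)     # 1970 - silently wrong
```

No exception is raised in the second case, which is exactly why this mismatch tends to surface only when someone notices 1970 timestamps in a dashboard.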
The reverse mismatch is noisier. If your mapping specifies epoch_millis and a document arrives with an ISO 8601 string, you get an immediate mapper_parsing_exception:
"mapper_parsing_exception: failed to parse date field [2024-11-15T08:30:00Z] with format [epoch_millis]"
When epoch values arrive in scientific notation (e.g., 1.731657E12 from a JSON serializer using floating point), parsing also fails. Epoch formats expect plain integers or longs, not exponential notation.
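You can mirror that strict integer parse in Python: `int()` rejects exponential notation just as the epoch formats do. The producer-side fix shown is a sketch that assumes the value survives float round-tripping intact:

```python
# Epoch formats expect a plain integer; exponential notation is rejected.
value = "1.731657E12"

try:
    int(value)  # strict integer parse, analogous to epoch_millis
except ValueError as err:
    print("rejected:", err)

# Producer-side fix: serialize the timestamp as an integer, not a float.
fixed = int(float(value))
print(fixed)  # 1731657000000
```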
Strict vs Lenient Date Parsing
Formats prefixed with strict_ enforce rigid matching. strict_date_optional_time requires at least a four-digit year; month and day are optional, but if a time component is present, hours and minutes are mandatory. Without the strict_ prefix, date_optional_time is lenient - it will attempt to parse bare numbers as a year, which causes subtle problems when a numeric string like "12345" that was meant to be treated as a keyword gets coerced into a date.
This distinction matters most with dynamic mapping. When Elasticsearch encounters a new string field in a document, it checks whether the value matches any of the configured dynamic date formats. With lenient parsing, short numeric strings can be incorrectly detected as dates, creating a date mapping for a field that should be keyword or long. Once the mapping exists, subsequent documents with non-date values for that field will fail.
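A toy simulation of the difference - these detector functions are simplified stand-ins for illustration, not Elasticsearch's actual parsers:

```python
from datetime import datetime

def strict_detects(value: str) -> bool:
    """Stand-in for strict detection: requires a full ISO date."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

def lenient_detects(value: str) -> bool:
    """Stand-in for lenient detection: a bare number parses as a year."""
    return value.isdigit() or strict_detects(value)

print(strict_detects("12345"))     # False - not coerced into a date
print(lenient_detects("12345"))    # True  - would create a date mapping
print(strict_detects("2024-11-15"))  # True - a real date still matches
```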
The default dynamic date formats use strict_date_optional_time specifically to prevent this. If you override dynamic_date_formats in your mapping, stick with strict variants unless you have a specific reason not to:
PUT /my_index
{
  "mappings": {
    "dynamic_date_formats": ["strict_date_optional_time", "yyyy/MM/dd"]
  }
}
You can also disable date detection entirely with "date_detection": false if you prefer to define all date fields explicitly in your mapping.
The format Mapping Parameter
When you define a date field explicitly, the format parameter controls which patterns are accepted. Multiple formats can be specified with || as a separator. Elasticsearch tries each format in order and uses the first one that matches:
PUT /events
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss.SSSZ||yyyy-MM-dd||epoch_millis"
      }
    }
  }
}
This field now accepts full ISO timestamps, date-only strings, and epoch milliseconds. The first format in the list also controls display: dates are stored internally as milliseconds since epoch, and when Elasticsearch formats a date for output - aggregation bucket keys, docvalue_fields - it uses the first format in the list. The _source itself is returned exactly as it was sent.
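The first-match-wins behavior can be sketched in a few lines of Python. `parse_multi` and `FORMATS` are hypothetical names, and Java date patterns are approximated with strptime codes:

```python
from datetime import datetime, timezone

# String formats tried in order, mirroring "||"-separated format lists.
FORMATS = ["%Y-%m-%dT%H:%M:%S.%f%z", "%Y-%m-%d"]

def parse_multi(value: str) -> datetime:
    """Try each format in order; first match wins, like Elasticsearch."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    if value.isdigit():  # fall back to epoch_millis for plain integers
        return datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc)
    raise ValueError(f"failed to parse date field [{value}]")

print(parse_multi("2024-11-15T08:30:00.000+0000"))  # full ISO timestamp
print(parse_multi("2024-11-15"))                    # date-only string
print(parse_multi("1731657000000"))                 # epoch milliseconds
```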
A subtle trap: once an index is created, you cannot change the format of an existing date field. If you need to add a format, you must reindex into a new index with the updated mapping. Plan your format list upfront, or use a broad multi-format definition from the start.
Epoch seconds and epoch milliseconds cannot coexist safely in the same format string. If you specify epoch_millis||epoch_second, a 10-digit value matches epoch_millis first and is misread as a millisecond timestamp; put epoch_second first instead and 13-digit values are misread as seconds. Every plain integer matches whichever epoch format is tried first, so ordering alone cannot resolve the ambiguity. Normalize your producers to a single epoch unit and document which one it is.
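The ambiguity is easy to see in a toy sketch (plain Python, not Elasticsearch's parser; `parse_epoch` is a hypothetical helper): whichever epoch format is tried first determines how every bare integer is read.

```python
from datetime import datetime, timezone

def parse_epoch(value, order):
    """Toy sketch: the first epoch format in the list claims every integer."""
    first = order[0]
    seconds = value / 1000 if first == "epoch_millis" else value
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

ten_digit = 1731657000  # produced as epoch seconds
print(parse_epoch(ten_digit, ["epoch_millis", "epoch_second"]).year)  # 1970
print(parse_epoch(ten_digit, ["epoch_second", "epoch_millis"]).year)  # 2024
```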
date_nanos vs date
The date field type stores timestamps with millisecond resolution internally as a 64-bit long representing milliseconds since epoch. The date_nanos type stores nanosecond-resolution timestamps, also as a 64-bit long but representing nanoseconds since epoch. This limits date_nanos to a range of approximately 1970 to 2262.
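The 2262 ceiling follows directly from the 64-bit limit; a quick check in Python:

```python
from datetime import datetime, timedelta, timezone

# date_nanos packs nanoseconds since epoch into a signed 64-bit long,
# so the latest representable instant is 2**63 - 1 ns after 1970-01-01.
max_nanos = 2**63 - 1
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
limit = epoch + timedelta(microseconds=max_nanos // 1000)
print(limit)  # 2262-04-11 23:47:16.854775+00:00
```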
PUT /traces
{
  "mappings": {
    "properties": {
      "span_start": {
        "type": "date_nanos",
        "format": "strict_date_optional_time_nanos||epoch_millis"
      }
    }
  }
}
The default format for date_nanos is strict_date_optional_time_nanos - note it does not include epoch_millis by default, unlike the regular date type. If your ingestion pipeline sends epoch milliseconds to a date_nanos field without adding epoch_millis to the format, every document will fail with a parse error.
Mixing date and date_nanos fields across indices that share an alias or data stream creates problems in queries. Range filters and aggregations behave differently depending on the underlying resolution. A date_histogram with fixed_interval: 1ms works on date but has different precision characteristics on date_nanos. If you are querying across indices where some use date and others use date_nanos for the same logical field, expect rounding inconsistencies in aggregation buckets.
Dynamic Date Detection Pitfalls
When date_detection is enabled (the default), Elasticsearch examines string values in new fields against dynamic_date_formats to decide whether to create a date or keyword mapping. The default patterns are strict_date_optional_time and yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z. Any string field whose first indexed value happens to match one of these patterns gets locked into a date mapping.
This creates a common production failure: the first document indexed into a new dynamic field contains a value like "2024/01/15". Elasticsearch maps the field as date. Later documents contain non-date strings for the same field and fail with mapping exceptions. By the time you notice, thousands of documents may have been rejected.
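The first-value-wins lock-in can be simulated in a few lines - a toy model for illustration, not Elasticsearch code, with hypothetical helpers `index_value` and `looks_like_date`:

```python
from datetime import datetime

mappings = {}  # simulated index mapping: field name -> type

def looks_like_date(value):
    """Crude stand-in for dynamic date detection."""
    for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y/%m/%d"):
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass
    return False

def index_value(field, value):
    if field not in mappings:
        # The first value seen for a field decides its type, permanently.
        mappings[field] = "date" if looks_like_date(value) else "keyword"
    elif mappings[field] == "date" and not looks_like_date(value):
        raise ValueError(f"failed to parse date field [{value}]")

index_value("status", "2024/01/15")  # matches yyyy/MM/dd -> mapped as date
print(mappings["status"])            # date

try:
    index_value("status", "in-progress")  # later non-date value is rejected
except ValueError as err:
    print(err)
```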
Two defenses work well in practice. First, use explicit mappings for fields you care about and set dynamic: strict or dynamic: false to prevent surprise mappings. Second, if you need dynamic mapping, use index templates with dynamic_templates to control how specific field name patterns are mapped:
{
  "dynamic_templates": [
    {
      "strings_as_keywords": {
        "match_mapping_type": "string",
        "mapping": { "type": "keyword" }
      }
    }
  ]
}
This forces all dynamically detected strings to keyword and prevents accidental date coercion. You can then add explicit date mappings for fields that actually contain dates. The tradeoff is that you lose automatic date detection, but the predictability is worth it in any index receiving data from multiple sources with varying schemas.
One final note: epoch_millis and epoch_second cannot be used as dynamic_date_formats. Numeric values sent as JSON numbers bypass string-based date detection entirely and are mapped as long or float. Only string representations of dates trigger dynamic detection.