An HTTP 400 from Elasticsearch means the request is structurally wrong - the server understood the protocol but cannot make sense of what you sent. Unlike 5xx errors that point to server-side problems, a 400 is always a client-side issue. Elasticsearch includes detailed error information in the response body, and reading it carefully is the fastest path to a fix.
Every 400 response follows the same JSON structure: a root error object containing type, reason, and often caused_by with a nested chain of exceptions. The type field is your starting point. Common values include json_parse_exception, parsing_exception, mapper_parsing_exception, illegal_argument_exception, action_request_validation_exception, and strict_dynamic_mapping_exception. Each maps to a distinct category of problem.
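When a client needs to act on these errors programmatically, the useful detail usually sits at the bottom of the caused_by chain. A minimal Python sketch (the helper name root_cause is ours, not part of any client library) that walks a parsed error body down to the innermost exception:

```python
import json

def root_cause(error: dict) -> tuple:
    """Follow the caused_by chain to the innermost exception
    and return its (type, reason) pair."""
    node = error
    while isinstance(node.get("caused_by"), dict):
        node = node["caused_by"]
    return node.get("type"), node.get("reason")

# A typical 400 body, as parsed from the HTTP response
body = json.loads("""
{
  "error": {
    "type": "mapper_parsing_exception",
    "reason": "failed to parse field [timestamp] of type [date]",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "failed to parse date field [yesterday]"
    }
  },
  "status": 400
}
""")

print(root_cause(body["error"]))
# -> ('illegal_argument_exception', 'failed to parse date field [yesterday]')
```

The same loop works regardless of how deep the chain goes, which matters because nested shard or script errors can stack several caused_by levels.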
Malformed JSON
The simplest 400 to diagnose is json_parse_exception. Elasticsearch cannot parse the request body as valid JSON. The error message includes the exact character position where parsing failed - something like Unexpected character (',' (code 44)): was expecting double-quote to start field name at [Source: ...; line: 3, column: 12].
Common causes: a trailing comma after the last field in an object, single quotes instead of double quotes, unescaped characters in string values, or accidentally sending an empty body. The bulk API is especially strict - each line must be valid JSON terminated by a newline, including the final line.
# This will fail - trailing comma
curl -X POST "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": { "title": "test" },
  }
}'
# Validate your JSON before sending it
echo '{"query":{"match":{"title":"test"},}}' | jq .
# parse error (invalid json): ...
Pipe your request body through jq . or python -m json.tool before sending it to Elasticsearch. This catches syntax errors instantly and saves a round trip.
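The same local-validation habit extends to bulk bodies. A sketch in Python (the index name my-index mirrors the examples here) that builds an NDJSON body line by line and checks that each line parses on its own before anything goes over the wire:

```python
import json

docs = [{"title": "first"}, {"title": "second"}]

lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "my-index"}}))  # action line
    lines.append(json.dumps(doc))                                # source line

# Every line must be terminated by a newline - including the last one
body = "\n".join(lines) + "\n"

# Each line must parse as standalone JSON; the body as a whole is NOT valid JSON
for line in body.strip().split("\n"):
    json.loads(line)
```

Serializing each line with json.dumps rather than hand-assembling strings sidesteps the trailing-comma and quoting mistakes described above.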
Invalid Query DSL
A parsing_exception means the JSON is syntactically valid but Elasticsearch does not recognize the query structure. The error reason typically looks like [match_all] malformed query, expected [END_OBJECT] but found [FIELD_NAME] or Unknown key for a START_OBJECT in [query]. This happens when parameters are placed at the wrong nesting level or a query clause name is misspelled.
# Wrong - "size" is inside "query" where it doesn't belong
curl -X POST "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": { "title": "test" },
    "size": 10
  }
}'
# Correct - "size" is at the top level
curl -X POST "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": { "title": "test" }
  },
  "size": 10
}'
Another frequent trigger: confusing the shorthand and full forms of queries. {"term": {"status": "active"}} works, but {"term": {"status": {"value": "active", "boost": 1.5}}} is the expanded form. Mixing both styles - for example {"term": {"status": "active", "boost": 1.5}} - produces a parsing error because Elasticsearch expects either a raw value or a nested object, not both at the same level.
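That rule is easy to lint for on the client side. A rough Python check (ours, and far looser than Elasticsearch's actual parser) that mirrors it: the clause body must name exactly one field, and that field's value must be either a raw scalar or a parameter object:

```python
def term_clause_shape_ok(term_body: dict) -> bool:
    """Client-side sanity check for the dict inside {"term": ...}.
    Mirrors only the shorthand-vs-expanded rule; it does not
    validate parameter names."""
    if len(term_body) != 1:
        # A second key like "boost" next to the field name means
        # the shorthand and expanded forms were mixed
        return False
    (value,) = term_body.values()
    if isinstance(value, dict):
        return "value" in value  # expanded form must carry "value"
    return True                  # shorthand form: a raw scalar

print(term_clause_shape_ok({"status": "active"}))                           # True  (shorthand)
print(term_clause_shape_ok({"status": {"value": "active", "boost": 1.5}}))  # True  (expanded)
print(term_clause_shape_ok({"status": "active", "boost": 1.5}))             # False (mixed)
```

A check like this in a query-builder layer turns a confusing server-side parsing_exception into an immediate local failure.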
Mapping Conflicts and Type Errors
A mapper_parsing_exception fires during indexing when a document field does not match the expected mapping type. If a field is mapped as long and you send a string that cannot be coerced to a number, Elasticsearch rejects the document. The error message spells it out: failed to parse field [count] of type [long] in document [doc-1]. Preview of field's value: 'not-a-number'.
{
  "error": {
    "type": "mapper_parsing_exception",
    "reason": "failed to parse field [timestamp] of type [date]",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "failed to parse date field [yesterday] with format [strict_date_optional_time||epoch_millis]"
    }
  },
  "status": 400
}
The related strict_dynamic_mapping_exception occurs when an index has "dynamic": "strict" set in its mapping and you try to index a document with a field that is not explicitly defined. This is a deliberate safety mechanism - the mapping refuses unknown fields. Either add the field to the mapping with a PUT mapping request, or switch dynamic to true or runtime if you want Elasticsearch to accept new fields automatically.
A subtler variant happens when a field is sometimes sent as a string and sometimes as a nested object. Once Elasticsearch maps it as text, sending an object for that field fails with a mapper_parsing_exception. Consistent document structure is the only real fix.
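For the strict-mapping case, the fix is an explicit mapping update before retrying the rejected document. The request body is small; sketched here in Python (the field name tags and the keyword type are hypothetical stand-ins for your schema):

```python
import json

# Hypothetical new field; adjust the name and type to your schema.
mapping_update = {
    "properties": {
        "tags": {"type": "keyword"}
    }
}

# Send as the body of: PUT /my-index/_mapping
# with the header:     Content-Type: application/json
body = json.dumps(mapping_update)
print(body)
```

After the update succeeds, re-sending the previously rejected document should index normally, since the field is now explicitly defined.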
Missing or Wrong Content-Type
Starting with Elasticsearch 6.x, all requests with a body require a Content-Type: application/json header. Without it the request is rejected with the message Content-Type header [application/x-www-form-urlencoded] is not supported - strictly speaking this comes back as a 406 Not Acceptable rather than a 400, but it surfaces at the same point in client code and is worth covering here. Older curl versions and some HTTP libraries default to application/x-www-form-urlencoded when posting data.
The bulk API requires Content-Type: application/x-ndjson (though application/json is also accepted). If you are hitting the bulk endpoint through a proxy that rewrites headers, or through a framework that sets its own content type, double-check what actually arrives at Elasticsearch - access logging on the proxy, or a packet capture, will show you the real header.
# Will fail on ES 6.x+ without explicit Content-Type
curl -X POST "localhost:9200/my-index/_doc" -d '{"title":"test"}'
# Correct
curl -X POST "localhost:9200/my-index/_doc" \
  -H "Content-Type: application/json" \
  -d '{"title":"test"}'
Reading Error Responses Effectively
Elasticsearch nests exceptions. The top-level type and reason give you the category, but the real detail is often buried in caused_by, which can be several levels deep. Use jq to extract it cleanly.
# Pretty-print just the error chain
curl -s -X POST "localhost:9200/my-index/_search" \
  -H "Content-Type: application/json" \
  -d '{"query":{"bad":{}}}' | jq '.error | {type, reason, caused_by}'
For bulk operations, the response contains per-item error details in the items array. A bulk request can partially succeed - some documents indexed, others rejected. Always check errors: true in the bulk response and iterate over items to find the failures. Logging the full response body from failed bulk calls saves hours of guesswork when debugging intermittent 400s from data pipelines.
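Collecting those per-item failures takes only a few lines. A sketch (the helper name bulk_failures is ours) assuming the bulk response has already been parsed from JSON:

```python
def bulk_failures(bulk_response: dict) -> list:
    """Return one record per failed item in a _bulk response.
    Each entry in "items" is keyed by its action: index, create,
    update or delete."""
    if not bulk_response.get("errors"):
        return []  # fast path: nothing failed
    failures = []
    for item in bulk_response.get("items", []):
        (action, result), = item.items()
        if "error" in result:
            failures.append({
                "action": action,
                "_id": result.get("_id"),
                "status": result.get("status"),
                "error": result["error"],
            })
    return failures

# A partially failed bulk response, abbreviated
response = {
    "errors": True,
    "items": [
        {"index": {"_id": "1", "status": 201}},
        {"index": {"_id": "2", "status": 400,
                   "error": {"type": "mapper_parsing_exception",
                             "reason": "failed to parse field [count] of type [long]"}}},
    ],
}

for f in bulk_failures(response):
    print(f["_id"], f["error"]["type"])
# -> 2 mapper_parsing_exception
```

Logging the returned list (rather than just the boolean errors flag) is what makes intermittent pipeline 400s diagnosable after the fact.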
When you cannot reproduce a 400 locally, enable slow log or audit logging to capture the exact request body that Elasticsearch received. Proxies, client libraries, and serialization layers can all silently transform your request before it reaches the cluster.