HTTP 400 Bad Request means the server cannot parse your request — malformed syntax, invalid headers, or oversized cookies. Here is how to diagnose and fix it.
HTTP 400 Bad Request is the most generic of the client-error codes. It tells you: the server received your request, attempted to parse it, and gave up because something was structurally wrong. Unlike 422 (which means the server parsed your request fine but rejected its contents) or 404 (which means the URL does not exist), 400 means the server could not even understand what you were asking — it is the server saying 'I have no idea what you just sent me.' Common triggers include malformed JSON in a POST body, an invalid Host header, oversized cookies that exceed the server's header buffer, characters outside the URL spec in the path, or an HTTP method the server does not recognize. That genericness is both useful (it covers many edge cases) and frustrating (it tells you nothing about the root cause), which is why proper logging is essential.
In our portfolio monitoring data, the top causes of 400 responses are, in order:

1. Cookie bloat — third-party tools (ad pixels, analytics, marketing automation, session managers) progressively pile up cookies until the combined Cookie header exceeds the server's default buffer (typically 8KB on Nginx, 8190 bytes on Apache). The user sees a 400 'Request Header or Cookie Too Large' error and cannot access the site until they clear cookies.
2. Malformed JSON in API POST bodies — usually a front-end JS bug where an undefined value gets serialized as the literal string 'undefined' instead of being omitted.
3. Invalid characters in URLs — usually emoji, smart quotes, or non-ASCII characters that were not URL-encoded before being included in a link.
4. Missing required headers — APIs that require X-Auth-Token or X-API-Key and reject requests that omit them.
5. HTTP method mismatch — sending a POST to an endpoint that only accepts GET. Most servers correctly answer with 405 Method Not Allowed, but some are misconfigured to return 400.
6. Reverse-proxy strictness — Cloudflare, AWS ALB, and similar edges reject requests with non-standard headers that the origin server would have accepted.
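The top cause — cookie bloat — is easy to quantify before you touch server config. Here is a rough back-of-the-envelope sketch (the cookie names and the helper function are made-up examples, not real tracker cookies or a standard API) that estimates the Cookie header your browser would send and compares it to a typical 8 KB buffer:

```python
# Estimate the Cookie header size a browser would send and compare it
# to a typical 8 KB server buffer. Cookie names/values are illustrative.
NGINX_DEFAULT_BUFFER = 8 * 1024  # matches a per-buffer size of 8k

def cookie_header_size(cookies):
    """Byte length of the 'Cookie: ...' header line these cookies produce."""
    header_value = "; ".join(f"{name}={value}" for name, value in cookies.items())
    return len("Cookie: ") + len(header_value.encode("utf-8"))

cookies = {
    "_ga": "GA1.2.123456789.1700000000",  # analytics
    "session_id": "x" * 64,               # first-party session
    "marketing_blob": "y" * 9000,         # one bloated third-party cookie
}
size = cookie_header_size(cookies)
print(f"Cookie header is {size} bytes — "
      f"{'OVER' if size > NGINX_DEFAULT_BUFFER else 'under'} the 8 KB buffer")
```

A single oversized third-party cookie is usually enough to push the whole header past the limit, which is why clearing cookies "fixes" the site for the affected user.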
Step 1: Determine whether the 400 is intermittent (specific users only) or universal (everyone). Intermittent 400s are almost always cookie bloat or stale session data; universal 400s are almost always a deployment regression or upstream config change.

Step 2: For intermittent 400s, ask the affected user to clear cookies for your domain and try again. If that fixes it, you have a cookie-size issue — increase your server's header buffer (see config snippets below) or audit which cookies are bloating.

Step 3: For universal 400s, check the most recent deployment. Did anyone change the API contract, add a required header, or tighten validation? Roll back and re-test.

Step 4: Capture a full request with `curl -v` or browser devtools (Network tab → right-click → Copy as cURL) and reproduce locally. The server logs should show what specifically failed.

Step 5: If the logs are silent on the actual cause, increase log verbosity temporarily — most web servers default to logging only the status code, not the failure reason.

Step 6: For cookie bloat specifically, audit your tag manager. Each marketing tool typically adds 1-3 cookies; once you cross 30+ third-party cookies, you are at risk.
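Step 4's local reproduction is where cause (2) — the literal string 'undefined' in a JSON body — usually surfaces. A small sketch (the helper function name is ours, not a standard API) that walks a captured body and reports where the bug landed:

```python
import json

def find_undefined_strings(obj, path="$"):
    """Recursively collect JSONPath-style locations whose value is the literal
    string 'undefined' — the classic front-end serialization bug behind 400s."""
    hits = []
    if obj == "undefined":
        hits.append(path)
    elif isinstance(obj, dict):
        for key, value in obj.items():
            hits += find_undefined_strings(value, f"{path}.{key}")
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            hits += find_undefined_strings(value, f"{path}[{i}]")
    return hits

# A captured POST body where the front end serialized a missing field badly:
body = json.loads('{"user": {"id": 42, "referrer": "undefined"}}')
print(find_undefined_strings(body))  # -> ['$.user.referrer']
```

Running this over the body you copied from devtools pinpoints exactly which field the front end mangled, which is far faster than diffing raw payloads by eye.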
Here is a working Nginx snippet that addresses the most common 400 scenarios. These directives belong in the `http` block of `nginx.conf` (most can also be set per-`server`); apply and reload with `sudo nginx -t && sudo systemctl reload nginx`.
```nginx
http {
    # Increase header buffers to handle cookie bloat
    # (the Nginx default is 4 buffers of 8k each)
    large_client_header_buffers 8 32k;
    client_header_buffer_size 8k;
    client_max_body_size 50m;

    # Detailed access log; $request_completion is empty when a request
    # never finished, which helps spot truncated or malformed requests
    log_format detailed '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        'rt=$request_time uct="$upstream_connect_time" '
                        'completion="$request_completion"';
    access_log /var/log/nginx/access.log detailed;

    # The specific reason for a 400 appears in the error log at notice level
    error_log /var/log/nginx/error.log notice;
}
```
After applying, validate with `curl -I https://yourdomain.com/path` — the response status should no longer be 400. If you are running behind Cloudflare or a similar CDN, remember to purge the cache and check that the origin response matches what edge users will see.
For Apache (still common on legacy WordPress and Magento hosts), the equivalent goes in your virtualhost or main server config — note that the `LimitRequest*` directives cannot be set from `.htaccess`:
```apache
# Increase header limits to handle cookie bloat
# (the default LimitRequestFieldSize is 8190 bytes)
LimitRequestFieldSize 32768
LimitRequestLine 16384
LimitRequestFields 200

# Verbose logging to capture the cause of 400 responses
LogLevel warn
ErrorLog /var/log/apache2/error.log

# %{Cookie}i logs the incoming Cookie header; %D is request duration in microseconds
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{Cookie}i\" %D" detailed
CustomLog /var/log/apache2/access.log detailed
```
Reload with `sudo apachectl graceful`. On shared hosting where you cannot edit the server config or reload Apache, ask your host to raise these limits — remember that `LimitRequest*` directives are not honored in `.htaccess`. Either way, flush any page or opcode cache (LiteSpeed Cache, OPcache) before re-testing to be sure you are not looking at a stale response.
Persistent 400 responses on URLs Googlebot tries to crawl will eventually get those URLs dropped from the index. This is most damaging when 400s leak into your sitemap — Google fetches sitemap URLs aggressively, and a sitemap full of 400s signals a broken site. The other risk is selective 400s: some bot-detection setups return 400 to requests they consider suspicious based on user-agent or fingerprinting heuristics. If your bot-detection rules accidentally classify Googlebot as suspicious, Google will see 400s and demote your pages. Whitelist Googlebot's IP ranges (published at https://developers.google.com/search/apis/ipranges/googlebot.json) explicitly in your edge rules. The deepest SEO risk with 400 is silent failure — unlike 5xx errors, which trigger Search Console alerts, 400 is treated as 'your client did something wrong' and is not flagged. Set up your own monitoring that alerts when the 400 rate exceeds a baseline.
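Whitelisting is only safe if you can verify a claimed Googlebot IP. A minimal sketch using the stdlib `ipaddress` module — the single CIDR below is one well-known Googlebot block used here as a sample; in production, load the full list from the googlebot.json URL above rather than hardcoding anything:

```python
import ipaddress

# Sample range only — fetch the authoritative list from Google's
# published googlebot.json rather than hardcoding it.
SAMPLE_GOOGLEBOT_RANGES = ["66.249.64.0/19"]

def is_googlebot_ip(ip, ranges=SAMPLE_GOOGLEBOT_RANGES):
    """True if `ip` falls inside any of the given CIDR ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in ranges)

print(is_googlebot_ip("66.249.66.1"))   # True — inside the sample block
print(is_googlebot_ip("203.0.113.9"))   # False — a TEST-NET documentation address
```

Run the same check against your edge logs: any request claiming a Googlebot user-agent from an IP outside the published ranges is fake, and any 400 served to an IP inside them is costing you crawl budget.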
The most expensive 400 incident in our portfolio history involved an Ottawa e-commerce client whose Cloudflare Bot Fight Mode was upgraded silently in a 2024 platform update. The upgrade tightened heuristics and started returning 400 to roughly 12% of legitimate Googlebot requests. Google Search Console did not alert (it does not flag 4xx errors aggressively). Indexed pages dropped from 4,200 to 1,800 over six weeks. Organic traffic dropped 47%. We caught it during a quarterly audit by cross-referencing the Cloudflare access log with known Googlebot IP ranges. Disabling the over-aggressive rule restored crawling within days, but the recovery took three months. Lesson: never trust your CDN's 'set it and forget it' bot rules without monitoring how they affect search engine bots specifically. Our SEO audit services include a quarterly check on bot-detection settings for exactly this reason.
If the fix above does not resolve the 400 response, the issue is usually one layer deeper than the web server: an upstream application, a misconfigured load balancer, or an origin shield rule on the CDN. At that point a full HTTP trace (via `curl -v` or a Charles/Wireshark capture) is the fastest path to root cause. If you would rather hand the diagnosis to a senior engineer, book a free call and we will walk through the request path with you. For broader site health, see our SEO services and our free PageSpeed audit tool.
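When you need full control over every header during that trace, a throwaway local server plus `http.client` reproduces the request path without touching production. A sketch under those assumptions (the handler is a stand-in for your app; the port is chosen by the OS):

```python
import http.client
import http.server
import threading

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Throwaway handler that accepts anything and answers 200."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind port 0 so the OS picks a free port for the local reproduction.
server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Replay the captured request with every header under your control.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/path", headers={"Cookie": "session=abc"})
resp = conn.getresponse()
print(resp.status, resp.reason)  # 200 OK from the echo server
server.shutdown()
```

From here you can swap in the oversized Cookie header or malformed body you captured and watch exactly where in the stack the 400 is generated.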
It means the server received your request but could not parse it — malformed syntax, oversized headers, invalid characters in the URL, or a missing required field. The server is saying 'I do not understand what you sent me.'
Almost always cookie bloat. Third-party tools (analytics, ads, session managers) progressively add cookies until the total Cookie header exceeds the server's buffer limit. Clearing cookies for that domain typically fixes it immediately.
Yes, when Googlebot encounters 400 responses on URLs it tries to crawl, those URLs get dropped from the index. The most insidious case is over-aggressive bot detection that returns 400 to legitimate Googlebot requests — always whitelist Googlebot IPs explicitly.
400 means the request itself is malformed — the server cannot even attempt to look for the resource. 404 means the request was structurally fine but the resource at that URL does not exist.