To pass Core Web Vitals, all three metrics must be in the "Good" threshold (LCP <2.5s, INP <200ms, CLS <0.1) at the 75th percentile of mobile users over the trailing 28 days. Only about 40% of websites achieve this in 2026, so passing all three is a meaningful competitive edge.
Google's Core Web Vitals "passing" criteria, exactly as Google defines them in 2026:
- All three metrics must hit the **Good** threshold:
  - **LCP:** under 2.5 seconds
  - **INP:** under 200 milliseconds
  - **CLS:** under 0.1
- At the **75th percentile** of users (so 75% of your real users must experience Good values for that metric)
- Measured over the **trailing 28 days** of field data
- Mobile measurements take priority (mobile-first indexing has been universal since 2023)
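The pass/fail logic above can be sketched in a few lines. This is a hypothetical helper, not an official Google tool; the thresholds are the documented "Good" boundaries, and the p75 values would come from your field data:

```javascript
// "Good" thresholds for the three Core Web Vitals metrics.
const GOOD_THRESHOLDS = {
  lcp: 2500, // milliseconds
  inp: 200,  // milliseconds
  cls: 0.1,  // unitless layout-shift score
};

// p75 = the 75th-percentile value of each metric over the trailing 28 days.
// Every metric must be under its "Good" limit — one poor metric fails the
// whole assessment.
function passesCoreWebVitals(p75) {
  return Object.entries(GOOD_THRESHOLDS).every(
    ([metric, limit]) => p75[metric] < limit
  );
}

console.log(passesCoreWebVitals({ lcp: 1800, inp: 150, cls: 0.05 })); // true
console.log(passesCoreWebVitals({ lcp: 1800, inp: 350, cls: 0.05 })); // false (INP)
```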
**Why 75th percentile matters:**
Google uses 75th percentile (not mean or median) because they want to ensure the experience is good for the bulk of users, not just typical users. A site with 70% of users having a 1.5s LCP and 30% having an 8s LCP fails (75th percentile is in the 8s group), even though the median is 1.5s.
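The 70/30 example above can be verified directly. This sketch uses a simple nearest-rank percentile, which is an illustrative simplification (CrUX aggregates field data differently):

```javascript
// Nearest-rank percentile: the value below which p% of samples fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[idx];
}

// 70% of users see a 1.5s LCP, 30% see 8s:
const lcpSeconds = [...Array(70).fill(1.5), ...Array(30).fill(8)];

console.log(percentile(lcpSeconds, 50)); // 1.5 — the median looks great
console.log(percentile(lcpSeconds, 75)); // 8   — p75 lands in the slow tail, fails "Good"
```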
This means your slowest users matter as much as your fastest. Common causes of poor 75th-percentile scores:
- Users on older Android devices (still 30%+ of Canadian mobile traffic)
- Users on flaky 4G connections (rural and suburban areas)
- Users on a first page load (no browser cache)
- Users hitting specific edge cases (payment confirmation pages with heavy third-party scripts, etc.)
**The 2026 industry benchmarks:**
From HTTP Archive's Web Almanac and Google's CrUX data, percentage of websites passing all three CWV at 75th percentile:
- **Mobile: ~40%** (up from 33% in 2024)
- **Desktop: ~58%**
- **Top 1,000 most-visited sites (mobile): ~62%**
- **Bottom decile of sites (mobile): ~12%**
**Pass rates by industry:**
- News and media: ~52% pass
- E-commerce: ~38% pass (hurt by heavy third-party scripts and large product images)
- B2B SaaS marketing sites: ~48% pass
- Local services: ~44% pass
- Government / healthcare: ~31% pass (legacy CMSs, accessibility-heavy markup)
**The ranking impact:**
Google has explicitly stated CWV is a "tiebreaker" ranking signal — it doesn't override content quality, but it differentiates similar-quality pages. Practical impact:
- **For high-competition queries:** failing CWV can cost 1–3 ranking positions
- **For mid-competition queries:** failing CWV is rarely the determining factor
- **For low-competition queries:** CWV is essentially irrelevant; content wins
But here's the second-order effect: poor CWV directly damages conversion rate. Cloudflare's 2024 study found a 7% conversion drop per additional second of load time. So even if CWV doesn't move your ranking, it moves your revenue.
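A quick back-of-envelope sketch of that revenue effect. The 7%-per-second figure is the study's, but the compounding model and the example numbers here are illustrative assumptions:

```javascript
// Assumed model: conversion drops ~7% (multiplicatively) for each extra
// second of load time relative to a baseline.
function projectedConversion(baseRate, extraSeconds, dropPerSecond = 0.07) {
  return baseRate * Math.pow(1 - dropPerSecond, extraSeconds);
}

// A 3% baseline conversion rate with 2 extra seconds of load time:
const r = projectedConversion(0.03, 2);
console.log((r * 100).toFixed(2) + '%'); // 2.59%
```

On a site doing $1M/year in converted revenue, that ~0.4-point drop would be real money, independent of any ranking movement.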
**The "all three Good" target:**
In 2024, Google updated CWV to require all three metrics in Good — previously you could pass with two of three. Practical implication: a site with great LCP and CLS but poor INP no longer "passes" CWV. Most sites that newly fail in 2024–2026 fail on INP, which is substantially harder to pass than the FID metric it replaced.
**Lab data vs field data:**
- **Lab data** (Lighthouse, PageSpeed Insights "Lab" tab): synthetic, controllable, repeatable. Good for debugging.
- **Field data** (Search Console, PageSpeed Insights "Origin" tab, CrUX): real-user measurements; what Google actually uses for ranking.
Google ranks based on field data. A perfect Lighthouse score paired with terrible field data means your real users still have a poor experience. Always check both.
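Field data is available programmatically via the Chrome UX Report API. This sketch parses a response-shaped payload to pull out the p75 values; the sample object mimics the `queryRecord` response shape as I understand it (field names are an assumption — verify against the API docs), including CLS's p75 arriving as a string:

```javascript
// Sample payload shaped like a CrUX API queryRecord response (assumed shape).
const sampleResponse = {
  record: {
    metrics: {
      largest_contentful_paint: { percentiles: { p75: 2100 } },  // ms
      interaction_to_next_paint: { percentiles: { p75: 250 } },  // ms
      cumulative_layout_shift: { percentiles: { p75: '0.08' } }, // string in the API
    },
  },
};

// Extract the three p75 values Google's assessment is based on.
function fieldP75(response) {
  const m = response.record.metrics;
  return {
    lcp: m.largest_contentful_paint.percentiles.p75,
    inp: m.interaction_to_next_paint.percentiles.p75,
    cls: Number(m.cumulative_layout_shift.percentiles.p75),
  };
}

console.log(fieldP75(sampleResponse)); // { lcp: 2100, inp: 250, cls: 0.08 }
```

Here the site would fail on INP (250ms > 200ms) even though LCP and CLS are Good — exactly the "all three Good" trap described above.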
**Common follow-up questions:**

- **What is INP and how do I fix poor INP scores?** — Interaction to Next Paint measures how quickly your page responds to user input. It should be under 200ms (good) or at least under 500ms (acceptable). It replaced FID in March 2024. Most pages with poor INP have heavy JavaScript event handlers or excessive third-party scripts blocking the main thread.
- **How do I fix a poor Largest Contentful Paint (LCP) score?** — LCP should be under 2.5 seconds on mobile. Five fixes that work for 90% of sites: (1) optimize and preload your hero image, (2) eliminate render-blocking resources above the fold, (3) use a CDN, (4) enable HTTP/2 or HTTP/3, (5) reduce server response time (TTFB) to under 600ms.
- **Lab vs field data — which one does Google actually use?** — Field data (real user measurements) is what Google uses for ranking. Lab data (synthetic Lighthouse runs) is for debugging only. A site can have perfect Lighthouse scores and still fail Core Web Vitals if real users experience poor performance.
- **Why does my React/Vue/Angular SPA have poor Core Web Vitals?** — SPAs ship large JavaScript bundles that block the main thread during hydration. The browser must download, parse, compile, and execute the bundle before the page becomes interactive. Solutions: code-splitting, server-side rendering, partial hydration, or migrating to a meta-framework (Next.js, Nuxt, Remix, Astro).
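The standard fix for main-thread-blocking work (the most common INP culprit) is to break long tasks into chunks and yield to the event loop between them. A minimal sketch, using `setTimeout(0)` as the widely supported yield (newer browsers also offer `scheduler.yield()`):

```javascript
// Resolve on the next event-loop turn, letting the browser paint and
// handle pending input between chunks of work.
const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

// Process items in small batches instead of one long blocking task.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handleItem);
    await yieldToMain(); // input handlers can run here
  }
}

// Usage: 1,000 items become 20 short tasks rather than one long one.
processInChunks(Array.from({ length: 1000 }, (_, i) => i), (n) => n * 2);
```

The total work is the same; what changes is that no single task monopolizes the main thread long enough to push interaction latency past the 200ms INP budget.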