Field data (real user measurements) is what Google uses for ranking. Lab data (synthetic Lighthouse runs) is for debugging only. A site can have perfect Lighthouse scores and still fail Core Web Vitals if real users experience poor performance.
**Lab data** is generated by automated tools (Lighthouse, WebPageTest, Calibre) running synthetic page loads in a controlled environment. Reproducible, isolated, useful for diagnostics.
**Field data** is collected from real Chrome users via the Chrome User Experience Report (CrUX) — anonymized telemetry from actual page visits. Aggregated to the 75th percentile and reported in 28-day windows.
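The 75th-percentile aggregation means a page "passes" only if three out of four real visits meet the threshold. A minimal sketch of a p75 over raw measurements (the sample values and nearest-rank method are illustrative; CrUX actually aggregates binned histograms, not individual samples):

```javascript
// Hypothetical raw LCP samples (ms) from individual page visits.
const lcpSamples = [1800, 2100, 2400, 2600, 3100, 4200, 1500, 2900];

// p75: the value at or below which 75% of measurements fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(p * sorted.length) - 1; // nearest-rank method
  return sorted[Math.max(0, idx)];
}

const p75 = percentile(lcpSamples, 0.75);
console.log(`p75 LCP: ${p75} ms`); // 2900 ms: above the 2500 ms "good" line
```

Note how one slow tail (the 4200 ms visit) doesn't decide the result, but a quarter of visits being slow does.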
**What Google uses for ranking: field data, exclusively.**
Google has stated this in official documentation and at multiple Google I/O events. The Search Console > Core Web Vitals report, the PageSpeed Insights "Origin" data, and the CrUX dashboard all show field data — that's what powers the CWV ranking signal.
**Why lab data can mislead:**
Lab tests run in a controlled environment that may not reflect your actual user mix:
- **Lab uses a single device class** (typically simulated mid-tier mobile). Real users span the full range from new iPhone Pros to 4-year-old budget Androids.
- **Lab uses a single network throttle** (typically simulated 4G). Real users span 5G, fiber, 4G, 3G, flaky WiFi.
- **Lab tests cold loads only.** Real users hit warm caches, returning visits, prerendered/cached pages.
- **Lab doesn't capture user interactions.** INP is meaningless in lab tests because it requires actual clicks; lab tools approximate it via TBT (Total Blocking Time).
- **Lab geography is fixed.** Real users hit your servers from everywhere.
**Common scenario where lab and field diverge:**
A site with:
- Perfect Lighthouse score (95+ Performance)
- Field data showing 60% of users with poor LCP
Usually means: the developer optimized for the test conditions Lighthouse simulates, not for the actual user mix. This often happens when the developer tests on a fast laptop over fiber internet — while the typical real user is on a 3-year-old Android on 4G.
**The reverse scenario:**
- Mediocre Lighthouse score (60–70 Performance)
- Field data showing 80%+ users with good LCP
Usually means: real users disproportionately hit cached pages, prerendered content, or are on much faster devices than Lighthouse's simulated mid-tier mobile. Field data is what matters for ranking, so this site is fine.
**Where to find each:**
**Field data sources:**
- **Search Console > Core Web Vitals** (your aggregate field data)
- **PageSpeed Insights** (per-URL field data, top of report)
- **CrUX Dashboard** at CrUX.dev (custom queries)
- **CrUX BigQuery export** (advanced; raw monthly data per URL)
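Field data is also queryable programmatically via the CrUX API (`POST https://chromeuxreport.googleapis.com/v1/records:queryRecord`, API key required). A sketch of reading p75 values out of a response and checking them against the "good" thresholds — the sample payload below is hypothetical and truncated:

```javascript
// Hypothetical, truncated CrUX API response for one origin.
const sampleResponse = {
  record: {
    metrics: {
      largest_contentful_paint: { percentiles: { p75: 2750 } },  // ms
      interaction_to_next_paint: { percentiles: { p75: 180 } },  // ms
      cumulative_layout_shift:   { percentiles: { p75: '0.08' } }, // unitless
    },
  },
};

// "Good" thresholds: LCP < 2.5 s, INP < 200 ms, CLS < 0.1.
function passesCwv(metrics) {
  return (
    metrics.largest_contentful_paint.percentiles.p75 < 2500 &&
    metrics.interaction_to_next_paint.percentiles.p75 < 200 &&
    parseFloat(metrics.cumulative_layout_shift.percentiles.p75) < 0.1
  );
}

console.log(passesCwv(sampleResponse.record.metrics)); // false — LCP p75 is 2750 ms
```

Note that CrUX reports CLS p75 as a string; the other two are numbers in milliseconds.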
**Lab data sources:**
- **Lighthouse** (in Chrome DevTools or the standalone Node CLI)
- **PageSpeed Insights** (Lighthouse tab, bottom of report)
- **WebPageTest** (more detailed than Lighthouse, customizable test conditions)
- **Calibre, SpeedCurve, SiteSpeed.io** (commercial monitoring with synthetic + field options)
**The honest workflow:**
1. **Use field data to identify problems.** Search Console flags pages failing CWV.
2. **Use lab data to diagnose specific issues.** Lighthouse and DevTools show you exactly what's slow on a given page.
3. **Fix and verify in lab.**
4. **Wait 28+ days and verify in field.** Field data updates over a 28-day rolling window, so changes don't appear immediately.
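The lab/field comparison in steps 3–4 can be checked mechanically. A sketch for LCP, with hypothetical inputs (lab value from a Lighthouse run, field p75 from CrUX or Search Console):

```javascript
// Classify how a lab LCP reading relates to the field p75 (both in ms).
function labFieldGap(labLcpMs, fieldP75LcpMs) {
  const GOOD = 2500; // "good" LCP threshold in ms
  if (labLcpMs < GOOD && fieldP75LcpMs >= GOOD) {
    return 'lab-optimistic'; // test conditions faster than real users
  }
  if (labLcpMs >= GOOD && fieldP75LcpMs < GOOD) {
    return 'lab-pessimistic'; // real users hit caches or faster devices
  }
  return 'agree';
}

console.log(labFieldGap(1900, 3400)); // lab-optimistic — the scenario above
```

"Lab-optimistic" is the dangerous quadrant: your tests pass while the ranking signal fails.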
**Common mistake:** declaring victory after lab metrics improve, then assuming field will follow. It usually does — but always verify, because edge cases (mobile users you didn't test on, geographic regions you forgot about) can mean field data lags or never recovers.
- **What is INP and how do I fix poor INP scores?** — Interaction to Next Paint measures how quickly your page responds to user input. Should be under 200ms (good) or under 500ms (acceptable). Replaced FID in March 2024. Most pages with poor INP have heavy JavaScript event handlers or excessive third-party scripts blocking the main thread.
- **How do I fix a poor Largest Contentful Paint (LCP) score?** — LCP should be under 2.5 seconds on mobile. Five fixes that work for 90% of sites: (1) optimize and preload your hero image, (2) eliminate render-blocking resources above the fold, (3) use a CDN, (4) enable HTTP/2 or HTTP/3, (5) reduce server response time (TTFB) under 600ms.
- **What's a good Core Web Vitals score in 2026?** — All three metrics in the 'Good' threshold (LCP <2.5s, INP <200ms, CLS <0.1) at the 75th percentile of mobile users over the trailing 28 days. About 40% of websites achieve this in 2026 — passing all three is a meaningful competitive edge.
- **Why does my React/Vue/Angular SPA have poor Core Web Vitals?** — SPAs ship large JavaScript bundles that block the main thread during hydration. The browser must download, parse, compile, and execute the bundle before the page becomes interactive. Solutions: code-splitting, server-side rendering, partial hydration, or migrating to a meta-framework (Next.js, Nuxt, Remix, Astro).
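The INP fix for heavy handlers usually comes down to yielding to the main thread so pending input events can run. A minimal sketch of chunking one long task; `processItem` and the chunk size are illustrative placeholders:

```javascript
// Break one long task into batches that yield to the main thread between
// chunks, so clicks and keypresses aren't stuck behind the whole loop.
const CHUNK_SIZE = 50; // tune to keep each chunk well under ~50 ms of work

function yieldToMain() {
  // Use scheduler.yield() where available; setTimeout(0) as a broad fallback.
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, processItem) {
  const results = [];
  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    for (const item of items.slice(i, i + CHUNK_SIZE)) {
      results.push(processItem(item));
    }
    await yieldToMain(); // input events get handled between chunks
  }
  return results;
}
```

Call it from the event handler instead of looping synchronously: `button.addEventListener('click', () => processInChunks(rows, render))`. The total work is the same; the responsiveness (and thus INP) improves because the browser can paint and handle input between chunks.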