Field data (real user measurements) is what Google uses for ranking. Lab data (synthetic Lighthouse runs) is for debugging only. A site can have perfect Lighthouse scores and still fail Core Web Vitals if real users experience poor performance.
**Lab data** is generated by automated tools (Lighthouse, WebPageTest, Calibre) running synthetic page loads in a controlled environment. Reproducible, isolated, useful for diagnostics.
**Field data** is collected from real Chrome users via the Chrome User Experience Report (CrUX) — anonymized telemetry from actual page visits. Aggregated to the 75th percentile and reported in 28-day windows.
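The 75th-percentile aggregation is easy to reason about in code. A minimal nearest-rank percentile sketch in JavaScript (the sample LCP values are hypothetical, and CrUX's real aggregation over its 28-day window is more involved than this):

```javascript
// Nearest-rank percentile: sort ascending, take the value at rank ceil(p * n).
// A simplification of how CrUX summarizes a metric's distribution.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil(p * sorted.length) - 1; // 0-based index
  return sorted[rank];
}

// Hypothetical LCP samples (ms) from real page loads over 28 days.
const lcpSamples = [1200, 1800, 2100, 2600, 3400, 4100, 900, 2300];
const p75 = percentile(lcpSamples, 0.75);
console.log(`p75 LCP: ${p75} ms`); // the number judged against the 2.5 s threshold
```

Note that a single slow tail of visits can push the p75 over the threshold even when the median experience is fast.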
**What Google uses for ranking: field data, exclusively.**
Google has stated this in official documentation and at multiple Google I/O events. The Search Console > Core Web Vitals report, the PageSpeed Insights "Origin" data, and the CrUX dashboard all show field data — that's what powers the CWV ranking signal.
**Why lab data can mislead:**
Lab tests run in a controlled environment that may not reflect your actual user mix:
- **Lab uses a single device class** (typically a simulated mid-tier mobile). Real users span the full range from new iPhone Pros to four-year-old budget Androids.
- **Lab uses a single network throttle** (typically simulated 4G). Real users span 5G, fiber, 4G, 3G, and flaky WiFi.
- **Lab tests cold loads only.** Real users hit warm caches, returning visits, and prerendered/cached pages.
- **Lab doesn't capture user interactions.** INP is meaningless in lab tests because it requires actual clicks; lab tools approximate it via TBT (Total Blocking Time).
- **Lab geography is fixed.** Real users hit your servers from everywhere.
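The device-mix point explains most lab/field gaps. A toy illustration with entirely hypothetical numbers: a lab run measures only one simulated mid-tier device, while the field p75 is pulled upward by the slow tail of the traffic mix:

```javascript
// Hypothetical LCP (ms) by device class, with each class's share of real traffic.
// A lab run measures only the "mid-tier" row; field p75 reflects the whole mix.
const trafficMix = [
  { device: 'high-end phone', lcpMs: 1400, share: 0.30 },
  { device: 'mid-tier phone', lcpMs: 2200, share: 0.40 }, // roughly what Lighthouse simulates
  { device: 'budget phone',   lcpMs: 4800, share: 0.30 },
];

// Expand the mix into per-visit samples and take the 75th percentile (nearest rank).
function fieldP75(mix, visits = 100) {
  const samples = mix.flatMap(({ lcpMs, share }) =>
    Array(Math.round(share * visits)).fill(lcpMs));
  samples.sort((a, b) => a - b);
  return samples[Math.ceil(0.75 * samples.length) - 1];
}

const labLcp = trafficMix[1].lcpMs;    // looks fine in Lighthouse
const fieldLcp = fieldP75(trafficMix); // dominated by the budget-device tail
console.log({ labLcp, fieldLcp });
```

With 30% of visits on slow hardware, the p75 lands inside that slow tail, so the field number fails the 2.5 s threshold even though the lab number passes.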
**Common scenario where lab and field diverge:**
A site with:
- Perfect Lighthouse score (95+ Performance)
- Field data showing 60% of users with poor LCP
Usually this means the developer optimized for the test conditions Lighthouse simulates, not for the actual user mix. It often happens when the developer tests on a fast laptop over fiber while the real user is on a three-year-old Android over 4G.
**The reverse scenario:**
- Mediocre Lighthouse score (60–70 Performance)
- Field data showing 80%+ of users with good LCP
Usually means: real users disproportionately hit cached pages, prerendered content, or are on much faster devices than Lighthouse's simulated mid-tier mobile. Field data is what matters for ranking, so this site is fine.
**Where to find each:**
**Field data sources:**
- **Search Console > Core Web Vitals** (your aggregate field data)
- **PageSpeed Insights** (per-URL field data, top of report)
- **CrUX Dashboard** at CrUX.dev (custom queries)
- **CrUX BigQuery export** (advanced; raw monthly data per URL)
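Field data is also queryable programmatically via the public CrUX API. As a sketch, here is roughly the request body the `records:queryRecord` endpoint expects; verify the exact endpoint, field names, and metric names against Google's current API documentation before relying on this:

```javascript
// Build a query body for the CrUX API
// (POST https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord?key=YOUR_KEY).
function buildCruxQuery(origin, formFactor = 'PHONE') {
  return {
    origin,     // or { url: '...' } instead, for page-level rather than origin-level data
    formFactor, // 'PHONE' | 'DESKTOP' | 'TABLET'
    metrics: [
      'largest_contentful_paint',
      'interaction_to_next_paint',
      'cumulative_layout_shift',
    ],
  };
}

const body = buildCruxQuery('https://example.com');
console.log(JSON.stringify(body, null, 2));
```

The response contains histogram bins and the p75 for each requested metric, which is the same data PageSpeed Insights shows at the top of its report.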
**Lab data sources:**
- **Lighthouse** (in Chrome DevTools or the standalone Node CLI)
- **PageSpeed Insights** (Lighthouse tab, bottom of report)
- **WebPageTest** (more detailed than Lighthouse, customizable test conditions)
- **Calibre, SpeedCurve, SiteSpeed.io** (commercial monitoring with synthetic + field options)
**The honest workflow:**
1. **Use field data to identify problems.** Search Console flags pages failing CWV.
2. **Use lab data to diagnose specific issues.** Lighthouse and DevTools show you exactly what's slow on a given page.
3. **Fix and verify in lab.**
4. **Wait 28+ days and verify in field.** Field data updates over a 28-day rolling window, so changes don't appear immediately.
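Step 4's waiting period can be checked mechanically. A small illustrative helper that answers whether the trailing 28-day CrUX window reflects only post-deploy traffic (dates below are hypothetical):

```javascript
const WINDOW_DAYS = 28;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// True once the trailing 28-day window no longer contains any pre-deploy traffic.
function fieldWindowIsClean(deployDate, today = new Date()) {
  const windowStart = new Date(today.getTime() - WINDOW_DAYS * MS_PER_DAY);
  return windowStart >= deployDate;
}

const deploy = new Date('2026-01-01');
console.log(fieldWindowIsClean(deploy, new Date('2026-01-15'))); // window still mixes pre-fix traffic
console.log(fieldWindowIsClean(deploy, new Date('2026-02-01')));
```

Until this returns true, the field numbers are a blend of old and new experiences, so a partial improvement is expected rather than alarming.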
**Common mistake:** declaring victory after lab metrics improve and assuming field data will follow. It usually does, but always verify, because edge cases (mobile users you didn't test on, geographic regions you forgot about) can mean field data lags or never recovers.
- **What is INP and how do I fix poor INP scores?** — Interaction to Next Paint measures how quickly your page responds to user input. It should be under 200ms (good) or under 500ms (acceptable), and it replaced FID in March 2024. Most pages with poor INP have heavy JavaScript event handlers or excessive third-party scripts blocking the main thread.
- **How do I fix a poor Largest Contentful Paint (LCP) score?** — LCP should be under 2.5 seconds on mobile. Five fixes that work for 90% of sites: (1) optimize and preload your hero image, (2) eliminate render-blocking resources above the fold, (3) use a CDN, (4) enable HTTP/2 or HTTP/3, (5) reduce server response time (TTFB) to under 600ms.
- **What's a good Core Web Vitals score in 2026?** — All three metrics in the 'Good' threshold (LCP <2.5s, INP <200ms, CLS <0.1) at the 75th percentile of mobile users over the trailing 28 days. About 40% of websites achieve this in 2026, so passing all three is a meaningful competitive edge.
- **Why does my React/Vue/Angular SPA have poor Core Web Vitals?** — SPAs ship large JavaScript bundles that block the main thread during hydration. The browser must download, parse, compile, and execute the bundle before the page becomes interactive. Solutions: code-splitting, server-side rendering, partial hydration, or migrating to a meta-framework (Next.js, Nuxt, Remix, Astro).
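The thresholds above can be encoded directly. A minimal pass/fail check against the 'Good' thresholds at the field p75 (a sketch, not an official scoring implementation):

```javascript
// 'Good' thresholds for the three Core Web Vitals, evaluated at the field p75.
const GOOD = { lcpMs: 2500, inpMs: 200, cls: 0.1 };

// Returns which metrics pass and whether the page passes CWV overall.
function assessCwv({ lcpMs, inpMs, cls }) {
  const passes = {
    lcp: lcpMs < GOOD.lcpMs,
    inp: inpMs < GOOD.inpMs,
    cls: cls < GOOD.cls,
  };
  return { ...passes, overall: passes.lcp && passes.inp && passes.cls };
}

console.log(assessCwv({ lcpMs: 2100, inpMs: 180, cls: 0.05 })); // all metrics pass
console.log(assessCwv({ lcpMs: 3200, inpMs: 180, cls: 0.05 })); // LCP fails, so overall fails
```

A single failing metric is enough to keep a page out of the 'Good' bucket, which is why the overall flag requires all three.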
The questions we hear most often from prospective clients all circle around the same fundamental concern: how do we know this will actually work? Our answer is always the same — look at the work itself. Every portfolio case study on this site documents real client engagements with real before/after data, real client names, and real performance metrics from Google Search Console and GA4. We publish this level of transparency because it's how we want to be evaluated, and because it's the standard the modern SEO market deserves. If you want to dig into the specifics of how we'd approach your particular situation, the discovery call is the right place to start; we treat it as a strategic conversation, not a sales pitch.
This guide is written for working marketers and founders: it assumes you understand basic SEO vocabulary but doesn't assume agency-level depth. Each section starts with the 'why' before the 'how' so you can skip what's already familiar.
Most teams can implement the foundational recommendations in 4–8 weeks of part-time work. The strategic recommendations (content calendar, link-building, brand positioning) are 6–12 month efforts. We've split them so you can sequence appropriately.
If you have an in-house marketer who can dedicate 10+ hours/week, you can run most of this internally. If your team is already at capacity, an agency engagement frees your internal team to focus on the parts only they can do (relationships, sales, product).
Prioritize the technical SEO basics + Google Business Profile + a slow-but-consistent content cadence (1 quality post per month beats 10 thin posts). Fundamentals first, scale later. Our discovery call is free if you want a personalized prioritization.