Original research from Ottawa SEO Inc. We ran 72 Canadian-focused SEO queries through gpt-5.2 between 2026-04-20 and 2026-04-22 and tracked which sources the model cited. This report shows which 15 domains dominate AI-generated answers, what that means for your SEO program, and how Canadian businesses can earn citations.
The way buyers find Canadian businesses is splitting in two. Traditional Google results still drive most clicks, but AI-generated answers — Google's AI Overviews, ChatGPT Search, Perplexity, and the Gemini answer panel — are increasingly the *first* surface a buyer encounters. When AI answers a question, it cites a small handful of sources. Earning those citations is the new top-of-funnel. Losing them is invisible decay.
We couldn't find a single public study tracking which domains AI models cite when answering Canadian SEO queries. So we built one. Between **2026-04-20** and **2026-04-22** we ran **72 prompts** through **gpt-5.2** covering Canadian local SEO, technical SEO, AI search optimization, and tactical questions Canadian business owners actually ask. We logged every domain cited in every answer. The full dataset is open — the methodology and aggregate results are below.
**Model.** gpt-5.2 via the OpenAI API with web-search enabled, default temperature.
**Query set.** 72 prompts across two collection runs (2026-04-20 and 2026-04-22). Queries were tiered: tier-1 generic SEO questions any buyer might ask ("how do I improve local SEO"), tier-2 Canadian-specific questions ("best SEO agency Toronto"), and tier-3 tactical/diagnostic questions ("why is my Google Business Profile not showing in maps").
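The tiered structure above can be sketched as a small JSON document. This is a hypothetical shape for the query list — the real `queries.json` in the repository may use different field names:

```python
import json

# Hypothetical structure for the tiered query list -- the actual
# queries.json in the repository may differ in schema and naming.
queries = {
    "tier_1_generic": [
        "how do I improve local SEO",
    ],
    "tier_2_canadian": [
        "best SEO agency Toronto",
    ],
    "tier_3_tactical": [
        "why is my Google Business Profile not showing in maps",
    ],
}

print(json.dumps(queries, indent=2))
```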
**Capture.** For each query we recorded the response and parsed every URL the model cited. Domains were normalized (lowercased, `www.` prefix stripped) so that *www.example.com* and *example.com* count as one entity; other subdomains, such as *developers.google.com* versus *support.google.com*, were kept distinct, as the rankings below reflect.
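A minimal sketch of that normalization step, assuming the behaviour implied by the published rankings (which keep *developers.google.com* and *support.google.com* distinct, so only the `www.` prefix is dropped) — this is illustrative, not the study's exact code:

```python
from urllib.parse import urlparse

def normalize_domain(url: str) -> str:
    """Lowercase the host and strip a leading 'www.' prefix.

    Other subdomains (developers.google.com, support.google.com)
    stay distinct, matching the published rankings. A sketch of
    the normalization step, not the study's exact implementation.
    """
    host = urlparse(url).netloc.lower()
    # Drop a port if one is present, e.g. example.com:443
    host = host.split(":")[0]
    if host.startswith("www."):
        host = host[len("www."):]
    return host

print(normalize_domain("https://WWW.Example.com/page"))  # example.com
```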
**Limits.** This is a snapshot of one model on two specific dates. AI answers are non-deterministic — the same prompt can return different sources hours apart. Numbers below should be read as directional, not gospel. We are publishing snapshots monthly and will track movement over time.
**Reproducibility.** The query list lives at `private-data/ai-overview-tracker/queries.json` in our public repository, and the raw snapshot JSONs are timestamped. Anyone can rerun the study against their own model and compare.
Across **72 successful queries** spanning **2 collection days**, gpt-5.2 cited **27 unique domains** a total of **214 times**. The distribution is heavily concentrated: the top 15 domains account for roughly 89% of all citations (191 of 214), and the top 3 alone capture roughly **39.7%** of share-of-voice.
The leaders by citation count:
1. **developers.google.com** — 33 citations (15.4% share-of-voice)
2. **clutch.co** — 26 citations (12.1% share-of-voice)
3. **moz.com** — 26 citations (12.1% share-of-voice)
4. **support.google.com** — 22 citations (10.3% share-of-voice)
5. **ahrefs.com** — 18 citations (8.4% share-of-voice)
6. **searchenginejournal.com** — 12 citations (5.6% share-of-voice)
7. **google.com** — 10 citations (4.7% share-of-voice)
8. **semrush.com** — 9 citations (4.2% share-of-voice)
9. **searchengineland.com** — 8 citations (3.7% share-of-voice)
10. **backlinko.com** — 6 citations (2.8% share-of-voice)
11. **webfx.com** — 6 citations (2.8% share-of-voice)
12. **upcity.com** — 5 citations (2.3% share-of-voice)
13. **designrush.com** — 4 citations (1.9% share-of-voice)
14. **brightlocal.com** — 3 citations (1.4% share-of-voice)
15. **searchenginepeople.com** — 3 citations (1.4% share-of-voice)
Four patterns jump out:
1. **Google's own properties dominate.** Between developers.google.com and support.google.com, Google itself is the single largest source of "expert" content the model trusts. Anything you publish that mirrors and links back to Google's own documentation has an outsized chance of being cited.
2. **Established SEO publishers are entrenched.** Moz, Ahrefs, Search Engine Journal, and similar outlets accumulated nearly two decades of authority before LLMs were trained. They start every query with a structural advantage.
3. **Directories punch above their weight.** Clutch.co repeatedly appears as the go-to for "best agency" type queries. For Canadian agencies, a Clutch profile (with verified reviews) is likely worth more in the AI era than it ever was in traditional SEO.
4. **Net-new Canadian-specific authority is thin.** Of the top-15, almost all are US- or globally-headquartered. There is a real opening for Canadian-domain publishers to claim share-of-voice on Canada-specific queries — particularly bilingual (EN/FR) content.
| Rank | Domain | Citations | Share of voice |
|---:|---|---:|---:|
| 1 | developers.google.com | 33 | 15.4% |
| 2 | clutch.co | 26 | 12.1% |
| 3 | moz.com | 26 | 12.1% |
| 4 | support.google.com | 22 | 10.3% |
| 5 | ahrefs.com | 18 | 8.4% |
| 6 | searchenginejournal.com | 12 | 5.6% |
| 7 | google.com | 10 | 4.7% |
| 8 | semrush.com | 9 | 4.2% |
| 9 | searchengineland.com | 8 | 3.7% |
| 10 | backlinko.com | 6 | 2.8% |
| 11 | webfx.com | 6 | 2.8% |
| 12 | upcity.com | 5 | 2.3% |
| 13 | designrush.com | 4 | 1.9% |
| 14 | brightlocal.com | 3 | 1.4% |
| 15 | searchenginepeople.com | 3 | 1.4% |
*Share-of-voice is calculated against the 214 total citation slots logged across the 2 snapshots of this study.*
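The share-of-voice figures above can be reproduced directly from the citation counts. A short sketch using the table's numbers (the 214 total includes 23 citations to domains outside the top 15):

```python
from collections import Counter

# Citation counts from the table above (214 total citation slots).
citations = Counter({
    "developers.google.com": 33, "clutch.co": 26, "moz.com": 26,
    "support.google.com": 22, "ahrefs.com": 18,
    "searchenginejournal.com": 12, "google.com": 10, "semrush.com": 9,
    "searchengineland.com": 8, "backlinko.com": 6, "webfx.com": 6,
    "upcity.com": 5, "designrush.com": 4, "brightlocal.com": 3,
    "searchenginepeople.com": 3,
})

TOTAL_SLOTS = 214  # includes 23 citations outside the top 15

def share_of_voice(domain: str) -> float:
    """Percentage of all logged citation slots held by a domain."""
    return round(100 * citations[domain] / TOTAL_SLOTS, 1)

# The top-3 concentration figure quoted in the findings:
top3 = sum(count for _, count in citations.most_common(3))
print(round(100 * top3 / TOTAL_SLOTS, 1))  # 39.7
```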
If you're a Canadian business reading this and your domain isn't in the top-15, you are not alone — almost no Canadian-domain sites are. Here's what the data implies you should actually do:
**1. Treat AI citations as a distinct ranking surface.** Stop assuming that ranking #1 in Google means being cited by the AI. The two are correlated but not identical. AI models reward content that is *quotable, structured, and sourced*. Long quotable definitions, well-formatted tables, clear "if you remember nothing else..." takeaways, and explicit numerical claims with sources tend to surface in AI answers more reliably than equivalent content that merely ranks #1 in classic SERPs.
**2. Build E-E-A-T entity signals.** Models lean on author bios, organizational footprints, and cross-site mentions when deciding what to cite. A page on an unknown domain with one author and no biographical context is unlikely to be cited even if it's the best-written piece on the topic. Invest in real, verifiable author pages, organization schema, and earned mentions on the sites already in the top-15.
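The organization-plus-author schema described above is typically published as JSON-LD. A minimal sketch, generated here in Python for readability — every name and URL is a placeholder, not a real profile:

```python
import json

# Minimal JSON-LD sketch of the entity signals described above.
# All names and URLs are placeholders, not real profiles.
schema = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.ca/#org",
            "name": "Example SEO Inc.",
            "url": "https://example.ca/",
            # Cross-site mentions: link out to trusted profiles.
            "sameAs": ["https://clutch.co/profile/example-seo"],
        },
        {
            "@type": "Person",
            "@id": "https://example.ca/about/#author",
            "name": "Jane Doe",
            "jobTitle": "Head of SEO",
            # Tie the author entity back to the organization entity.
            "worksFor": {"@id": "https://example.ca/#org"},
        },
    ],
}

print(json.dumps(schema, indent=2))
```

The output would be embedded in a page inside a `<script type="application/ld+json">` tag so crawlers can resolve the author and organization as linked entities.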
**3. Pursue Clutch and high-trust directory profiles aggressively.** For commercial-intent queries (best X agency in Y), the model frequently routes to directories before publisher content. A complete, review-backed Clutch profile is one of the highest-leverage moves an agency can make right now.
**4. Build a "data study" pipeline.** The single content type that disproportionately earns AI citations is original research with concrete numbers. (This very report is an attempt to walk that talk.) Publishing two or three serious data studies per year tends to outperform publishing 50 tactical blog posts in citation outcomes.
**5. For Canadian businesses specifically — claim the geo-modified entity space.** "Canadian SEO statistics", "Canadian PPC benchmarks", "GST/HST e-commerce SEO" — these are queries where US-based sites have very weak content and a Canadian publisher with strong on-page signals can leap into the top-3 with relatively modest investment.
Transparency: in this baseline run, ottawaseo.com was cited **0 times** out of 72 successful queries. That is the honest starting point we are tracking *forward* from. We're publishing this number deliberately — partly as accountability to ourselves, partly because most agencies that publish AI-citation studies bury their own performance.
Our stated goal: claim a share-of-voice position in the top-15 for Canadian-geo SEO queries by Q4 2027. We will publish quarterly snapshots showing whether we are closing the gap or not.
This study covers **one model on 2 days** with **72 prompts**. It is the first dataset of its kind we are aware of for Canadian-context SEO queries — but it is a baseline, not a settled result.
What this study does **not** measure:
- Google AI Overviews specifically (different model, different retrieval pipeline). We will add a parallel AI-Overviews tracker in the next round.
- Click-through behaviour from cited answers. A citation is exposure, not necessarily traffic.
- French-language queries. Coming in the Q3 2026 update.
- Vertical-specific queries (legal, medical, ecommerce). These behave very differently and deserve their own study.
We will publish a refreshed version of this report monthly, with the dataset expanding each cycle. If you want notification when the next edition lands, subscribe to our newsletter.
Free to cite, quote, embed in slides, or republish — credit "Ottawa SEO Inc., State of AI Search Citations in Canada, 2026-04-22" with a link back to this page (/blog/state-of-ai-search-citations-canada-2026/). Journalists, analysts, and educators may use the underlying numbers without restriction. If you want the raw JSON snapshots, contact us and we'll send them.
If you cite this report on your own site, please link back so we can find your coverage and reciprocate where appropriate.
**Which model and dates does this study cover?** gpt-5.2, run between 2026-04-20 and 2026-04-22, covering 72 Canadian-context SEO prompts.

**Does this reflect Google's AI Overviews?** AI Overviews uses a different retrieval pipeline than the OpenAI API. We are building a parallel AI-Overviews tracker and will publish that separately rather than mixing the datasets.

**What should a Canadian business do with these results?** Publish quotable, structured, sourced content; build verifiable author and organization entity signals (schema, real bios, cross-site mentions); claim high-trust directory profiles like Clutch; and prioritize original-research data pieces over tactical how-to volume.

**Will the study be updated?** Yes — monthly snapshots, with quarterly major-version updates that add new query tiers (French-language, vertical-specific, AI Overviews comparison).

**Can I get the raw data?** Yes. Email us via the contact page and we'll send the timestamped snapshot JSONs. The query list is already public in our repository.