llms.txt is a Markdown file at /llms.txt that lists your highest-value pages with one-line descriptions, grouped into sections. There is also a longer variant — /llms-full.txt — that includes the full text of those pages so an LLM can ingest them in one fetch.
It is not a robots-style policy file (that is robots.txt plus the Google-Extended token), and it is not currently used by Google for ranking. It is a hint to LLM-based agents and copilots about which content you most want them to read and cite.
The most common failure mode is treating llms.txt as a sitemap dump. Do not list every page. Curate. The whole point is to tell the model which 30-100 URLs you most want an agent to read and cite when it visits your site.
Group by topic, not by URL structure. An LLM scanning your llms.txt is trying to figure out what you are an authority on; help it by clustering aggressively.
Write descriptions in declarative, fact-dense language. 'Step-by-step plumbing license guide for Ontario contractors' beats 'Our great guide for plumbers in Ontario.'
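Putting those three points together, a curated file for a hypothetical trades-training site might look like this (the company name, URLs, and descriptions are illustrative only, not a real example):

```markdown
# Acme Trades Training

> Licensing and certification guides for skilled-trades contractors in Ontario.

## Licensing guides
- [Ontario plumbing license guide](https://example.com/plumbing-license-ontario): Step-by-step licensing requirements, fees, and exam prep for Ontario plumbing contractors.
- [Electrician 309A certification](https://example.com/electrician-309a): Apprenticeship hours, exam structure, and renewal rules for the 309A license.

## Pricing and costs
- [Plumbing apprenticeship costs](https://example.com/apprenticeship-costs): Tuition, tool, and union-fee breakdown with 2025 figures.
```

Note the shape: one H1, a one-line blockquote summary, topical H2 sections, and one fact-dense line per URL.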
On a static site (Vite, Next.js export, Hugo, Eleventy): drop the file in your static-assets directory (public/ for Vite and Next.js, static/ for Hugo, a passthrough copy for Eleventy) and deploy. Done in two minutes.
On WordPress: use a small plugin or a child-theme functions.php hook to serve the file via a custom rewrite rule. The Yoast SEO and Rank Math teams both ship llms.txt features as of mid-2025.
On Shopify: use a theme template or an app like Logeix to generate it from your product taxonomy.
On a headless CMS: generate at build time from your content collection. We do this on ottawaseo.com — the file is regenerated on every deploy from our content store.
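As a rough sketch of that build step, the script below writes public/llms.txt from a content collection on every build. The `Page` shape, the inline sample data, and the output path are assumptions for illustration, not the ottawaseo.com implementation; in a real pipeline the array would come from your CMS query.

```typescript
// Regenerate public/llms.txt from a content collection at build time.
import { writeFileSync } from "node:fs";

type Page = { section: string; title: string; url: string; description: string };

// Placeholder data; a real build would pull this from the CMS / content store.
const pages: Page[] = [
  { section: "Guides", title: "Local SEO audit checklist", url: "https://example.com/guides/local-seo-audit", description: "Step-by-step audit checklist for local service businesses." },
  { section: "Guides", title: "Google Business Profile setup", url: "https://example.com/guides/gbp-setup", description: "Field-by-field setup instructions with category recommendations." },
  { section: "Case studies", title: "HVAC lead growth", url: "https://example.com/case-studies/hvac", description: "12-month organic lead growth for an HVAC contractor." },
];

// Group pages by section so the output clusters by topic.
const sections = new Map<string, Page[]>();
for (const page of pages) {
  const group = sections.get(page.section) ?? [];
  group.push(page);
  sections.set(page.section, group);
}

let out = "# Example Company\n\n> One-line summary of what the site is an authority on.\n";
for (const [section, group] of sections) {
  out += `\n## ${section}\n\n`;
  for (const p of group) {
    out += `- [${p.title}](${p.url}): ${p.description}\n`;
  }
}

writeFileSync("public/llms.txt", out, "utf8");
```

Wire it into the build command (for example as a prebuild step) so the file can never drift out of date.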
Not yet: Google has not committed to using it as a ranking signal. But Google-Extended (the robots.txt token that governs Google's AI training and grounding crawls) reads the file, and we have anecdotal evidence from clients that pages listed in llms.txt are recrawled more frequently.
If your top pages are reasonably short (under 50K tokens combined), yes — it lets an agent ingest your authoritative content in one shot. If your archive is huge, stick with llms.txt and let the engines crawl the URLs.
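If you want a quick read on which side of that threshold you are on, a rough estimate is enough. The sketch below assumes a hypothetical list of page URLs and the common four-characters-per-token heuristic; it is not a precise tokenizer.

```typescript
// Rough check of the "under ~50K tokens combined" heuristic before shipping /llms-full.txt.
const pageUrls = [
  "https://example.com/guides/local-seo-audit",
  "https://example.com/guides/gbp-setup",
];

async function estimateCombinedTokens(urls: string[]): Promise<number> {
  let totalChars = 0;
  for (const url of urls) {
    const html = await (await fetch(url)).text();
    // Crude tag strip; a real pipeline would extract just the article body.
    totalChars += html.replace(/<[^>]+>/g, " ").length;
  }
  return Math.round(totalChars / 4); // ~4 characters per token for English prose
}

estimateCombinedTokens(pageUrls)
  .then((tokens) =>
    console.log(tokens <= 50_000 ? `~${tokens} tokens: llms-full.txt is viable` : `~${tokens} tokens: stick with llms.txt`)
  )
  .catch(console.error);
```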
Fetch it manually with curl, validate the Markdown, and run our AI Citability Checker on your domain — it tests for llms.txt presence and parses the file for structural issues.
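If you prefer to script the spot check, something like the following covers the basics. It is a minimal sketch, not the AI Citability Checker: the domain argument and the structural checks (an H1, at least one H2 section, a sane number of Markdown links) are assumptions based on the format described above. Requires Node 18+ for the built-in fetch.

```typescript
// Quick sanity check: does /llms.txt exist and look like a curated Markdown index?
const domain = process.argv[2] ?? "example.com";

async function checkLlmsTxt(host: string): Promise<void> {
  const res = await fetch(`https://${host}/llms.txt`);
  if (!res.ok) {
    console.error(`No llms.txt found (HTTP ${res.status})`);
    return;
  }
  const text = await res.text();
  const problems: string[] = [];
  if (!/^# .+/m.test(text)) problems.push("missing top-level `# Title` heading");
  if (!/^## .+/m.test(text)) problems.push("no `## Section` groupings");
  const links = text.match(/\[[^\]]+\]\([^)]+\)/g) ?? [];
  if (links.length === 0) problems.push("no Markdown links to pages");
  if (links.length > 150) problems.push(`too many links (${links.length}); curate harder`);
  console.log(problems.length ? problems.join("\n") : `Looks sane: ${links.length} links`);
}

checkLlmsTxt(domain).catch(console.error);
```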
On every meaningful content change. We regenerate on every deploy. At minimum, monthly.