Unlike traditional SEO metrics that assume a list of results, inclusion rate treats the answer itself as the battleground. It tells you whether AI systems consider your pages, your brand, or your products relevant and trustworthy enough to mention for a defined set of queries, categories, or buying-stage prompts.
What Inclusion Rate is (and how it's calculated)
Inclusion rate is the percentage of tracked prompts where your brand is present in the AI-generated response. "Present" can mean a direct brand mention, a citation/link to your domain, a product reference, or a quoted excerpt—depending on how your team defines inclusion.
At its core, the calculation looks like this:
Inclusion Rate = (Number of prompts where you are included) / (Total prompts tested) × 100
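As a minimal sketch, the formula above in code (the function name and the 46-of-200 example are illustrative, not from any specific tool):

```python
def inclusion_rate(included_prompts: int, total_prompts: int) -> float:
    """Percentage of tracked prompts where the brand appears in the AI answer."""
    if total_prompts == 0:
        raise ValueError("total_prompts must be positive")
    return included_prompts / total_prompts * 100

# Included in 46 of 200 tracked prompts:
print(inclusion_rate(46, 200))  # → 23.0
```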
The important part isn't the math—it's the rules.
To make inclusion rate useful, you need a consistent measurement spec:
- What counts as inclusion: brand name mention, domain citation, product name, or any of the above
- Where it must appear: in the main answer vs. in a "sources" list vs. in follow-up cards
- Which engines: ChatGPT, Perplexity, Gemini, Google AI Overviews, etc.
- Which prompt set: category terms (e.g., "best project management software"), comparison terms (e.g., "Asana vs Trello"), and problem/solution terms (e.g., "how to run sprint planning")
- How often you test: weekly or monthly, because model outputs drift
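One way to keep these rules consistent is to pin them down in a small, version-controlled spec your whole team tests against. A hypothetical sketch (every field name here is an assumption, not a standard):

```python
# Hypothetical measurement spec -- adjust fields to your own program.
MEASUREMENT_SPEC = {
    "inclusion_counts_as": ["brand_mention", "domain_citation", "product_name"],
    "must_appear_in": "main_answer",  # vs. "sources_list" or "followup_cards"
    "engines": ["chatgpt", "perplexity", "gemini", "google_ai_overviews"],
    "prompt_clusters": {
        "category": ["best project management software"],
        "comparison": ["Asana vs Trello"],
        "problem_solution": ["how to run sprint planning"],
    },
    "test_cadence": "weekly",  # model outputs drift
}
```

Freezing the spec means a change in inclusion rate reflects your work, not a moving target.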
A clean inclusion rate program separates "presence" from "position." You can be the third cited source and still count it as a meaningful inclusion win—especially if competitors are absent.
Why Inclusion Rate matters for AI visibility and brand discoverability
In AI answers, the user often doesn't see the full web. They see a shortlist that the model assembled, and that shortlist heavily shapes consideration.
Inclusion rate matters because it:
- Tracks your share of voice in the answer, not just share of rankings
- Surfaces blind spots where you thought you were competitive (strong SEO, good PR, solid content) but the model still doesn't pick you
- Helps you prioritize GEO/AEO work by prompt clusters, not by pages alone
- Provides a leading indicator before traffic changes show up in analytics
It also forces a strategic reality check: you can have "great content" and still lose if your content isn't quotable, your claims aren't verifiable, or your entity signals are inconsistent. AI systems tend to reward pages that make extraction easy (clean definitions, lists, tables) and make verification possible (named sources, dates, methodology, clear product facts).
How Inclusion Rate shows up in practice (examples marketers recognize)
Here's what inclusion rate looks like when you operationalize it.
Example 1: Category inclusion for a SaaS brand
You track 200 prompts across "best X," "X software for Y," and "X alternatives." Your brand shows up in 46 answers across ChatGPT and Perplexity.
Your inclusion rate is 23%, but the more useful insight is segmentation:
- 40% inclusion on "alternatives" prompts (you're recognized as a competitor)
- 12% inclusion on "best" prompts (you're not a default recommendation)
That tells your team to build stronger comparison pages, tighten product positioning, and publish objective, evidence-backed "what to choose" content that models can cite.
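The segmentation above can be computed from a simple per-prompt log. The counts here are hypothetical but consistent with the example: 32 of 80 "alternatives" prompts and 14 of 120 "best" prompts, for 46 of 200 overall:

```python
from collections import defaultdict

# Hypothetical per-prompt results: (cluster, was the brand included?)
results = (
    [("alternatives", True)] * 32 + [("alternatives", False)] * 48
    + [("best", True)] * 14 + [("best", False)] * 106
)

totals, hits = defaultdict(int), defaultdict(int)
for cluster, included in results:
    totals[cluster] += 1
    hits[cluster] += included  # True counts as 1

for cluster in totals:
    print(f"{cluster}: {hits[cluster] / totals[cluster]:.0%}")
# alternatives: 40%
# best: 12%
```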
Example 2: E-commerce inclusion for a hero product
You track prompts like "best carry-on suitcase for international travel" and "lightweight hard-shell carry-on." If the AI mentions your product name without citing your site, you might count it as partial inclusion, but you'll also want a second metric: cited inclusion rate (the subset where your domain is linked).
That gap is common and actionable: it often points to missing product specs, weak structured data, thin authoritative coverage, or third-party sources outcompeting your own pages.
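Tracking mention inclusion and cited inclusion side by side makes that gap visible. A minimal sketch with hypothetical observations (field names are illustrative):

```python
# Hypothetical prompt-level observations: was the product mentioned,
# and was our own domain actually linked?
observations = [
    {"mentioned": True, "domain_cited": True},
    {"mentioned": True, "domain_cited": False},  # partial inclusion
    {"mentioned": False, "domain_cited": False},
    {"mentioned": True, "domain_cited": False},  # partial inclusion
]

total = len(observations)
mention_rate = sum(o["mentioned"] for o in observations) / total * 100
cited_rate = sum(o["domain_cited"] for o in observations) / total * 100

print(f"mention inclusion: {mention_rate:.0f}%")  # 75%
print(f"cited inclusion:   {cited_rate:.0f}%")    # 25%
print(f"citation gap:      {mention_rate - cited_rate:.0f} pts")
```

A wide citation gap usually means third-party sources, not your own pages, are supplying the model's product facts.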
Example 3: Reputation defense
You track prompts like "Is [Brand] worth it?" and "Problems with [Brand]." A low inclusion rate here can be good (if the prompt is hostile) or disastrous (if you're absent from "is it legit" questions). The fix isn't "more content" generically—it's targeted, factual pages that address concerns directly, with clear policies, support documentation, and proof points.
How to improve Inclusion Rate (a practical playbook)
If your inclusion rate is low, you're usually failing one of three tests: relevance, extractability, or trust.
Start with these actions:
- Define your prompt universe like a product marketer: Build a prompt set by funnel stage (learn, compare, buy, troubleshoot). Keep it stable so changes in inclusion rate reflect your work, not a constantly shifting test. Prompt research is the foundation of a reliable inclusion rate program.
- Audit why you're not included: For prompts where competitors appear, ask: what did the model use? Was it a comparison table, a review site, a definition page, or a forum thread? That's your blueprint.
- Make answers easy to lift: Add a one-sentence "canonical answer" near the top of key pages, follow with short bullets, and use tables for specs and comparisons. If a model can't extract a clean snippet, it will pick someone else.
- Increase verification signals: Use dated facts, cite primary sources, and keep product details consistent across your site and major third-party profiles. In AI answers, consistency often beats clever copy. Source trust signals for AI are what separate brands that get cited from those that get skipped.
- Measure inclusion rate by engine and by intent cluster: A single blended inclusion rate hides the good stuff. Segment by engine (because they behave differently) and by prompt type (because intent drives answer structure).
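The segmented view from that last step can be produced from a flat test log. A sketch with a hypothetical log (engines, clusters, and counts are all made up for illustration):

```python
from collections import defaultdict

# Hypothetical test log: (engine, intent_cluster, included?)
log = [
    ("chatgpt", "best", True), ("chatgpt", "best", False),
    ("chatgpt", "alternatives", True), ("perplexity", "best", False),
    ("perplexity", "alternatives", True), ("perplexity", "alternatives", True),
]

cells = defaultdict(lambda: [0, 0])  # (engine, cluster) -> [hits, total]
for engine, cluster, included in log:
    cells[(engine, cluster)][0] += included
    cells[(engine, cluster)][1] += 1

for (engine, cluster), (h, t) in sorted(cells.items()):
    print(f"{engine:<11} {cluster:<13} {h / t:.0%}")
```

The point of the per-cell breakdown is that a blended 50% can hide a 0% cell—exactly the gap the playbook tells you to attack first.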
Inclusion rate gives you a crisp, executive-friendly read on AI visibility: presence or absence. When you treat it as a disciplined measurement system—clear rules, stable prompts, segmented reporting—you stop guessing and start building predictable inclusion gains across the prompts that actually create revenue.
💡 Key takeaways
- Treat inclusion rate as "share of the answer" for a defined set of prompts, not a vague visibility score.
- Set explicit inclusion rules (mention vs. citation) so the metric stays stable and comparable over time.
- Segment inclusion rate by engine and intent cluster to find where you're truly missing from consideration.
- Use extractable structures (canonical answers, bullets, tables) and verifiable facts to increase selection by AI engines.
- Turn low inclusion prompts into a roadmap: replicate the formats and sources AI engines already prefer, then outperform them with clearer, better-supported content.