Our AEO methodology tests answer-engine visibility by separating access, discovery, retrieval, answer behavior, and visible citation credit. We do not treat one prompt result as proof of a ranking factor, and we do not call a page successful just because it is mentioned once.

What does this methodology measure?

This methodology measures whether a page is accessible to answer engines, whether it can be discovered through stable paths, whether the right passage is retrievable, whether the answer is accurate, and whether the source receives visible credit.

| Layer | What we check | Why it matters |
| --- | --- | --- |
| Access | Status code, robots.txt, bot-specific blocking, snippet controls | A blocked page cannot reliably become a source. |
| Discovery | Sitemap, internal links, llms.txt, canonical URL | Systems need stable paths to the page. |
| Retrieval | Prompt-to-section match and passage clarity | The right section has to be selected. |
| Answer | Accuracy, completeness, caveats | Being used badly is not a win. |
| Citation | Inline citation, panel citation, related link, absent source | Visible credit changes the business value. |
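The access layer is the one that is easiest to check mechanically. A minimal sketch of a per-bot robots.txt check follows; the crawler names are illustrative and should be verified against each platform's current documentation, and a full access check would also cover status codes and snippet controls.

```python
import urllib.robotparser

# Illustrative AI crawler user agents; verify against platform docs.
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def blocked_bots(robots_txt: str, url: str) -> list[str]:
    """Return the AI crawlers that this robots.txt disallows for the URL."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not rp.can_fetch(bot, url)]

# Example policy: GPTBot blocked site-wide, everything else allowed.
ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

print(blocked_bots(ROBOTS, "https://example.com/guide"))  # ['GPTBot']
```

Running this against a page's live robots.txt before a prompt panel saves you from attributing a missing citation to content quality when the page was simply blocked.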

How do we run prompt checks?

Prompt checks are grouped by intent and repeated over time. We record the prompt, engine, date, logged-in state when known, answer behavior, cited URLs, and strongest competing source.

A prompt panel usually includes branded prompts, category prompts, comparison prompts, and operational prompts. For example, a crawler-access test might include “which bots matter for ChatGPT search visibility,” “does blocking GPTBot block ChatGPT search,” and “how should I configure robots.txt for AI crawlers.”

How do we classify citation outcomes?

We classify citation outcomes by what the user can actually see. A source shown only in a collapsed panel is not the same as an inline citation, and a brand mention is not the same as an exact URL citation.

  • Exact citation: the correct URL is visibly cited.
  • Wrong-page citation: the domain is cited but the wrong URL receives credit.
  • Brand mention: the brand or concept appears without a citation.
  • Competitor citation: another source owns the answer.
  • No source surface: the product does not expose sources for that answer.
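The five outcomes above can be applied mechanically once a check is recorded. The sketch below uses snake_case labels for the taxonomy; the precedence order and the `not_mentioned` fallback (for answers that show sources but neither cite nor mention the brand) are our assumptions, not part of the taxonomy itself.

```python
from urllib.parse import urlparse

def classify(target_url: str, brand: str, answer_text: str,
             cited_urls: list[str], sources_shown: bool) -> str:
    """Map what the user can actually see to one citation-outcome label."""
    if not sources_shown:
        return "no_source_surface"        # product exposes no sources here
    if target_url in cited_urls:
        return "exact_citation"           # the correct URL is visibly cited
    target_domain = urlparse(target_url).netloc
    if any(urlparse(u).netloc == target_domain for u in cited_urls):
        return "wrong_page_citation"      # right domain, wrong URL
    if cited_urls:
        return "competitor_citation"      # another source owns the answer
    if brand.lower() in answer_text.lower():
        return "brand_mention"            # brand appears without a citation
    return "not_mentioned"                # assumed fallback, not in the taxonomy
```

Note that URL matching is deliberately exact here; in practice you may want to normalize trailing slashes and tracking parameters before comparing.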

How do we avoid overclaiming?

We avoid overclaiming by distinguishing observation from causation. If a page appears in one answer, that is an observation. If a rewrite appears to improve citation behavior across a repeated prompt panel, that is evidence worth discussing. It is still not proof that one factor caused the change.
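What "evidence across a repeated prompt panel" means in practice is a rate, not a single hit. A minimal sketch, assuming the snake_case outcome labels used in our own records:

```python
def citation_rate(outcomes: list[str]) -> float:
    """Share of panel runs where the domain received a visible citation."""
    cited = sum(1 for o in outcomes
                if o in ("exact_citation", "wrong_page_citation"))
    return cited / len(outcomes)

# Hypothetical panel results before and after a rewrite (four runs each).
before = ["brand_mention", "competitor_citation", "exact_citation", "brand_mention"]
after = ["exact_citation", "exact_citation", "wrong_page_citation", "competitor_citation"]

print(citation_rate(before), citation_rate(after))  # 0.25 0.75
```

A shift like 0.25 to 0.75 across repeated runs is worth discussing; a single 0-to-1 flip on one prompt is an observation, nothing more.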

What do we require before publishing a case study?

A case study should identify the target page, prompt family, source types, visible citations, competing sources, and the limits of the test. It should link to primary sources when platform behavior or technical rules are being described.

What does a good AEO test sheet include?

| Field | Example |
| --- | --- |
| URL | Canonical page being tested |
| Prompt | “how to get cited by ChatGPT search” |
| Engine | ChatGPT search, Perplexity, Claude, Gemini, AI Overviews |
| Outcome | Mentioned, cited, wrong URL, competitor cited |
| Evidence | Screenshot, answer text, cited URL, date |
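A test sheet with these fields exports cleanly to CSV for tracking over time. A small sketch; the example values are illustrative.

```python
import csv
import io

FIELDS = ["url", "prompt", "engine", "outcome", "evidence"]

row = {
    "url": "https://example.com/ai-crawler-guide",  # canonical page under test
    "prompt": "how to get cited by ChatGPT search",
    "engine": "ChatGPT search",
    "outcome": "cited",
    "evidence": "screenshot 2024-06-01; cited URL matched",  # illustrative
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

One sheet per prompt family keeps the evidence column honest: every outcome claim stays paired with the screenshot and date that back it.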
