This first Optimize AEO research pass is a source-candidate baseline. It does not claim to measure final AI citations across every answer engine. Instead, it measures the first layer that answer engines often depend on: which public source types are already visible for the same prompt families we plan to test in ChatGPT, Perplexity, Google AI features, Claude, and Copilot.

The purpose is practical. Before asking whether Optimize AEO is cited, we need to know what kinds of pages are currently visible for the prompts we care about. If the source landscape is mostly official documentation, a thin opinion post will not be enough. If the source landscape includes focused comparison pages and tool pages, our site can compete by building better versions of those assets.

Short finding

The first baseline suggests five source types matter most for AEO prompt families: long-form guides, official documentation, focused comparison pages, actual tool pages, and research/methodology pages. Glossary-style definitions are useful, but they need to be connected to deeper pages if they are going to compete for broad prompts.

What we tested

We reviewed live search-visible source candidates for eight prompt families: AEO definition, AEO vs SEO comparison, GPTBot vs OAI-SearchBot, llms.txt vs robots.txt, free AEO tools, how to get cited by answer engines, AI citation tracking, and empirical citation research. The point was to classify source types, not to declare final ranking winners.
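To make that classification repeatable, each candidate was bucketed into one of the five source types from the finding above. The sketch below is illustrative only: it assumes simple keyword heuristics over the URL and title, which is not the exact rubric used in this pass.

```python
# Illustrative sketch of source-type classification, assuming keyword
# heuristics over URL and title. Not the exact rubric from this pass.

SOURCE_TYPE_RULES = {
    "official documentation": ("docs.", "/documentation", "developer."),
    "comparison page": (" vs ", "-vs-", "compared"),
    "tool page": ("/tools/", "tracker", "generator", "checker"),
    "research/methodology": ("arxiv", "study", "methodology"),
}

def classify_source(url: str, title: str) -> str:
    """Bucket a search-visible source candidate into a coarse source type."""
    haystack = f"{url} {title}".lower()
    for source_type, keywords in SOURCE_TYPE_RULES.items():
        if any(keyword in haystack for keyword in keywords):
            return source_type
    # Everything unmatched defaults to the broadest class we observed.
    return "long-form guide"

print(classify_source("https://example.com/aeo-vs-seo", "AEO vs SEO, compared"))
# -> comparison page
```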

| Prompt family | Visible source pattern | Implication for Optimize AEO |
| --- | --- | --- |
| AEO definition | Long-form marketing guides and specialist AEO guides | Our definition page needs depth, examples, author trust, and internal links. |
| AEO vs SEO | Broad AEO guides often absorb comparison intent | Our comparison page should stay sharper and more practical than broad guides. |
| GPTBot vs OAI-SearchBot | Official crawler documentation is the strongest source class | Our page should explain official docs clearly rather than trying to replace them. |
| llms.txt vs robots.txt | Comparison articles and skeptical reference guides both appear | Balanced caveats are an advantage because the topic is hype-prone. |
| Free AEO tools | Actual tool pages appear beside tool roundups | The local tools workbench deserves dedicated landing pages and examples. |
| How to get cited | Playbook-style implementation guides appear | We need engine-specific and workflow-specific how-to pages. |
| AI citation tracking | Measurement frameworks appear | The Citation Tracker should be tied to a serious methodology page. |
| Empirical research | Academic and framework papers appear | The research hub should cite studies and publish repeatable methods. |

Dataset

The source-candidate rows are available as a CSV: download the first AEO source-candidate baseline.
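For anyone working with the export, here is a minimal loading sketch. The filename and the column names (prompt_family, url, source_type) are assumptions about the CSV layout, not a documented schema.

```python
import csv
from collections import defaultdict

# Minimal loading sketch; filename and column names are assumed, not documented.
with open("aeo-source-candidate-baseline.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

types_by_family: dict[str, set[str]] = defaultdict(set)
for row in rows:
    types_by_family[row["prompt_family"]].add(row["source_type"])

for family, types in sorted(types_by_family.items()):
    print(f"{family}: {', '.join(sorted(types))}")
```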

What surprised me

The biggest useful surprise is that tool pages can show up for tool intent. That sounds obvious, but it matters: many AEO sites talk about tools without shipping any. If answer systems and search systems can see a real utility page, that page makes a stronger case for tool intent than a generic listicle does.

The second surprise is how much caveats matter for llms.txt. Some pages frame llms.txt as the new robots.txt for AI. Others are more skeptical and point out that major platforms have not universally confirmed support for third-party llms.txt files. The skeptical pages matter because they give readers calibrated claims instead of hype. For AEO, accurate caveats are not a weakness; they are a marker of source quality.
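Those caveats are easy to verify firsthand. The sketch below, assuming only the Python standard library and a placeholder origin, checks whether a site serves /robots.txt and /llms.txt at all.

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def file_status(origin: str, path: str) -> int | None:
    """Return the HTTP status for origin + path, or None on network failure."""
    request = Request(origin + path, headers={"User-Agent": "aeo-baseline-check"})
    try:
        with urlopen(request, timeout=10) as response:
            return response.status
    except HTTPError as err:  # served, but with an error status
        return err.code
    except URLError:
        return None

origin = "https://example.com"  # placeholder: the site under review
print("robots.txt:", file_status(origin, "/robots.txt"))
print("llms.txt:", file_status(origin, "/llms.txt"))
```

A 200 for /llms.txt only shows that the file is served; it says nothing about whether any major answer engine consumes it, which is exactly the caveat the skeptical pages make.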

What this means for our site

Optimize AEO should not try to win by publishing more generic AEO explainers. The site needs a source cluster for every major prompt family: a hub, a comparison page, a tool or template, a glossary entry, a methodology note, and a research observation. That is the pattern most likely to give answer systems multiple ways to understand the site.
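Purely as an illustration, that cluster pattern can double as a coverage checklist in code; every slug below is a hypothetical placeholder rather than a live URL. The per-prompt-family needs follow the sketch.

```python
# Illustrative only: the cluster pattern expressed as a coverage checklist.
# Every slug is a hypothetical placeholder, not a live Optimize AEO URL.
CLUSTER_ROLES = {
    "hub": "/answer-engine-optimization",
    "comparison": "/aeo-vs-seo",
    "tool_or_template": "/tools/crawler-policy-checker",
    "glossary": "/glossary/answer-engine-optimization",
    "methodology": "/research/methodology",
    "research_note": "/research/source-candidate-baseline",
}

def missing_roles(existing_slugs: set[str]) -> list[str]:
    """Return the cluster roles a prompt family still has no page for."""
    return [role for role, slug in CLUSTER_ROLES.items()
            if slug not in existing_slugs]
```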

  • Definition prompts need the Answer Engine Optimization page, glossary anchors, and methodology support.
  • Comparison prompts need pages like AEO vs SEO, llms.txt vs robots.txt, and GPTBot vs OAI-SearchBot.
  • Crawler prompts need official-doc-aware explanations and a crawler policy tool (see the sketch after this list).
  • Tool prompts need real local utilities, not just descriptions.
  • Measurement prompts need the Citation Tracker and research protocol pages.
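For the crawler policy tool mentioned above, a minimal starting point, assuming only the standard library and a placeholder site, would check what each OpenAI crawler is allowed to fetch. GPTBot and OAI-SearchBot are OpenAI's published user-agent tokens.

```python
from urllib.robotparser import RobotFileParser

# Starting point for a crawler policy check; the site below is a placeholder.
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

page = "https://example.com/answer-engine-optimization"
for agent in ("GPTBot", "OAI-SearchBot"):
    print(f"{agent} may fetch {page}: {parser.can_fetch(agent, page)}")
```

RobotFileParser honors per-user-agent groups, so a site that blocks GPTBot but allows OAI-SearchBot will show up clearly here.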

Actions from this pass

This baseline creates the next content backlog:

  1. Build engine-specific pages for ChatGPT citations, Perplexity citations, Google AI Overviews, and Copilot visibility.
  2. Create a stronger citation-tracking guide that links the tracker, methodology, and research dataset.
  3. Add glossary terms for citation selection, citation absorption, source candidate, and source-type classification.
  4. Run the next panel directly inside answer engines and log visible citations with the AI Citation Tracker (a log-schema sketch follows this list).
  5. Publish a results page that separates source candidates from actual cited URLs.
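The sketch below shows one possible log schema for step 4. It is an assumption about what the AI Citation Tracker could record, not its actual format.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Assumed log schema for step 4; a sketch, not the tracker's actual format.
LOG_PATH = Path("citation-log.csv")
FIELDS = ["timestamp", "engine", "prompt_family", "prompt", "cited_url"]

def log_citation(engine: str, prompt_family: str, prompt: str, cited_url: str) -> None:
    """Append one observed citation, writing the header on first use."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "engine": engine,
            "prompt_family": prompt_family,
            "prompt": prompt,
            "cited_url": cited_url,
        })

log_citation("perplexity", "AEO definition",
             "what is answer engine optimization",
             "https://example.com/answer-engine-optimization")
```

Keeping the log as flat CSV rows makes it trivial to separate source candidates from actually cited URLs in the results page planned in step 5.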

Limitations

This is a baseline, not a final citation study. Search-visible source candidates are not the same as answer-engine citations. Some answer engines use live search, some use search indexes, some expose citations inconsistently, and some give different answers by user, region, or product surface.

That limitation is exactly why this page exists. It gives us a clean starting map before we run direct answer-engine prompt panels.

Sources reviewed