Optimize AEO Research is where we turn answer-engine optimization from advice into evidence. The goal is simple: test how answer engines retrieve, mention, and cite pages, then publish the method, results, limits, and next experiments.

This section is built for people who need more than generic AI SEO guidance. It will collect prompt panels, crawler audits, citation studies, before-and-after page rebuilds, and practical field notes from running Optimize AEO as a living test site.

Current research tracks

Track | Question | Status
Answer-engine citations | Which page types get cited most often? | Protocol published
Crawler access | Which crawlers can fetch public source pages? | Ongoing audit
Glossary links | Do anchored definitions improve source clarity? | Instrumented on site
Local tools | Do utility pages earn source visibility? | Ready to test

What we are trying to learn

The central research question is whether a site can become more citation-worthy by improving source architecture: crawl access, canonical pages, glossary links, structured page sections, source maps, and original evidence. That question needs repeated observation because answer engines do not expose a clean ranking report.

Instead of guessing, we will watch how specific pages behave. Does a comparison page get cited for comparison prompts? Does a tool page get surfaced for tool prompts? Does a glossary anchor help clarify entity meaning? Does a crawler-policy page get used when the prompt asks about GPTBot, OAI-SearchBot, or PerplexityBot?
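To make those prompt families concrete, a prompt panel can be kept as a small, versioned structure grouped by intent. Here is a minimal sketch in Python; the intent labels and prompt wording are illustrative placeholders, not the panels used in any published study:

```python
# Reusable prompt panel grouped by intent.
# Intent names and prompt text are illustrative placeholders.
PROMPT_PANEL = {
    "comparison": [
        "Answer engine optimization vs traditional SEO: what changes?",
    ],
    "tool": [
        "Is there a free tool to check which AI crawlers can reach my site?",
    ],
    "glossary": [
        "What does 'citation surface' mean in answer engine optimization?",
    ],
    "crawler-policy": [
        "Should I allow GPTBot, OAI-SearchBot, and PerplexityBot in robots.txt?",
    ],
}

def iter_prompts(panel):
    """Yield (intent, prompt) pairs so every engine is tested on the same ordered set."""
    for intent, prompts in panel.items():
        for prompt in prompts:
            yield intent, prompt

if __name__ == "__main__":
    for intent, prompt in iter_prompts(PROMPT_PANEL):
        print(f"[{intent}] {prompt}")
```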

How the research works

Each study starts with a narrow question, a prompt set, a list of engines, and a logging template. We record the answer, cited URLs, citation surface, exactness of the citation, competing sources, and notes about whether the answer used a page as evidence or merely mentioned the brand.
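To keep the logging template concrete, one observation record might look like the sketch below. The field names are working assumptions for this section rather than a fixed schema, and the example values are illustrative, not real results:

```python
from dataclasses import dataclass, field

@dataclass
class CitationObservation:
    """One logged answer for a single (engine, prompt) pair."""
    engine: str                     # e.g. "perplexity", "chatgpt"
    prompt: str                     # exact prompt text sent to the engine
    answer_summary: str             # short paraphrase of the answer
    cited_urls: list[str] = field(default_factory=list)  # URLs shown as sources
    citation_surface: str = ""      # inline link, source panel, footnote, none
    exact_citation: bool = False    # canonical page cited, not just the domain
    competing_sources: list[str] = field(default_factory=list)
    used_as_evidence: bool = False  # answer relied on the page vs. merely naming the brand
    notes: str = ""

# Illustrative record, not a real result.
obs = CitationObservation(
    engine="perplexity",
    prompt="Which crawlers can fetch public source pages?",
    answer_summary="Listed major AI crawlers and linked a policy page.",
    cited_urls=["https://example.com/crawler-policy"],
    citation_surface="source panel",
    exact_citation=True,
    used_as_evidence=True,
)
```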

The research will be published with limitations. Answer engines change quickly, prompts drift, logged-in experiences vary, and different users may see different answers. That is why the method matters as much as any individual result.

What counts as a useful observation?

A useful observation is specific enough to change publishing behavior. “The site did not appear” is less useful than “Perplexity cited an official documentation page for crawler-policy prompts, while ChatGPT returned a general answer with no visible source.” The second observation tells us which page type may be missing, which source class is winning, and which surface needs more testing.

Every observation should connect back to a practical decision: deepen a page, create a comparison, improve internal links, update llms.txt, adjust robots.txt, add evidence, or stop chasing a prompt family that does not produce visible citations.
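The robots.txt decision in particular can be checked mechanically before any content work. Below is a minimal sketch using Python's standard urllib.robotparser, assuming the documented user-agent tokens GPTBot, OAI-SearchBot, and PerplexityBot, with example.com standing in for the site under audit:

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain; replace with the site under audit.
ROBOTS_URL = "https://example.com/robots.txt"
PAGE_URL = "https://example.com/crawler-policy"

# Documented user-agent tokens for common AI crawlers.
AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "PerplexityBot"]

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the live robots.txt

for agent in AI_CRAWLERS:
    verdict = "allowed" if parser.can_fetch(agent, PAGE_URL) else "blocked"
    print(f"{agent}: {verdict} for {PAGE_URL}")
```

This only separates policy-level access from content quality; whether a crawler actually fetched the page still has to be confirmed in server logs.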

Research assets

  • Prompt panels: reusable sets of prompts grouped by intent.
  • Citation tracker exports: CSV logs of engines, prompts, cited URLs, and result types (a small sketch follows this list).
  • Source-type tables: breakdowns of whether answers cite tools, guides, official docs, forums, or comparison pages.
  • Before-and-after page notes: records of what changed on a page before citation behavior is rechecked.
  • Limitations: notes on sample size, engine availability, personalization, and timing.
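Here is a minimal sketch of how the citation tracker exports and source-type tables could be produced, assuming the hypothetical filename citation_log.csv and the column names listed in FIELDNAMES below:

```python
import csv
import os
from collections import Counter

# Hypothetical column names for the citation tracker export.
FIELDNAMES = ["engine", "prompt", "cited_url", "result_type", "source_type"]

def log_citation(row, path="citation_log.csv"):
    """Append one observation per cited URL, writing a header for a new file."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

def source_type_breakdown(path="citation_log.csv"):
    """Count how often answers cite tools, guides, official docs, forums, or comparisons."""
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row["source_type"] for row in csv.DictReader(f))

if __name__ == "__main__":
    # Illustrative row, not a real result.
    log_citation({
        "engine": "perplexity",
        "prompt": "best free AEO crawler check",
        "cited_url": "https://example.com/tools/crawler-check",
        "result_type": "source panel",
        "source_type": "tool",
    })
    print(source_type_breakdown())
```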

What makes this useful

The strongest AEO advice should come from repeated observation. If a comparison page gets cited more than a glossary page, that matters. If OAI-SearchBot can fetch a page but the answer still ignores it, that matters. If a source panel cites a weak URL instead of the canonical page, that matters too.

This research section exists to collect those observations in public and turn them into better publishing decisions.

Planned studies

Study | Why it matters
Which pages get cited? | Shows whether tools, guides, comparisons, or official docs win for AEO prompts.
Do glossary links change retrieval clarity? | Tests whether anchored definitions improve the site’s internal source graph.
Can local tools attract AI visibility? | Tests whether browser-only utilities create source-worthy pages.
How do AI crawler rules affect visibility? | Separates access problems from content-quality problems.