Most of what gets sold as AEO is just SEO with new vocabulary. Take the old playbook, add some question-form headings, sprinkle FAQ schema, mention ChatGPT in the meta description, and wait for citations to roll in. I’ve watched teams do exactly this for the past year, and the citations don’t come. Then they publish more pages.
The problem isn’t that those teams are working too little. It’s that the framing is wrong. AEO isn’t SEO for AI. The two disciplines share infrastructure but answer different questions, and pretending they’re the same is the most common mistake in the field right now.
This is an opinion piece. I’ll mark where I’m making predictions versus where I’m citing evidence. Disagree where you want.
tl;dr
The lazy version of AEO says: take old SEO, add question headings, sprinkle schema, wait for citations. That’s not a strategy. AEO is source architecture — deciding what your brand should be cited for, building the page that deserves it, and making sure answer engines can reach, understand, and verify it. The work is harder, less faddish, and produces less measurable output in the short term. That’s why most teams are skipping it.
What SEO got right (and AEO inherits)
I want to be clear about something before criticizing the field: classic SEO is not dead, and AEO inherits real fundamentals from it.
Google says explicitly that AI features rely on Search systems and that pages need to be eligible for Search with snippets to appear as supporting links in AI experiences. Translation: if your page can’t rank in classic search, it can’t be cited in Google AI Overviews. The infrastructure layer — crawlability, indexability, internal linking, schema, page speed, mobile rendering, content structure — still matters. SEO got that part right.
What AEO inherits and keeps:
- Make pages crawlable
- Make important content available as text, not buried behind JavaScript
- Use structured data accurately
- Build internal links that express authority
- Keep content current with visible dates
This is the SEO foundation. Skip it and AEO can’t work. Anyone telling you SEO is obsolete is selling panic, not analysis.
But this is the foundation, not the building. The building is different.
The shift: from pages to claims
Classic SEO trained teams to think in pages and rankings. AEO forces teams to think in claims and sources.
That sounds like a small distinction until you watch an answer engine do this:
- Mention your brand without citing you
- Cite your help doc instead of your product page
- Use a third-party article to explain your own product
- Recommend a competitor while your page ranks well in classic search
- Pull a passage that’s technically accurate but commercially useless
None of that is captured by classic SEO metrics. Position, impressions, clicks, traffic — these still matter, but they don’t tell the whole story. In answer engines, the unit of visibility is often the claim, not the page.
The question is no longer “did we rank?” It’s: when a system answers this question, what source should it trust enough to cite — and is that source us?
If your answer is “we don’t know,” you don’t have an AEO problem yet. You have a measurement problem. You can’t optimize for outcomes you can’t see.
Schema is not a personality transplant
This is the section that’s going to annoy people who sell schema-as-AEO.
Schema is genuinely useful. Used honestly, it clarifies page type, authorship, dates, products, reviews, events, and entity relationships. It’s necessary infrastructure. I’m not against schema.
But schema doesn’t make a vague page authoritative. This is where AEO advice gets silly: take a thin page, add FAQ schema, call it answer-engine optimized. That isn’t optimization. That’s decoration.
Two things to understand here:
The schema landscape has moved. Google restricted FAQ rich results to government and health sites in 2023, then tightened further with their March 2026 core update. HowTo rich results are deprecated entirely. The “add FAQ schema for AEO” advice that circulated heavily in 2024 isn’t really a thing anymore — those rich results aren’t displaying for most sites.
Schema’s role is shifting from rich-result trigger to entity verification signal. What matters for AI Mode source selection is accurate Article markup that matches your visible content. The role isn’t “trigger a fancy SERP feature.” It’s “help the engine verify what your page actually claims.” If your schema and your visible content tell different stories, the engine trusts visible content. Schema that lies about the page is worse than no schema at all.
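To make "markup that matches your visible content" concrete, here is a minimal sketch of Article JSON-LD where every value mirrors something a reader can see on the page. All values are placeholders; the field set is the baseline Article properties from schema.org, not a guaranteed citation trigger.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The exact headline visible on the page",
  "datePublished": "2026-01-15",
  "dateModified": "2026-04-02",
  "author": {
    "@type": "Person",
    "name": "The visible byline, spelled identically"
  }
}
```

The test is simple: if a field in the markup has no visible counterpart on the page, either surface it or remove it.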
If the visible page doesn’t answer the question directly, markup isn’t the problem. If the page makes claims without evidence, markup isn’t the problem. If the page hides the useful explanation under marketing copy, markup isn’t the problem. Markup amplifies clarity that’s already there. It can’t create clarity from absence.
The source-of-truth page is the missing asset
Most companies don’t have a content problem. They have a source-of-truth problem.
They have:
- Blog posts that half-explain the product
- Help docs that answer narrow questions
- Product pages that avoid specifics
- Case studies that make claims but hide the method
- Comparison pages that read like sales collateral
- PDFs containing the real evidence, disconnected from the web pages people actually find
Then they wonder why answer engines cite someone else.
The answer is often embarrassing: the third-party page is clearer than yours. If a review site explains your pricing, feature differences, and limitations more directly than your own site does, an answer engine has a reason to use it. If a Reddit thread contains the only plain-language explanation of a workflow, it may become more useful than your polished page.
This is the move that separates teams that succeed at AEO from teams that don’t. The successful teams identify, page by page, which question their site should be the canonical source for — and then make those pages worth the job. (For the workflow on this, see How to ship pages that answer engines can cite.)
The unsuccessful teams just publish more.
Crawler policy is editorial policy now
Crawler access used to feel like a technical or legal decision. Someone in DevOps wrote robots.txt, someone in legal worried about training data, and content teams kept publishing.
In AEO, crawler policy is also editorial policy. OpenAI documents separate bots for different purposes. So does Anthropic. A blunt “block AI” policy can have consequences far beyond training opt-outs — including removing your visibility from ChatGPT search and blocking Claude users from retrieving your pages when they ask Claude to read them.
This doesn’t mean every site should allow every AI crawler. That would be just as lazy as blocking everything. It means the decision needs nuance. Do you want to appear in ChatGPT search? Do you want Claude to retrieve your docs when a user asks? Do you want to opt out of training where a platform provides that control? These are separate questions with separate answers. (For the technical details, see the robots.txt vs llms.txt reference.)
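One way those separate answers show up in practice is a robots.txt that treats search, retrieval, and training bots differently. This is one possible policy, not a recommendation; the bot names come from the OpenAI and Anthropic crawler docs listed under Sources, and you should verify them against the current documentation before shipping.

```
# Allow search and user-triggered retrieval; opt out of training crawls.
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: ClaudeBot
Disallow: /
```

Note how a blanket "block AI" rule would collapse five distinct decisions into one, which is exactly the laziness the paragraph above warns against.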
The editorial implication: AEO teams need to be in the room when crawler policy is set. Otherwise, content teams will optimize pages that infrastructure quietly makes unreachable. I’ve seen this happen at three different companies in the past year — perfectly written content, well-structured, schema in place, blocked at the bot level.
The best AEO work looks boring from the outside
If you came here expecting tactics, this is going to disappoint you.
The best AEO work doesn’t look like a hack. It looks like disciplined publishing:
- Clear source-of-truth pages
- Methodology pages for data claims
- Comparison pages that admit tradeoffs
- Documentation that answers actual implementation questions
- Author and date signals that make freshness obvious
- Internal links that show which page owns which claim
- Prompt panels that test whether answer engines describe the brand accurately
- Logs that separate mentions from citations
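The last item above, separating mentions from citations, fits in a log schema small enough to sketch here. The structure and field names are hypothetical, and real source extraction depends on each engine's output format; the point is only that "mentioned" and "cited" are distinct columns.

```python
from dataclasses import dataclass

@dataclass
class AnswerRow:
    """One row per (prompt, engine) run; field names are illustrative."""
    prompt: str
    engine: str      # e.g. "chatgpt", "ai-overviews", "perplexity"
    mentioned: bool  # brand named anywhere in the answer text
    cited: bool      # a link to our domain appears among the sources
    accurate: bool   # filled in by a human reviewer, not automated

def classify(answer_text: str, source_urls: list[str],
             brand: str, domain: str) -> tuple[bool, bool]:
    """A mention is the brand in prose; a citation is our URL in the sources."""
    mentioned = brand.lower() in answer_text.lower()
    cited = any(domain in url.lower() for url in source_urls)
    return mentioned, cited
```

For example, `classify("Acme handles this well", ["https://thirdparty.example/review"], "Acme", "acme.com")` returns a mention without a citation, which is precisely the case classic SEO metrics never surface.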
This isn’t glamorous. It’s also much harder to fake than a 2,000-word “ultimate guide to AEO.”
The teams that win AEO won’t be the teams that publish the most AI-flavored content. They’ll be the teams that make the web’s job easier — identify the right answer, support it with evidence, and keep it current.
That’s a less profitable framing for consultants. It’s a more honest one for the field.
The question that fixes everything
The most revealing AEO question I hear is usually phrased like this: “How do we get ChatGPT to cite us?”
It’s the wrong question. A better one is: what would make us the best source for this answer?
That question changes the work. It might lead to a better page. It might lead to original research. It might lead to a public methodology. It might lead to fixing documentation. It might lead to digital PR because the answer engine needs external corroboration. Sometimes it leads to the uncomfortable conclusion that you shouldn’t be cited yet.
That’s fine. AEO should have standards. If a brand has no proof, no clear page, no accessible documentation, and no external validation, the honest answer isn’t “add FAQ schema.” The honest answer is build something worth citing.
The publications and brands that take this seriously will be cited. The ones chasing tactics will keep publishing pages that don’t move and wondering why.
What this means for the field
I’ll close with three predictions, marked clearly as opinion.
1. The AEO-as-SEO framing will get worse before it gets better. SEO agencies have a financial incentive to position AEO as a checkbox extension of what they already sell. Conferences will run “AEO tracks” that are 80% schema and 20% content tactics. This isn’t dishonest exactly — it’s just the simplest path through an existing market.
2. The teams that quietly do the harder work will accrue real moats. Source architecture compounds. A library of clear, methodology-backed source-of-truth pages, with consistent entity signals across owned and third-party content, is much harder for competitors to replicate than another batch of “ultimate guide” content. The compounding takes longer to show up — six months to two years before differences become obvious in citation patterns.
3. The industry will eventually invent a new acronym. AEO is the current term. GEO (“Generative Engine Optimization”) is competing for the same space. Some new term will emerge that captures the actual work better. I don’t have strong feelings about the vocabulary. I have strong feelings about the work.
What to do Monday morning
- Pick one question your buyers actually ask before choosing a vendor — the kind of question that comes up in sales calls, demo objections, support tickets.
- Identify the page on your site that should be the canonical source-of-truth for that answer. If no page exists, make one. If one exists, rewrite the first 300 words so the answer is direct.
- Add evidence next to the claim: docs, data, screenshots, methodology, customer proof, or a comparison table.
- Check whether the page is crawlable by the search and retrieval bots you actually want to allow. Not just Googlebot: OAI-SearchBot, Claude-SearchBot, the user-triggered agents.
- Run the question across three answer engines and log mentions, citations, and accuracy in three separate columns.
- If a third-party source gets cited instead, ask whether it’s clearer than your page. If it is, fix your page before blaming the engine.
- Repeat in two weeks with the same prompts.
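The crawlability check in the list above can be automated with the standard library's robots.txt parser. A minimal sketch, assuming the bot names you care about are the ones named earlier; swap in your own list and fetch your live robots.txt rather than a string literal.

```python
from urllib import robotparser

# Bots to verify; names taken from OpenAI's and Anthropic's crawler docs.
BOTS = ["Googlebot", "OAI-SearchBot", "Claude-SearchBot", "ChatGPT-User"]

def crawl_access(robots_txt: str, page_path: str) -> dict[str, bool]:
    """Return, per bot, whether robots_txt permits fetching page_path."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, page_path) for bot in BOTS}
```

Running this against each source-of-truth page catches the failure mode described earlier: perfectly written content that infrastructure quietly makes unreachable.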
AEO isn’t SEO for AI. It’s the discipline of becoming the best source for an answer. That’s a slower, harder, more honest version of the work.
If you want to do the slow version, the rest of this publication is here for it. If you want the quick version, there are plenty of vendors selling it.
Sources
- AI features and your website (Google, accessed 2026-05-07)
- Overview of OpenAI Crawlers (OpenAI)
- Does Anthropic crawl data from the web? (Anthropic)