Schema is not an AEO shortcut, at least not on pages that are already getting cited. Ahrefs tracked 1,885 pages that added JSON-LD between August 2025 and March 2026 and found no clear citation uplift across Google AI Overviews, Google AI Mode, or ChatGPT.

TL;DR

Ahrefs published one of the clearer public AEO experiments we have because it tried to isolate a single change instead of celebrating a correlation. The result was blunt: pages that added schema did not gain meaningful AI citation growth, even though AI-cited pages were much more likely than non-cited pages to have schema in the first place.

That does not make schema useless. It makes schema a supporting layer with other jobs to do, while answer-engine visibility still appears to depend more on crawlability, visible answers, and whether a page is already in the retrieval set.

Why is this a useful AEO case right now?

This is a useful AEO case because it tests one of the loudest claims in the market with real before-and-after data. In Ahrefs' May 11, 2026 write-up, the team reported that AI-cited pages were almost three times more likely than non-cited pages to use JSON-LD, then tested whether adding JSON-LD actually changed citation outcomes.

That distinction matters more than the correlation headline. AEO advice is full of tactics that appear to work because better-run sites do them, not because the tactic itself moved the answer engine. Ahrefs tried to separate those two things.

The case also matters because Google's own documentation does not promise a schema-specific path into AI answers. Google says the same foundational SEO practices still apply to AI features, that pages must be indexed and snippet-eligible to appear as supporting links, and that there are no additional technical requirements for AI Overviews or AI Mode.

What did Ahrefs actually test?

Ahrefs tested pages that added JSON-LD and compared them with similar pages that did not. In the study, the team identified 1,885 pages that introduced JSON-LD between August 2025 and March 2026, matched them against about 4,000 control pages, and measured citation changes in the 30 days before and after the change.

The methodology is the strongest part of the case. Instead of saying "schema pages rank better in AI," Ahrefs ran several analyses, including a matched difference-in-differences model designed to control for platform-wide changes happening at the same time. That matters because Google AI Overviews, AI Mode, and ChatGPT were all moving during the study window.
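Ahrefs has not published its model code, but the core difference-in-differences idea can be sketched in a few lines. The numbers below are made up for illustration; the point is that both groups' before/after changes are measured, and the control group's change is subtracted out to remove platform-wide drift:

```python
# Illustrative difference-in-differences sketch (numbers are hypothetical,
# not Ahrefs' data). Compare each group's before/after change, then
# subtract the control change to remove platform-wide movement.

def did_relative(treated_before, treated_after, control_before, control_after):
    """Relative DiD effect: treated change minus control change."""
    treated_change = (treated_after - treated_before) / treated_before
    control_change = (control_after - control_before) / control_before
    return treated_change - control_change

# Hypothetical 30-day citation totals for two matched groups.
effect = did_relative(
    treated_before=10_000, treated_after=9_200,   # treated pages: -8.0%
    control_before=20_000, control_after=19_300,  # controls: -3.5%
)
print(f"{effect:+.1%}")  # prints -4.5%: both groups fell; DiD isolates the gap
```

If treated pages had simply been compared against their own past, the platform-wide decline would have been misread as a schema penalty; the subtraction is what makes the estimate interpretable.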

The pages were not random weak pages either. Ahrefs says every page in the dataset already had at least 100 AI Overview citations in February 2025. In other words, this is not a test of whether schema helps unknown pages get discovered for the first time. It is a test of whether schema boosts pages that are already in circulation.

What happened after those pages added schema?

Almost nothing happened after those pages added schema. Ahrefs' main model found a 4.6% relative decline in AI Overview citations, a 2.4% lift in AI Mode, and a 2.2% lift in ChatGPT, with the AI Mode and ChatGPT changes small enough that Ahrefs treats them as statistically indistinguishable from zero.

The 4.6% AI Overview decline is not the part to overreact to. Ahrefs is careful on that point and notes that both treated pages and control pages were already declining, so the gap was statistically real but still small and not clean proof that schema hurt anything.

The broader takeaway is more useful than the platform-by-platform numbers. If adding JSON-LD were a major AEO lever, we would expect treated pages to pull away from their controls. They did not.

Why does this result fit Google's own guidance?

This result fits Google's guidance because Google has been explicit that AI visibility does not require a new machine-readable file or a special AI schema layer. In its AI features documentation, Google says there are no additional technical requirements for AI Overviews or AI Mode, and also says there is no special schema.org structured data that pages need to add for those features.

Google's structured data documentation points to a different job for schema. Google says structured data helps it understand page content and can enable richer search results. The same documentation cites case studies from Rotten Tomatoes, Food Network, Rakuten, and Nestle showing stronger search-result engagement after structured data work. That is real value, but it is not the same claim as "schema will get you cited by ChatGPT or AI Overviews."

Put differently, schema can still be worth shipping for rich results, entity clarity, and cleaner machine-readable metadata. The Ahrefs case just argues that those benefits should not be sold internally as a direct AI citation growth plan.

What is replicable, and what is circumstantial?

The main replicable lesson is the testing pattern, not the exact percentages. Any team with enough citation data can copy the structure: pick a treatment group, pick matched controls, freeze other edits if possible, and compare changes after 30 days instead of assuming the platform trend is your win.
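That testing pattern can be sketched as a minimal script. The page paths and citation counts below are hypothetical; the structure (matched pairs, relative deltas, a single treated-versus-control gap) is the part worth copying:

```python
# Minimal sketch of the matched-pair test pattern described above.
# Page names and citation counts are hypothetical illustrations.

from statistics import mean

# (citations_before, citations_after) over matched 30-day windows
treated = {"/guide-a": (120, 110), "/guide-b": (90, 95), "/guide-c": (200, 180)}
controls = {"/guide-a": (115, 108), "/guide-b": (88, 90), "/guide-c": (210, 195)}

def delta(before, after):
    """Relative change in citations across the two windows."""
    return (after - before) / before

# Per-pair gap: treated page's relative change minus its matched control's.
gaps = [delta(*treated[p]) - delta(*controls[p]) for p in treated]
print(f"mean treated-vs-control gap: {mean(gaps):+.1%}")
```

A gap near zero, as in the Ahrefs result, means the treatment group did not pull away from its controls, regardless of which direction the whole platform was moving.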

The circumstantial part is the specific population Ahrefs studied. These were already-visible pages on sites large enough to appear heavily in AI Overview citations. Smaller sites, brand-new URLs, and pages with zero citations may behave differently because discovery and indexing problems dominate before extraction problems do.

That means you should not read this case as "schema never matters for AEO." You should read it as "schema alone did not create measurable citation growth for already-cited pages in this public dataset."

What does the case not prove?

The case does not prove that schema is irrelevant to first-time retrieval. Ahrefs explicitly says its data cannot answer whether schema helps pages that are not yet being seen by AI systems get crawled, parsed, or indexed into the consideration set.

The case also does not prove that every schema type behaves the same way. The study pooled schema together, which means Article, FAQPage, Product, HowTo, and Organization markup were not split into separate causal tests.

It also does not settle the timing question. Ahrefs measured a 30-day post-treatment window. If a certain markup type only pays off after a longer crawl and reprocessing cycle, this study would miss that.

How should AEO teams use schema after this?

AEO teams should keep schema, but they should demote it from miracle tactic to infrastructure layer. If a page is missing basic structured data that supports rich results or cleaner entity understanding, fix that. But do not report "AI citation optimization" as complete because you added JSON-LD.

The higher-priority work is still visible answer formatting. Google's AI features guidance highlights crawl access, internal linking, textual availability of important content, and structured data that matches visible text. Those are closer to the retrieval path we covered in How Answer Engines Discover, Retrieve, and Cite Pages than to a belief that hidden markup will rescue a weak page.

For most teams, the better operating model is:

| Layer | What it is for | What this case suggests |
| --- | --- | --- |
| Schema | Rich results, machine-readable entities, cleaner metadata | Worth doing, but not a proven citation-growth lever by itself |
| Visible copy | Direct answers, evidence, comparisons, freshness | More likely to affect retrieval and extraction |
| Site signals | Crawlability, indexing, links, internal discovery | Still foundational for getting into the answer set |

What would a sober schema implementation look like?

A sober schema implementation treats markup as support for pages that already deserve to win. That means your JSON-LD should reflect visible content, use the most relevant type, and stay synchronized with on-page facts instead of becoming a sidecar project owned by nobody.

For example, a product or guide page can still carry clean JSON-LD without turning the markup into the centerpiece of the AEO strategy:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Adding Schema Did Not Lift AI Citations",
  "datePublished": "2026-05-12",
  "author": {
    "@type": "Organization",
    "name": "OptimizeAEO"
  }
}
```

That kind of markup is useful when it accurately describes the page. It becomes useless when teams expect it to compensate for vague headings, missing evidence, or content that never states the answer plainly enough to be extracted. If you need a better model for the visible page layer, How to Ship Pages That Get Cited is the more relevant playbook.

What to do Monday morning

1. Pull 10 pages that already get some AI citations and 10 similar pages that do not.
2. Audit whether each page already has valid structured data and whether that markup matches visible text.
3. Fix markup gaps where they are obvious, but do not change copy, links, and templates at the same time if you want a clean read.
4. Track citation changes for at least 30 days across the engines you can measure.
5. Spend more effort rewriting the visible answer block than debating another schema property.
6. Report schema as technical hygiene unless your own controlled test shows a stronger effect.
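Step 2, checking that markup matches visible text, can be partially automated. The sketch below pulls the JSON-LD headline out of a page and compares it with the visible H1; the HTML and field names are illustrative, and a real audit should use a proper HTML parser rather than regex:

```python
# Simplified sketch for the markup-vs-visible-text audit. A production
# audit should use a real HTML parser (e.g. html.parser or lxml); this
# regex approach only handles the simple single-script case shown here.

import json
import re

html = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "headline": "Adding Schema Did Not Lift AI Citations"}
</script>
</head><body><h1>Adding Schema Did Not Lift AI Citations</h1></body></html>
"""

ld_match = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
h1_match = re.search(r"<h1>(.*?)</h1>", html, re.S)

markup = json.loads(ld_match.group(1))
print("matches visible text:", markup["headline"] == h1_match.group(1).strip())
```

Running this across a page list turns "schema matches visible text" from a judgment call into a pass/fail column you can track alongside citation counts.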

Sources