AI visibility work is becoming less about one clever content tactic and more about repeatable measurement. The useful stories this week all point the same way: track prompts, citations, source types, and visibility drift before deciding what to publish.
TL;DR
- AI Overview datasets are getting large enough to make query-intent segmentation mandatory. Informational queries still dominate, but commercial coverage is expanding.
- Ahrefs and Octopus Energy show where AEO measurement is going: multi-market, prompt-backed reporting that non-SEO teams can understand.
- The March 2026 core update analysis from Amsive, covered by Search Engine Journal, gives AEO teams a specific pattern to test: source owners may be gaining where aggregators lose.
AI Overview data keeps saying query type matters
The best current AI Overview analysis is not one number. It is the pattern across datasets. TryAnalyze's May 5 roundup of AI Overview research pulls together Ahrefs, Pew Research, Semrush, BrightEdge, Conductor, and SE Ranking data, and the practical takeaway is that AI Overviews are uneven by intent.
The article reports that AI Overviews remain heavily informational, while branded, local, and short queries trigger them less often. It also notes that longer queries trigger AI Overviews more often, and that commercial queries became more visible across 2025.
For AEO teams, this means a single "AI visibility score" is too blunt. Split the prompt set into direct-answer, comparison, category, branded, local, and bottom-funnel questions. If you do not segment by intent, a change in query mix can look like a strategy win or loss when it is really a measurement artifact.
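A minimal sketch of that segmentation, assuming each panel run is logged as a dict with an intent label and a mention flag; the schema is an illustrative assumption, not any vendor's format:

```python
from collections import defaultdict

# The six intent buckets come from the paragraph above; the per-result
# schema ({"intent": ..., "brand_mentioned": ...}) is an assumption.
INTENTS = ["direct-answer", "comparison", "category", "branded", "local", "bottom-funnel"]

def visibility_by_intent(results):
    """Return one visibility rate per intent instead of a single blended score."""
    seen, hits = defaultdict(int), defaultdict(int)
    for r in results:
        seen[r["intent"]] += 1
        hits[r["intent"]] += int(r["brand_mentioned"])
    return {i: hits[i] / seen[i] for i in INTENTS if i in seen}

panel = [
    {"intent": "comparison", "brand_mentioned": True},
    {"intent": "branded", "brand_mentioned": True},
    {"intent": "local", "brand_mentioned": False},
]
print(visibility_by_intent(panel))
# {'comparison': 1.0, 'branded': 1.0, 'local': 0.0}: a blended 0.67 would hide the local gap.
```

If the mix of intents in the panel shifts between runs, per-intent rates stay comparable while a blended score does not.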
The core update story is also an AEO story
Search Engine Journal's May 3 coverage of Amsive's March 2026 core update analysis is not an AI Overview study, but it matters for AEO. The reported pattern was clear: aggregators and user-generated platforms lost US search visibility in the snapshot, while first-party brand sites, government domains, and content originators gained.
The cautious reading is important. The data does not prove what Google changed, and SISTRIX visibility is not the same thing as traffic. Still, the direction is worth testing inside AI search. If answer engines use search indexes, web corpora, or retrieval layers influenced by classic search quality, weaker aggregator pages may become less reliable citation targets.
Practitioners should audit whether their AI visibility depends on third-party category pages. If your own page is the source of truth, but answer engines cite a listicle, directory, or forum thread instead, the fix may be stronger first-party evidence plus better off-site corroboration.
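A minimal sketch of that audit, assuming you maintain a list of your own domains and a watchlist of aggregator and UGC domains; all domains below are placeholders:

```python
from urllib.parse import urlparse

FIRST_PARTY = {"example.com", "docs.example.com"}       # your own domains (placeholder)
AGGREGATORS = {"reddit.com", "g2.com", "capterra.com"}  # watchlist (placeholder)

def classify(url):
    host = urlparse(url).netloc.removeprefix("www.")
    if host in FIRST_PARTY:
        return "first-party"
    if host in AGGREGATORS:
        return "aggregator/UGC"
    return "other third-party"

def audit(citations):
    """citations: URLs cited by the answer engine, in cited order."""
    labels = [classify(u) for u in citations]
    # Flag answers where no first-party page made it into the citations at all.
    return labels, "first-party" not in labels

labels, third_party_dependent = audit([
    "https://www.g2.com/categories/x",
    "https://example.com/product",
])
print(labels, "depends on third parties:", third_party_dependent)
```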
Octopus Energy makes AI visibility a global reporting problem
Ahrefs' May 5 case study on Octopus Energy is useful because it is not just a tool launch story. Octopus Energy needed to monitor AI visibility across different countries, brand histories, and lines of business. Before Brand Radar, the process involved manual extraction from ChatGPT, AI Overviews, and other AI-search platforms.
That is the real AEO problem for complex companies. A brand can be visible in the UK, invisible in Germany, described under an old acquired brand in another market, and cited from outdated third-party pages in a fourth. That cannot be solved by adding an FAQ block to one page.
The lesson is to make AI visibility reporting explainable outside the SEO team. Mentions, citations, impressions, competitor overlap, and cited domains are easier for executives and regional teams to act on than a black-box score.
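As a sketch, a per-market report can stay this plain; every key, figure, and domain below is illustrative:

```python
# One row per market, using only the plain metrics named above.
report = [
    {"market": "UK", "mentions": 41, "citations": 18, "top_cited": "octopus.energy"},
    {"market": "DE", "mentions": 3, "citations": 0, "top_cited": "aggregator.example"},
]
for row in report:
    # Each column answers one executive question; no composite score required.
    print(f"{row['market']}: {row['mentions']} mentions, "
          f"{row['citations']} citations, most-cited source {row['top_cited']}")
```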
Prompt methodology is becoming the heart of AEO tooling
Ahrefs' Brand Radar methodology page is worth reading because it names a measurement issue most AEO dashboards hide. Good prompt sets need both real demand and semantic coverage. Ahrefs says it uses its keyword database, People Also Ask data, and semantic fanout to create questions that are then run across AI platforms.
The AEO takeaway is not that one vendor has solved measurement. It is that prompt design is now a strategic decision. If your prompts come only from keyword research, you may miss sales objections. If they come only from internal brainstorming, you may miss real search demand. If they come only from AI-generated expansion, you may track questions nobody asks.
Use three inputs: keyword data, sales/customer language, and semantic fanout. Then keep the panel stable long enough to measure drift.
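A minimal sketch of that merge, assuming prompts are hashed into stable IDs so week-over-week drift is measured against the same panel; the example prompts and the hashing choice are assumptions, not Ahrefs' method:

```python
import hashlib

keyword_prompts = ["best heat pump tariff uk"]            # from keyword data
sales_prompts   = ["is octopus energy cheaper than edf"]  # from sales/customer language
fanout_prompts  = ["heat pump tariff for landlords"]      # from semantic fanout

def build_panel(*sources):
    panel = {}
    for source_name, prompts in sources:
        for p in prompts:
            # A stable ID lets you track the same prompt across weekly runs
            # and measure drift instead of panel churn.
            pid = hashlib.sha1(p.strip().lower().encode()).hexdigest()[:10]
            panel.setdefault(pid, {"prompt": p, "sources": set()})["sources"].add(source_name)
    return panel

panel = build_panel(("keywords", keyword_prompts),
                    ("sales", sales_prompts),
                    ("fanout", fanout_prompts))
for pid, entry in panel.items():
    print(pid, entry["prompt"], sorted(entry["sources"]))
```

Prompts that surface from more than one input are usually the safest core of the panel.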
Perplexity's API work keeps moving search into workflows
Perplexity's API changelog continues to show an answer engine becoming infrastructure. The current docs highlight structured search results, Agent API changes, embeddings, asynchronous Sonar Deep Research, and integration patterns for developer workflows.
That matters because AEO measurement will not stay in dashboards. Teams will pipe search results, prompt outputs, cited URLs, and visibility gaps into scripts, CRMs, editorial workflows, and reporting systems. The companies that win will not only check where they appear. They will use that evidence to trigger specific work.
For example, a recurring query panel can write gaps into a content backlog, flag competitor citations for PR review, or notify product marketing when an answer misstates positioning. The measurement layer becomes useful when it creates work the team can actually do.
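A sketch of that routing, assuming the answer-engine call has already returned a mention flag and cited domains; the result fields and routing rules are workflow assumptions, not a Perplexity API schema:

```python
def route(result, competitor_domains):
    """Turn one panel result into owned work items."""
    tasks = []
    if not result["brand_mentioned"]:
        tasks.append(("content-backlog", f"No brand presence for: {result['prompt']}"))
    for domain in result["cited_domains"]:
        if domain in competitor_domains:
            tasks.append(("pr-review", f"{domain} cited for: {result['prompt']}"))
    if result.get("positioning_error"):
        tasks.append(("product-marketing", f"Answer misstates positioning: {result['prompt']}"))
    return tasks

result = {
    "prompt": "best business energy supplier uk",
    "brand_mentioned": False,
    "cited_domains": ["competitor.example"],
}
for owner, task in route(result, competitor_domains={"competitor.example"}):
    print(owner, "->", task)
```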
What to watch this week
Watch the difference between citations and mentions. A brand mention without a link still shapes buyer perception, but a citation tells you which page or source the answer trusted.
Watch whether your best classic SEO pages are also your best AI-search pages. If the overlap is low, the issue may be content structure, citation-worthy evidence, or off-site authority.
Watch whether regional prompts produce different brand narratives. Multi-market companies should not assume the US answer is the global answer.
What to do Monday morning
1. Build a 30-prompt panel split by intent: informational, comparison, commercial, branded, and support.
2. Track mentions and citations separately for each answer engine (see the sketch after this list).
3. Tag every cited source as first-party, third-party editorial, UGC, review site, documentation, or government/official source.
4. Compare AI citations against classic top-10 rankings for the same prompts.
5. Give each gap an owner: content, technical SEO, PR, product marketing, documentation, or sales enablement.
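A minimal sketch for step 2, assuming each answer arrives as text plus the set of domains it cited; all names below are illustrative:

```python
def score(answer_text, cited_domains, brand, brand_domains):
    mention = brand.lower() in answer_text.lower()  # brand named in prose
    citation = bool(cited_domains & brand_domains)  # brand page actually cited
    return {"mention": mention, "citation": citation}

run = score("Octopus Energy offers agile tariffs...",
            cited_domains=set(),                    # no brand page cited
            brand="Octopus Energy",
            brand_domains={"octopus.energy"})
print(run)  # {'mention': True, 'citation': False}: visible in prose, not trusted as a source
```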
Sources
- AI Overviews Insights: Data From 590M Searches
- Google Core Update Data Shows Sharp Drop In Aggregator Rankings
- What's Hot, What's Not: AI Search Changes In Q1 2026
- How Octopus Energy uses Ahrefs Brand Radar to monitor AI visibility across global markets
- Ahrefs Brand Radar Methodology
- Perplexity API Platform Changelog