AI search work this week is shifting from "how do we get cited?" to "how do we prove visibility, demand, and retrieval are connected?"
TL;DR
- Perplexity pushed Computer deeper into enterprise workflows with Microsoft Teams, Workflows, and Spaces skills. That matters because AEO is moving closer to task execution, not just answer pages.
- Search Engine Land's May 4 cluster made the same strategic argument three ways: brand authority, citations, and AEO tools are now part of one measurement problem.
- Google commentary and post-core-update analysis both point away from one-size-fits-all AI optimization. Some queries still want full SERPs, and some categories may be shifting toward first-party or official sources.
Answer engine product changes
Perplexity made Computer more like a repeatable work layer
Perplexity's May 4 changelog is worth treating as an AEO signal because it moves Perplexity further from "citation search engine" toward "workflow system with search inside it." Computer is now available in Microsoft Teams, Workflows are available for guided, repeatable tasks, and Spaces skills can package specialized capabilities with shared files.
For AEO, the interesting part is not the Teams integration by itself. It is the shape of the examples Perplexity chose: research a vendor, create a cited report from Databricks usage, run a website audit, and generate repeatable research briefs. Those are exactly the kinds of commercial discovery tasks where source selection can become part of downstream work.
Practitioners should start logging where their brand appears in task-oriented prompts, not only in direct comparison prompts. "Best CRM for startups" is one query. "Research three CRM vendors for a Series A SaaS company and draft a buying brief" is a different retrieval problem.
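A minimal sketch of what that logging could look like, assuming answers have already been pulled from each engine. The prompts, brand names (ExampleCRM, AcmeCRM), and answer strings below are all hypothetical placeholders, not data from any engine.

```python
# Hypothetical sketch: log brand mentions separately for comparison
# prompts and task-oriented prompts. All data below is placeholder.

PROMPTS = [
    {"type": "comparison", "text": "Best CRM for startups"},
    {"type": "task", "text": ("Research three CRM vendors for a Series A "
                              "SaaS company and draft a buying brief")},
]

BRANDS = ["ExampleCRM", "AcmeCRM"]  # hypothetical brand names

def brand_mentions(answer: str, brands: list[str]) -> list[str]:
    """Return the brands that appear in an answer, case-insensitively."""
    lowered = answer.lower()
    return [b for b in brands if b.lower() in lowered]

# Placeholder answers; real ones would come from each engine.
answers = {
    PROMPTS[0]["text"]: "Popular options include ExampleCRM and others.",
    PROMPTS[1]["text"]: "Brief covering AcmeCRM and two other vendors.",
}

for p in PROMPTS:
    hits = brand_mentions(answers[p["text"]], BRANDS)
    print(f"{p['type']:>10}: {hits}")
```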
OpenAI's recent ChatGPT changes keep lowering the friction around answer behavior
OpenAI's release notes in the last week did not announce a new search product, but the April 28 model picker change still matters for AEO testing. Model choice and thinking effort now sit closer to the prompt box on the web for Plus, Pro, and Business users. That makes it easier for users to vary how much reasoning they want before an answer is generated.
The AEO implication is simple: prompt tests should record the model and effort level when available. If an answer engine exposes controls that change depth or reasoning, your citation audit is incomplete without that setting. A brand may be omitted in a quick default answer but appear in a more deliberative comparison or research prompt.
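One way to keep that setting from getting lost is to make model and effort required fields in the test record. A minimal sketch, assuming a flat record per prompt run; the field names are illustrative, not any tool's schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record for one prompt test. Field names are assumptions,
# not a standard schema. "effort" holds whatever depth/reasoning control
# the engine exposes, or None if it exposes none.

@dataclass
class PromptTest:
    run_date: date
    engine: str                  # e.g. "chatgpt", "perplexity"
    model: str | None            # model selected, if the UI exposes it
    effort: str | None           # thinking/effort setting, if available
    prompt: str
    cited_domains: list[str] = field(default_factory=list)
    brand_mentioned: bool = False

test = PromptTest(
    run_date=date(2026, 5, 4),
    engine="chatgpt",
    model="default",             # placeholder value
    effort="extended",           # placeholder value
    prompt="Compare CRM tools for startups",
    cited_domains=["example.com"],
)
print(test)
```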
The April 30 security release is less directly about AEO, but it reinforces a broader pattern: answer engines are becoming account-based work environments. More signed-in, secured, personalized usage means marketers should expect less clean separation between "search behavior" and "workspace behavior."
Practitioner findings and experiments
Brand authority is replacing content volume as the AEO argument
Andrew Holland's Search Engine Land piece on May 4 argues that AI search exposes a weakness in old topical-authority programs: publishing more pages is not the same as becoming a source the market recognizes. The useful distinction is between topical coverage, which is what a brand says about itself, and authority, which is what the wider web says about the brand.
That maps cleanly to answer engines. AI citations can show that a system retrieved your page, but human citations, mentions, reviews, category reports, and brand search are stronger signals that the market has associated your entity with a problem.
The action here is uncomfortable but practical. Audit your "AEO content calendar" for assets that would still matter if they never ranked. Original data, comparison research, expert commentary, customer evidence, and public tools have a better claim to AEO value than another lightly rewritten definition page.
AEO tooling is becoming a workflow, not a dashboard
Adam Tanguay's May 4 Search Engine Land article is useful because it treats AI assistants as research instruments, not magic ranking tools. The suggested use cases are concrete: competitive landscape research, content gap analysis, prompt testing, entity coverage audits, and structured content drafting.
That is where AEO tooling seems to be going. The best workflows do not stop at "your visibility score is 37." They capture the prompt, engine, answer, cited domains, entities mentioned, missing proof, and recommended page or off-site action. Then they re-run the same prompt set later so drift can be measured.
Teams should build a small prompt panel before buying a large platform. Ten category prompts, five brand prompts, and five comparison prompts are enough to show whether the work is measurement, content, digital PR, or product positioning.
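A minimal sketch of that loop, combining the small panel above with a drift check that diffs cited domains between two runs of the same prompt. The panel contents and run data are placeholders, and a real record would also carry the answer text, entities, missing proof, and recommended action described earlier.

```python
# Illustrative prompt panel and drift check; all data is placeholder.

PANEL = {
    "category": [f"category prompt {i}" for i in range(10)],
    "brand": [f"brand prompt {i}" for i in range(5)],
    "comparison": [f"comparison prompt {i}" for i in range(5)],
}

def drift(old_run: dict[str, set[str]], new_run: dict[str, set[str]]) -> dict:
    """Compare cited domains per prompt across two runs of the panel."""
    changes = {}
    for prompt, old_domains in old_run.items():
        new_domains = new_run.get(prompt, set())
        gained, lost = new_domains - old_domains, old_domains - new_domains
        if gained or lost:
            changes[prompt] = {"gained": gained, "lost": lost}
    return changes

# Placeholder runs: prompt -> cited domains.
april_run = {"category prompt 0": {"example.com", "review-site.com"}}
may_run = {"category prompt 0": {"example.com", "vendor.com"}}
print(drift(april_run, may_run))
```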
AI visibility is being reframed as influence before the query
Greg Jarboe's May 4 Search Engine Land piece connects AI visibility to influence that happens before a search. The argument is that users do not arrive at ChatGPT, Gemini, Perplexity, or Google with blank minds. They have already been shaped by media, communities, social content, reviews, email, video, and prior search.
That is a useful correction for AEO teams that are over-focused on the answer box. If answer engines retrieve and synthesize from the web, the web's prior associations matter. Brand mentions in credible places, original research, and consistent entity signals are not soft brand work. They are retrieval inputs.
The practical move is to tag every AEO initiative by surface: owned page, third-party article, review site, community discussion, video, data study, or partner ecosystem. If the whole plan lives on the blog, it is probably too narrow.
Citation and SERP behavior
Google's "browsy queries" framing gives AEO teams a better intent split
Search Engine Journal reported on Liz Reid's comments about how users move between Google Search, AI Mode, and Gemini. The useful phrase is "browsy queries": discovery-stage searches where a user may prefer the full SERP instead of a direct AI answer.
This matters because AEO work often treats every informational query as a future AI answer. That is too blunt. A long, complex, follow-up-heavy query may suit AI Mode. A shopping or destination-discovery query may still benefit from a visible SERP with multiple options.
The next audit should split prompts into at least three buckets: direct-answer queries, comparison/research queries, and browsy discovery queries. The win condition is different in each. For direct answers, citation may matter most. For comparison prompts, being included and described accurately matters. For browsy queries, classic SERP visibility and rich snippets may still carry more weight.
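A minimal sketch of that split, with the win condition attached to each bucket. The keyword rules are crude placeholders; a real audit would classify prompts by hand or with a model the team trusts.

```python
# Illustrative intent split. The keyword heuristics are placeholders.

WIN_CONDITIONS = {
    "direct_answer": "brand is cited in the answer",
    "comparison": "brand is included and described accurately",
    "browsy": "classic SERP visibility and rich snippets",
}

def bucket(prompt: str) -> str:
    p = prompt.lower()
    if any(w in p for w in ("best", "vs", "compare", "alternatives")):
        return "comparison"
    if any(w in p for w in ("ideas", "inspiration", "things to do", "shop")):
        return "browsy"
    return "direct_answer"

for q in ["what is AEO", "best CRM vs HubSpot", "weekend trip ideas near Austin"]:
    b = bucket(q)
    print(f"{q!r} -> {b}: win = {WIN_CONDITIONS[b]}")
```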
Google's March update analysis is a warning for aggregator-dependent AEO
Search Engine Journal summarized Amsive's analysis of the March 2026 core update, which compared visibility from March 27 to April 8 across more than 2,000 domains. The reported pattern: aggregators and user-generated platforms lost visibility in several categories, while first-party brand sites, government domains, and content originators gained.
This does not prove what Google changed. The article is careful about that. It does give AEO teams a pattern to test: if classic Search is tilting toward source owners in some categories, answer systems that use search results or web corpora may also become less friendly to thin aggregator pages.
Practitioners should check whether their AI-search visibility depends on someone else's category page. If the source of truth is your product, your research, your policy, or your data, publish the cleanest retrievable version on your own domain and make sure third-party coverage points back to it.
What to watch this week
Watch whether workflow prompts cite different sources than search prompts
Run paired tests. Ask "best tools for AEO" and then ask "create a 30-day plan for improving AI visibility for a B2B SaaS company." Compare cited domains, mentioned brands, and the kind of evidence used. Workflow prompts may reward practical examples and vendor pages differently than simple search prompts.
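A minimal sketch of the paired comparison, assuming each run has already been reduced to a set of cited domains; the domains shown are placeholders.

```python
# Illustrative paired test: same topic, search-style prompt vs
# workflow-style prompt. Domain sets are placeholders for real runs.

search_prompt_domains = {"listicle-blog.com", "review-site.com", "vendor-a.com"}
workflow_prompt_domains = {"vendor-a.com", "vendor-b.com", "docs.vendor-a.com"}

only_search = search_prompt_domains - workflow_prompt_domains
only_workflow = workflow_prompt_domains - search_prompt_domains
shared = search_prompt_domains & workflow_prompt_domains

print("cited only in search prompt:  ", sorted(only_search))
print("cited only in workflow prompt:", sorted(only_workflow))
print("cited in both:                ", sorted(shared))
```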
Watch whether brand evidence beats blog coverage
For one category, map the cited sources in ChatGPT, Perplexity, Google AI Overviews, and Gemini against four evidence types: original research, product pages, review sites, and generic blog posts. If generic content is losing ground, shift budget before the dashboard forces the issue.
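A minimal sketch of that mapping, assuming a hand-built lookup from cited domain to evidence type; all domains and counts below are placeholders.

```python
from collections import Counter

# Illustrative evidence-type tally. The domain->type lookup would be
# built by hand for one category; everything below is placeholder data.

EVIDENCE_TYPE = {
    "research.example.com": "original research",
    "vendor-a.com": "product page",
    "review-site.com": "review site",
    "random-blog.com": "generic blog",
}

citations_by_engine = {
    "chatgpt": ["vendor-a.com", "random-blog.com"],
    "perplexity": ["research.example.com", "review-site.com"],
}

for engine, domains in citations_by_engine.items():
    mix = Counter(EVIDENCE_TYPE.get(d, "unmapped") for d in domains)
    print(engine, dict(mix))
```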
Watch query intent before calling something an AEO failure
If Google keeps surfacing full SERPs for browsy queries, a missing AI citation may not be the failure. The failure may be weak organic presence, weak imagery, thin review coverage, or no distinctive brand signal in the discovery set.
Sources
- Improved Computer Models and Enterprise Updates – May 4, 2026
- Why brand authority beats topical authority in AI search
- 7 tools for doing AEO right now
- Why AI visibility starts before search and ends with citations
- Google: Browsy Queries May Favor Full SERPs, Not AI
- Google's March Core Update Shifted Visibility Away From Aggregators
- ChatGPT – Release Notes