This article is part of the AI Accelerator: Live AI Training for Ad and Marketing Professionals, a workshop series designed by U of Digital and our AI experts to help marketing and advertising teams prepare for an AI-driven future. This piece builds on Workshop 4: The Future of AI in Advertising, where we look at how AI is changing the consumer journey, reshaping marketing channels, and introducing new roles for AI agents.
If you’re looking to get a head start in 2026 with AI, our new AI Essentials course is live. Take a look.
The article explores the evolving field of AI visibility: how brands appear in AI-powered platforms such as ChatGPT, Perplexity, Microsoft Copilot, and Google Gemini. It covers:
- What AI visibility is: The difference between mentions and citations, and how AI systems surface brands in answers.
- Mentions vs. citations: Why mentions drive discoverability and stability, while citations can boost authority but are volatile.
- Why brands should track AI visibility: Ensuring your brand shows up where consumer decisions are increasingly shaped.
- Where to focus optimization: Introducing approaches like Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO), and common LMO strategies.
- AI search vs. traditional search: Why AI is more volatile, less predictable, and demands new strategies.
- Balancing AI visibility and SEO: Why search isn’t going away anytime soon, and how AI visibility complements existing SEO strategies.
- Practical guidance: FAQs on managing AI visibility, tools for optimization, and the best platforms to measure performance today.
What is AI visibility?
AI visibility is often narrowly defined, focusing mainly on how a brand’s content, products, or offerings appear in AI-powered search and answer engines, such as ChatGPT, Perplexity, Microsoft Copilot, Google Gemini, and Grok. Looking ahead, this focus will continue to expand. Brands will start tracking how AI agents choose content, including which brands they pick, promote, or act upon when autonomously completing tasks on behalf of users. In other words, beyond monitoring presence in AI answer engines, AI summaries, AI API responses, and AI recommendation systems, brands will soon need to assess how AI agents select and prioritize content. This will represent a new frontier in brand visibility monitoring. However, for the purpose of this explainer, we’ll focus on brand visibility within the major AI answer engines, such as ChatGPT, Perplexity, Microsoft Copilot, and Google Gemini, where the majority of users are already turning to search, ask questions, and consume information.
When users interact with AI answer engines, they encounter brand visibility on two fronts: mentions embedded in the narrative and citations that anchor the response.
What is a mention in AI answer engines?
A mention occurs when your brand name is directly surfaced in the AI’s response. It signals awareness and inclusion but doesn’t necessarily drive traffic, since there may be no clickable link back to your site. Mentions still matter, though, because they influence user perception, shape trust, and build brand authority in the conversational layer of search.
Not all mentions are equal, however. In Google’s AI Overview interface, they are presented in two ways:
- Unlinked mentions (not underlined). This is when your brand name appears in the AI-generated text, but without a clickable source. These mentions mainly boost awareness and credibility, but they rarely send any traffic to your site because there’s no outbound path. In effect, they work more like digital PR than like SEO.
- Underlined mentions. These are interactive: clicking on an underlined brand name in the AI Overview takes the user to a new Google Search results page, not directly to your site. Google explained that this design came from testing: users often want to refine or re-run a search after seeing an AI Overview, so the links help them “explore further” by triggering a fresh search query. From a brand’s perspective, these mentions still don’t deliver direct referral traffic, but they keep your name in front of the user and may lead to additional branded searches later.
A study by SE Ranking shows just how common these underlined mentions are. In a sample of 141,507 AI Overview appearances across five U.S. states, SE Ranking found that about 43% (over 61,000 instances) included underlined links sending users back to Google’s own search results. The remaining 57% contained no underlined links at all. This heavy reliance on underlined links pointing back into Google’s own ecosystem underscores a broader strategy: keeping users within the company’s walled garden. Rather than acting as a gateway to the open web, AI Overviews often loop searchers deeper into Google’s results pages. And this isn’t an isolated observation; other data points reinforce the same trend:
- Only 8% of searches with a Google AI summary trigger a click on a traditional result, vs around 15% when no summary is shown [Pew Research Center]
- When Google shows an AI summary, only ~1% of visits click a source link inside that summary [Pew Research Center]
- On average, those AI Overviews with Google links contain 4–6 links pointing back to Google [SE Ranking]
- Users now make on average 10 clicks within Google before leaving for other domains [Momentic Marketing]
- Organic click-through rate for the #1 result drops from about 7.3% to 2.6% when an AI Overview appears [DemandSage]
How do citations work in AI responses?
A citation in AI search is a direct, clickable link to a webpage that the AI used as a source when generating its answer. This link could be as formal as your company website or as simple as a Wikipedia page. It can also point to community spaces, such as a Reddit thread where your brand is being discussed. Citations that include your webpage are especially powerful because they can drive referral traffic, strengthen credibility, and connect users directly with your content. Being cited means the AI has recognized your site as a trusted source worthy of attribution. To make sense of how mentions and citations differ, Tomek Rudzki, Co-Founder at ZipTie.ai, uses a great analogy that simplifies the relationship between the two:
But how does this happen? Many AI Answer Engines use a process called Retrieval Augmented Generation (RAG). Instead of relying only on what the model was trained on, RAG actively retrieves relevant external sources in real time, such as articles, reports, or websites, and uses them to build its response. Those retrieved resources become the foundation for the answer, and the engine surfaces them as citations so users can verify the information at its original source.
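To make the RAG flow concrete, here is a minimal sketch in Python. The toy corpus, the word-overlap scoring, and the "generate" step are invented stand-ins for a real search index and an actual LLM call; the point is only to show how retrieved sources become both the answer's foundation and its citations.

```python
# Minimal sketch of a Retrieval-Augmented Generation (RAG) pipeline.
# The corpus and scoring below are illustrative stand-ins for a real
# search index and LLM; the URLs are invented.

CORPUS = {
    "https://example.com/report": "AI visibility tracking helps brands monitor mentions.",
    "https://example.com/wiki": "Retrieval augmented generation retrieves sources in real time.",
    "https://example.com/blog": "SEO strategies still matter for traditional search rankings.",
}

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query, keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query, corpus):
    sources = retrieve(query, corpus)
    context = " ".join(text for _, text in sources)
    # A real engine would pass `context` to an LLM; here we just echo it.
    answer = f"Based on retrieved sources: {context}"
    citations = [url for url, _ in sources]
    return answer, citations

answer, cites = answer_with_citations(
    "how does retrieval augmented generation work", CORPUS
)
print(cites)
```

The retrieved URLs are surfaced alongside the generated text, which is exactly why citations let users verify the answer at its original source.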
For AI search visibility, which matters more: brand mentions or citations?
An argument in favor of mentions: discoverability
Mentions and citations both matter, but if you’re competing for visibility, mentions win. Here’s why: most users don’t even notice citations in AI-generated answers. They’re often buried at the bottom of the page or tucked into a sidebar. To get value from a citation, you’re depending on the user to scroll, slide, search, and click.
But a mention is right there in the response itself, part of the story the user is already reading. You don’t have to work for attention; it happens naturally. That makes it less like a footnote and more like a billboard. You see it, even if you weren’t looking for it.
As AI search changes the way people discover brands, mentions function more like native ads than traditional placements. They are integrated directly into the experience, shaping perception in the moment. The more your brand is part of the response, the stronger your presence becomes.
Another argument in favor of mentions: citation volatility
Citations are becoming increasingly volatile as AI platforms like ChatGPT adjust their citation weighting. A single adjustment recently caused referral traffic to collapse by 52% in under a month, while a few dominant sites (Reddit, Wikipedia, TechRadar) surged by +53% combined, taking over 22% of all citations.
This shows that branded sites relying on citations face unpredictable swings beyond their control: today they may be cited, tomorrow they may disappear. Mentions, by contrast, are more stable: they’re embedded directly in the foundational models, not subject to sudden algorithmic shifts in citation weighting.
Foundational models, such as GPT-3, PaLM, FLAN-T5, or LaMDA, are trained once on vast datasets and then remain largely fixed. Once training is complete, they are not continually updated; instead, each carries a knowledge cutoff date beyond which the model has no awareness of new information. This means their outputs can only be influenced during the training phase itself. After deployment, no amount of investment can alter what a model like GPT-4 produces unless its creators decide to retrain it with additional data.
In practice, this means that any effort to shape their responses must focus on influencing the next generation of foundational models. And once brand mentions are included in the datasets these new models are trained on, they remain embedded in the model’s outputs, locked into the model until the following version is trained.
As citation patterns narrow to a few dominant platforms, the more durable strategy is to prioritize brand mentions that persist in model outputs over time.
Why should brands track AI visibility?
As AI reshapes the way people search, discover, and interact online, brands need to pay close attention to how they appear within these emerging platforms. The dynamics extend far beyond the idea that AI will simply mirror Google’s search model. Ads will almost certainly play a role in consumer-facing AI, but organic visibility matters just as much, if not more. A mention of your brand within an AI-generated response can carry significant weight, often feeling more credible than a paid placement. In fact, it may be more advantageous to be surfaced alongside a competitor’s ad than to be missing entirely. This is why tracking brand visibility in AI isn’t just about awareness; it’s about ensuring your brand consistently appears in the places where decisions are being made.
Users can begin their journey from many different entry points: traditional search engines, social channels, or even directly within marketplaces. What’s new is that they can now start, and sometimes complete, their entire path within AI tools before making a purchase. Each serves a different purpose in the decision-making process, making it more difficult for any one player to dominate the entire journey. For brands, this fragmentation creates a new imperative: to understand and measure how often and in what context they are appearing in AI-generated responses.
The type of visibility AI creates also requires a different lens. Search platforms like Amazon or Google thrive when queries indicate strong purchase intent. AI queries, by contrast, are often exploratory in nature. Users turn to AI to clarify ideas, gather insights, or complete tasks such as drafting content. In these cases, the relevance of a brand is less about immediate conversion and more about shaping awareness, credibility, and influence in the early stages of a journey. Walmart’s Retail Rewired 2025 research shows that shoppers already see clear value in AI assistants. Nearly half (48%) believe these tools improve the retail experience, compared to just 26% who feel they detract from it. The top use cases are exploratory rather than transactional: 35% want AI to summarize product reviews, 33% want a single search across retailers, and 23% want smart filtering from photos or videos.
Generative AI can therefore be seen as an upper-funnel channel where presence itself has strategic value. The more frequently a brand is surfaced in trusted responses, the more likely it is to occupy a place in the user’s consideration set later on. For marketers, tracking this presence becomes a way to benchmark competitiveness and identify opportunities for brand reinforcement long before a customer makes a purchase decision.
In many cases, AI-driven traffic acts as a natural qualifier, filtering in audiences who already trust the AI’s recommendation and are therefore more primed to convert. Ahrefs, for example, has found that even though the share of visitors from AI search remains small, their impact on conversions is disproportionately strong.
This shift marks the start of a new era of discoverability. While monetization models are still evolving, the role of visibility in AI is already taking shape, and brands that establish an early foothold will be better positioned as generative AI becomes a standard gateway to information.
At the same time, the ripple effects are already being felt across the media and advertising landscape. According to Similarweb, major publishers including CNN, Business Insider, and HuffPost saw traffic fall by 30–40% year-over-year in July, as AI tools increasingly answered user questions without driving clicks to their sites.
This is creating a supply crunch for demand-side platforms (DSPs), with players such as The Trade Desk and Viant adjusting their strategies. Viant notes that 45% of its ad spend is now directed toward connected TV instead of traditional display inventory. Programmatic buying is undergoing a fundamental recalibration, as long-standing models of audience targeting and lead prediction are breaking down in the face of less predictable search behavior. Some analysts expect this could spark bidding wars for what remains of premium inventory, pushing CPMs higher and concentrating more advertising dollars within walled gardens like Meta and Google’s owned properties.
The decline in publisher traffic and the inflation of programmatic inventory both point to a single trend: attention is moving. As discovery shifts from the open web into AI-driven experiences, visibility is being redistributed rather than lost. For brands, tracking that redistribution is the only way to stay aligned with where consumers actually see and trust information.
Where should brands put their focus when it comes to AI optimization?
The reality is that AI optimization touches nearly every aspect of how content is discovered. Mentions and citations now surface in AI-generated summaries, chatbot responses, AI search, and agent-driven answers. Marketers are already swimming in acronyms such as AIO, AEO, GEO, LLMO/LLMEO, and AISO, each promising to capture a piece of this new landscape.
Here’s a sample of the acronym soup you’re likely to find, whether on landing pages or making the rounds on Reddit:
But let’s focus on the few that stuck. The terms that matter most today are GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization). While the language may differ, with some using GEO, AIO, or LLME interchangeably, the focus is on the same principle: ensuring brands show up where AI delivers answers. Myles Younger, Chief Growth Officer at U of Digital, gave the clearest breakdown we’ve seen on the subtle difference between GEO and AEO. Myles pointed out that while AEO is still tied to the world of traditional search engines (think Google’s zero-click answers, “People Also Ask,” or featured snippets), GEO resides in the new ecosystem of large language models, such as ChatGPT, Claude, Gemini, and Meta AI. Here’s how he laid out other differences:
| AEO | GEO |
|---|---|
| Lives in the context of traditional search | Lives in the context of all these newfangled LLMs: ChatGPT, Claude, Gemini, Meta AI, etc. |
| Partly deterministic | Entirely probabilistic |
| Potentially (and ideally) zero-click | Potentially zero-click |
AEO: Still partly deterministic, relying on site metadata and algorithmic signals of content authority and popularity, essentially much of the traditional search, crawling, and indexing process.
GEO: Entirely probabilistic, shaped by the responses and reasoning of LLMs.
AEO: Potentially (and ideally) zero-click in nature, meaning users don’t need to visit a site or page to get the information they need.
GEO: Also potentially zero-click. LLMs can similarly remove the need to click through to reference sites, videos, posts, or pages.
Brands need a multi-dimensional strategy for AI optimization. Beyond the various types of AI optimization, companies must differentiate their approach across ChatGPT, Perplexity, Google AI Overviews, Gemini, and Claude. Each platform has its own algorithm, which shuffles and prioritizes sources in unique ways, resulting in different answers for the same query. Brands should measure both how often they’re cited across the different AI platforms and which sources dominate those citations, since together these insights guide smarter content and GEO strategies. ChatGPT and Google AI Overviews typically cite only a handful of brands (average 3–4), concentrating visibility on dominant market leaders with the highest authority. Gemini surfaces a moderate number (average 8), mixing top players with secondary brands. Perplexity provides the widest coverage (average 13), returning longer lists that include both leading brands and smaller niche players.
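As a rough illustration of that kind of measurement, the sketch below tallies hypothetical citation logs to compute the average number of brands cited per response on each platform, plus a simple share-of-voice figure. The platform samples and brand names are entirely invented; a real pipeline would populate the logs from actual prompts run against each engine.

```python
# Hypothetical citation logs: brands cited in each sampled response,
# grouped by platform. All names and samples are invented.
logs = {
    "ChatGPT": [
        ["Acme", "Globex", "Initech"],
        ["Acme", "Globex", "Umbrella", "Initech"],
    ],
    "Perplexity": [
        ["Acme", "Globex", "Initech", "Umbrella", "Hooli", "Stark"],
        ["Acme", "Hooli", "Stark", "Wayne", "Globex", "Initech", "Umbrella", "Pied"],
    ],
}

def avg_brands_per_response(responses):
    """Average number of brands cited per sampled response."""
    return sum(len(r) for r in responses) / len(responses)

def share_of_voice(responses, brand):
    """Fraction of responses in which the brand appears at all."""
    return sum(brand in r for r in responses) / len(responses)

for platform, responses in logs.items():
    print(
        platform,
        round(avg_brands_per_response(responses), 1),
        f"Acme share of voice: {share_of_voice(responses, 'Acme'):.0%}",
    )
```

Tracking both numbers over time shows whether a platform concentrates visibility on a few leaders (a low average) or spreads it across many players (a high average), and where your own brand sits in that mix.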
Another factor brands must account for when measuring GEO impact is the audience reach of each AI engine. Brands need to move beyond a flat view of rankings and citations; it’s not enough to ask “Where and how often does my brand show up on each AI engine?” because not all placements carry the same weight. A top spot in a smaller engine like Perplexity doesn’t have the same reach or influence as a mid-tier ranking on ChatGPT, which commands a far larger audience. LLMO Metrics applies this exact method, as Nico Bignu explains:
But beyond brand counts, AI platforms differ significantly in how they source information and what kinds of content they reward:
- ChatGPT relies heavily on authoritative sources, like Wikipedia, highlighting its preference for encyclopedic knowledge and structured factual content, while also incorporating social discourse from Reddit and news media.
- Google AI Overviews draws from blog-style articles, mainstream news, and forums, with YouTube playing a central role in its sourcing—reflecting Google’s integration of video and multimedia into search.
- Google Gemini builds on this pattern by placing a deeper reliance on YouTube and incorporating multimodal content, professional reviews, and structured data, reflecting its broader integration within Google’s ecosystem.
- Perplexity amplifies community-driven knowledge, drawing on Reddit as a core source, while also emphasizing fresh, well-sourced factual content and guides. Its sourcing approach reflects a balance between peer-to-peer insights and high-authority niche content.
- Claude stands apart by emphasizing analytical precision, clarity, and structured reasoning, rewarding content that demonstrates logical depth and interpretability.
| Platform | Key Patterns | Top Cited Sources |
|---|---|---|
| ChatGPT | Authority, facts, encyclopedic knowledge, expertise, brand and tone consistency across the web, non-commercial sources > ecommerce pages | Wikipedia, Reddit, Forbes, G2, TechRadar + established news media |
| Google AI Overviews | Specific landing pages > homepages, forum & community discussions, LinkedIn takes, well-structured listicles | Blog-style articles, mainstream news + Reddit, YouTube, Quora, LinkedIn, Gartner |
| Google Gemini | Professional reviews, mirroring of Google Search AI Overviews, YouTube videos, deep content, broad web coverage | Blogs, news sites, YouTube |
| Perplexity AI | Community discussions, peer-to-peer information, UGC, high-authority niche content, comparisons, guides | Blog/editorial content, news + Reddit, YouTube, Gartner, Yelp, LinkedIn, Forbes |
Across platforms, there are clear shared tendencies in how AI systems source and reward content. All engines favor structured data, comprehensive content, authoritative domains, UGC, and strong E-E-A-T signals (experience, expertise, authority, trust). Content that is crawlable, easy to read, and conversationally structured consistently performs well. AIs also reward consistency in tone and branding, along with comparisons, listicles, and guides that are logically organized and easy to parse. Structured data and schema markup play an essential role here. Microsoft has confirmed that Copilot relies on them to better interpret content, and experts point out that generative AI systems overall tend to favor fresh updates as checkpoints against their training data. A striking commonality is the prominence of Reddit across ChatGPT, Google AI Overviews, and Perplexity, not only because of its peer-to-peer community discussions, but also because Reddit’s format makes it easier for models to extract information efficiently and conserve computing power.
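For teams getting started with structured data, the sketch below shows what schema markup output can look like: a small Python script that emits a schema.org FAQPage block as JSON-LD, the kind of machine-readable structure AI engines parse easily. The question and answer text here are invented placeholders; real markup should describe your actual page content.

```python
import json

# Sketch: emitting schema.org FAQPage markup as JSON-LD.
# The question/answer text is an invented placeholder.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is AI visibility?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "How a brand appears in AI-powered answer engines.",
            },
        }
    ],
}

# JSON-LD is embedded in the page inside a script tag.
snippet = f'<script type="application/ld+json">{json.dumps(faq)}</script>'
print(snippet)
```

The same pattern extends to other schema.org types (Product, Article, HowTo), giving models unambiguous, labeled facts rather than prose they have to interpret.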
While it’s tempting to chase complex strategies across multiple platforms, the foundation of AI visibility is still your own digital real estate. AI systems repeatedly turn to brand-owned properties as their most reliable sources of truth. Ann Smarty, Brand Ambassador at Peec AI, puts it plainly:
The same patterns come through in hands-on case studies and the broader meta-research. Organic Labs’ meta-analysis supports this, drawing on over 10,000 LLM responses, 19 independent studies, and 6 real-world cases to demonstrate which strategies are most effective. Below, you’ll see how these strategies rank:
The fundamentals of LLM optimization are clear: maintain consistent content across the web, keep it structured, comprehensive, authoritative, credible, and easy to process, while adapting to each platform’s sourcing philosophy. The AI visibility providers are already warning about the importance of editorial and brand consistency. Malte Landwehr, CPO & CMO at Peec AI, summed it up well:
Preston Boling from Scrunch AI doubles down, emphasizing that AI visibility isn’t an SEO checkbox; it’s about shaping how your brand is understood everywhere online.
At the same time, misattribution and hallucination in AI citations reinforce the need for distinct, structured brand content. A thorough study by The Tow Center for Digital Journalism at Columbia University reveals that ChatGPT-4o’s search feature misidentifies or misattributes publisher content in the vast majority of tests, even when the publishers have licensing agreements or explicitly allow crawler access. Out of 200 sampled quotes from 20 publishers, ChatGPT was wrong or only partly correct 153 times. It admitted it didn’t know the answer only seven times, preferring instead to hallucinate confident but inaccurate citations.
And it’s not just misattributed content that poses a problem. Beyond incorrectly attaching publisher information, AI systems can generate entirely false destinations. Andrei Țiț, Head of Product Marketing at Ahrefs, makes this clear with his warning about hallucinated pages:
Ahrefs’ research makes it clear which AI tool contributes most to this problem: ChatGPT is the worst offender. Their study found that 1.01% of all URLs clicked from ChatGPT responses returned 404 errors, compared to just 0.15% from Google Search. When looking at all links cited (not just clicked), the gap widens even further: 2.38% of ChatGPT’s cited URLs led to 404s, nearly three times higher than Google’s 0.84%. This shows that hallucinated pages aren’t just an occasional issue; they occur at a measurable and significantly higher rate in AI assistants than in traditional search engines.
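The 404-rate metric itself is simple to reproduce on your own referral logs. A minimal sketch, using invented URLs and hard-coded status codes in place of live HTTP checks:

```python
# Sketch: measuring the hallucinated-link (404) rate from a log of
# cited URLs. Status codes here are made-up stand-ins for real
# HTTP checks against each URL.

cited = [
    ("https://example.com/real-page", 200),
    ("https://example.com/made-up-guide", 404),
    ("https://example.com/pricing", 200),
    ("https://example.com/ghost-post", 404),
    ("https://example.com/blog", 200),
]

def rate_404(results):
    """Share of cited URLs that resolved to a 404."""
    return sum(status == 404 for _, status in results) / len(results)

print(f"{rate_404(cited):.1%} of cited URLs returned 404")
```

Running a check like this against the URLs an AI assistant cites for your brand reveals how often users are being sent to pages that don't exist, and where a redirect or a reclaimed URL could recapture that traffic.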
All these findings highlight just how fragile brand visibility can be in AI-powered search when misinformation and hallucinations are left unchecked. It’s not simply about being seen, it’s about being represented accurately. This challenge is central to the work of Nico Bignu, Cofounder & COO at LLMO Metrics, who has been exploring how to close these gaps in AI visibility:
These distortions undermine the credibility of original reporting and demonstrate how easily brand messaging can be diluted or reshaped when ingested by large language models. That’s why brands must invest in content that is not only well-researched and multimodal but also deliberately structured, clearly branded, and with unified messaging, so it stands a stronger chance of surviving the model’s reinterpretation.
ChatGPT also stands out for its fluctuating referral traffic. According to Profound, ChatGPT referral traffic has dropped by more than 50% since late July. The timing isn’t random; it aligns with a significant shift in how citations are being distributed.
Instead of pulling from a broad mix of sites, ChatGPT is consolidating around a handful of platforms that provide straight, utility-driven answers. Reddit citations jumped +87%, Wikipedia surged over 60%, and together with TechRadar, those three domains alone now capture more than one-fifth of all ChatGPT citations. In other words, millions of potential brand referrals are getting absorbed by just a few “answer-first” giants.
Additionally, relying on sites like Reddit and Wikipedia not only boosts perceived answer quality but also reduces compute costs for OpenAI. Fewer scattered sources mean less processing power needed to fetch, rank, and synthesize results. So the consolidation creates a double incentive: cheaper infrastructure and stronger answers in one move.
OpenAI is rewarding content that directly answers user intent. Reddit threads with real comparisons beat brand pages that just push “Book a demo.” Wikipedia’s structured, factual content beats marketing copy optimized for conversions.
Brands need to rethink their approach. If you want visibility in ChatGPT (and by extension Perplexity, Gemini, Claude, Copilot, etc.), you can’t just publish sales-driven content. You need comparison guides, FAQs, and customer-language explainers: content that answers the actual question in a way users (and the models) can immediately understand.
The volatility is real; a single dial shift caused a 52% collapse in referral traffic within a month. However, the long-term direction is clear: AI ecosystems are consolidating around sources that give definitive and quick answers. Brands that adapt to that shift will still surface. The rest risk disappearing downstream of OpenAI’s experiments.
Not only has referral traffic shifted sharply, but ChatGPT’s alignment with traditional search has also undergone a dramatic change. While most of the industry has been debating whether AI might replace Google, ChatGPT itself has been leaning more on Google, quietly stepping back from Bing in the process. Between April and July, ChatGPT-Google alignment jumped from 12% to 33%, while Bing alignment fell from 26% to just 8%.
That’s a massive swing, and an awkward one, considering Microsoft’s multibillion-dollar stake in OpenAI. Still, it’s not a full handoff: only a fraction of ChatGPT citations map exactly to Google URLs. The majority remain independently sourced, though domain-level overlap has grown to about half.
That increased reliance on Google, however, carried a price: every adjustment on Google’s side now ripples directly into ChatGPT’s citation mix. When Google tweaks how its search results can be crawled, the ripple effects can hit citation patterns overnight. In September, Google quietly removed the “?num=100” parameter, a tool long used by data brokers to pull deeper Google result sets. OpenAI doesn’t crawl Google directly; it buys that data from third-party providers, many of which depended on the now-retired parameter.
Once the parameter disappeared, those providers were limited to the top handful of Google results, leaving sources like Reddit, much of whose content ranks beyond the first 20 results, far less visible in ChatGPT’s answers. The change drove a noticeable drop in Reddit citations even as Wikipedia’s share rose, not because Reddit’s engagement or content changed, but because the RAG pipeline around it did.
At the time of this article’s publication, neither OpenAI nor Google has publicly announced any change specific to Reddit. What appears to have changed is the mix of live retrieval calls: ChatGPT is now running fewer Retrieval-Augmented Generation (RAG) crawls to Reddit. While OpenAI maintains an API deal with Reddit, that access is believed to be used primarily for training data, not for powering real-time reasoning in answers. OpenAI optimizes its live retrieval around lower-cost, easier-to-refresh, and consistently structured sources.
For brands, the Reddit case highlights how fragile AI referral channels can be. A single parameter update outside a brand’s control can tilt the entire citation mix. You can invest heavily in creating the kind of user-generated, comparison-rich content that AI models love, and still watch your citations evaporate if the economic or technical calculus changes upstream. It’s a reminder that success in AI visibility today hinges not only on strong content but also on an awareness of the shifting infrastructure and partnerships that determine which sources get surfaced. The takeaway isn’t to avoid AI visibility, but to build strategies resilient enough to withstand these swings: diversify across platforms, strengthen owned media that can’t be algorithmically sidelined, and treat AI citations as valuable but inherently unstable bonuses rather than the foundation of your visibility strategy.
As the landscape keeps changing underfoot, the brands finding steady ground aren’t the ones waiting for a rulebook — they’re the ones writing it. The next phase of AI visibility will belong to marketers who treat uncertainty as a testing ground, leaning on data and experimentation to uncover what truly works.
In Bogdan Babiak’s words, CMO at SE Ranking, the winning formula is simple:
Taking that idea a step further, Thomas Peham, CEO & Co-Founder at Otterly.AI, stresses that success in AI visibility depends on breaking down silos and aligning every marketing function behind a unified strategy.
How different is AI search from traditional search?
To understand the mechanism behind AI optimization, you first need to recognize a fundamental difference many overlook: AI search doesn’t operate like traditional search.
In the world of Google, rankings tend to be relatively stable. You can monitor positions, track algorithm updates, and count on consistency week to week. Even in 2024, when Google rolled out four core updates, two spam updates, and thousands of smaller search adjustments, the system still behaved in a way that felt predictable. Most of those micro-changes were so slight they went unnoticed, allowing marketers to work with relatively steady benchmarks. AI platforms, by contrast, are built on volatility rather than predictability.
When you ask ChatGPT, Gemini, Claude, or Google AI Overviews the same question twice, you won’t always get the same answer. Each time you ask a question, the models predict the next best word while adding controlled randomness to avoid repetitive or uniform answers.
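That "controlled randomness" is typically implemented as temperature-scaled sampling over the model's next-word scores. The toy sketch below illustrates the idea; the candidate words and scores are invented, and real models sample over vocabularies of tens of thousands of tokens.

```python
import math
import random

# Toy illustration of temperature-scaled sampling: the model scores
# candidate next words, and sampling from the resulting distribution
# means identical prompts can yield different completions.

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Lower temperature sharpens it; higher temperature flattens it."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

words = ["Nike", "Adidas", "Puma"]   # invented candidates for the next word
scores = [2.0, 1.5, 0.5]             # invented model scores

probs = softmax(scores, temperature=0.8)
picks = [random.choices(words, weights=probs)[0] for _ in range(5)]
print(picks)  # varies run to run: usually the top word, but not always
```

Because the top-scored word is only the most *likely* choice, not the guaranteed one, a brand that dominates one response can be absent from the next, which is exactly the instability the drift data below quantifies.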
Even when you ask the same question, the sources AI platforms cite can shift dramatically over time. To illustrate the scale of the challenge, Profound analyzed citation patterns across major platforms between June 11–13 and July 11–13, 2025, running roughly 80,000 open-ended prompts per platform. The study tracked citation drift, the percentage of domains that appeared in July responses but not in June for identical prompts.
The findings were striking: 40–60% of domains changed in just one month. In other words, half of the sites that show up in AI answers today may disappear from them tomorrow. Over longer horizons, the shifts become even more dramatic, with 70–90% of cited domains changing from January to July.
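The drift metric itself is straightforward to compute: compare the sets of domains cited for the same prompt in two periods. A minimal sketch, with invented domain lists standing in for real citation data:

```python
# Sketch of the "citation drift" metric: the share of domains cited in
# the later period that did not appear in the earlier one. Domains are
# invented examples, not actual study data.

june = {"nike.com", "wikipedia.org", "reddit.com", "techradar.com", "forbes.com"}
july = {"wikipedia.org", "reddit.com", "gartner.com", "yelp.com", "techcrunch.com"}

def citation_drift(old, new):
    """Fraction of the new period's cited domains that are newly appearing."""
    return len(new - old) / len(new)

print(f"Drift: {citation_drift(june, july):.0%}")
```

In this invented sample, three of the five July domains are new, a 60% drift, which is within the 40–60% monthly range the study reports.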
This probabilistic design, with its element of controlled randomness, ensures variety, surfaces fresh perspectives, and adapts to new information.
For marketers, the implications are profound. Traditional SEO operates within relatively stable benchmarks: Google may push four or five core updates a year, along with thousands of minor changes, but rankings still provide a consistent reference point. AI optimization is different. In AI search, benchmarks shift too quickly to make snapshot measurements, which means GEO strategies must focus less on chasing positions and more on building durable signals of authority, trust, and adaptability. Success instead depends on continuous data collection, aggregating signals for statistical significance, adapting to platform-specific drift, and tracking directional trends over time.
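One way to operationalize "continuous data collection" and "directional trends" is to run the same prompt set repeatedly, aggregate the hits into a mention rate, and smooth across periods so you read the trend rather than any single volatile snapshot. A minimal sketch with hypothetical run data (the function names and sample values are illustrative):

```python
from statistics import mean

def mention_rate(runs):
    """Fraction of repeated prompt runs in which the brand was mentioned."""
    return sum(runs) / len(runs)

def moving_average(series, window=3):
    """Smooth per-week mention rates to expose the directional trend."""
    return [mean(series[max(0, i - window + 1):i + 1]) for i in range(len(series))]

# Hypothetical data: 1 = brand mentioned in that run, 0 = not mentioned.
weekly_runs = [
    [1, 0, 1, 1, 0],  # week 1
    [0, 1, 1, 0, 0],  # week 2
    [1, 1, 1, 0, 1],  # week 3
    [1, 1, 1, 1, 0],  # week 4
]
rates = [mention_rate(week) for week in weekly_runs]
print(moving_average(rates, window=3))
```

Any single week here swings between 40% and 80%, but the smoothed series changes far less, which is the point: with AI answers, the aggregate trend is the signal and the individual snapshot is mostly noise.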
AI visibility is not determined by keyword rankings, backlinks, or technical SEO alone. True AI optimization now hinges on how well teams coordinate across channels, merging SEO, PR, content, social, and product strategy into a unified approach.
Another layer to this difference is the format bias of LLM-based AI search. Because they are text-first systems, the way information is written, structured, and labeled has an outsized impact on whether it surfaces in AI answers. Visuals, videos, and even metadata matter less unless they are transformed into text that the model can parse and reuse. This is why practitioners emphasize the importance of treating text not just as content, but as the primary channel for optimization in AI search. Nico Bignu from LLMO Metrics adds that the dominance of text shapes how every other format (video, audio, or image) is interpreted.
Should brands prioritize AI visibility over SEO?
The internet is full of self-styled “AI specialists” and opportunistic vendors pushing the idea that AI search is a zero-sum game against Google. Many of them have vibe-coded their AI optimization products on Lovable, framing SEO as obsolete, and pitching themselves as the only way forward.
But that narrative doesn’t hold up against the data. Traditional search isn’t dead — in fact, it still has room to grow. While a ChatGPT answer might occasionally steal a click from a publisher, the bigger picture is expansion, not substitution of search. As Semrush (working with Datos’ clickstream data) points out, when new users adopt ChatGPT, their Google search activity actually rises. On average, users increase from about 10.5 Google searches per week to 12.6, while adding around five ChatGPT sessions. Far from cannibalizing search, AI is broadening the ecosystem and layering on top of it.
Looking at overall usage trends gives an even clearer picture. Across millions of U.S. devices, more than 95% of users still rely on traditional search engines each month, a figure that has barely shifted in the past two and a half years. At the same time, AI tool usage has grown sharply, rising from 8% in early 2023 to 38% by mid-2025.
Among heavy users (people who run more than 10 searches a month), traditional search is holding strong. The share of U.S. desktop devices in this group inched up from 84% in early 2023 to 87% in early 2025, showing steady growth in an already ingrained habit.
AI tools are seeing their own lift. The share of heavy users increased from 3% in January 2023 to 21% by June 2025, with the fastest gains occurring early; growth in AI search is now starting to level off.
But this two-sided landscape isn’t just about how often people search; it’s also about the quality of those visits. Siege Media’s 2024–2025 GA4 benchmark data, based on engagement data from 304 client properties collected between January 2024 and July 2025, found that traffic from ChatGPT averaged a 63.16% engagement rate, slightly higher than organic search at 62.09%. These AI-driven visitors usually arrive further along in the journey: they’ve already received a high-level answer from the AI and click through when they need deeper context or expert detail. That makes them highly qualified, but also harder to predict since AI referrals swing more month to month. And ChatGPT isn’t the only powerful source of this kind of traffic. The same study also shows that channels such as paid shopping and affiliates attract highly motivated users.
Traditional search remains the foundation of stable, predictable growth, while AI visibility and other high-intent channels add an extra layer of opportunity. This dual reality is prompting even AI platforms to reassess how they engage with users outside of AI chats. Perplexity has taken an inventive approach to boosting its visibility in Google search. For trending news topics, the platform automatically generates public webpages, which are now being indexed by Google and appearing in search results. When users click on these results, they are taken to a summary page with the option to ask follow-up questions in the chatbot.
This method differs from ChatGPT’s earlier experiment, where indexed links came from user-shared chats. Perplexity’s approach is more structured, relying on programmatically generated pages rather than user content. And it’s not just Perplexity leaning into SEO. OpenAI itself is signaling the same intentions. The company recently posted a $310,000–$393,000 content strategist role for ChatGPT.com, looking for someone with strong SEO and growth instincts. The move directly challenges the narrative that search optimization is dead in the age of AI. If Perplexity and ChatGPT.com are investing in SEO, shouldn’t brands?
This balance between long-term SEO stability and the new demands of AI visibility isn’t about chasing hacks; it’s about doubling down on fundamentals while adapting to new platforms. In practice, the brands that thrive are those that take a disciplined approach: investing in content that builds trust, leaning into authentic engagement, and staying consistent even as algorithms shift. That’s the very approach Kelsey Platt, SVP of Business Development at Evertune, says is winning today.
Final thoughts
The takeaway is clear: the future of content strategy is hybrid. SEO remains the anchor for predictable, long-term growth, while AI search and other high-intent channels create new ways to reach motivated audiences. The real challenge for brands is learning how to balance these forces and adapt as the consumer journey becomes more fragmented with the emergence of AI chats, AI recommendation layers, and other new marketing channels.
This theme continues in AI Accelerator, a multimodal, multiseries course focused on AI in marketing and advertising. In Workshop 4: The Future of AI in Advertising, Kelsey Platt from Evertune will demonstrate to brand managers, marketing professionals, and advertising experts how to measure AI’s impact on brand perception, adopt innovative marketing strategies, collaborate with agents, and stay ahead as tools and user behavior evolve.
By connecting the lessons of AI visibility with the broader shifts in advertising, brands can see how today’s hybrid content strategies are just the beginning of a larger AI transformation.
FAQs
What is the best way for brands to manage AI visibility measurement?
Brands can measure AI visibility in several ways:
- hire a GEO specialist to work alongside your SEO, content, and marketing teams, keeping AI visibility measurement in-house with full control, while leveraging tools like Ahrefs and Profound (though this requires training and ongoing expertise)
- outsource to AI-specialized agencies that focus on AEO/GEO and tie results to business goals
- work with independent consultants who offer flexible, hands-on audits and optimization
- partner with traditional marketing agencies that bring AI expertise and can integrate visibility into wider campaigns
- go for the hybrid option, which is to train internal SEO specialists to use AI visibility platforms while staying agile
The best approach depends on whether the brand prioritizes control, expertise, or scalability.
What are the tools for AI optimization?
There are two main options: AI visibility and SEO platforms (like Ahrefs, Semrush, or SE Ranking) for tracking mentions, share of voice, and performance, and specialized AI visibility tools (like Profound, Peec, or Evertune) that monitor citations, sentiment, and competitor benchmarks across AI engines.
What are the best AI visibility and GEO tools right now?
The space is already overcrowded for such a nascent technology, with many experimental or “vibe-coding” projects floating around. Here’s a roundup of the platforms brands consider the most reliable picks: Profound, Peec AI, Otterly AI, AthenaHQ, Scrunch AI, Addlly AI, Mentions, LLMO Metrics, ZipTie.dev, Hall, Bluefish AI, Ahrefs Brand Radar, Azoma, Waikay, Gumshoe, Octolens, Relixir, Xfunnel, Anvil, Goodie, Evertune, and RankScale.