Measure Traffic from LLM Platforms
A practical guide to tracking traffic from ChatGPT, Gemini and other LLMs in GA4 so you can measure AI-driven visibility and optimize your content strategy.
As of February 2025, over 400 million people use ChatGPT every week. Meanwhile, about 1 in 4 Google searches result in an AI Overview. A growing share of search is happening inside large language model (LLM) platforms, and that shift is rewriting how users find and interact with your brand.
Unlike traditional search traffic, LLM-driven interactions don't lead to pages of blue links. Instead, users get concise, conversational answers: often synthesized, cited, and resolved in a single exchange. LLM-driven traffic behaves fundamentally differently from traditional search traffic. If you're measuring it the same way, you're likely missing what matters.
Brands need a new measurement model that reflects how LLM platforms filter, frame, and forward users toward final destinations.
In this guide, we'll break down how LLM traffic works, why it's different from traditional search, and how to adapt your analytics for this discovery flow.
The compressed AI search funnel
In LLM-driven search experiences like ChatGPT, Perplexity, and Google's AI Overviews, the traditional multi-step funnel (discovery, evaluation, and selection) is often compressed into a single interaction. Rather than clicking through a list of links to answer their questions, users engage directly with synthesized responses that surface key sources and highlight recommended brands. Visibility and trust are established within the AI's response window before a user even visits a website.
Hereâs a high-level overview of how user dynamics change based on the search interface.
In summary, the LLM search funnel is more compressed and orchestrated by the AI. Traditional search funnels often branch out (many clicks, sites, and queries), whereas LLM funnels tend to either resolve on the spot or hand the user off to one definitive source. This leads to key differences for LLM search:
- Fewer impressions/clicks for publishers, but those clicks can be more qualified.
- Longer in-platform engagement (chatting).
- Potentially faster conversions when they do occur (since the AI might deliver users at a later stage of intent).
Why measuring AI-driven traffic matters
In traditional search, traffic volume was part of the brand story. Even high-funnel visits (users skimming blog posts or casually exploring your category) signaled awareness and potential future intent. But that model doesn't cleanly apply in an LLM-first world.
Platforms like ChatGPT, Perplexity, Claude, and Google's AI Overviews act more like filters than discovery engines. They summarize, compare, and cite a narrow set of sources. By the time someone clicks through to your site, they've often already consumed a condensed version of your value proposition. In many cases, the visit isn't part of discovery; it's part of validation or decision-making. That shift can lead to:
- Shorter, more focused sessions
- Fewer pageviews, but clearer intent
- Lower traffic volumes but sometimes higher value per visit
It's critical to recognize that LLM traffic doesn't follow a uniform pattern. User behavior varies significantly based on query type, industry, and funnel stage. Several independent analyses reflect this complexity:
- A 2025 Salt Agency analysis found that LLM traffic generally had lower conversion rates than traditional organic; however, for some verticals and query types, LLM referral conversion rates were higher than traditional organic search.
- In another small-scale analysis, Knotch reported that although LLM traffic made up just 0.13% of total site visits, it accounted for 0.28% of conversions, over 2× its traffic share, compared to a 1.7× ratio for traditional search.
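One way to make ratios like these comparable across channels is a simple conversion index: a channel's share of conversions divided by its share of traffic. A minimal Python sketch using the Knotch figures above (the function name is ours, not from any analytics library):

```python
def conversion_index(traffic_share: float, conversion_share: float) -> float:
    """Ratio of a channel's conversion share to its traffic share.
    Values above 1.0 mean the channel converts above its traffic weight."""
    return conversion_share / traffic_share

# Figures reported by Knotch: LLM referrals were 0.13% of visits
# but 0.28% of conversions.
llm_index = conversion_index(0.0013, 0.0028)

print(round(llm_index, 2))  # -> 2.15, vs. the reported 1.7 for traditional search
```

An index like this is useful precisely because it normalizes away raw volume, which is the metric LLM traffic looks weakest on.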
The key is not to treat this traffic like general organic sessions. LLM-driven traffic behaves differently and should be measured differently. Segmenting it reveals how the funnel is evolving and where new opportunities for intent-driven engagement may be emerging.
Rethink your traffic goals
To adapt your analytics to this compressed funnel, shift focus from volume to performance quality. Track metrics that reveal how AI-origin traffic behaves:
- Number of AI referrals. How many site visits are coming from platforms like ChatGPT, Perplexity, Claude, or Google AI Overviews?
- Top landing pages for AI traffic. Which pages are most frequently cited or clicked on from AI-generated responses? Are users arriving through product pages, FAQs, or third-party reviews?
- Conversion rate per AI session. Are visitors from AI platforms converting at higher rates than your other organic traffic?
- % of AI sessions that trigger meaningful actions. Track purchases, signups, demo requests, or other KPIs that reflect intent fulfillment.
- Average time to conversion. How quickly do these users complete a meaningful action after landing on your site?
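The metrics above are straightforward to compute once sessions are exported from your analytics platform. A minimal Python sketch over hypothetical session records (the field names and sample values are illustrative, not GA4's actual export schema):

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class Session:
    source: str                             # referrer hostname
    landing_page: str
    converted: bool
    seconds_to_conversion: Optional[float]  # None if no conversion

# Hypothetical exported sessions.
sessions = [
    Session("chat.openai.com", "/pricing", True, 180.0),
    Session("perplexity.ai", "/docs/faq", False, None),
    Session("chat.openai.com", "/pricing", True, 240.0),
]

AI_SOURCES = {"chat.openai.com", "perplexity.ai"}
ai_sessions = [s for s in sessions if s.source in AI_SOURCES]

referrals = len(ai_sessions)                                    # number of AI referrals
conversion_rate = sum(s.converted for s in ai_sessions) / referrals
avg_time_to_conversion = mean(
    s.seconds_to_conversion for s in ai_sessions if s.converted
)

print(referrals, round(conversion_rate, 2), avg_time_to_conversion)
```

Grouping the same records by `landing_page` gives the top-landing-pages view; swapping `converted` for any KPI event gives the meaningful-actions share.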
How to model and measure this funnel
To make LLM traffic measurable and actionable, treat it as a distinct acquisition source with its own funnel characteristics.
- Segment AI referrals
Start by creating a custom traffic segment in your analytics platform. Tag sessions from known LLM referrers:
- chat.openai.com (ChatGPT browser experience)
- perplexity.ai
- searchgenerativeexperience.google.com, bard.google.com, or its successor gemini.google.com
- Other tools that include direct citations or embedded links (e.g., browser extensions, Claude citations)
This lets you isolate performance, behavior, and conversion patterns specific to AI-origin traffic.
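Outside of your analytics platform's UI (for example, in a data warehouse export), a simple classifier over referrer hostnames is often enough to build this segment. The hostname list below is an illustrative starting point, not exhaustive; ChatGPT traffic may also arrive from chatgpt.com:

```python
import re

# Known LLM referrer hostnames; extend this set as new platforms emerge.
LLM_REFERRER_PATTERN = re.compile(
    r"(^|\.)(chat\.openai\.com|chatgpt\.com|perplexity\.ai|"
    r"gemini\.google\.com|claude\.ai)$"
)

def is_llm_referral(referrer_host: str) -> bool:
    """Classify a session's referrer hostname as AI-origin or not."""
    return bool(LLM_REFERRER_PATTERN.search(referrer_host.lower()))

print(is_llm_referral("chat.openai.com"))   # -> True
print(is_llm_referral("www.google.com"))    # -> False
```

Anchoring the pattern on the hostname boundary (`(^|\.)` and `$`) avoids false positives from lookalike domains.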
- Define funnel stages based on behavior, not pageviews
Traditional funnels track how users move across pages. But LLM-driven traffic doesn't navigate in the same way because it arrives filtered and more focused. While the stages still resemble entry, engagement, and decision, the user psychology and velocity are fundamentally different.
LLMs collapse the exploration phase by doing the comparison work for the user. That means your funnel shouldn't model how far users go; it should model what they came to do and how quickly they do it.
→ Entry (pre-qualified arrival): A small percentage of users click a citation from an AI platform. This is often the only link they'll click. These sessions typically begin mid-funnel on a product page, comparison article, or feature explainer after the LLM has already done the filtering.
→ Engagement (rapid validation): The user scans for quick signals that match what the AI described (credibility, pricing clarity, or social proof). They're not exploring broadly; they're validating narrowly.
→ Decision (convert or exit): If your content aligns with the AI summary and meets the user's expectations, conversion can happen in-session. If not, they may bounce or return later via branded or direct search. Use UTM parameters and session stitching to track these cross-touchpoint journeys.
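For links you control (for example, URLs you distribute through AI-platform integrations or structured data), appending UTM parameters is the standard way to make later touchpoints stitchable. A minimal sketch using Python's standard library; the parameter values are illustrative:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_utm(url: str, source: str,
             medium: str = "referral", campaign: str = "llm") -> str:
    """Append UTM parameters so later branded/direct visits
    can be stitched back to the original AI touchpoint."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # preserve any existing parameters
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_utm("https://example.com/pricing", "chatgpt"))
# -> https://example.com/pricing?utm_source=chatgpt&utm_medium=referral&utm_campaign=llm
```

Note that organic AI citations won't carry your UTMs; this only applies to destinations whose URLs you publish yourself.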
- Use funnel analysis tools to track drop-off or acceleration
Funnel exploration tools can surface key differences in how LLM traffic behaves:
- Do AI visitors skip site navigation entirely and go straight to CTAs?
- Is time-on-page lower, but conversion rate higher?
- Are bounce rates misleading because sessions are short but successful?
For example, you may find that Perplexity-referred users convert 2× faster than SEO traffic but only view one page.
- Compare AI traffic side-by-side with traditional organic and paid
Finally, benchmark LLM-driven sessions against your other acquisition channels:
Is it outperforming branded search for conversion rate? For example, Claude sessions might convert to demo requests in a single visit, while branded organic traffic takes multiple touchpoints.
Does it generate higher revenue per session? You might find that Perplexity users spend less time on site, but have a higher checkout rate or larger cart size.
Are average order values higher or lower? This can indicate whether AI platforms are driving more qualified leads or simply different buyer segments.
This kind of benchmarking helps justify content updates. More importantly, it gives you the clarity to deploy budget more efficiently, doubling down on the pages, channels, and formats that AI platforms are elevating. Here's a theoretical example: A SaaS brand found that AI-referred visitors from chat.openai.com had a 32% higher free trial conversion rate than both paid search and email traffic. As a result, they reallocated budget toward optimizing the pages most frequently cited by ChatGPT, including adding social proof above the fold and simplifying their pricing explanation.
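Benchmarking like this reduces to a small aggregation over per-session revenue. A Python sketch on made-up numbers; the channel names and figures are illustrative only:

```python
from collections import defaultdict

# Hypothetical per-session records: (channel, revenue). Zero revenue = no order.
sessions = [
    ("llm_referral", 120.0), ("llm_referral", 0.0),
    ("organic_search", 80.0), ("organic_search", 0.0), ("organic_search", 0.0),
]

totals = defaultdict(lambda: {"sessions": 0, "orders": 0, "revenue": 0.0})
for channel, revenue in sessions:
    t = totals[channel]
    t["sessions"] += 1
    t["revenue"] += revenue
    t["orders"] += 1 if revenue > 0 else 0

for channel, t in totals.items():
    rps = t["revenue"] / t["sessions"]                        # revenue per session
    aov = t["revenue"] / t["orders"] if t["orders"] else 0.0  # average order value
    print(channel, round(rps, 2), round(aov, 2))
```

Even on toy data, the pattern shows why the two metrics diverge: the LLM channel here earns more per session despite far fewer sessions, which is exactly the volume-versus-quality trade-off discussed above.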
Final thought
This isn't just a new referrer category; it's a new acquisition archetype. AI platforms have redefined how users arrive, what they expect, and how fast they act. The brands that win will be the ones who measure this shift early and build for it intentionally.