Measure Your AI Search Visibility Score
A new framework for measuring your AI search visibility score—helping brands quantify how often and how well they show up in AI-generated search results.
AI platforms like ChatGPT, Perplexity, Claude, and Gemini are quickly becoming the front door to search. But unlike traditional search engines, where users could scan results, click around, and form their own conclusions, AI search generates a single, synthesized answer. That answer might include your brand. Or it might not.
These platforms stitch together responses from high-authority media, brand websites, and user-generated content. Your presence or absence in these moments is shaped by signals you can influence, but only if you know where you stand.
This guide will help you audit your brand’s visibility, diagnose what’s limiting your presence, and take action to show up more often, more consistently, and in the right context.
What is an LLM visibility score?
In AI-powered search, there’s no fixed #1 result. Visibility is fluid. Your brand might appear in some responses but be left out in others, and when your brand does show up, its placement can vary widely within the output: at the top, buried in the middle, or mentioned briefly at the end.
An LLM “visibility score” measures how frequently and consistently your brand appears in AI-generated responses across major large language model (LLM) platforms for a given search intent. Ultimately, it’s the percentage of responses that mention your brand, out of all tested prompt, platform, and generation combinations.
Visibility is assessed across three dimensions (sketched in code after this list):
- By platform: How often is your brand mentioned across key generative platforms like ChatGPT, Perplexity, Claude, Gemini, and others?
- By intent cluster: How consistently does your brand show up across variations of a core query? For example, an intent cluster for dining in NYC could include prompts like “Where should I eat in NYC?” or “What are the top-rated restaurants in New York City?”
- By generation variability: AI responses can differ even when the exact same prompt is entered multiple times. This metric tracks how reliably your brand appears across repeated generations of the same question.
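To make these dimensions concrete, here’s a minimal sketch in Python of how each tested response could be recorded so it can later be sliced by platform, intent cluster, and generation. The structure and names are hypothetical, not any specific tool’s API:

```python
import re
from dataclasses import dataclass

@dataclass
class ResponseRecord:
    """One AI-generated answer for a single (prompt, platform, generation) cell."""
    platform: str    # e.g., "ChatGPT", "Perplexity"
    prompt: str      # the exact prompt variation sent
    generation: int  # 1..N for repeated generations of the same prompt
    text: str        # the raw response text

def mentions_brand(record: ResponseRecord, brand: str) -> bool:
    """Case-insensitive whole-word match. A production pipeline would also
    catch aliases and common misspellings (e.g., "Amex" for "American Express")."""
    return re.search(rf"\b{re.escape(brand)}\b", record.text, re.IGNORECASE) is not None
```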
Calculating your brand visibility score
Here’s a simplified example to illustrate what that looks like:
Brand Visibility Across LLMs for “What’s the Best Men’s Running Shoe?” Query
*Note: These are fabricated scores for illustrative purposes only.*
Each score represents the percentage of AI responses in which a brand was mentioned across multiple variations of the core query (“What’s the best men’s running shoe?”) and multiple generations per variation. In this example, the visibility score would be calculated by running:
- 10 semantically similar prompt variations (e.g., “Best running shoes for men 2025,” “Shoes for marathon training,” and “Most popular men’s running shoes”)
- Across 4 LLM platforms (ChatGPT, Perplexity, Gemini, Copilot)
- With 3 generations per prompt (to account for AI response variability)
That totals 10 prompts × 4 platforms × 3 generations = 120 responses per brand.
In the table above, a visibility score of 0.86 for Nike under ChatGPT means Nike was mentioned in 86% of the 30 ChatGPT generations across those 10 prompt variations.
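Here’s what that aggregation might look like in code, reusing the hypothetical `ResponseRecord` and `mentions_brand` from the earlier sketch. The numbers in the closing comment are illustrative, not real measurements:

```python
from collections import defaultdict

def visibility_scores(records: list, brand: str):
    """Share of responses mentioning the brand, overall and per platform."""
    hits, total = 0, 0
    by_platform = defaultdict(lambda: [0, 0])  # platform -> [hits, total]
    for r in records:
        mentioned = mentions_brand(r, brand)
        hits += mentioned
        total += 1
        by_platform[r.platform][0] += mentioned
        by_platform[r.platform][1] += 1
    overall = hits / total
    per_platform = {p: h / t for p, (h, t) in by_platform.items()}
    return overall, per_platform

# Over 10 prompts x 4 platforms x 3 generations = 120 records, a brand
# mentioned in 26 of ChatGPT's 30 generations scores 26/30 ≈ 0.87 there.
```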
daydream helps companies with this process, tracking visibility scores over time and layering in deeper diagnostics like the following (sketched in code after the list):
- Sentiment analysis (positive, neutral, negative mentions)
- Hallucination rate (how often the AI gives incorrect information about your brand)
- Sentiment deviation (how tone shifts depending on query phrasing)
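Here’s a rough sketch of how those diagnostics could be tallied on top of the same response records. `classify_sentiment` and `contains_factual_error` are stubs standing in for whatever classifier or human-review step you use; none of this reflects daydream’s actual implementation:

```python
def classify_sentiment(text: str, brand: str) -> str:
    """Stub: swap in an LLM judge or human review.
    Returns "positive", "neutral", or "negative"."""
    return "neutral"

def contains_factual_error(text: str, brand: str) -> bool:
    """Stub: fact-check the response's claims about the brand."""
    return False

def brand_diagnostics(records: list, brand: str) -> dict:
    """Sentiment mix and hallucination rate across responses that mention the brand."""
    sentiments, errors, mentions = [], 0, 0
    for r in records:
        if not mentions_brand(r, brand):
            continue
        mentions += 1
        sentiments.append(classify_sentiment(r.text, brand))
        errors += contains_factual_error(r.text, brand)
    return {
        "sentiment_mix": {s: sentiments.count(s) / mentions for s in set(sentiments)},
        "hallucination_rate": errors / mentions if mentions else 0.0,
    }
```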
How to act on your brand visibility score
Different prompt types require different strategies to influence how often your brand appears in generative search. To act on your visibility score, start with the most important question: What type of query are you analyzing?
Step 1: Start with the prompt type
Before you try to “fix” your visibility score, you need to understand what kind of query you’re trying to influence.
Is it a head term + product recommendation-style prompt?
These are high-volume searches like “best credit cards for travel” or “best running shoes.” These queries typically surface a narrow set of high-authority, third-party sources (think major media outlets or top-ranking industry sites). A recent analysis from Scrunch AI shows that 90% of LLM responses to non-branded queries come exclusively from third-party domains. That means if your brand name isn’t in the query, your owned content (like a blog post or product page) likely won’t appear.
In these cases, LLMs tend to cite sources that already have domain-wide authority and third-party credibility. Instead of creating intent-matching content to gain visibility, you’ll need to earn your way in. To illustrate this, let’s take a look at a query that would be considered a “head term” in traditional SEO:

The query “what are the best credit cards for airline miles?” didn’t surface branded content from Capital One or Chase, even though those brands were recommended. Instead, the cited sources were third-party publications like 10X Travel, Forbes, and NerdWallet. For an issuer like American Express to make it into the answer, it would need to pitch its cards directly to these outlets for inclusion in their “top cards for air miles” lists.
If you're wondering, “Why wouldn’t a high-domain-authority brand like American Express (DA: 91) be cited directly?” you’re asking the right question. The reality is: it’s not just about your domain authority. It’s about how LLMs weigh perceived impartiality and source diversity. As the Scrunch analysis (noted above) demonstrated, LLMs tend to prefer third-party publishers when responding to broad, non-branded prompts, likely because these sources are seen as more objective.
So even if American Express offers one of the best cards for airline miles, the model is more likely to cite a roundup from NerdWallet or Forbes than a self-authored product page. To show up in the answer, Amex needs to be mentioned within those third-party sources, which means leaning on PR, pitching, and strategic partnerships.
Is it a head term + broad, informational prompt?
These are queries like “latest research on generative AI” or “how climate change is impacting agriculture.” LLMs tend to surface objective information from academic papers, government reports, nonprofit organizations, and top-tier journalism.
To earn visibility here, your brand must act like a source, not a marketer. Think original research, proprietary data, or expert commentary that others cite. For example, American Express publishes original research on global travel trends. When someone searches “what’s the latest research on global travel trends?”, American Express is among the top-cited results alongside BBC and U.S. Travel.

Is it a long-tail or use-case-driven prompt?
These prompts are more specific and often reflect a clear need or situational constraint — for example, “credit cards that allow same-day spending” or “business cards with no hard credit check.”
This is where branded content has an advantage.
- Third-party publishers rarely go deep on use cases. They focus on broad comparisons or roundup-style content, not niche use cases or edge-case functionality.
- You’re the best source to answer the question directly. When someone searches for something your product actually does, and your content is structured to match the way that question is asked, you're not just relevant — you're uniquely qualified.
Let’s take a look at a more specific, long-tail prompt relevant to the credit card example above.

The query “Which cards allow me to apply and spend the same day?” still delivers content from third-party publishers like The Points Guy and WalletHub, but also includes Capital One’s branded page because it directly addresses the user’s intent.
LLMs are trained to surface the most direct, helpful, and trustworthy answer. So when your content clearly addresses a high-intent, specific query, you increase your chances of being included in the response.
Step 2: Diagnose the gaps
Now that you understand the type of query you're analyzing, use your LLM visibility scores to assess where and why your brand might be missing.
Step 3: Monitor over time
LLM visibility isn’t static. It shifts as platforms update their models, retrieval sources, and how they weigh citations.
Set up a regular cadence (e.g., monthly or quarterly) to re-run your LLM visibility tests using tools like Scrunch, Profound, or a custom framework. Over time, track the metrics below (a minimal diffing sketch follows the list):
- Net visibility change: Is your brand appearing in more responses overall?
- Platform-specific trends: Are you gaining visibility on one LLM but losing on another?
- Query-level improvements: Are your owned/earned efforts driving more consistent mentions for the prompts you care about?
- Sentiment shifts: Is the tone of AI-generated mentions improving?
- Recommendation rates: Are LLMs actively recommending your brand more often?
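One lightweight way to track these is to snapshot each run’s per-platform scores and diff them against the previous run. A minimal sketch, with made-up numbers:

```python
def visibility_deltas(previous: dict, current: dict) -> dict:
    """Per-platform change between two audit runs (platform -> visibility score)."""
    return {
        platform: round(current.get(platform, 0.0) - previous.get(platform, 0.0), 2)
        for platform in sorted(set(previous) | set(current))
    }

# Example: quarterly re-runs of the same prompt set (fabricated scores)
q1 = {"ChatGPT": 0.70, "Perplexity": 0.55, "Gemini": 0.40}
q2 = {"ChatGPT": 0.80, "Perplexity": 0.50, "Gemini": 0.45}
print(visibility_deltas(q1, q2))
# {'ChatGPT': 0.1, 'Gemini': 0.05, 'Perplexity': -0.05}
```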
Build a strategy that lasts
As AI continues to reshape how people search, evaluate, and make decisions, the most enduring brands will be those that evolve with it, not just react to it. By establishing a baseline today, you’re not just improving how you show up now; you’re building a search strategy that’s durable, adaptable, and built to outlast change.
If you’re ready to start building an SEO strategy for the future, let’s chat.