Measure Your AI Search Visibility Score

A new framework for measuring your AI search visibility score—helping brands quantify how often and how well they show up in AI-generated search results.

Jun 12

AI platforms like ChatGPT, Perplexity, Claude, and Gemini are quickly becoming the front door to search. But unlike traditional search engines, where users could scan results, click around, and form their own conclusions, AI search generates a single, synthesized answer. That answer might include your brand. Or it might not.

These platforms stitch together responses from high-authority media, brand websites, and user-generated content. Your presence or absence in these moments is shaped by signals you can influence, but only if you know where you stand.

This guide will help you audit your brand’s visibility, diagnose what’s limiting your presence, and take action to show up more often, more consistently, and in the right context. 

What is an LLM visibility score? 

In AI-powered search, there’s no fixed #1 result. Visibility is fluid. Your brand might appear in some responses but be left out in others, and when your brand does show up, its placement can vary widely within the output: at the top, buried in the middle, or mentioned briefly at the end.

An LLM “visibility score” measures how frequently and consistently your brand appears in AI-generated responses across major large language model (LLM) platforms for a given search intent. Ultimately, it’s the percentage of responses in which your brand is mentioned, out of all tested prompt/platform combinations.

Visibility is assessed across three dimensions:

  • By platform: How often is your brand mentioned across key generative platforms like ChatGPT, Perplexity, Claude, Gemini, and others?
  • By intent cluster: How consistently does your brand show up across variations of a core query? For example, an intent cluster for dining in NYC could include prompts like “Where should I eat in NYC?” or “What are the top-rated restaurants in New York City?”
  • By generation variability: AI responses can differ even when the exact same prompt is entered multiple times. This metric tracks how reliably your brand appears across repeated generations of the same question.
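
To make these dimensions concrete, here’s a minimal sketch of how a test matrix could be structured before any scoring happens. The cluster name, prompts, platform list, and field names are illustrative, not a fixed schema:

```python
# Illustrative test matrix: one intent cluster, several prompt variations,
# the platforms to query, and how many generations to run per prompt.
test_matrix = {
    "intent_cluster": "dining in NYC",
    "prompts": [
        "Where should I eat in NYC?",
        "What are the top-rated restaurants in New York City?",
        "Best places to eat in Manhattan",
    ],
    "platforms": ["ChatGPT", "Perplexity", "Claude", "Gemini"],
    "generations_per_prompt": 3,  # re-run each prompt to capture variability
}

total_responses = (
    len(test_matrix["prompts"])
    * len(test_matrix["platforms"])
    * test_matrix["generations_per_prompt"]
)  # 3 prompts x 4 platforms x 3 generations = 36 responses to collect
```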

Calculating your brand visibility score

Here’s a simplified example to illustrate what that looks like:

Brand Visibility Across LLMs for “What’s the Best Men’s Running Shoe?” Query

| Brand | ChatGPT | Perplexity | Gemini | Copilot |
| --- | --- | --- | --- | --- |
| Nike | 0.86 | 0.97 | 0.99 | 0.91 |
| ASICS | 0.79 | 0.88 | 0.80 | 0.67 |
| Adidas | 0.54 | 0.49 | 0.66 | 0.70 |
| Hoka | 0.23 | 0.32 | 0.44 | 0.55 |
| Saucony | 0.22 | 0.28 | 0.31 | 0.28 |

*Note: These are fabricated scores for illustrative purposes only. 

Each score represents the percentage of AI responses in which a brand was mentioned across multiple variations of the core query (“What’s the best men’s running shoe?”) and multiple generations per variation. In this example, the visibility score would be calculated by running:

  • 10 semantically similar prompt variations (e.g., “Best running shoes for men 2025,” “Shoes for marathon training,” and “Most popular men’s running shoes”)
  • Across 4 LLM platforms (ChatGPT, Perplexity, Gemini, Copilot)
  • With 3 generations per prompt (to account for AI response variability)

That totals 10 prompts × 4 platforms × 3 generations = 120 responses per brand.

In the table above, a visibility score of 0.86 for Nike under ChatGPT means Nike was mentioned in 86% of the 30 ChatGPT generations across those 10 prompt variations.
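
To make the arithmetic concrete, here’s a rough Python sketch of the counting step. It assumes responses have already been collected from each platform (the API calls are out of scope), and the sample records and naive substring check are purely illustrative:

```python
from collections import defaultdict

# Each record: (platform, prompt, generation_index, response_text).
# `responses` would come from your collection step; two fake records shown.
responses = [
    ("ChatGPT", "Best running shoes for men 2025", 0,
     "Top picks this year include the Nike Pegasus and ASICS Gel-Nimbus..."),
    ("ChatGPT", "Best running shoes for men 2025", 1,
     "Runners tend to favor Hoka and ASICS for daily training..."),
    # ...120 records in the full 10-prompt x 4-platform x 3-generation run
]

def visibility_scores(responses, brand):
    """Return (overall score, per-platform scores) for `brand`."""
    per_platform = defaultdict(lambda: [0, 0])  # platform -> [mentions, total]
    for platform, _prompt, _gen, text in responses:
        per_platform[platform][1] += 1
        per_platform[platform][0] += brand.lower() in text.lower()  # naive check
    mentions = sum(m for m, _ in per_platform.values())
    total = sum(t for _, t in per_platform.values())
    overall = mentions / total if total else 0.0
    return overall, {p: m / t for p, (m, t) in per_platform.items()}

overall, by_platform = visibility_scores(responses, "Nike")
print(f"Overall: {overall:.2f}, by platform: {by_platform}")
```

A plain substring match over- and under-counts in practice (think “Nike” inside “Nikelab”), so a real pipeline would use entity matching, but the scoring math is the same.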

daydream helps companies with this process, tracking visibility scores over time and layering in deeper diagnostics like:

  • Sentiment analysis (positive, neutral, negative mentions)
  • Hallucination rate (how often the AI gives incorrect information about your brand)
  • Sentiment deviation (how tone shifts depending on query phrasing)
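
For a rough sense of how these diagnostics reduce to numbers, the sketch below assumes each brand mention has already been labeled (by a human reviewer or a classifier) with a sentiment and a factual-accuracy flag; the records and field names are illustrative:

```python
# Each labeled mention: sentiment in {"positive", "neutral", "negative"},
# plus a flag for factual errors about the brand in the response.
labeled_mentions = [
    {"sentiment": "positive", "hallucinated": False},
    {"sentiment": "neutral",  "hallucinated": True},
    {"sentiment": "positive", "hallucinated": False},
]

n = len(labeled_mentions)
sentiment_mix = {
    s: sum(m["sentiment"] == s for m in labeled_mentions) / n
    for s in ("positive", "neutral", "negative")
}
hallucination_rate = sum(m["hallucinated"] for m in labeled_mentions) / n
print(sentiment_mix, hallucination_rate)
```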

How to act on your brand visibility score

Different prompt types require different strategies to influence how often your brand appears in generative search. To act on your visibility score, start with the most important question: What type of query are you analyzing? 

Step 1: Start with the prompt type

Before you try to “fix” your visibility score, you need to understand what kind of query you’re trying to influence.

Is it a head term + product recommendation-style prompt? 

These are high-volume searches like “best credit cards for travel” or “best running shoes.” These queries typically surface a narrow set of high-authority, third-party sources (think major media outlets or top-ranking industry sites). A recent analysis from Scrunch AI shows that 90% of LLM responses to non-branded queries come exclusively from third-party domains. That means if your brand name isn’t in the query, your owned content (like a blog post or product page) likely won’t appear.

In these cases, LLMs tend to cite sources that already have domain-wide authority and third-party credibility. Instead of creating intent-matching content to gain visibility, you’ll need to earn your way in. To illustrate this, let’s take a look at a query that would be considered a “head term” in traditional SEO:

The query “what are the best credit cards for airline miles?” didn’t deliver branded content from Capital One or Chase, even though these brands were recommended. Instead, the cited sources were third-party publications like 10X Travel, Forbes, and NerdWallet. For an issuer like American Express to make it into the answer, it would need to pitch its cards directly to these outlets for inclusion in their “top cards for air miles” lists.

If you're wondering, “Why wouldn’t a high-domain-authority brand like American Express (DA: 91) be cited directly?” you’re asking the right question. The reality is: it’s not just about your domain authority. It’s about how LLMs weigh perceived impartiality and source diversity. As the Scrunch analysis (noted above) demonstrated, LLMs tend to prefer third-party publishers when responding to broad, non-branded prompts, likely because these sources are seen as more objective.

So even if American Express offers one of the best cards for airline miles, the model is more likely to cite a roundup from NerdWallet or Forbes than a self-authored product page. To show up in the answer, Amex needs to be mentioned within those third-party sources, which means leaning on PR, pitching, and strategic partnerships. 

Is it a head term + broad, informational prompt? 

These are queries like “latest research on generative AI” or “how climate change is impacting agriculture.” LLMs tend to surface objective information from academic papers, government reports, nonprofit organizations, and top-tier journalism.

To earn visibility here, your brand must act like a source, not a marketer. Think original research, proprietary data, or expert commentary that others cite. For example, American Express publishes original research on global travel trends. When an individual searches “what’s the latest research on global travel trends?” American Express is among the top-cited results alongside BBC and U.S. Travel.

Is it a long-tail or use-case-driven prompt? 

These prompts are more specific and often reflect a clear need or situational constraint — for example, “credit cards that allow same-day spending” or “business cards with no hard credit check.”

This is where branded content has an advantage.

  1. Third-party publishers rarely go deep on use cases. They focus on broad comparisons or roundup-style content, not niche use cases or edge-case functionality.
  2. You’re the best source to answer the question directly. When someone searches for something your product actually does, and your content is structured to match the way that question is asked, you're not just relevant — you're uniquely qualified.

Let’s take a look at a more specific, long-tail prompt relevant to the credit card example above. 

The query “Which cards allow me to apply and spend the same day?” still delivers content from third-party publishers like The Points Guy and WalletHub, but also includes Capital One’s branded page because it directly addresses the user’s intent. 

LLMs are trained to surface the most direct, helpful, and trustworthy answer. So when your content clearly addresses a high-intent, specific query, you increase your chances of being included in the response. 

Step 2: Diagnose the gaps

Now that you understand the type of query you're analyzing, use your LLM visibility scores to assess where and why your brand might be missing. 

DIAGNOSING GAPS FOR HEAD TERMS + RECOMMENDATION QUERIES
Example: “Best running shoes for men”
| Gap | What It Means | How to Fix It |
| --- | --- | --- |
| Low visibility across all models | LLMs don’t consistently associate your brand with this category. | Build presence in authoritative, third-party sources LLMs trust: think “best of” lists, expert roundups, and top editorial coverage. Focus on the sources most frequently cited by LLM platforms. For example, if Nike doesn’t appear for “best running shoes,” it needs a deeper presence in publications that LLMs trust (e.g., Runner’s World or Wirecutter). That means pitching products, highlighting test results, and ensuring inclusion in top gear guides. |
| High variability across clusters | You appear in some prompts (e.g., “best running shoes”) but not in similar ones (e.g., “top fitness shoe brands”). | Expand coverage by mapping semantic clusters around key head terms and ensuring consistent brand association across all of them. Syndicate your messaging and differentiators across PR, partnerships, and UGC. Nike could close this gap by broadening PR angles and expanding third-party mentions across more fitness contexts. |
| Inconsistency across generations | Your brand appears in some AI outputs, but not consistently. Your authority is borderline. | Focus on reinforcing signals that AI models use to determine “default” brand recommendations: authoritative mentions, reviews, press coverage, and schema-optimized cornerstone content. |

DIAGNOSING GAPS FOR HEAD TERMS + INFORMATIONAL QUERIES
Example: “Latest research on running injuries”
| Gap | What It Means | How to Fix It |
| --- | --- | --- |
| Low visibility across all models | Your brand doesn’t have authoritative, comprehensive content on the broader topic. | Create in-depth, well-researched content such as whitepapers, research summaries, expert articles, or educational guides. For example, for “latest research on running injuries,” publish detailed studies, expert interviews, or data analyses that comprehensively summarize current findings. |
| High variability across clusters | Your content covers some parts of the broad topic but misses important subtopics or related questions. | Map the semantic landscape of the topic and build content hubs covering all key aspects, questions, and trends. Link these pieces internally and update regularly. |
| Inconsistency across generations | Your content is sometimes surfaced but lacks consistent authority signals recognized by AI. | Strengthen signals of expertise by referencing credible sources, adding expert authorship, maintaining updated content, and applying schema markup for articles and FAQs. |
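
As one concrete illustration of the schema markup mentioned above, here is a minimal FAQPage JSON-LD payload, written as a Python dict for consistency with the other sketches; the question and answer text are placeholders:

```python
import json

# Minimal FAQPage structured data (JSON-LD); embed the output on the page
# inside a <script type="application/ld+json"> tag.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What are the most common running injuries?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Placeholder answer summarizing your research findings.",
            },
        }
    ],
}

print(json.dumps(faq_schema, indent=2))
```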

DIAGNOSING GAPS FOR LONG-TAIL TERMS AND SPECIFIC USE-CASE QUERIES
Example: “Best running shoes for flat feet”
| Gap | What It Means | How to Fix It |
| --- | --- | --- |
| Low visibility across all models | Your brand hasn’t created content tailored to this specific need. | Create targeted product pages or buying guides focused on that user profile (e.g., “Nike’s best running shoes for flat feet”). Use schema, reviews, and semantic relevance. |
| High variability across similar long-tail prompts | You rank for some variants but not others. You may have shallow or narrowly optimized content. | Broaden your long-tail coverage and optimize for semantic neighbors (e.g., “arch support” + “flat feet”). |
| Inconsistent generations | AI doesn’t always surface your products, even for the same specific query. | Strengthen product associations with these queries using consistent internal linking, user reviews, Q&A content, and clearly mapped use cases in product descriptions. |

Step 3: Monitor over time

LLM visibility isn’t static. It’s constantly evolving as platforms sharpen their semantic understanding.

Set up a regular cadence (e.g., monthly or quarterly) to re-run your LLM visibility tests using tools like Scrunch, Profound, or a custom framework. Over time, track:

  • Net visibility change: Is your brand appearing in more responses overall?
  • Platform-specific trends: Are you gaining visibility on one LLM but losing on another?
  • Query-level improvements: Are your owned/earned efforts driving more consistent mentions for the prompts you care about?
  • Sentiment shifts: Is the tone of AI-generated mentions improving?
  • Recommendation rates: Are LLMs actively recommending your brand more often?
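
A minimal sketch of what that tracking could look like, assuming you store one snapshot of per-platform scores per run; the dates, platforms, and storage format are illustrative:

```python
# One snapshot per test run: date plus per-platform visibility scores.
history = [
    {"date": "2025-05-01", "scores": {"ChatGPT": 0.72, "Perplexity": 0.80}},
    {"date": "2025-06-01", "scores": {"ChatGPT": 0.78, "Perplexity": 0.74}},
]

previous, latest = history[-2], history[-1]
for platform in latest["scores"]:
    delta = latest["scores"][platform] - previous["scores"].get(platform, 0.0)
    trend = "up" if delta > 0 else "down" if delta < 0 else "flat"
    print(f"{platform}: {latest['scores'][platform]:.2f} ({trend} {delta:+.2f})")
```

Comparing consecutive snapshots like this surfaces platform-specific divergence (gaining on one LLM while losing on another) that a single blended score would hide.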

Build a strategy that lasts

As AI continues to reshape how people search, evaluate, and make decisions, the most enduring brands will be those that evolve with it, not just react to it. By establishing a baseline today, you’re not just improving how you show up now; you’re building a search strategy that’s durable, adaptable, and built to outlast change.

If you’re ready to start building an SEO strategy for the future, let’s chat. 
