Make AI Engines Trust (and Cite) Your Content
A practical guide to optimizing your content for AI discoverability, authority, and attribution.

You've probably seen the headlines: "I Quit Google Search for AI, and I'm Not Going Back." Tools like ChatGPT, Perplexity, and Claude are rapidly reshaping how people search, surfacing direct answers over traditional lists of links. As users shift their habits, large language models (LLMs) are becoming the new gatekeepers of content visibility.
For brands and publishers, this isn't just another channel to optimize for; it's a tectonic shift in how content is discovered. If your content isn't showing up in AI-generated answers, it might as well not exist for an entire generation of users.
This guide outlines practical, research-backed tactics to improve your visibility in generative outputs, from credibility-building techniques to fluency enhancements. But more importantly, it offers a new lens on content strategy: one that goes beyond keyword-chasing to consider how your entire content ecosystem is structured for both humans and machines.
Before diving into tactics, focus on your foundation
It's tempting to jump straight into content optimization tactics. But LLM visibility starts with how your content shows up structurally and strategically. There are two core areas to get right before you fine-tune individual pages:
1. Make sure LLMs can crawl and parse your site
If AI systems can't access or interpret your content, you won't make it into the training, indexing, or retrieval layers, no matter how polished your content is. Start with three foundational files:

These files form the technical baseline that determines whether LLMs even see your content, let alone cite it.
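Whatever your exact setup looks like, a quick first check is whether common AI crawlers are allowed to fetch your pages at all. Below is a minimal Python sketch using the standard library's robots.txt parser; the crawler user-agent names are assumptions that vary by platform, so verify them against each vendor's documentation.

```python
import urllib.robotparser

# Illustrative AI crawler user agents (assumptions; exact names vary by
# platform and change over time, so check each vendor's docs).
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def check_ai_access(site: str, test_path: str = "/") -> dict:
    """Report whether each AI crawler may fetch test_path per robots.txt."""
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the site's live robots.txt
    return {
        bot: parser.can_fetch(bot, f"{site.rstrip('/')}{test_path}")
        for bot in AI_CRAWLERS
    }

if __name__ == "__main__":
    # Hypothetical domain; swap in your own.
    print(check_ai_access("https://example.com"))
```

If a bot you care about comes back blocked, that rule is usually worth revisiting before any on-page optimization.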
2. Benchmark your current LLM visibility
Generative engines pull information from a constantly shifting web of sources, so use a tool like Scrunch to track your LLM visibility on a regular cadence. For your priority prompts, you'll want to understand visibility along three dimensions (see the sketch after this list for one way to track them):
- By platform: How often is your brand mentioned across key generative platforms like ChatGPT, Perplexity, Claude, Gemini, and others?
- By intent cluster: How consistently your brand shows up across variations of a core query. For example, an intent cluster for airline credit cards could include prompts like "What are good airline credit cards?" and "What are the best credit cards with point transfers to airlines?"
- By generation variability: AI responses can differ even when the exact same prompt is entered multiple times. This metric tracks how reliably your brand appears across repeated generations of the same question.
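To make these dimensions concrete, here is a minimal Python sketch of how repeated generations could be logged and scored for brand mentions. The platforms, prompts, and response snippets are hypothetical placeholders; in practice the raw responses would come from a visibility tool like Scrunch or from each platform's API.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log of (platform, prompt, response_text) from repeated runs
# of the same prompts; real data would come from a tool or API, not be hard-coded.
RESPONSES = [
    ("ChatGPT", "best airline credit cards", "Brand X and Brand Y both offer strong transfer partners..."),
    ("ChatGPT", "best airline credit cards", "Many travelers recommend Brand Y for lounge access..."),
    ("Perplexity", "best airline credit cards", "Top picks include Brand X for point transfers..."),
]

def visibility_by_platform(responses, brand):
    """Share of responses on each platform that mention the brand at all."""
    hits, totals = defaultdict(int), defaultdict(int)
    for platform, _prompt, text in responses:
        totals[platform] += 1
        hits[platform] += int(brand.lower() in text.lower())
    return {platform: hits[platform] / totals[platform] for platform in totals}

def generation_variability(responses, brand, platform, prompt):
    """How reliably the brand appears across repeated generations of one prompt."""
    runs = [text for p, q, text in responses if p == platform and q == prompt]
    return mean(brand.lower() in text.lower() for text in runs) if runs else 0.0

print(visibility_by_platform(RESPONSES, "Brand X"))  # e.g. {'ChatGPT': 0.5, 'Perplexity': 1.0}
print(generation_variability(RESPONSES, "Brand X", "ChatGPT", "best airline credit cards"))  # e.g. 0.5
```

The same log can be extended with position and sentiment fields to cover the signals described next.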
You'll also want to understand your position and sentiment within these responses:
- Position: How high do you appear within multi-source answers?
- Sentiment: Are you being represented accurately and positively?
This isn't just about exposure; it's about finding the blind spots that guide your content strategy. Tracking visibility and traffic together helps you spot:
- Pages that should be surfacing in LLMs but arenât
- High-performing content that deserves structural or semantic upgrades
- Untapped opportunities (e.g., third-party sites getting cited where your brand should be)
In short: Before optimizing your content, align your infrastructure and your intelligence. Know what LLMs can access, understand how you're currently represented, and measure where you're being found (or missed).
Tactics to improve LLM visibility
Once your infrastructure is in place and you've captured a clear baseline of how and where your brand appears, the next step is content execution and optimization.
Step 1: Create a content plan that targets the queries you want to appear for
To be cited by LLMs, you first need to show up in the right places, especially for the queries that matter most to your business. Many generative engines still lean on traditional search indices to decide what's credible:
- ChatGPT uses Bingâs index
- Claude uses Brave Search
- Gemini uses Google
So while LLMs use different methods to generate answers, strong SEO fundamentals still carry weight. If you're ranking well in search, you're likely in the pool for LLM retrieval too.
To expand your surface area:
- Map your ideal queries across the funnel: from top-of-funnel explainer searches to bottom-of-funnel comparisons and product-related prompts.
- Fill content gaps: If a query like "best open-source CRM platforms" matters to you and you don't have a clear, standalone answer for it, build one.
This isn't about chasing every long-tail phrase or keyword; it's about building the best possible reference for the questions you want to be part of.
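To illustrate the gap-finding part of this exercise, here is a small Python sketch of a query-to-content coverage map. The clusters, prompts, and URLs are hypothetical, and in practice this would usually live in a content planning sheet rather than code; the point is simply to surface priority prompts that lack a standalone owned answer.

```python
# Hypothetical coverage map: intent cluster -> {target prompt: owned URL or None}.
COVERAGE = {
    "open-source CRM (bottom-of-funnel)": {
        "best open-source CRM platforms": "https://example.com/blog/best-open-source-crms",
        "open-source CRM vs Salesforce": None,  # gap: no standalone page yet
    },
    "CRM basics (top-of-funnel)": {
        "what is a CRM": "https://example.com/guides/what-is-a-crm",
    },
}

def content_gaps(coverage):
    """Return target prompts that have no owned, standalone page."""
    return [
        prompt
        for prompts in coverage.values()
        for prompt, url in prompts.items()
        if url is None
    ]

print(content_gaps(COVERAGE))  # -> ['open-source CRM vs Salesforce']
```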
Step 2: Layer on tactical optimizations to improve your visibility and citation frequency
In June 2024, researchers published a paper titled GEO: Generative Engine Optimization, which has since become a foundational reference for understanding how to improve visibility within LLM outputs. The paper outlines a set of measurable tactics that influence whether (and how) content appears in AI-generated responses. To measure the effectiveness of different tactics, the researchers used the following metrics:
- Position-adjusted word count. This combines two factors: how long a citation is and where it appears in the response. Citations that are both lengthy and placed near the top of a response are more likely to be seen and trusted by users (see the sketch after this list for one way to express this).
- Subjective impression. This captures how relevant or valuable a citation feels to the user, beyond just length or placement. Several factors shape this impression, including (but not limited to):
  - Relevance: How directly the citation addresses the user's question
  - Influence: How much the response depends on that specific citation
  - Uniqueness: Whether the content offers rare or differentiated insights
  - Diversity: Whether it adds breadth by incorporating varied sources or perspectives
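To make the first metric more concrete, here is an illustrative Python sketch of a position-adjusted word count. It is a simplified interpretation rather than the paper's exact formulation, and the response sentences and source names are hypothetical.

```python
import math

def position_adjusted_word_count(response_sentences, source_id):
    """Simplified sketch: weight each cited sentence's share of the response
    words by an exponential decay on its position, so longer citations that
    appear earlier in the answer score higher."""
    total_words = sum(len(sentence.split()) for sentence, _src in response_sentences)
    n = len(response_sentences)
    score = 0.0
    for pos, (sentence, src) in enumerate(response_sentences):
        if src == source_id:
            word_share = len(sentence.split()) / total_words
            score += word_share * math.exp(-pos / n)  # earlier sentences decay less
    return score

# Each sentence of a generated answer paired with the source it cites (hypothetical data).
response = [
    ("Brand X offers the best point transfers to airline partners.", "brandx.com"),
    ("Several review sites also mention Brand Y.", "reviewsite.com"),
    ("Brand X cards waive foreign transaction fees.", "brandx.com"),
]
print(position_adjusted_word_count(response, "brandx.com"))
```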
Using these indicators, the researchers tested nine content optimizations to see which ones meaningfully improved LLM visibility. Below, we've organized the findings by impact, giving you a priority-ranked playbook of tactics worth adopting.

Tactics yielding 15–25% improvement are often worth layering in, especially when paired with a strong foundational content strategy. But methods with 30%+ relative improvement, like quotation and statistics addition, should be considered high-priority tools in your generative SEO playbook.
Combining optimization tactics = bigger impact
While individual optimization tactics demonstrate notable improvements in content visibility within generative AI search engines, strategically combining these tactics leads to amplified, often multiplicative results.
Some of the strongest combinations, which yielded ~5% average improvement over the highest-performing individual tactic, were:
- fluency + statistics
- fluency + quotes
- fluency + citations
- statistics + quotes
The researchers also found that combining multiple optimization tactics can be especially powerful for smaller or less-established sites (those that typically struggle to rank in traditional search due to fewer backlinks or lower domain authority). Unlike Google, generative engines aren't just evaluating link equity or historical signals. They prioritize the intrinsic quality, clarity, and credibility of the content itself. That levels the playing field.
In the study, tactics like adding direct quotations (which introduce verifiable information and build trust) and improving readability through fluency optimization helped lower-ranked pages gain significantly more visibility in AI-generated outputs.
Step 3: Earn citations with third-party publishers
LLMs often pull heavily from a select set of third-party sources (review sites, media publishers, resource hubs) when synthesizing answers. If a third-party site is already ranking highly for a query you care about, you can still earn a presence there.
To secure third-party citations, reach out to the content owners of highly cited pages, pitch them a clear reason why they should include your brand, product, or data, and offer them a resource they can link to within their content.
Canva has historically done this very well. They developed an entire playbook and team around outreach + backlinking that can be replicated for LLM visibility. Here's how they do it:
- Identify third-party articles that already rank or get cited for your target queries.
- Pitch inclusion by offering relevant data, quotes, or tools the publisher can link to.
- Make it easy for the publisher to say yes: provide stats, original research, or complementary resources.
Rethink visibility from the ground up
The rise of generative engines marks a turning point in how content gets discovered. It's no longer just about ranking; it's about being cited, trusted, and surfaced by AI. While this shift brings complexity, it also levels the playing field for brands ready to adapt.
At daydream, we go beyond surface-level tactics to help you build a content ecosystem that AI wants to reference. If you're ready to show up in the next generation of search, let's chat.