How Gemini Crawls and Indexes Your Website
If Google can't see it, Gemini likely can’t either.

Why Gemini Visibility Still Starts with Google Search
Gemini’s generative answers aren’t pulled from thin air. They’re built atop Google’s existing web index—meaning if a page isn’t indexed in traditional Google Search, it likely won’t surface in Gemini’s AI summaries, citations, or synthesized responses.
While Gemini may eventually use custom indexing strategies or model-specific crawling, the current system heavily leans on what Google Search can already access and understand. That makes traditional crawlability and indexation table stakes for AI visibility.
Crawling and Indexing 101 (Gemini Edition)
Crawling is how Googlebot discovers pages on your site. It uses sitemaps, internal links, and backlinks to identify what to visit. Once discovered, indexing is the next step—Google analyzes and stores the page in its search index so it can appear in results.
A few non-negotiables:
- Make sure Googlebot can access all your important pages
- Don’t block key assets (like JS or CSS) in robots.txt; a common anti-pattern is sketched just after this list
- Ensure each indexable page returns a 200 HTTP status code
- Avoid excessive redirects or infinite crawl loops
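For example, a leftover rule like the one below would stop Googlebot from fetching scripts and stylesheets, so the rendered version of the page that gets indexed (and that Gemini later draws on) may be incomplete. The paths are purely illustrative:
# Anti-pattern: blocking the assets pages need to render
User-agent: *
Disallow: /assets/js/
Disallow: /assets/css/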
Pro tip: Test your page in Google’s URL Inspection tool. It shows whether a page was crawled, how it was rendered, and if it made it into the index.
JavaScript rendering: Gemini sees the full page
Unlike most LLM crawlers, Gemini has one major technical advantage: it inherits Googlebot’s full JavaScript rendering capabilities.
That means Gemini can crawl and index client-side rendered content, including:
- React or Vue applications rendered via CSR (Client-Side Rendering)
- Dynamic SPAs (Single-Page Applications)
- Content fetched asynchronously via fetch() or XHR
- Interactive documentation portals that depend on hydration
This is a significant departure from the crawlers operated by OpenAI, Anthropic (Claude), and Perplexity. According to recent analysis by Vercel and MERJ, none of those bots currently execute JavaScript. They can fetch JS files but can’t run or hydrate them, meaning client-rendered pages appear blank to them.
Gemini, by contrast, leverages the same infrastructure as Googlebot, including:
- Full DOM rendering
- CSS parsing and layout
- JavaScript execution, including dynamic imports and async data fetching
- Ajax and fetch/XHR-based content retrieval
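As a minimal sketch of what this enables, consider a page whose main copy is injected client-side; the /api/description endpoint and element ID here are hypothetical:
<!-- The description exists only after JavaScript runs -->
<div id="product-description"></div>
<script>
  // HTML-only crawlers see an empty div; a rendering crawler like Googlebot
  // (and therefore Gemini) sees the fetched text in the final DOM.
  fetch('/api/description')
    .then((response) => response.json())
    .then((data) => {
      document.getElementById('product-description').textContent = data.text;
    });
</script>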
If your site relies on client-side rendering (CSR) for primary content, such as product descriptions, blog articles, or technical documentation, Gemini is the only widely used LLM search experience confirmed to fully render JavaScript today.
That means Gemini can:
- Read and cite content hidden behind JS frameworks like React or Angular
- Accurately extract structured content from hydration-only components
- Surface relevant snippets even if your site lacks traditional SSR or SSG
If Google can index your site, Gemini likely can too.
This positions Gemini as the most technically complete LLM crawler today, especially for modern JavaScript-heavy websites.
Master Your Robots.txt File and XML Sitemap
Your robots.txt file lives at the root of your domain and tells crawlers which parts of your site they can and can’t access.
Use it to:
- Prevent crawling of low-value pages (e.g., admin panels, search result pages)
- Declare the location of your sitemap
Example:
# robots.txt – place at https://www.example.com/robots.txt
User-agent: *
Disallow: /admin/ # block low-value back-office pages
Disallow: /search/ # block faceted or internal search results
# Only add Allow when you need to override a broader Disallow rule.
# Example: let Google crawl a specific help article that lives under /search/
Allow: /search/help-center/article-123/
# Point crawlers to your XML sitemap
Sitemap: https://www.example.com/sitemap.xml
Blocking a page via robots.txt doesn’t prevent it from being indexed if other sites link to it. To keep a page out of Google entirely, use a noindex meta tag (and don’t block the page in robots.txt, or Google won’t see the tag).
Your sitemap acts as a blueprint for search engines. It should include:
- All important, canonical pages
- Only URLs returning a 200 status
- Recently updated content
Submit your sitemap via Google Search Console and reference it in robots.txt. Use separate sitemaps (or a sitemap index) if you have more than 50,000 URLs or over 50MB uncompressed.
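For reference, a bare-bones sitemap following the sitemaps.org protocol looks like this; the URLs and dates are placeholders:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/blog/gemini-indexing-guide/</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/docs/getting-started/</loc>
    <lastmod>2024-12-01</lastmod>
  </url>
</urlset>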
Use Meta Robots Tags Strategically
The robots meta tag controls indexing at the page level, and it must be written into the HTML of the page inside the <head> section.
Common directives:
- <meta name="robots" content="noindex">: Don’t show this page in search results.
- <meta name="robots" content="nofollow">: Don’t follow links on this page.
- <meta name="robots" content="none">: Same as noindex, nofollow.
- <meta name="robots" content="nosnippet">: Don’t show a preview/snippet in results.
- <meta name="robots" content="indexifembedded">: Allow the content to be indexed when it is embedded in another page (via an iframe or similar), even though the page itself carries noindex; it only takes effect alongside noindex.
- <meta name="robots" content="max-snippet:0">: Don’t show a snippet.
- <meta name="robots" content="max-image-preview:standard">: Limit image preview size.
- <meta name="robots" content="max-video-preview:-1">: No limit on video preview length.
- <meta name="robots" content="notranslate">: Don’t offer translation.
- <meta name="robots" content="noimageindex">: Don’t index images on this page.
- <meta name="robots" content="unavailable_after: [date/time]">: Expire from results after a given date.
These rules can be combined in a comma-separated list.
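For instance, these two illustrative tags would live on two different pages: the first keeps a page out of Google (and therefore out of Gemini’s source material) entirely, while the second allows indexing but constrains how the result is previewed:
<!-- Page A: exclude from the index -->
<meta name="robots" content="noindex, nofollow">
<!-- Page B: indexable, with controlled previews -->
<meta name="robots" content="max-snippet:50, max-image-preview:large, max-video-preview:-1">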
A few best practices:
- Don’t block pages with noindex in robots.txt—Google won’t be able to see the tag
- Avoid contradictory signals (e.g., canonicalizing to a noindex page)
Understand Crawl Timing and Update Frequency
Gemini’s visibility pipeline relies on Googlebot, which means its crawling behavior follows Google’s established patterns for timing, frequency, and freshness.
How often does Googlebot crawl your site?
Googlebot’s crawl rate is dynamic. It adjusts based on:
- Site popularity (more backlinks, more frequent crawls)
- Content change frequency (frequent updates = faster revisits)
- Server response times (fast = more requests allowed)
- Crawl budget (especially for larger sites with many URLs)
Pages that are linked prominently, update often, and load quickly tend to be revisited daily or weekly. Low-priority pages or deeply nested URLs may only get crawled every few weeks or months.
Gemini’s freshness depends on Google’s recency
Because Gemini builds on Google Search’s index, it doesn’t fetch your site in real time like Perplexity or ChatGPT Browsing might. Instead, it reflects the last time Googlebot crawled and indexed your page.
If your content is updated but hasn’t been recrawled, Gemini might summarize outdated information, even if the page looks fresh to you.
How to encourage faster updates:
- Keep your XML sitemaps clean and updated with correct <lastmod> dates.
- Internally link to updated pages from high-traffic or frequently crawled pages.
- Submit new or updated URLs via Google Search Console’s URL Inspection tool to trigger reindexing.
- Use structured data like dateModified and datePublished to signal freshness.
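To illustrate that last point, a minimal snippet carrying both dates might look like the following; the type, headline, and dates are placeholders:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How Gemini Crawls and Indexes Your Website",
  "datePublished": "2025-01-01",
  "dateModified": "2025-03-15"
}
</script>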
While Gemini doesn’t yet offer visibility into how recent its summaries are, optimizing for fast and reliable indexing by Google gives you the best shot at being accurately represented.
TL;DR: Gemini doesn’t crawl your site on its own—it reads what Google sees. If you want Gemini to “know” your latest updates, make sure Googlebot sees them first.
Monitor and Debug with Google Search Console
GSC is your best friend for crawl/index diagnostics.
Key reports:
- Index Coverage: See which pages are indexed, excluded, or errored
- Crawl Stats: Understand Googlebot activity on your site
- URL Inspection: Inspect how Google sees a specific page
- Sitemaps: Track sitemap submission and indexing
Set up alerts to catch indexing issues early. Use the URL Inspection tool to request indexing after major updates.
What you should know about llms.txt for Gemini
There’s growing interest in llms.txt (a markdown file placed at http://yourdomain.com/llms.txt) as a way to guide LLMs through structured site content. It’s pitched as a human-readable map for AI summarization.
But let’s be clear: Gemini does not officially support llms.txt today.
Even though companies like Anthropic and Cloudflare publish these files, there’s no evidence that Googlebot or Gemini actively parse or prioritize them.
That said, llms.txt is:
- Easy to implement
- Aligned with existing best practices (clear structure, canonical links)
- Potentially useful if future standards emerge
We recommend treating it as a low-effort, speculative enhancement, not a core SEO requirement.
Sample structure:
# Title
Brief description of the site.
## Section Name
- [Link Title](https://link_url): Optional description
- [Link Title](https://link_url/subpath): Optional description
## Another Section
- [Link Title](https://link_url): Optional description
Use llms-full.txt if you want to bundle full documentation into a single ingestible file for AI tools—but again, treat this as experimental.
Understand Crawl Budget
For larger sites, Google allocates a limited crawl budget—the number of pages it will crawl within a given timeframe.
Improve crawl efficiency by:
- Consolidating duplicate content
- Fixing broken links and redirect chains
- Keeping sitemaps and internal links up-to-date
- Avoiding endless URL parameters or session IDs
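As a sketch of the duplicate-content and URL-parameter points above (URLs and parameter names are illustrative), a rel=canonical tag consolidates parameterized variants onto one preferred URL:
<!-- On https://www.example.com/shoes?sort=price&sessionid=abc123 -->
<link rel="canonical" href="https://www.example.com/shoes/">
And a robots.txt pattern can keep genuinely valueless variants, such as session-ID URLs, out of the crawl entirely:
# robots.txt
User-agent: *
Disallow: /*sessionid=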
Use the Crawl Stats report in GSC to see how your crawl budget is being spent. However, if you serve fewer than roughly one million URLs, crawl budget is rarely the bottleneck.
Use Schema to Add Context for Gemini
Structured data (via JSON-LD or Microdata) helps Google and Gemini understand your content’s meaning, not just its appearance.
Schema.org provides a shared vocabulary of tags that describe what your content is. You can:
- Use Microdata to add attributes directly into your HTML
- Use JSON-LD to group metadata in a single <script> block
Microdata example:
<article itemscope itemtype="https://schema.org/BlogPosting">
  <h1 itemprop="headline">Your Title</h1>
  <span itemprop="author">Author Name</span>
  <time itemprop="datePublished" datetime="2025-01-01">January 1, 2025</time>
</article>
JSON-LD example:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Your Title",
  "author": "Author Name",
  "datePublished": "2025-01-01"
}
</script>
Schema markup can cover everything from products, reviews, recipes, and events to business listings, organizations, and FAQs. Using it helps Google deliver richer results, and helps Gemini summarize your content more accurately.
You can validate your markup using Google's Rich Results Test or Schema.org’s validator.
These structures help both traditional search engines and AI agents surface your most important content correctly and keep it aligned with the intent behind the page.
Create Content Gemini Can Understand
AI search elevates content that is:
- Natural in tone
- Direct in answering specific questions
- Easy to parse (headings, bullet points, short paragraphs)
Write with a focus on clarity, relevance, and depth. Gemini rewards content that anticipates questions and delivers structured, high-quality answers.
The AI SEO Flywheel
Gemini doesn’t just retrieve—it summarizes. Being cited, referenced, or ranked by Gemini depends on visibility across multiple layers:
- Indexed by Google → base requirement
- Linked and referenced → improves authority
- Structured and scannable → helps summarization
- Aligned with query intent → improves inclusion
Your traditional SEO efforts lay the foundation. Gemini adds a new layer of opportunity—but only if that foundation exists.
Prioritizing Google
To show up in Gemini, you still need to show up for Google.
- Ensure Googlebot can crawl and render your content
- Submit clean XML sitemaps and maintain internal linking
- Use meta tags and schema to control and enhance visibility
- Build content optimized for both traditional search and AI answers
- Monitor everything through Search Console
Gemini is built on top of Google’s infrastructure, and that means great SEO still works. Tomorrow’s winners will be those who optimize both for rankings and for how AI models consume, summarize, and cite their content.
At daydream, we help brands build SEO engines designed for this hybrid world. If you’re ready to increase your visibility across both search and synthesis, let’s talk.
References:
- How to write and submit a robots.txt file
- Get started with Search Console
- Defining Site Architecture & How it Impacts Users & Search Engines
- Introduction to robots.txt
- Crawling & Indexing in SEO
- meta tags and attributes that Google supports
- Block access to content on your site
- Manage your sitemaps using the Sitemaps report
- Overview of Google crawlers and fetchers (user agents)
- How Google Search organizes information