The Death of Keyword SEO: Welcome to the Citation Era

Key Takeaways
  • AI search deconstructs a single query into numerous sub-queries, analyzes each across multiple sources, and synthesizes the results into a comprehensive answer, leveraging Large Language Models (LLMs) to understand and predict user intent. LLMs tend to cite the sites that provide the most educational coverage, not those optimized for a single keyword.
  • Success in GEO/AIO/AEO now depends on websites demonstrating strong relevance, comprehensive content, and consistent topical authority across multiple sub-queries, rather than just ranking #1 for a single keyword.
  • Reciprocal Rank Fusion (RRF) and Generative Search reward the page that is relevant across 12–50 sub-queries.
  • You’re not trying to be clicked; you’re trying to be cited.

The Keyword Era Is Over. The Citation Era Has Begun.

Generative AI search has already buried what used to be your top-ranking pages, transforming the Search Engine Results Page (SERP) from a competition of pages to a competition of paragraphs within an LLM response.

AI Overviews now deliver comprehensive answers directly, reducing the need for users to click through to sites and diminishing the value of prime SERP real estate. The reality of today's search environment means that we have to accept:

  • Clicks from Google SERPs are down.
  • AIO suppresses organic blue links.
  • Zero-click experiences widen every quarter.
  • The top result is often irrelevant to that user's long-tail needs.

SEO has evolved; most brands are struggling to adapt. To succeed in today's environment, brands must build topical authority and create content engineered for Generative Search.

Traditional Search vs Generative Search

Traditional search engines surface existing content by algorithmically matching keywords, evaluating backlinks, and weighing established SERP rankings. Finding the answer to a question is often a multi-step process: the user reviews multiple sites and refines their keywords based on the results of each SERP.

Generative search (or AI-driven generative search experiences) leverages Large Language Models (LLMs), which have fundamentally changed how users search for, find, and process information. Generative search has shifted from matching target keyword queries to a conversational, natural-language interaction with the LLM. AI search carries out this conversation through a method called query fan-out.

What is a Query Fan-out Process?

AI search engines take the same query a traditional search engine would receive, break it down into multiple relevant sub-queries, fold in trending data (what similar prompts and answers are surfacing, and what users with similar searches look for beyond the original query), and research each sub-query across multiple sources.

This expansion of a single query into multiple smaller queries helps the models better understand the user's original intent and gather the most relevant information. The LLMs then review, analyze, and synthesize the data to form a comprehensive answer to the user's original question.
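The fan-out process can be sketched in miniature: one query expands into sub-queries, each sub-query retrieves independently, and the results are pooled before synthesis. Everything below (the expansion rules, the toy corpus, the URLs) is a hypothetical illustration, not any engine's actual implementation.

```python
# Minimal sketch of query fan-out: expand, retrieve per sub-query, pool.
# In production an LLM generates the sub-queries; here simple templates
# stand in for it, purely for illustration.

def fan_out(query: str) -> list[str]:
    """Expand a query into hypothetical sub-queries."""
    return [
        query,
        f"best {query}",
        f"{query} for beginners",
        f"{query} cost",
    ]

def retrieve(sub_query: str, corpus: dict[str, str]) -> list[str]:
    """Toy retrieval: return pages whose text shares a word with the sub-query."""
    words = set(sub_query.lower().split())
    return [url for url, text in corpus.items()
            if words & set(text.lower().split())]

# Hypothetical pages standing in for indexed content
corpus = {
    "site-a.example/guide": "family vacation ideas and cost breakdown",
    "site-b.example/post": "best family vacation destinations",
}

pooled = {}  # page -> how many sub-queries it answered
for sq in fan_out("family vacation"):
    for page in retrieve(sq, corpus):
        pooled[page] = pooled.get(page, 0) + 1
```

A page that matches many branches of the fan-out, as both toy pages do here, is what the synthesis step has to draw on.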

Users now expect comprehensive answers, including follow-up questions they didn't know they were searching for, all from a single query.

The Difference Between "Keywords" and "Sub-queries / Sub-questions"

  • Traditional SEO: Ranking in position #1 in the SERPs for a single URL focused on a single term.
  • Generative Search SEO: Being the answer and cited source for the 12–50 sub-queries triggered by the user's initial prompt or query.

Each sub-query triggers its own retrieval action. To be successful in today's search environment, it's essential to create content that answers the entire query tree, not a single branch. To earn citations within generative search, you must be an authoritative source that comprehensively covers a topic. Understanding how query fan-out works will help you understand how to win AI visibility, because query fan-out is a prerequisite for Reciprocal Rank Fusion (RRF).

What is Reciprocal Rank Fusion (RRF)?

Reciprocal Rank Fusion (RRF) is a common technique in information retrieval. It functions as a method for combining the results from several searches into a single, final ranking, utilizing a simple formula:

RRF score (per sub-query) = 1/(60 + rank position)

For example:

  • Rank #1 = 1/(60+1) = 0.0164
  • Rank #5 = 1/(60+5) = 0.0154
  • Rank #10 = 1/(60+10) = 0.0143

If the system finds the same page in different positions across several queries, it combines all those scores.
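That fusion step can be sketched in a few lines of Python, using the same constant of 60; the page names and rankings below are made up for illustration:

```python
# Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank) per page,
# and contributions for the same page are summed across lists. k = 60 as above.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> dict[str, float]:
    scores: dict[str, float] = {}
    for ranked_pages in rankings:
        for position, page in enumerate(ranked_pages, start=1):
            scores[page] = scores.get(page, 0.0) + 1 / (k + position)
    # Sort so the highest fused score comes first
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Page B never ranks #1, but it appears at #2 in both sub-query rankings,
# so its summed score beats Page A's single #1.
fused = rrf_fuse([
    ["A", "B", "C"],  # sub-query 1
    ["D", "B", "C"],  # sub-query 2
])
```

The toy result already shows the article's core point: two #2 positions (1/62 + 1/62) outscore one #1 position (1/61).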

Reciprocal Rank Fusion (RRF) and Topical Authority

Traditional SEO:

  • Keyword ranking in position #1 = Winning
  • Keyword ranking in position #2 = Losing

Generative Search with Reciprocal Rank Fusion:

  • Ranking one page for six separate queries is better than ranking #1 for one query.

Let's prove the importance of topical authority and topic clusters that address sub-queries by showing the RRF math.

Example of a page that focuses on a single keyword:

  • "Family vacation ideas" ranks in position #1 = RRF Score: 0.0164
  • "Best family vacations" ranks in position #10 = RRF Score: 0.0143
  • "Places to travel with kids" ranks in position #20 = RRF Score: 0.0125
  • "Family vacation destinations" does not rank = RRF Score: 0

Total RRF Score: 0.0432

Example of a page with broad topical authority and a topic cluster approach:

  • “Family vacation ideas” ranks #3 = RRF Score: 0.0159
  • “What are the best family vacation destinations?” ranks #5 = RRF Score: 0.0154
  • “Places to travel with kids” ranks #7 = RRF Score: 0.0149
  • “Things to do on a family vacation” ranks #4 = RRF Score: 0.0156
  • “What to pack for a family trip” ranks #6 = RRF Score: 0.0152
  • "Cruise vs resort for families" ranks #9 = RRF Score: 0.0145

Total RRF Score: 0.0915

Page B wins by roughly 2.1x because it has broader topical coverage. RRF rewards breadth of relevance across sub-queries, not single-keyword dominance.
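The comparison above can be reproduced directly with the same 1/(60 + rank) formula; sub-queries where a page does not rank simply contribute nothing:

```python
# Sum 1/(60 + rank) over every sub-query where the page ranks.

def rrf_total(ranks: list[int], k: int = 60) -> float:
    return sum(1 / (k + r) for r in ranks)

page_a = rrf_total([1, 10, 20])         # single-keyword page (Page A)
page_b = rrf_total([3, 5, 7, 4, 6, 9])  # topic-cluster page (Page B)
```

Running this yields 0.0432 for Page A and 0.0915 for Page B, matching the totals above.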

The final RRF score gives more weight to pages that rank consistently across different sub-queries, rather than relying on a single source or only considering the first position on the SERP. This provides the user with relevant information backed by citations from authoritative sources. Earning these citations on relevant sub-queries within Generative Search and LLMs is how we win AI visibility, build brand awareness, and convert searchers into customers.

You don't win AI visibility by answering one question. You win by anticipating every sub-intent and grounding it in entity-level knowledge.

If your page only answers the initial query, you are falling behind in Generative Search. The point here: don't focus on ranking content; focus on creating a comprehensive knowledge hub that gives LLMs context AND coverage. Don't be A source of information; be THE source of truth across the related questions users are likely to ask beyond their initial one.

Broadening Search Queries, Identifying Hidden User Intent, and Optimizing for Generative Search

AI visibility now matters more than rankings alone. It's more important to be cited by the LLMs and to be seen and referred to as an authoritative source. This means creating content that comprehensively addresses what Mike King refers to as "speculative sub-questions," or the user's likely follow-up questions. By answering these speculative sub-questions along with the user's original question, we can earn citations on the sub-queries generated when Generative Search performs the fan-out process. Here's a framework to help win at AI search.

Speculative Sub-Question Matrix

  • Example: "Family Cruises"

| Intent Type | Model Interpretation | Example Query Branch | Why It Wins RRF |
| --- | --- | --- | --- |
| Safety | Risk minimization | Are cruise pools supervised? | High co-occurrence in LLM training data |
| Budget | Optimization | Are dining packages worth it? | LLMs surface consumer tradeoffs |
| Logistics | Planning | What to pack for a 7-day cruise | Generates sub-queries recursively |
| Alternatives / Comparison | Comparative reasoning | Cruise vs resort for families | Requires multi-source citation |
| Timeline | Forecasting | When to book for school breaks | Reinforces authority across time |
| What-if | Contingency | Kid gets motion sickness | Rare queries = high citation trust |
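One way to put the matrix to work is to encode it as data, so every intent type's branch queries roll up into a single coverage checklist for the hub page. The structure below is a sketch, not a prescribed schema, and the branches are the examples from the matrix:

```python
# The sub-question matrix encoded as intent type -> example query branches.
SUB_QUESTION_MATRIX = {
    "Safety": ["Are cruise pools supervised?"],
    "Budget": ["Are dining packages worth it?"],
    "Logistics": ["What to pack for a 7-day cruise"],
    "Alternatives / Comparison": ["Cruise vs resort for families"],
    "Timeline": ["When to book for school breaks"],
    "What-if": ["Kid gets motion sickness"],
}

def content_checklist(matrix: dict[str, list[str]]) -> list[str]:
    """Flatten intent branches into the questions one hub page should cover."""
    return [q for branches in matrix.values() for q in branches]
```

Each intent type would normally hold several branches; extending a list here is how new speculative sub-questions get folded into the brief.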

Identifying related sub-queries and hidden user intent can be done in several ways:

  • Using information that exists within the SERP:
      ◦ People Also Ask
      ◦ People Also Search For
      ◦ Reddit threads
  • Leveraging Ahrefs or Semrush
  • Using tools like Screaming Frog
Here is what building out the speculative sub-question content tree looks like:

Use the information gathered from this research to create content topic clusters with pillars and spokes of related topics. Optimize for generative search by building pages and creating content grounded in SEO fundamentals, including optimized:

  • Entity-based topical hubs
  • Page Titles
  • Title Tags
  • Meta Descriptions
  • Descriptive Headings
  • Structured Data Markup
  • FAQ sections that address questions not covered naturally within the copy
  • Internal links between pages that discuss related entities and topics, helping LLMs crawl your website more efficiently
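For the structured data item, a schema.org FAQPage block is one common choice for marking up an FAQ section. Here is a sketch that generates the JSON-LD from question/answer pairs; the example Q&A is illustrative, not a prescribed set:

```python
import json

# Build schema.org FAQPage JSON-LD from (question, answer) pairs.
def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

# Illustrative Q&A pair drawn from the sub-question matrix above
markup = faq_jsonld([
    ("Cruise vs resort for families?",
     "Cruises bundle lodging, meals, and activities; resorts offer more flexibility."),
])
```

The resulting string would be embedded in the page's HTML inside a `<script type="application/ld+json">` tag so crawlers can parse the FAQ.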

Stop Trying to Rank. Start Trying to Be Referenced. 

SEO is no longer about ranking signals; it's about being woven into the LLM knowledge graph. Generative search and LLMs don't care who ranks first, because (if you haven't already noticed) AI Overviews have claimed that position. What matters most is who answers questions across the entire fan-out tree of sub-queries that users don't know they need answers to, contributing to the comprehensive answer at the top of the SERPs. Generative Search doesn't reward the page that ranks #1; it rewards the page that is relevant across 12–50 sub-queries.
