Glossary

Key terms and metrics used across LLM Authority Index reports and the platform.

A

AI Share of Voice (SOV)

AI Share of Voice measures how often a brand appears in AI-generated answers compared with competitors.

This is one of the foundational metrics in the report because it answers the most basic discovery question: when people ask commercially relevant questions in AI tools, how often does the brand show up at all? It is usually calculated across a defined prompt set, cluster, platform, or total market view. Its value is that it gives a directional, market-share-style view of AI visibility, but it carries an important caveat: appearance alone is not enough, because a brand can be visible without being preferred, recommended, or ranked highly. That is why SOV is always interpreted alongside ranking, recommendation rate, and citation strength.
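As a concrete illustration, here is a minimal Python sketch of one common SOV formulation, assuming each answer has already been reduced to the set of tracked brands it mentions; the exact calculation in the reports may differ.

```python
from collections import Counter

def share_of_voice(responses, brand):
    """Directional SOV: the brand's appearances as a share of all
    tracked-brand appearances across a prompt set.

    `responses` is assumed to be a list of sets, each holding the
    brands mentioned in one AI answer (a hypothetical data shape).
    """
    counts = Counter()
    for brands_in_answer in responses:
        counts.update(brands_in_answer)
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# Example: three answers across a pricing cluster.
answers = [{"Acme", "Rival"}, {"Rival"}, {"Acme", "Rival", "Other"}]
print(share_of_voice(answers, "Acme"))  # 2 of 6 appearances -> ~0.33
```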

AI Recommendation Share / Presence Rate

Presence rate tracks the percentage of prompts in which a brand appears in an AI response.

In practice, this is the cleanest way to measure raw inclusion: out of all relevant prompts tested, in how many did the company appear? Some materials also frame this as recommendation share, especially when the appearance is part of a list of suggested brands or products. It matters because a brand cannot win clicks, consideration, or trust if it is not present in the answer at all. It is included because many brands have decent general awareness but are entirely absent in the exact high-intent moments that drive selection.
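A minimal sketch of how this differs from SOV: here the denominator is the number of prompts tested, not the number of brand appearances. The data shape is the same assumed one as above.

```python
def presence_rate(responses, brand):
    """Share of tested prompts in which the brand appears at all.
    `responses` is a list of sets of brands per answer (assumed shape)."""
    if not responses:
        return 0.0
    return sum(brand in r for r in responses) / len(responses)

answers = [{"Acme", "Rival"}, {"Rival"}, {"Acme", "Rival", "Other"}]
print(presence_rate(answers, "Acme"))  # present in 2 of 3 prompts -> ~0.67
```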

AI Ranking Position

AI Ranking Position measures where a brand appears inside the AI answer, not just whether it appears.

The reports treat answer position as commercially meaningful because AI-generated recommendations are not neutral lists; users tend to notice and trust the first few names most. The system determines ranking by looking for an explicit ordered list or a "best / top / first / second" structure; if none exists, it defaults to the order in which tracked companies are first mentioned. This is included because being ranked first, second, or third has different likely outcomes than being buried lower in the response, even when overall visibility looks similar.
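The following Python sketch illustrates a simplified version of that two-step rule. Real extraction would need to handle far more answer formats, so treat the regular expression and the fallback as assumptions rather than the system's actual logic.

```python
import re

def extract_ranking(answer_text, tracked_brands):
    """Simplified sketch of the ranking rule described above: prefer an
    explicit numbered list; otherwise fall back to the order in which
    tracked brands are first mentioned. Real extraction would also need
    to handle "best / top / first / second" phrasing."""
    items = re.findall(r"^\s*\d+[.)]\s*(.+)$", answer_text, re.MULTILINE)
    if items:
        ranking = []
        for item in items:
            for brand in tracked_brands:
                if brand in item and brand not in ranking:
                    ranking.append(brand)
        return ranking
    # Fallback: first-mention order across the whole answer.
    positions = {b: answer_text.find(b) for b in tracked_brands}
    return [b for b, pos in sorted(positions.items(), key=lambda kv: kv[1])
            if pos >= 0]

text = "Top picks:\n1. Rival CRM\n2. Acme Suite\n3. Other Tool"
print(extract_ranking(text, ["Acme Suite", "Rival CRM", "Other Tool"]))
# ['Rival CRM', 'Acme Suite', 'Other Tool']
```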

AI Revenue Index (ARI)

AI Revenue Index is a directional value metric calculated as ARS × Q × VPQ, where ARS is AI Recommendation Share, Q is Query Volume, and VPQ is Value per Query.

This is the boardroom-friendly expression of the model: how much AI-influenced demand value a brand appears to control. It is not meant as exact attribution, but as a disciplined estimate of the revenue pool associated with AI recommendation share. It is included because it gives the report a commercially legible output rather than stopping at abstract visibility numbers.
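Because the formula is explicit, the calculation itself is trivial; the judgment lives in the inputs. A sketch with purely illustrative numbers:

```python
def ai_revenue_index(ars, query_volume, value_per_query):
    """ARI = ARS x Q x VPQ: a directional estimate, not attribution.
    ars: AI recommendation share (0..1); query_volume: queries per
    period; value_per_query: estimated value per query in currency."""
    return ars * query_volume * value_per_query

# Illustrative only: 18% recommendation share of 40,000 monthly
# queries, each worth an assumed $2.50.
print(ai_revenue_index(0.18, 40_000, 2.50))  # 18000.0
```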

Attribute-Level Sentiment

Attribute-level sentiment measures how specific brand attributes, such as price, trust, AI features, or usability, are described.

A brand's overall sentiment may look healthy while specific attributes are weak or risky. For example, a platform may be described positively for ease of use but negatively for pricing transparency. This is included because strategy usually requires knowing what exactly is helping or hurting recommendation quality, not just whether the brand is liked in general.
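One way to picture the output is as an average score per attribute. The (attribute, score) pair encoding below is an assumed data shape for illustration, not the report's internal format.

```python
from collections import defaultdict

def attribute_sentiment(mentions):
    """Average sentiment per attribute, where `mentions` is an assumed
    list of (attribute, score) pairs with scores in [-1, 1]."""
    scores = defaultdict(list)
    for attribute, score in mentions:
        scores[attribute].append(score)
    return {attr: sum(s) / len(s) for attr, s in scores.items()}

mentions = [("usability", 0.8), ("usability", 0.6),
            ("pricing transparency", -0.5)]
print(attribute_sentiment(mentions))
# {'usability': 0.7..., 'pricing transparency': -0.5}
```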

Average Rank (AR)

Average Rank summarizes ranking performance by converting answer position into a point score.

The scoring model assigns 10 points to rank 1, 9 to rank 2, and so on down to 1 point for rank 10, with rank 11+ scoring zero. This allows the reports to compress many prompt outcomes into one comparable ranking measure. It is included because raw rank snapshots are too fragmented at scale; Average Rank gives a clearer view of how strongly a brand performs across an entire cluster or platform rather than in a single answer.
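The scoring rule translates directly into code:

```python
def rank_points(rank):
    """Points under the scoring model above: rank 1 earns 10, rank 2
    earns 9, ... rank 10 earns 1, and rank 11 or lower earns 0."""
    return max(0, 11 - rank)

print([rank_points(r) for r in (1, 2, 10, 11)])  # [10, 9, 1, 0]
```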

Average Rank, Cluster-Wide

Cluster-wide Average Rank measures total ranking strength across all prompts in a cluster, including prompts where the brand does not appear.

This version of AR divides total ranking points by all prompts in the cluster, so absence hurts the score. That makes it a better measure of overall cluster strength than a "when mentioned" score. It is included because it answers the tougher question: not just how well the company ranks when it shows up, but how strong it is across the full opportunity set.

Average Rank When Mentioned

Average Rank When Mentioned measures how well the brand ranks only in prompts where it appears.

This isolates answer quality from answer frequency. A company might have low overall visibility but rank very strongly whenever it does appear, or it might appear often but only in low positions. This metric is included to separate those cases and prevent analysts from confusing absence problems with ranking-quality problems.
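The two Average Rank variants differ only in the denominator, as this sketch (inlining the 10-point scoring model defined under Average Rank) makes explicit:

```python
def average_rank(ranks, total_prompts=None):
    """Average Rank using the scoring model above. `ranks` holds the
    brand's rank in each prompt where it appeared. Pass `total_prompts`
    for the cluster-wide variant, in which absence dilutes the score;
    omit it for the when-mentioned variant."""
    points = sum(max(0, 11 - r) for r in ranks)  # rank 1 -> 10 pts, 11+ -> 0
    denominator = total_prompts or len(ranks)
    return points / denominator if denominator else 0.0

ranks = [1, 3, 2]                             # appeared in 3 of 10 prompts
print(average_rank(ranks))                    # when mentioned: 27 / 3 = 9.0
print(average_rank(ranks, total_prompts=10))  # cluster-wide: 27 / 10 = 2.7
```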

B

Brand-in-Question vs Organic Appearance

This distinction separates prompts that explicitly name the brand from prompts where the brand appears without being asked for directly.

A company often performs better when it is already named in the question, but the more valuable discovery signal is whether it appears organically in non-branded prompts. This is included because branded presence measures existing awareness, while organic appearance measures real recommendation power and category relevance.

Buyer Stage

Buyer Stage indicates where a prompt sits in the decision journey, such as discovery, comparison, or evaluation.

The report architecture recognizes that prompts at different stages have different commercial implications. Early educational prompts shape category entry, while pricing and shortlist prompts influence conversion-ready decisions. Buyer Stage is included because the same visibility level can mean very different things depending on whether the user is just learning or about to choose.

C

Competitive Gap

A Competitive Gap is the measurable difference between the target company and competitors on visibility, ranking, citation support, or framing.

The report is explicitly a competitive intelligence product, so it does not stop at describing the target in isolation. It compares where competitors outrank, out-cite, or out-convert the target in commercially important conversations. This is included because strategy depends on understanding not just internal weakness, but who is taking the recommendation share the target is losing.

Citation Architecture

Citation Architecture describes the sources and source patterns that appear to shape how AI systems talk about a brand.

This is one of the most important explanatory concepts in the report. AI models do not form recommendations out of nowhere; they are influenced by the domains, content types, and third-party references that repeatedly support answers. Citation Architecture captures those support structures, including official sites, editorial sources, reviews, forums, and other public references. It is included because many ranking and recommendation outcomes are driven less by brand messaging and more by the surrounding evidence environment.

Cited Domains

Cited Domains are the websites or root domains that appear as supporting sources in AI responses.

Tracking cited domains shows which websites are repeatedly used to support brand recommendations, explanations, or comparisons. This matters because the source of the answer often influences the framing of the answer. It is included to reveal whether the brand is supported by strong first-party and third-party evidence, or whether competitors own the domains AI systems keep leaning on.

Citation Source Mix

Citation Source Mix shows the distribution of source types supporting AI answers, such as official sites, editorial pages, reviews, forums, and social platforms.

The methodology specifically calls for classifying citations by source type because not all evidence plays the same role. Official pages may explain products, editorial content may define "best" lists, reviews may shape trust, and forums may shape real-user credibility. It is included because understanding the mix helps explain why certain brands are framed as authorities while others are treated as alternatives or cautions.
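A minimal sketch of that classification step, assuming a hand-maintained domain-to-type lookup; the hypothetical SOURCE_TYPES table below stands in for whatever classification system the methodology actually uses.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical mapping from root domain to source type; a real system
# would rely on a much larger, maintained classification table.
SOURCE_TYPES = {
    "acme.com": "official",
    "g2.com": "reviews",
    "reddit.com": "forum",
    "techradar.com": "editorial",
}

def citation_source_mix(citation_urls):
    """Distribution of source types across a set of cited URLs."""
    mix = Counter()
    for url in citation_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        mix[SOURCE_TYPES.get(domain, "other")] += 1
    return mix

urls = ["https://www.g2.com/products/acme", "https://acme.com/pricing",
        "https://reddit.com/r/saas/example-thread"]
print(citation_source_mix(urls))
# Counter({'reviews': 1, 'official': 1, 'forum': 1})
```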

Citation Moat / Citation Advantage

Citation Moat describes a durable advantage created when one brand is consistently supported by stronger and more numerous authoritative sources than competitors.

The sample reports repeatedly show that some competitors win not just because of brand strength, but because they have built an evidence environment AI systems consistently rely on. That produces self-reinforcing recommendation patterns. It is included because it explains why some brands are hard to displace even when the target has a competitive product.

Company-Associated Cited Domains

Company-associated cited domains are the domains most often connected to a specific brand when that brand appears in AI answers.

This metric goes beyond overall domain counts and asks which sources are effectively carrying the brand's authority. That can include the brand's own website, partner pages, editorial reviews, public discussions, or comparison sites. It is included because a company may appear often, but for fragile reasons tied to a narrow evidence base, while another brand may be supported by a broader, more durable citation network.

Competitive Velocity

Competitive Velocity measures how quickly competitors are gaining or losing AI discovery ground relative to the target company.

This turns momentum into a comparative signal. Instead of asking only whether the target improved, it asks whether the target improved faster or slower than the brands it competes against. It is included because a company can post positive gains and still lose relative position if competitors are accelerating faster.

Cost Per Click (CPC)

CPC is the estimated paid-search cost associated with a keyword and acts as a proxy for commercial intent or market value.

Higher CPC often signals that advertisers value traffic in that area, which makes it a useful proxy when estimating the economic importance of prompt clusters. It is included because the report's economic layer needs a directional commercial weighting system grounded in something more concrete than opinion.

D

Discovery Economics

Discovery Economics estimates the commercial significance of AI visibility and recommendation performance.

This is the value layer that converts discovery metrics into business relevance. Rather than treating appearance in AI answers as a vanity signal, the economics layer connects visibility to directional monetary value using search demand, commercial intent, and monetization proxies. It is included because decision-makers care not just who appears in AI answers, but what those appearances are likely worth.

F

Framing Distribution

Framing Distribution classifies the role the AI assigns to a brand, such as leader, strong option, specialist, alternative, fallback, or cautionary option.

This is one of the more sophisticated parts of the methodology because it captures market role, not just tone. A brand may receive positive sentiment but still be framed as niche, secondary, or conditional. It is included because the reports are meant to explain not just whether AI mentions a brand, but what kind of market position AI seems to believe that brand occupies.

H

High-Intent Prompt Cluster

A High-Intent Prompt Cluster is a themed group of commercially relevant prompts that represent a specific buying conversation.

Examples include pricing, alternatives, trust, educational, and recruitment-software prompts. Clustering matters because discovery performance varies dramatically by intent type; a brand may be strong in alternatives but invisible in pricing, or trusted in educational content but absent from shortlist prompts. Clusters are included because they make the report strategically useful: instead of one blended number, the report shows where the commercial problem actually lives.

M

Mention-to-Top-1 Rate

Mention-to-Top-1 Rate shows how often a brand converts an appearance into a first-place recommendation.

This is a conversion metric rather than a visibility metric. It asks: once the brand enters the answer, how often does it become the leading recommendation? That matters because some brands are commonly present but rarely endorsed as the best option. It is included because it reveals the gap between being known and being preferred.

Mention-to-Top-3 Rate

Mention-to-Top-3 Rate shows how often an appearance turns into a top-three placement.

This is similar to Mention-to-Top-1 but slightly broader and often more stable across clusters. It is useful for diagnosing whether a brand is merely peripheral or genuinely considered competitive once it is surfaced. It is included because a company can have acceptable presence but poor conversion into commercially meaningful positions.
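Both conversion metrics share one shape: the denominator is appearances, not prompts. A sketch covering Top-1 and Top-3 at once:

```python
def mention_to_top_k_rate(ranks, k):
    """Of the prompts where the brand appeared (`ranks`), the share in
    which it reached the top k positions."""
    if not ranks:
        return 0.0
    return sum(r <= k for r in ranks) / len(ranks)

ranks = [1, 4, 2, 7, 1]  # ranks in the 5 prompts where the brand appeared
print(mention_to_top_k_rate(ranks, 1))  # Mention-to-Top-1: 2/5 = 0.4
print(mention_to_top_k_rate(ranks, 3))  # Mention-to-Top-3: 3/5 = 0.6
```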

Monthly Momentum

Monthly Momentum tracks how the target company's AI visibility metrics change from month to month.

The system is designed for recurring reporting, so one static snapshot is not enough. Momentum shows whether share of voice, ranking, citations, and other indicators are improving, weakening, or holding flat. It is included because early movement often matters more than current size; a smaller brand gaining quickly may be strategically more important than a larger brand standing still.

P

Platform Visibility

Platform Visibility compares how the brand performs across individual AI systems such as ChatGPT, Gemini, Copilot, and Google AI surfaces.

The methodology explicitly separates datasets by platform because the same brand can perform differently across LLM environments. Each platform has its own retrieval patterns, answer structures, and citation tendencies, so a brand may win on one system and disappear on another. It is included because "AI visibility" is not one market; it is a set of overlapping but distinct discovery environments.

Platform Volatility

Platform Volatility measures how much performance changes across AI platforms or across reporting periods.

Because AI systems evolve quickly, visibility and recommendation behavior can shift by platform and by month. Tracking volatility helps analysts avoid overinterpreting one-off wins or losses. It is included because a durable opportunity is more valuable than a temporary fluctuation caused by platform instability.

Prompt Coverage

Prompt Coverage measures which relevant user prompts the brand appears in and which it misses.

The reports are built around the idea that not all prompts matter equally; what matters is coverage across high-intent questions that align with buying, comparing, trusting, or shortlisting. Prompt Coverage therefore shows where the brand is active and where it is absent across actual demand. It is included because brands often discover that their AI visibility is patchy: strong in some buyer questions and nonexistent in others.

Prompt Subtype Classification

Prompt subtype classification breaks a cluster into narrower prompt patterns or subtopics.

Within a larger cluster like "trust" or "comparison," prompts can still behave differently depending on wording, user need, or category angle. Classifying subtypes helps isolate which exact conversations the brand wins or loses. It is included because broad clusters can hide important nuances that matter for strategy and content intervention.

Q

Query Intent

Query Intent describes what the user is trying to achieve with a prompt, such as learning, comparing, pricing, trusting, or selecting.

Intent classification helps keep the report commercially grounded. It explains why some prompts are informational, some are evaluative, and some are directly transactional. It is included because the report's usefulness depends on distinguishing curiosity from buying behavior.

Query Volume (Q)

Query Volume measures how often the tracked prompts or prompt themes are searched or asked.

In the economics model, not all prompts are equally valuable; some represent much larger pools of demand than others. Query Volume is included because a brand winning a low-volume prompt cluster is less meaningful than winning a high-volume cluster with strong buyer intent. It helps weight visibility by actual market demand.

S

Search Volume

Search Volume is the estimated number of searches associated with a keyword or query set.

Search Volume is often used as an input into cluster weighting and economics modeling because it helps quantify the size of the opportunity behind specific themes. It is included because the report aims to focus on real demand, not just arbitrary prompt samples.

Sentiment

Sentiment measures whether the AI's description of a brand is positive, neutral, or negative in context.

The methodology treats sentiment as contextual rather than simplistic. It considers the user question, the wording used by the AI, and the role the company plays in the answer. This is included because a brand can be visible but framed negatively, qualified cautiously, or treated as a weaker option. Visibility without favorable sentiment may not translate into trust or conversion.

Sentiment Score / Net Sentiment

Sentiment Score and Net Sentiment summarize the balance of positive, neutral, and negative brand framing across prompts.

Rather than looking only at raw counts, these summary measures provide a quicker way to compare how a brand is being described across clusters or platforms. They are included because analysts need a compact read on whether brand framing is improving, deteriorating, or holding steady over time.
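The reports do not pin down the exact formula, but a common construction, assumed here purely for illustration, is the positive-minus-negative balance over all classified mentions:

```python
def net_sentiment(positive, neutral, negative):
    """One common net-sentiment construction, assumed here since the
    exact formula is not published: (pos - neg) / total, which yields
    a score in [-1, 1]."""
    total = positive + neutral + negative
    return (positive - negative) / total if total else 0.0

print(net_sentiment(positive=12, neutral=6, negative=2))  # 10/20 = 0.5
```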

T

Target Company Absence Prompt

A target company absence prompt is a prompt where competitors appear but the target company does not.

These are some of the most actionable records in the report because they represent live discovery losses. They show the exact moments where the market is having a conversation and the target brand is excluded from it. They are included because absence is often more strategically revealing than weak presence.

Top-1 Rate

Top-1 Rate is the percentage of prompts where the brand is ranked first in the AI answer.

This is the clearest measure of recommendation leadership. It shows how often the brand is the default, primary, or "best" answer rather than just one option among many. The value of Top-1 Rate is that it captures who is truly winning the buying moment. It is included because many brands are visible but rarely win the top slot, which creates a structural disadvantage that basic visibility metrics can hide.

Top-3 Rate

Top-3 Rate measures how often a brand appears among the first three recommended companies in an AI answer.

This is a practical conversion metric because many users focus on the first few options in a list. A brand that consistently lands in the top three is still meaningfully in the consideration set, even if it is not always first. It is included because it provides a more forgiving but still commercially useful measure of competitiveness than Top-1 alone, especially in crowded categories where the brand may not dominate but still earns serious evaluation.

Top-10 Rate

Top-10 Rate tracks how often a brand appears in the first ten ranked positions of an AI response.

This metric extends the ranking lens further down the answer to show whether a company is broadly present in ranked results even when it is not near the top. It is most useful for large comparison-style prompts where more brands appear. It is included because it helps distinguish total invisibility from weak-but-real inclusion and supports the ranking score logic used in Stage 0 extraction.
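All three Top-N rates can be expressed with one function once absence is encoded explicitly; the None-for-absent convention below is an assumption for illustration.

```python
def top_k_rate(ranks, k):
    """Share of ALL tested prompts where the brand landed in the top k.
    `ranks` holds one entry per prompt: the brand's rank, or None when
    the brand did not appear at all (assumed encoding)."""
    if not ranks:
        return 0.0
    return sum(r is not None and r <= k for r in ranks) / len(ranks)

ranks = [1, None, 3, 8, None, 2]  # 6 prompts tested
print(top_k_rate(ranks, 1))   # Top-1 Rate:  1/6
print(top_k_rate(ranks, 3))   # Top-3 Rate:  3/6 = 0.5
print(top_k_rate(ranks, 10))  # Top-10 Rate: 4/6
```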

U

Undercontested Opportunity

An Undercontested Opportunity is a prompt area or discovery segment where the target company could gain visibility with relatively less competitive resistance.

These are the openings the report is meant to surface: clusters, platforms, or citation environments where the target is not yet strong but where recoverability or upside looks favorable. It is included because the report is not just diagnostic; it is supposed to show where gains are most realistic and commercially meaningful.

V

Value per Query (VPQ)

Value per Query estimates the economic value associated with a given query or query class.

VPQ usually comes from affiliate economics, monetization benchmarks, CPC proxies, or similar commercial inputs. It translates demand into value instead of treating all queries equally. It is included because some prompt categories are much more monetizable than others, and the report's economics layer depends on reflecting that uneven value distribution.
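As a purely illustrative sketch of how a CPC proxy might be discounted into a per-query value; every factor below is a placeholder assumption, not a report constant.

```python
def value_per_query(cpc, click_share=0.3, conversion_proxy=1.0):
    """Illustrative VPQ proxy: discount a keyword's CPC by an assumed
    share of queries that produce a monetizable click. Both default
    factors are hypothetical, chosen only to make the example run."""
    return cpc * click_share * conversion_proxy

print(value_per_query(cpc=8.40))  # 8.40 * 0.3 = 2.52 per query
```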

W

Weighted Commercial Score

Weighted Commercial Score is a blended metric used to reflect the relative business value of a cluster or prompt set.

The automation specification includes this to prevent all clusters from being treated as equally important. By combining demand and commercial signals, the report can prioritize discovery areas that likely matter more to revenue or lead generation. It is included because strategic recommendations should be guided by opportunity size, not only by visibility weakness.
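The specification's actual weights and inputs are not given in this glossary, so the blend below is entirely hypothetical; it only illustrates the idea of combining normalized demand, value, and intent signals into a single prioritization score.

```python
def weighted_commercial_score(search_volume, cpc, intent_weight,
                              volume_norm=10_000, cpc_norm=10.0):
    """Hypothetical blend of demand and commercial signals. Every
    weight and normalizer here is an assumption for illustration."""
    demand = min(search_volume / volume_norm, 1.0)   # normalized demand
    value = min(cpc / cpc_norm, 1.0)                 # normalized CPC value
    return round(100 * (0.4 * demand + 0.3 * value + 0.3 * intent_weight), 1)

# A mid-volume, high-CPC, high-intent pricing cluster:
print(weighted_commercial_score(6_000, 9.0, intent_weight=0.9))  # 78.0
```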

See how AI is shaping buyer choice in your market

Start with a free AI Market Intelligence Report to understand how your company is being surfaced in high-intent AI buying moments.

Get My Free AI Market Intelligence Report