Case Study · 6 min read

The Visibility Trap: How AI Share of Voice Made a Brand Look Like It Was Winning While AI Was Sending Buyers Elsewhere

Life Alert appeared in 55% of AI pricing queries but captured 0% of recommendations. Learn why visibility in AI search didn’t translate into buyer selection.

A medical alert systems case study on why mentions, sentiment, and recommendations must be separated before companies trust AI visibility reports.

Independent market analysis. Not client work.


The Story Is Simple and Powerful

Most AI visibility tools answer: “Did the brand show up?”

LLM Authority Index answers:

“Was the brand actually recommended, how strongly, in which buying moment, and what monthly value did that recommendation capture?”

A legacy medical alert brand looked highly visible in AI answers. Measured with an ARS / share-of-voice-style metric, the company appeared strong, especially in comparison and pricing prompts.

But when the same category was evaluated using prompt-level sentiment and true recommendation capture, the story reversed: the company was visible because AI systems were discussing its weaknesses, not recommending it.

The competitor report showed Life Alert received 0% AI recommendation rate, 0% Top-3 ranking rate, and was framed as cautionary when present in many responses.

That is a staggering and important lesson.


The Core Problem

When we first measured AI visibility across high-intent medical alert system prompts, one legacy brand appeared to be in a strong position. It showed up frequently in comparison and pricing conversations, and a share-of-voice-style scoring model suggested the company was capturing meaningful monthly AI recommendation value.

But the underlying responses told a different story. AI systems were not consistently recommending the brand. They were often mentioning it as a familiar reference point, a legacy option, or a cautionary comparison while recommending competitors instead.

The difference was not a minor scoring adjustment. It completely changed the business conclusion.

Under the old metric, the company looked strong in pricing. Under sentiment-gated recommendation analysis, pricing became one of the clearest areas of exposure.

This case study explains why AI visibility alone is not enough — and why every serious AI market report must separate:

  • Mentions
  • Sentiment
  • Recommendation validity
  • Rank
  • Monthly captured value

Why This Case Study Works So Well

The mistake was not cosmetic. It changed the entire strategic diagnosis.

The three clusters used in the free report were not low-intent clusters. They were the exact buying moments companies care about most:

| Cluster | Buyer stage | Why it matters |
| --- | --- | --- |
| Best Medical Alert Systems — Discovery & Ranking | Consideration | Users are looking for the best or top-rated options |
| Medical Alert System Comparisons — Head-to-Head Evaluation | Evaluation | Users are comparing alternatives and deciding between providers |
| Medical Alert System Pricing — Cost & Plan Evaluation | Decision | Users are evaluating affordability, fees, contracts, and value |

Those three clusters are explicitly defined as commercial-intent buying clusters in the prompt set, with pricing prompts covering terms like cost, price, monthly fee, subscription, affordable, cheap, plan, and worth it.

That makes the pricing failure especially dangerous.

Pricing is not a passive awareness category. It is a decision-stage moment. If AI says a brand is expensive, opaque, inflexible, or a poor value, that should never become a positive ranking signal.


The Before / After Story

| Measurement approach | What the report concluded | Why it was dangerous |
| --- | --- | --- |
| ARS / share-of-voice-style scoring | Life Alert looked strong, ranked #2 overall, with modeled monthly value of about $271K. It appeared to lead Comparisons and Pricing, with zero missed value in those clusters. | The metric treated appearances, neutral mentions, negative mentions, and first mention order as if they were recommendation strength. |
| Sentiment-gated recommendation scoring | Life Alert was visible but not recommended. The competitor report found 0% AI recommendation rate, 0% Top-3 ranking rate, and frequent cautionary framing. | This revealed the actual commercial risk: brand awareness was turning into competitor demand capture. |

The old report made the company look safe in the exact places where it was most exposed.

The most dangerous AI visibility report is not one that misses a brand. It is one that finds the brand, counts the mention as a win, and misses that the AI answer is telling buyers to choose someone else.


The Problem With ARS

ARS sounded like a performance metric, but in practice it functioned like share of voice.

The internal project brief already warned against this exact failure mode:

  • Share of voice alone is not enough
  • Being mentioned is not the same as being recommended
  • AI systems do more than surface brands — they recommend, rank, compare, frame, and exclude them

The Life Alert test provided the proof.

The report did not merely overstate performance. It inverted the business conclusion.

In the flawed version, pricing looked like a Life Alert strength. The report said Life Alert held a #1 position in Pricing Evaluation, had 22.6% ARS, 22.2% Top 1, and about $157K in modeled monthly value with zero missed value.

But that same report’s sentiment section said the brand’s sentiment was overwhelmingly neutral-to-negative and that the absence of positive sentiment suggested AI platforms were acknowledging Life Alert without actively recommending it.


The New Methodology

The correction is not just “we fixed a report.”

It is:

We moved from AI visibility measurement to AI recommendation intelligence.

Old model

  • Mention = visibility
  • Visibility = share
  • Share = value

New model

  • Mention = presence only
  • Sentiment determines quality
  • Recommendation validity determines rank credit
  • Positive recommendation determines captured monthly value
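The gap between the two models can be sketched as a scoring rule. A minimal illustration, assuming hypothetical response records with `mentioned`, `sentiment`, and `recommended` fields (the names are illustrative, not the product's actual schema):

```python
# Minimal sketch of the two scoring models. Field names
# (mentioned, sentiment, recommended) are assumptions for illustration.

def old_model_score(responses):
    """Share-of-voice style: every mention counts as captured share."""
    mentions = sum(1 for r in responses if r["mentioned"])
    return mentions / len(responses)

def new_model_score(responses):
    """Sentiment-gated: only positive, valid recommendations count."""
    captured = sum(
        1 for r in responses
        if r["mentioned"] and r["sentiment"] == "positive" and r["recommended"]
    )
    return captured / len(responses)

responses = [
    {"mentioned": True,  "sentiment": "negative", "recommended": False},  # cautionary framing
    {"mentioned": True,  "sentiment": "neutral",  "recommended": False},  # familiar reference point
    {"mentioned": True,  "sentiment": "negative", "recommended": False},
    {"mentioned": False, "sentiment": None,       "recommended": False},
]

print(old_model_score(responses))  # 0.75 — looks strong
print(new_model_score(responses))  # 0.0  — nothing actually captured
```

The same four responses produce opposite conclusions: the old model reads them as 75% share; the new model reads them as zero captured recommendations.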

The Revised Case Study Structure

1. The setup: AI visibility looked strong

The company appeared frequently in high-intent prompts. It had brand recognition. It showed up in comparison and pricing conversations. A surface-level AI visibility report would treat that as good news.

2. The first warning sign: visibility and sentiment disagreed

Show that the company was being mentioned, but the actual language around it was neutral, comparative, or negative. This is where you introduce the core distinction:

A brand can be present in an AI answer and still be losing the buyer.

3. The metric failure

Explain that the ARS-style model counted appearances and ranking order without first determining whether the brand was actually being recommended.

The old methodology even allowed the first tracked-company mention order to become recommendation order when no explicit ranking existed.

That matters because AI answers often mention a known brand first as context, then recommend competitors.
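That fallback failure can be shown concretely. A toy sketch, with an invented answer string and brand list (the text and logic are illustrative, not the actual pipeline):

```python
# Sketch of the flawed fallback: when an AI answer contained no explicit
# ranking, the old methodology used first-mention order as recommendation
# order — so a brand named first as mere context "ranked" #1.

answer = (
    "Life Alert is the most recognized name, but its contracts and "
    "pricing draw complaints. For most buyers, Medical Guardian or "
    "Bay Alarm Medical is the better choice."
)

tracked = ["Life Alert", "Medical Guardian", "Bay Alarm Medical"]

# Old fallback: order tracked brands by where each first appears.
mention_order = sorted(
    (b for b in tracked if b in answer), key=answer.find
)
print(mention_order[0])  # "Life Alert" — credited as the #1 recommendation
```

The answer explicitly steers buyers away from the first-mentioned brand, yet position-based scoring hands that brand the top slot.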

4. The pricing problem: the most dangerous cluster

Make pricing the centerpiece.

Pricing prompts are decision-stage prompts. The three-cluster prompt set explicitly includes searches around cost, monthly fee, subscription, affordability, contract terms, hidden fees, price comparisons, and whether the system is worth the cost.

If a brand is visible there because buyers are asking about affordability concerns, contract issues, or better alternatives, that is not captured value. That is at-risk demand.

5. The correction: sentiment-gated recommendations

Introduce the new rules:

  • Negative = -1
  • Neutral = 0
  • Positive = +1

Only positive, valid recommendations can receive:

  • Recommended Rank 1 credit
  • Recommended Top 3 credit
  • Average Recommended Rank inclusion
  • Monthly Captured Recommendation Value

Neutral mentions are informational. Negative mentions are risk signals. Neither should create captured recommendation value.
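The gating rules above can be sketched as a single credit function. A minimal sketch, assuming hypothetical field names (`sentiment`, `is_valid_recommendation`, `rank`) and treating one mention's captured value as the full cluster value for simplicity:

```python
# Sketch of sentiment gating: map sentiment to -1 / 0 / +1 and grant
# rank credit and captured value only for positive, valid recommendations.
SENTIMENT_SCORE = {"negative": -1, "neutral": 0, "positive": 1}

def recommendation_credit(mention, monthly_cluster_value):
    """Return rank credit and captured value for one AI response mention.

    `mention` is a dict with assumed keys:
    sentiment, is_valid_recommendation, rank.
    """
    score = SENTIMENT_SCORE[mention["sentiment"]]
    if score <= 0 or not mention["is_valid_recommendation"]:
        # Neutral mentions are informational; negative mentions are risk
        # signals. Neither creates captured recommendation value.
        return {"top1": 0, "top3": 0, "captured_value": 0.0}
    rank = mention["rank"]
    return {
        "top1": 1 if rank == 1 else 0,
        "top3": 1 if rank <= 3 else 0,
        "captured_value": monthly_cluster_value,
    }

# A negative first-position mention earns nothing under the new rules:
print(recommendation_credit(
    {"sentiment": "negative", "is_valid_recommendation": False, "rank": 1},
    157_000,
))  # {'top1': 0, 'top3': 0, 'captured_value': 0.0}
```

Under this gate, the "#1 in Pricing" result from the old report collapses to zero captured value, which matches the corrected diagnosis.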

6. The new report standard

  • AI Company Index → three high-intent clusters; monthly captured recommendation value only
  • AI Competitor Index → who captures the value the target company does not capture
  • AI Market Intelligence Report → full 10-cluster analysis, including deeper displacement, negative framing, defensive exposure, citation architecture, and market moats

7. The lesson for companies

A company using a raw AI share-of-voice report could believe it is winning when AI systems are actually using the brand as the cautionary example that helps competitors close the sale.