Core Concepts

Understand the architecture behind LLMMonitor — how scans work, what every metric means, how competitors are detected, and how citations are tracked.

How LLMMonitor Works

LLMMonitor operates on a simple loop: ask AI models questions your customers ask, then analyze their answers for your brand. Behind that simplicity is a sophisticated pipeline that handles browser automation, entity extraction, sentiment analysis, and competitor detection.

At a high level, here's what happens every time you run a scan (condensed into a code sketch after the list):

  1. Prompt loading — Your configured prompts are loaded from your account
  2. LLM interaction — Each prompt is sent to each selected AI model (ChatGPT, Gemini, Claude, Perplexity)
  3. Response capture — The full text response, citations, and internal search queries are extracted
  4. Brand detection — Your brand name and all aliases are searched for in the response text
  5. Competitor detection — All competitor brands (from your config) are searched for in the response
  6. Sentiment analysis — The text surrounding each brand mention is analyzed for positive/negative language
  7. Position scoring — Each brand's mention order is recorded (1st = position 1, 2nd = position 2, etc.)
  8. Citation extraction — All URLs and domains referenced by the AI are captured
  9. Storage — Results are written to your database and immediately available on the dashboard
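
In code, that loop condenses to something like the sketch below. Every name here (run_scan, ScanResult, the ask callback) is hypothetical and illustrates the flow rather than LLMMonitor's actual internals; sentiment scoring (step 6) is sketched separately under Sentiment below.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ScanResult:
    model: str
    prompt: str
    response: str
    positions: dict = field(default_factory=dict)   # brand -> mention order (99 = absent)
    citations: list = field(default_factory=list)   # URLs found in the response

def first_offset(text, names):
    """Character offset of the earliest case-insensitive match of any name, or None."""
    hits = [m.start() for n in names for m in re.finditer(re.escape(n), text, re.IGNORECASE)]
    return min(hits) if hits else None

def run_scan(prompts, models, brands, ask):
    """brands maps each brand name to a list of aliases; ask(model, prompt) returns text."""
    results = []
    for prompt in prompts:                                      # 1. prompt loading
        for model in models:                                    # 2. LLM interaction
            text = ask(model, prompt)                           # 3. response capture
            result = ScanResult(model, prompt, text)
            # 4-5. brand and competitor detection by name and alias
            offsets = {b: first_offset(text, [b, *a]) for b, a in brands.items()}
            # 7. position scoring: rank brands by order of first appearance
            ranked = sorted((off, b) for b, off in offsets.items() if off is not None)
            for rank, (_, brand) in enumerate(ranked, start=1):
                result.positions[brand] = rank
            for brand in brands:
                result.positions.setdefault(brand, 99)          # not mentioned at all
            # 8. citation extraction (naive URL regex as a stand-in)
            result.citations = re.findall(r"https?://\S+", text)
            results.append(result)                              # 9. storage (in memory here)
    return results
```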

The Scan Pipeline

LLMMonitor uses two different methods to interact with AI models, depending on the platform:

Web scraping (Selenium)

For ChatGPT, Gemini (web), and Claude, LLMMonitor uses browser automation to interact with the actual web interfaces — the same ones your customers use. This ensures responses match what real users see, not sanitized API outputs.

Why web scraping? API responses often differ from what users see in the actual interface. Sources may differ, formatting may differ, and some features (like web search in ChatGPT) behave differently via API. LLMMonitor prioritizes authentic user-experience data.
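
As a rough illustration of this mode, a Selenium interaction could look like the following. The CSS selectors and the single wait are placeholders; real chat interfaces change often and stream their answers, so treat this as a sketch, not LLMMonitor's actual scraper.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def ask_via_browser(url, prompt, input_css, response_css, timeout=120):
    """Submit a prompt through a chat web UI and return the visible response text."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # wait for the prompt box to render, then type and submit the question
        box = WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, input_css)))
        box.send_keys(prompt + Keys.RETURN)
        # wait for a response element; production code would also wait for
        # streaming to finish before reading the text
        answer = WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, response_css)))
        return answer.text
    finally:
        driver.quit()
```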

API mode

For Perplexity and Gemini (API), LLMMonitor uses official REST APIs. API mode is faster, more reliable, and provides richer structured data (like exact search queries the model performed internally).
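
For example, a minimal Perplexity request might look like the sketch below. It uses Perplexity's OpenAI-style chat completions endpoint; the model name and the citations field follow Perplexity's public API documentation but may change, and this is not LLMMonitor's actual client code.

```python
import requests

def ask_perplexity(prompt, api_key, model="sonar"):
    """Return (response_text, citation_urls) from one Perplexity Sonar call."""
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["choices"][0]["message"]["content"], data.get("citations", [])
```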

| Platform | Web Scraping | API Mode |
|---|---|---|
| ChatGPT | Selenium | OpenAI Responses API |
| Gemini | Selenium | Gemini API + Search grounding |
| Claude | Selenium | — |
| Perplexity | — | Sonar API |

Metrics & Scores

Every scan produces a set of metrics for your brand and every competitor detected. Here's what each one means:

Visibility

Definition: The percentage of scans where your brand appears in the AI response.

Visibility % = (Scans with brand mentioned / Total scans) × 100

Example: Your brand was mentioned in 12 out of 20 scans → 60% visibility

Visibility is your most important top-line metric. It answers: "When people ask AI about my industry, do they hear about me?"

Share of Voice (SoV)

Definition: Your brand mentions as a percentage of all brand mentions across all scans.

SoV % = (Your brand mentions / Total mentions of all tracked brands) × 100

Example: Your brand was mentioned 45 times, all competitors combined were mentioned 100 times → 31% SoV

SoV tells you your relative market presence in AI conversations. A 50% SoV means you're mentioned as often as all competitors combined.
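
Both metrics are simple ratios. A quick sketch reproducing the two examples above:

```python
def visibility_pct(scans_with_brand, total_scans):
    """Percentage of scans in which the brand appeared at all."""
    return 100 * scans_with_brand / total_scans

def sov_pct(brand_mentions, total_mentions_all_brands):
    """Brand mentions as a share of all tracked-brand mentions."""
    return 100 * brand_mentions / total_mentions_all_brands

print(visibility_pct(12, 20))   # 60.0, the Visibility example above
print(sov_pct(45, 45 + 100))    # ~31.0, i.e. 45 of 145 total tracked-brand mentions
```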

Sentiment

Definition: A numeric score (and label) indicating how positively the AI describes your brand.

| Score Range | Label | Meaning |
|---|---|---|
| Above 5 | Positive | AI uses favorable language ("leading", "trusted", "excellent") |
| -5 to 5 | Neutral | AI mentions your brand factually without strong opinion |
| Below -5 | Negative | AI uses critical or unfavorable language |

Sentiment is calculated by analyzing the text within ±200 characters of each brand mention. Positive-weighted words (+2 each) and negative-weighted words (-3 each) are tallied to produce the final score.
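
A toy version of that windowed scoring is shown below. Only the ±200-character window and the +2/-3 weights come from the description above; the word lists are placeholders, not LLMMonitor's actual vocabulary.

```python
import re

POSITIVE = {"leading", "trusted", "excellent"}   # placeholder word list
NEGATIVE = {"poor", "unreliable", "limited"}     # placeholder word list

def sentiment_at(text, mention_offset, window=200):
    """Score the text within +/- window characters of one brand mention."""
    ctx = text[max(0, mention_offset - window):mention_offset + window].lower()
    words = re.findall(r"[a-z']+", ctx)
    return sum(2 for w in words if w in POSITIVE) - sum(3 for w in words if w in NEGATIVE)

text = "Acme is a trusted, leading platform, though support can be limited."
print(sentiment_at(text, text.find("Acme")))   # 2 + 2 - 3 = 1 -> Neutral
```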

Sentiment is contextual: A "negative" sentiment doesn't necessarily mean the AI hates your brand — it could mean the AI is discussing a limitation or challenge that involves your brand. Always read the full response for context.

Position

Definition: Where your brand appears in the order of brand mentions within a response. Lower is better. Position 1 means you're the first brand mentioned.

Position scale:
1 = First brand mentioned (best)
2 = Second brand mentioned
3 = Third brand mentioned, and so on
99 = Not mentioned at all

Position matters because AI models tend to give more weight and detail to brands they mention first. Being in position 1 vs position 5 can mean the difference between a detailed recommendation and a passing mention.

SRO Score — AI Search Readiness

The SRO Score is a composite 0-100 score that measures your overall AI search health. It's calculated from five weighted components:

| Component | Weight | What it measures |
|---|---|---|
| Visibility | 30% | How often you appear in AI responses |
| Position | 20% | How prominently you appear when mentioned |
| Sentiment | 15% | How positively you're described |
| Citation Presence | 20% | How often your domain/content is cited as a source |
| UGC Coverage | 15% | Presence in user-generated content sources (Reddit, forums, etc.) |

Your SRO Score comes with prioritized recommendations for improvement. A score above 70 is strong; below 40 signals significant gaps in your AI visibility strategy.
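
In code, the composite reduces to a weighted sum, as sketched below. This assumes each component has already been normalized to a 0-100 score; the normalization itself isn't documented here.

```python
SRO_WEIGHTS = {
    "visibility": 0.30,
    "position": 0.20,
    "sentiment": 0.15,
    "citation_presence": 0.20,
    "ugc_coverage": 0.15,
}

def sro_score(components):
    """components: {component_name: score on a 0-100 scale}."""
    return round(sum(SRO_WEIGHTS[k] * components[k] for k in SRO_WEIGHTS), 1)

print(sro_score({"visibility": 60, "position": 80, "sentiment": 70,
                 "citation_presence": 40, "ugc_coverage": 30}))   # 57.0
```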

Competitor Tracking

LLMMonitor detects competitors automatically in every AI response. Here's how:

  1. Your competitor list — You configure competitors in Settings with their brand names and aliases
  2. Response scanning — When an AI responds to a prompt, LLMMonitor searches the full response text for every competitor's name and aliases
  3. Position assignment — Competitors are ranked by their order of first appearance in the response
  4. Sentiment per competitor — The text around each competitor mention is analyzed separately, so you can see if the AI describes them more favorably than you

What you can learn from competitor data

Because every scan records per-competitor positions, sentiment, and mention counts, you can see which competitors the AI names first for your customers' questions, whether the AI describes any of them more favorably than you, and how your Share of Voice compares against each of them over time.

Citations & Sources

When AI models answer questions, they often cite external sources — websites, articles, documentation. LLMMonitor captures every citation at both the domain and URL level.

How citations are extracted

Citation extraction varies by platform. For the web-scraped platforms (ChatGPT, Gemini web, and Claude), citations are read from the links rendered in the response interface; for the API platforms (Perplexity and Gemini API), they arrive as structured fields in the API response, which is part of why API mode yields richer source data.

Citation metrics

| Metric | Definition |
|---|---|
| Retrieved % | Percentage of chats where at least one URL from this domain appeared as a source |
| Retrieval Rate | Average number of URLs from this domain per chat |
| Citation Rate | How often the domain is explicitly cited (vs. used silently as background context) |
| Content Type | Classification: Corporate, Editorial, UGC, Government, Academic, etc. |
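
For illustration, the first two metrics can be computed from per-chat citation lists as below; domain_metrics is a hypothetical helper, not part of LLMMonitor's API.

```python
from urllib.parse import urlparse

def domain_metrics(chats, domain):
    """chats: one list of cited URLs per chat. Returns (Retrieved %, Retrieval Rate)."""
    counts = [sum(urlparse(u).netloc.endswith(domain) for u in urls) for urls in chats]
    retrieved_pct = 100 * sum(c > 0 for c in counts) / len(chats)
    retrieval_rate = sum(counts) / len(chats)
    return retrieved_pct, retrieval_rate

chats = [["https://example.com/a", "https://other.io/x"], ["https://other.io/y"]]
print(domain_metrics(chats, "example.com"))   # (50.0, 0.5)
```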

Brand visibility vs. source visibility

Key distinction: Your brand can be cited as a source without being mentioned by name. This is a critical insight: if AI models trust your content enough to cite it, but don't name your brand, you have a brand recognition problem — not a content quality problem.

Conversely, if you're mentioned often but never cited, AI models associate your name with your industry but don't trust your content as an authoritative reference. Both patterns reveal different strategic gaps.