Core Concepts
Understand the architecture behind LLMMonitor — how scans work, what every metric means, how competitors are detected, and how citations are tracked.
How LLMMonitor Works
LLMMonitor operates on a simple loop: ask AI models questions your customers ask, then analyze their answers for your brand. Behind that simplicity is a sophisticated pipeline that handles browser automation, entity extraction, sentiment analysis, and competitor detection.
At a high level, here's what happens every time you run a scan:
- Prompt loading — Your configured prompts are loaded from your account
- LLM interaction — Each prompt is sent to each selected AI model (ChatGPT, Gemini, Claude, Perplexity)
- Response capture — The full text response, citations, and internal search queries are extracted
- Brand detection — Your brand name and all aliases are searched for in the response text
- Competitor detection — All competitor brands (from your config) are searched for in the response
- Sentiment analysis — The text surrounding each brand mention is analyzed for positive/negative language
- Position scoring — Each brand's mention order is recorded (1st = position 1, 2nd = position 2, etc.)
- Citation extraction — All URLs and domains referenced by the AI are captured
- Storage — Results are written to your database and immediately available on the dashboard
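The sketch below compresses that loop into a few lines of Python. Every name in it (`run_scan`, `send_prompt`, the result shape) is hypothetical, since LLMMonitor doesn't publish its internals; treat it as an illustration of how the stages fit together, not the actual implementation:

```python
import re

def run_scan(prompts, send_prompt, brands):
    """Hypothetical scan loop. `send_prompt` stands in for the browser or API
    call, and `brands` maps each tracked name (yours plus competitors) to its
    list of aliases."""
    results = []
    for prompt in prompts:                                  # prompt loading
        text = send_prompt(prompt)                          # LLM interaction + capture
        found = []
        for name, aliases in brands.items():                # brand + competitor detection
            starts = [m.start() for alias in [name, *aliases]
                      for m in re.finditer(re.escape(alias), text, re.IGNORECASE)]
            if starts:
                found.append((min(starts), name))
        found.sort()                                        # order of first appearance
        for position, (offset, name) in enumerate(found, start=1):
            results.append({
                "prompt": prompt,
                "brand": name,
                "position": position,                       # position scoring
                "context": text[max(0, offset - 200):offset + 200],  # sentiment window
            })
    return results                                          # ready for storage
```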
The Scan Pipeline
LLMMonitor uses two different methods to interact with AI models, depending on the platform:
Web scraping (Selenium)
For ChatGPT, Gemini (web), and Claude, LLMMonitor uses browser automation to interact with the actual web interfaces — the same ones your customers use. This ensures responses match what real users see, not sanitized API outputs.
API mode
For Perplexity and Gemini (API), LLMMonitor uses official REST APIs. API mode is faster, more reliable, and provides richer structured data (like exact search queries the model performed internally).
| Platform | Web Scraping | API Mode | Search Queries | Citations |
|---|---|---|---|---|
| ChatGPT | Selenium | OpenAI Responses API | ✓ | ✓ |
| Gemini | Selenium | Gemini API + Search grounding | ✓ | ✓ |
| Claude | Selenium | — | — | ✓ |
| Perplexity | — | Sonar API | ✓ | ✓ |
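In API mode, a scan reduces to a single HTTPS call per prompt. As one concrete example, here is a minimal sketch against Perplexity's public Sonar endpoint; the endpoint and payload follow Perplexity's published API, but how LLMMonitor wraps it internally is an assumption:

```python
import requests

def ask_perplexity(prompt: str, api_key: str) -> dict:
    """One Sonar API call: returns the answer text plus structured source
    data in a single JSON payload, with no browser automation involved."""
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "sonar",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```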
Metrics & Scores
Every scan produces a set of metrics for your brand and every competitor detected. Here's what each one means:
Visibility
Definition: The percentage of scans where your brand appears in the AI response.
Visibility % = (Scans with brand mentioned / Total scans) × 100
Example: Your brand was mentioned in 12 out of 20 scans → 60% visibility
Visibility is your most important top-line metric. It answers: "When people ask AI about my industry, do they hear about me?"
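The arithmetic, for reference (plain Python, nothing LLMMonitor-specific):

```python
def visibility_pct(scans_with_brand: int, total_scans: int) -> float:
    """Visibility % = (scans with brand mentioned / total scans) x 100."""
    return scans_with_brand / total_scans * 100

print(visibility_pct(12, 20))  # 60.0, matching the example above
```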
Share of Voice (SoV)
Definition: Your brand mentions as a percentage of all brand mentions across all scans.
SoV % = (Your brand mentions / Total mentions of all tracked brands) × 100
Example: Your brand was mentioned 45 times, all competitors combined were mentioned 100 times → 31% SoV
SoV tells you your relative market presence in AI conversations. A 50% SoV means you're mentioned as often as all competitors combined.
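Worked through in code. Note that the denominator includes your own mentions, which is why 45 mentions against 100 competitor mentions yields 31%, not 45%:

```python
def sov_pct(your_mentions: int, competitor_mentions: int) -> float:
    """SoV % = (your mentions / total mentions of all tracked brands) x 100.
    The denominator counts your mentions plus all competitors' mentions."""
    return your_mentions / (your_mentions + competitor_mentions) * 100

print(round(sov_pct(45, 100)))  # 31, matching the example above
```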
Sentiment
Definition: A numeric score (and label) indicating how positively the AI describes your brand.
| Score Range | Label | Meaning |
|---|---|---|
| Above 5 | Positive | AI uses favorable language ("leading", "trusted", "excellent") |
| -5 to 5 | Neutral | AI mentions your brand factually without strong opinion |
| Below -5 | Negative | AI uses critical or unfavorable language |
Sentiment is calculated by analyzing the text within ±200 characters of each brand mention. Positive-weighted words (+2 each) and negative-weighted words (-3 each) are tallied to produce the final score.
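A sketch of that windowed scoring. The ±200-character window and the +2/-3 weights come from the description above; the word lists are made up, since LLMMonitor's actual lexicons aren't published:

```python
import re

POSITIVE = {"leading", "trusted", "excellent"}   # illustrative lexicons only;
NEGATIVE = {"outdated", "buggy", "expensive"}    # the real word lists are internal

def sentiment_score(text: str, brand: str) -> int:
    """Tally +2 per positive word and -3 per negative word found within 200
    characters on either side of each brand mention."""
    score = 0
    for m in re.finditer(re.escape(brand), text, re.IGNORECASE):
        window = text[max(0, m.start() - 200):m.end() + 200].lower()
        words = re.findall(r"[a-z]+", window)
        score += 2 * sum(w in POSITIVE for w in words)
        score -= 3 * sum(w in NEGATIVE for w in words)
    return score  # above 5 = Positive, -5 to 5 = Neutral, below -5 = Negative
```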
Position
Definition: Where your brand appears in the order of brand mentions within a response. Lower is better. Position 1 means you're the first brand mentioned.
Position scale:
- 1 = First brand mentioned (best)
- 2 = Second brand mentioned
- 3 = Third or later
- 99 = Not mentioned at all
Position matters because AI models tend to give more weight and detail to brands they mention first. Being in position 1 versus position 3 or later can mean the difference between a detailed recommendation and a passing mention.
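A minimal sketch of position assignment, including the 99 sentinel for brands that never appear. The brand names are made up:

```python
def positions(text: str, brands: list[str]) -> dict[str, int]:
    """Rank brands by order of first mention; 99 = not mentioned at all."""
    offsets = {b: text.lower().find(b.lower()) for b in brands}
    ranked = sorted((o, b) for b, o in offsets.items() if o != -1)
    result = {b: 99 for b in brands}
    for rank, (_, b) in enumerate(ranked, start=1):
        result[b] = rank
    return result

print(positions("Acme and Globex lead; Initech trails.",
                ["Globex", "Acme", "Initech", "Umbrella"]))
# {'Globex': 2, 'Acme': 1, 'Initech': 3, 'Umbrella': 99}
```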
SRO Score — AI Search Readiness
The SRO Score is a composite 0-100 score that measures your overall AI search health. It's calculated from five weighted components:
| Component | Weight | What it measures |
|---|---|---|
| Visibility | 30% | How often you appear in AI responses |
| Position | 20% | How prominently you appear when mentioned |
| Sentiment | 15% | How positively you're described |
| Citation Presence | 20% | How often your domain/content is cited as a source |
| UGC Coverage | 15% | Presence in user-generated content sources (Reddit, forums, etc.) |
Your SRO Score comes with prioritized recommendations for improvement. A score above 70 is strong; below 40 signals significant gaps in your AI visibility strategy.
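With the table's weights, the composite is a plain weighted sum. How LLMMonitor normalizes each raw metric to a 0-100 component score isn't documented, so the inputs below are assumptions:

```python
SRO_WEIGHTS = {                    # weights from the table above
    "visibility": 0.30,
    "position": 0.20,
    "sentiment": 0.15,
    "citation_presence": 0.20,
    "ugc_coverage": 0.15,
}

def sro_score(components: dict[str, float]) -> float:
    """Weighted sum of the five components, each assumed pre-normalized to 0-100."""
    return sum(SRO_WEIGHTS[k] * components[k] for k in SRO_WEIGHTS)

print(sro_score({"visibility": 60, "position": 80, "sentiment": 50,
                 "citation_presence": 40, "ugc_coverage": 30}))  # 54.0
```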
Competitor Tracking
LLMMonitor detects competitors automatically in every AI response. Here's how:
- Your competitor list — You configure competitors in Settings with their brand names and aliases
- Response scanning — When an AI responds to a prompt, LLMMonitor searches the full response text for every competitor's name and aliases
- Position assignment — Competitors are ranked by their order of first appearance in the response
- Sentiment per competitor — The text around each competitor mention is analyzed separately, so you can see if the AI describes them more favorably than you
What you can learn from competitor data
- Who dominates your space — Which competitors have the highest visibility and SoV
- Who gets better sentiment — Are competitors described more positively? That's a content/PR signal
- Co-occurrence patterns — Which competitors appear alongside you most often
- Battlecards — Head-to-head win rates: in scans where both you and a competitor appear, who's mentioned first?
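As a concrete example of the battlecard metric, here is a sketch of a head-to-head win rate over scan results. The brand names and the `{brand: position}` data shape are assumptions for illustration:

```python
def battlecard_win_rate(scans: list[dict[str, int]], you: str, rival: str) -> float:
    """Among scans where both brands appear, the share where `you` holds the
    earlier (lower) position, as a percentage."""
    both = [s for s in scans if you in s and rival in s]
    if not both:
        return 0.0
    wins = sum(1 for s in both if s[you] < s[rival])
    return 100 * wins / len(both)

print(battlecard_win_rate(
    [{"Acme": 1, "Globex": 2}, {"Acme": 3, "Globex": 1}, {"Acme": 2}],
    "Acme", "Globex"))  # 50.0 — one win in the two head-to-head scans
```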
Citations & Sources
When AI models answer questions, they often cite external sources — websites, articles, documentation. LLMMonitor captures every citation at both the domain and URL level.
How citations are extracted
Citation extraction varies by platform:
- ChatGPT (web): Citations are extracted from the DOM — source pills at the bottom of responses, inline citation links, and the web search reference carousel
- ChatGPT (API): Citations come from `web_search_call.action.queries` and inline URL references in the response
- Gemini: Source links extracted from the response DOM and API grounding metadata
- Claude: Inline URLs and reference links parsed from the response text
- Perplexity: Structured source data from the API response
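For the text-parsing cases (such as Claude's inline URLs), extraction can be as simple as a regex pass grouped by domain. This is a sketch of that idea, not LLMMonitor's actual parser, and it leaves out the DOM-based extraction used on the web platforms:

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def extract_citations(text: str) -> dict[str, list[str]]:
    """Pull inline URLs from response text and group them by domain — the
    same two levels (domain and URL) that LLMMonitor reports."""
    by_domain: dict[str, list[str]] = {}
    for raw in URL_RE.findall(text):
        url = raw.rstrip(".,;:)]}>\"'")              # trim trailing punctuation
        domain = urlparse(url).netloc.removeprefix("www.")
        by_domain.setdefault(domain, []).append(url)
    return by_domain

print(extract_citations("See https://www.example.com/guide (and https://docs.example.com)."))
# {'example.com': ['https://www.example.com/guide'],
#  'docs.example.com': ['https://docs.example.com']}
```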
Citation metrics
| Metric | Definition |
|---|---|
| Retrieved % | Percentage of chats where at least one URL from this domain appeared as a source |
| Retrieval Rate | Average number of URLs from this domain per chat |
| Citation Rate | How often the domain is explicitly cited (vs. used silently as background context) |
| Content Type | Classification: Corporate, Editorial, UGC, Government, Academic, etc. |
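The first two metrics fall out directly from per-chat source lists. The data shape here (one list of cited domains per chat, duplicates allowed) is an assumption:

```python
def citation_metrics(chats: list[list[str]], domain: str) -> dict[str, float]:
    """Retrieved % and Retrieval Rate for one domain, given the source
    domains captured in each chat."""
    hits = [chat.count(domain) for chat in chats]
    return {
        "retrieved_pct": 100 * sum(1 for h in hits if h) / len(chats),
        "retrieval_rate": sum(hits) / len(chats),
    }

print(citation_metrics([["acme.com", "acme.com"], ["other.com"], []], "acme.com"))
# {'retrieved_pct': 33.33..., 'retrieval_rate': 0.66...}
```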
Brand visibility vs. source visibility
Being cited as a source is not the same as being mentioned as a brand. If your domain is cited frequently but your brand is rarely named, AI models trust your content as a reference without associating your name with the category. Conversely, if you're mentioned often but never cited, AI models associate your name with your industry but don't trust your content as an authoritative reference. Each pattern reveals a different strategic gap.