Dashboard & Feature Guide
A walkthrough of every section on the LLMMonitor dashboard, plus how to manage prompts, tags, and source analysis.
Dashboard Overview
The Overview dashboard (/dashboard) is your command center. It shows all your brand's AI visibility data at a glance.
Visibility Score
The large percentage at the top of the dashboard. This is your overall brand visibility — the percentage of scans where your brand was mentioned. The delta indicator (green up / red down arrow) shows change vs. the previous period.
Below the score, an area chart plots your visibility over time. You can toggle individual competitors on/off to overlay their visibility trends alongside yours.
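The score and its delta reduce to a simple ratio. A minimal TypeScript sketch, where the `Scan` shape and its field names are illustrative assumptions, not LLMMonitor's actual data model:

```typescript
// Hypothetical per-scan record: did the brand appear in this AI response?
interface Scan {
  timestamp: number;       // ms since epoch
  brandMentioned: boolean;
}

/** Percentage of scans in which the brand was mentioned. */
function visibility(scans: Scan[]): number {
  if (scans.length === 0) return 0;
  const hits = scans.filter((s) => s.brandMentioned).length;
  return (hits / scans.length) * 100;
}

/** Delta vs. the previous period: positive = green up arrow, negative = red down. */
function visibilityDelta(current: Scan[], previous: Scan[]): number {
  return visibility(current) - visibility(previous);
}
```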
Presence by LLM
Horizontal bar charts showing your mention rate per AI platform. For example, you might see 72% on ChatGPT, 58% on Gemini, and 34% on Claude. Large disparities between platforms are a signal — investigate why you're strong on one but weak on another.
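The per-platform bars are the same mention-rate calculation, grouped by model. A sketch under the assumption that each scan records which LLM produced it (field names are illustrative):

```typescript
// Hypothetical scan record tagged with the model that answered.
interface ScanResult {
  llm: "ChatGPT" | "Gemini" | "Claude";
  brandMentioned: boolean;
}

/** Mention rate (rounded %) per AI platform. */
function presenceByLLM(scans: ScanResult[]): Map<string, number> {
  const totals = new Map<string, { hits: number; total: number }>();
  for (const s of scans) {
    const t = totals.get(s.llm) ?? { hits: 0, total: 0 };
    t.total += 1;
    if (s.brandMentioned) t.hits += 1;
    totals.set(s.llm, t);
  }
  const rates = new Map<string, number>();
  for (const [llm, { hits, total }] of totals) {
    rates.set(llm, Math.round((hits / total) * 100));
  }
  return rates;
}
```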
Rankings Table
A sortable, paginated table showing every brand tracked, with columns for:
| Column | Description |
|---|---|
| Brand | Brand name with favicon |
| Visibility | % of scans where brand appeared |
| Sentiment | Average sentiment score |
| Avg Position | Average rank when mentioned (lower = better) |
| Rank | Overall rank position |
Recent Mentions
A card grid showing the latest AI responses that mention your brand. Each card shows the prompt, the LLM, the response excerpt, and which competitors also appeared. Click any card to open the full Response Modal.
Additional Dashboard Sections
- Top Domains — Most-cited external domains, with usage % and citation count
- Top Competitors — Competitor grid with frequency bars and favicons
- Domain Types — Pie chart: Corporate vs. Competitor domain citations
- Share of Voice — Brand vs. competitor mention share with progress bars
- Sentiment Distribution — Positive / Neutral / Negative breakdown
- Search Queries — Top queries the LLMs performed internally during scans
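Share of Voice, for instance, is each brand's slice of all mentions in the period. A sketch with an assumed input shape of brand-to-mention-count:

```typescript
/** Each brand's percentage share of all mentions (brand + competitors). */
function shareOfVoice(mentions: Record<string, number>): Record<string, number> {
  const total = Object.values(mentions).reduce((a, b) => a + b, 0);
  if (total === 0) return {};
  return Object.fromEntries(
    Object.entries(mentions).map(([brand, n]) => [brand, (n / total) * 100]),
  );
}
```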
Dashboard Filters
Filters persist to localStorage, so your selections survive page refreshes:
| Filter | Options |
|---|---|
| Date Range | 7 days / 30 days / 90 days / All time |
| LLM | All / ChatGPT / Gemini / Claude |
| Tags | Filter prompts by assigned tags |
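A sketch of the persistence pattern. The storage key, filter shape, and `KVStore` abstraction are assumptions for illustration; in the browser the store is simply `window.localStorage`:

```typescript
// Minimal storage interface so the sketch also type-checks outside the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface DashboardFilters {
  dateRange: "7d" | "30d" | "90d" | "all";
  llm: "all" | "chatgpt" | "gemini" | "claude";
  tags: string[];
}

const STORAGE_KEY = "dashboard-filters"; // hypothetical key name

function saveFilters(store: KVStore, filters: DashboardFilters): void {
  store.setItem(STORAGE_KEY, JSON.stringify(filters));
}

function loadFilters(store: KVStore): DashboardFilters {
  const defaults: DashboardFilters = { dateRange: "30d", llm: "all", tags: [] };
  const raw = store.getItem(STORAGE_KEY);
  if (!raw) return defaults;
  try {
    return { ...defaults, ...JSON.parse(raw) };
  } catch {
    return defaults; // corrupted entry: fall back to defaults
  }
}
```

Merging the parsed value over defaults means filters added in later releases still get a sane value when an older saved blob is loaded.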
Response Modal
Clicking any mention card opens a full-screen modal showing:
- The prompt that was sent to the LLM
- Brand presence analysis — was your brand mentioned, and where
- Full AI response text with competitors highlighted
- Competitive ecosystem badges — every competitor found, with position and sentiment
- Citations — all URLs the AI referenced
Prompt Management
The Prompts page (/dashboard/prompts) is where you manage the questions LLMMonitor asks AI models.
Prompt Table
Each row shows: prompt text, Share of Voice %, Visibility %, Location, competitor mentions, scan count, and creation date. Click any prompt to open its detail page with per-scan history.
Adding Prompts
Click Add Prompt to open a modal. Enter one prompt per line. You can add prompts up to your plan's limit, which varies by tier. Each prompt can optionally be assigned a topic, tags, and a geographic location for geo-targeted scans.
Tags System
Tags let you organize prompts by category, product, or campaign. Tags appear as color-coded pills and support:
- Inline assignment — Add tags directly from the prompt table dropdown
- Global management — Create and delete tags from a central interface
- Dashboard filtering — Filter the Overview dashboard by tag to see performance per category
- Bulk operations — Select multiple prompts and assign tags in bulk
Bulk CSV Upload
For large prompt sets, use the CSV bulk upload:
```csv
Prompt Text,Location,Topic,Tag1,Tag2
"What's the best CRM for small teams?",US,CRM Software,product,comparison
"Compare project management tools for agencies",DE,Project Mgmt,product,enterprise
"How to improve email open rates?",US,Email Marketing,guide
```
| Column | Required | Format |
|---|---|---|
| 1 — Prompt text | Yes | Max 200 characters |
| 2 — Location | No | ISO 3166-1 alpha-2 (US, DE, GB, etc.) |
| 3 — Topic | No | Free text |
| 4+ — Tags | No | One tag per column |
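A sketch of per-row validation against the rules above. The parsing is deliberately naive (`split(",")`, no handling of commas inside quoted fields) and the `PromptRow` shape is an assumption, not LLMMonitor's importer:

```typescript
interface PromptRow {
  prompt: string;
  location?: string;
  topic?: string;
  tags: string[];
}

const ISO_ALPHA2 = /^[A-Z]{2}$/; // US, DE, GB, ...

function parseRow(line: string): PromptRow {
  // Naive split; strip surrounding quotes from each column.
  const cols = line.split(",").map((c) => c.trim().replace(/^"|"$/g, ""));
  const [prompt, location, topic, ...tags] = cols;
  if (!prompt || prompt.length > 200) {
    throw new Error("Prompt text is required and limited to 200 characters");
  }
  if (location && !ISO_ALPHA2.test(location)) {
    throw new Error(`Location must be ISO 3166-1 alpha-2, got "${location}"`);
  }
  return {
    prompt,
    location: location || undefined,
    topic: topic || undefined,
    tags: tags.filter(Boolean), // columns 4+ are tags, one per column
  };
}
```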
Source Analysis
The Sources page (/dashboard/sources) shows every domain and URL that AI models cite when answering your prompts.
Domain View vs. URL View
Toggle between domain-level aggregation and individual URL tracking. The domain view is best for spotting authoritative industry sources; the URL view helps you find specific articles and pages to target.
Source Table Columns
| Column | Description |
|---|---|
| Domain / URL | The source being cited |
| Mentions | Total citation count |
| Conversations | Number of distinct chats citing this source |
| Prompts | Which of your prompts triggered this citation (expandable popover) |
| URLs Count | Number of distinct URLs from this domain (domain view only) |
| Content Type | Corporate, Editorial, UGC, Government, Academic |
| % Total | Share of all citations |
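Assuming citations arrive as raw URLs, the domain rollup and the % Total column can be sketched with the standard `URL` API (the function and return shape are illustrative):

```typescript
/** Roll individual citation URLs up to domains, with each domain's share of all citations. */
function domainShares(
  citedUrls: string[],
): Map<string, { mentions: number; pctTotal: number }> {
  const counts = new Map<string, number>();
  for (const raw of citedUrls) {
    // Normalize "www." so www.example.com and example.com count together.
    const domain = new URL(raw).hostname.replace(/^www\./, "");
    counts.set(domain, (counts.get(domain) ?? 0) + 1);
  }
  const out = new Map<string, { mentions: number; pctTotal: number }>();
  for (const [domain, mentions] of counts) {
    out.set(domain, { mentions, pctTotal: (mentions / citedUrls.length) * 100 });
  }
  return out;
}
```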
Source Contribution Over Time
A line chart showing how domain citations trend over your selected date range. Useful for spotting authoritative sources that are gaining citations, as well as ones that are fading.