Dashboard & Feature Guide

A walkthrough of every section on the LLMMonitor dashboard, plus how to manage prompts, tags, and source analysis.

Dashboard Overview

The Overview dashboard (/dashboard) is your command center. It shows all your brand's AI visibility data at a glance.

Visibility Score

The large percentage at the top of the dashboard. This is your overall brand visibility — the percentage of scans where your brand was mentioned. The delta indicator (green up / red down arrow) shows change vs. the previous period.

Below the score, an area chart plots your visibility over time. You can toggle individual competitors on/off to overlay their visibility trends alongside yours.
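
As a sketch of the arithmetic: the score is mentions divided by total scans, and the delta compares two periods. The scan schema below (a `response` field per scan) is an illustrative assumption, not LLMMonitor's actual data model:

```python
def visibility(scans: list[dict], brand: str) -> float:
    """Percent of scans whose response text mentions the brand."""
    if not scans:
        return 0.0
    hits = sum(1 for s in scans if brand.lower() in s["response"].lower())
    return round(100 * hits / len(scans), 1)

# Made-up scan data for two periods.
current = [{"response": "Acme and Globex lead the market"},
           {"response": "Globex is the most popular pick"}]
previous = [{"response": "Globex dominates"},
            {"response": "Globex again"}]

score = visibility(current, "Acme")           # 50.0
delta = score - visibility(previous, "Acme")  # change vs. the previous period
print(score, delta)
```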

Presence by LLM

Horizontal bar charts showing your mention rate per AI platform. For example, you might see 72% on ChatGPT, 58% on Gemini, and 34% on Claude. Large disparities between platforms are a signal — investigate why you're strong on one but weak on another.
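
The per-platform bars boil down to grouping scans by model before computing the same mention rate. Again, the `llm` and `response` fields are illustrative assumptions:

```python
from collections import defaultdict

def presence_by_llm(scans: list[dict], brand: str) -> dict[str, int]:
    """Mention rate (whole percent) per AI platform."""
    totals: dict[str, int] = defaultdict(int)
    hits: dict[str, int] = defaultdict(int)
    for s in scans:
        totals[s["llm"]] += 1
        hits[s["llm"]] += brand.lower() in s["response"].lower()
    return {llm: round(100 * hits[llm] / totals[llm]) for llm in totals}

scans = [
    {"llm": "ChatGPT", "response": "Acme is a popular choice"},
    {"llm": "ChatGPT", "response": "Globex leads here"},
    {"llm": "Claude", "response": "Acme and Globex both work"},
]
print(presence_by_llm(scans, "Acme"))  # {'ChatGPT': 50, 'Claude': 100}
```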

Rankings Table

A sortable, paginated table showing every brand tracked, with columns for:

| Column | Description |
| --- | --- |
| Brand | Brand name with favicon |
| Visibility | % of scans where brand appeared |
| Sentiment | Average sentiment score |
| Avg Position | Average rank when mentioned (lower = better) |
| Rank | Overall rank position |
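
The Rank column follows from sorting: visibility descending, with Avg Position as a plausible tie-breaker (lower is better). That tie-breaking rule is an assumption; the brands are made up:

```python
# Hypothetical rankings rows; Rank is derived, not stored.
brands = [
    {"brand": "Acme", "visibility": 72, "avg_position": 1.8},
    {"brand": "Globex", "visibility": 58, "avg_position": 1.2},
    {"brand": "Initech", "visibility": 72, "avg_position": 2.5},
]

# Sort by visibility (descending), then avg position (ascending).
ranked = sorted(brands, key=lambda b: (-b["visibility"], b["avg_position"]))
for rank, b in enumerate(ranked, start=1):
    print(rank, b["brand"])
# 1 Acme
# 2 Initech
# 3 Globex
```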

Recent Mentions

A card grid showing the latest AI responses that mention your brand. Each card shows the prompt, the LLM, the response excerpt, and which competitors also appeared. Click any card to open the full Response Modal.

Additional Dashboard Sections

Dashboard Filters

Filters persist to localStorage, so your selections survive page refreshes:

| Filter | Options |
| --- | --- |
| Date Range | 7 days / 30 days / 90 days / All time |
| LLM | All / ChatGPT / Gemini / Claude |
| Tags | Filter prompts by assigned tags |
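
Since localStorage stores only strings, filter state like this is typically serialized to JSON on write and parsed back on page load. A minimal sketch of that round-trip; the key names are assumptions, not LLMMonitor's actual storage schema:

```python
import json

# Hypothetical filter state as the dashboard might hold it in memory.
filters = {"dateRange": "30d", "llm": "All", "tags": ["product"]}

stored = json.dumps(filters)    # the string localStorage would persist
restored = json.loads(stored)   # what the dashboard reads back after a refresh
assert restored == filters      # selections survive the round-trip intact
```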

Response Modal

Clicking any mention card opens a full-screen modal with the complete AI response and the details of the mention.

Prompt Management

The Prompts page (/dashboard/prompts) is where you manage the questions LLMMonitor asks AI models.

Prompt Table

Each row shows: prompt text, Share of Voice %, Visibility %, Location, competitor mentions, scan count, and creation date. Click any prompt to open its detail page with per-scan history.

Adding Prompts

Click Add Prompt to open a modal. Enter one prompt per line. You can add up to your plan's limit (varies by tier). Each prompt can optionally be assigned a topic, tags, and a geographic location for geo-targeted scans.

Plan limits: Free, 3 prompts; Lite, 25; Standard, 50; Pro, 100. Archived prompts don't count toward your limit but preserve historical data.
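
Enforcing those limits is a one-line check; note that only active (non-archived) prompts count. The function name and plan keys below are hypothetical:

```python
# Plan limits from the docs; archived prompts are excluded from active_count.
PLAN_LIMITS = {"free": 3, "lite": 25, "standard": 50, "pro": 100}

def can_add_prompts(plan: str, active_count: int, new_count: int) -> bool:
    """True if adding new_count prompts stays within the plan's limit."""
    return active_count + new_count <= PLAN_LIMITS[plan]

print(can_add_prompts("lite", 23, 2))  # True  (23 + 2 = 25, exactly at the limit)
print(can_add_prompts("free", 3, 1))   # False (already at the Free limit)
```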

Tags System

Tags let you organize prompts by category, product, or campaign. They appear throughout the dashboard as color-coded pills.

Bulk CSV Upload

For large prompt sets, use the CSV bulk upload:

```csv
Prompt Text,Location,Topic,Tag1,Tag2
"What's the best CRM for small teams?",US,CRM Software,product,comparison
"Compare project management tools for agencies",DE,Project Mgmt,product,enterprise
"How to improve email open rates?",US,Email Marketing,guide
```

| Column | Required | Format |
| --- | --- | --- |
| 1 — Prompt text | Yes | Max 200 characters |
| 2 — Location | No | ISO 3166-1 alpha-2 (US, DE, GB, etc.) |
| 3 — Topic | No | Free text |
| 4+ — Tags | No | One tag per column |
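
A pre-upload check against this format can catch most rejects locally. This validator is a sketch based only on the rules above; the error messages are made up:

```python
import csv
import io
import re

def validate_row(row: list[str]) -> list[str]:
    """Check one bulk-upload row against the documented column rules."""
    errors = []
    if not row or not row[0].strip():
        errors.append("prompt text is required")
    elif len(row[0]) > 200:
        errors.append("prompt text exceeds 200 characters")
    if len(row) > 1 and row[1] and not re.fullmatch(r"[A-Z]{2}", row[1]):
        errors.append("location must be ISO 3166-1 alpha-2 (e.g. US)")
    return errors  # columns 3+ (topic, tags) are free text, nothing to check

sample = '"What\'s the best CRM for small teams?",US,CRM Software,product,comparison\n'
rows = list(csv.reader(io.StringIO(sample)))
print(validate_row(rows[0]))  # []
```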

Source Analysis

The Sources page (/dashboard/sources) shows every domain and URL that AI models cite when answering your prompts.

Domain View vs. URL View

Toggle between domain-level aggregation and individual URL tracking. The domain view is best for spotting authoritative industry sources; the URL view helps you find specific articles and pages to target.

Source Table Columns

| Column | Description |
| --- | --- |
| Domain / URL | The source being cited |
| Mentions | Total citation count |
| Conversations | Number of distinct chats citing this source |
| Prompts | Which of your prompts triggered this citation (expandable popover) |
| URLs Count | Number of distinct URLs from this domain (domain view only) |
| Content Type | Corporate, Editorial, UGC, Government, Academic |
| % Total | Share of all citations |
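
The % Total column is simply each source's citation count over the grand total. A sketch with made-up domains:

```python
from collections import Counter

# Hypothetical flat list of cited domains, one entry per citation.
citations = ["acme.com", "techblog.io", "acme.com", "gov.example", "acme.com"]

counts = Counter(citations)
total = sum(counts.values())
share = {domain: round(100 * n / total, 1) for domain, n in counts.most_common()}
print(share)  # {'acme.com': 60.0, 'techblog.io': 20.0, 'gov.example': 20.0}
```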

Source Contribution Over Time

A line chart showing how domain citations trend over your selected date range. Useful for spotting rising authoritative sources and declining ones.

Re-analyze Sources

Click the "Re-analyze Sources" button to trigger a fresh analysis of all cited URLs. LLMMonitor will fetch each URL and check for brand and competitor mentions on the page itself, not just in the AI response.
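
Conceptually, that page-level check boils down to fetching each cited URL and scanning the page text for brand names. A minimal sketch of the matching step only, with the fetch omitted and a deliberately crude tag-stripper (not LLMMonitor's actual implementation):

```python
import re

def mentions_on_page(html: str, brands: list[str]) -> dict[str, bool]:
    """Strip tags (crudely) and report which brand names appear in page text."""
    text = re.sub(r"<[^>]+>", " ", html).lower()
    return {brand: brand.lower() in text for brand in brands}

html = "<html><body><h1>Best CRMs</h1><p>Acme beats Globex.</p></body></html>"
print(mentions_on_page(html, ["Acme", "Globex", "Initech"]))
# {'Acme': True, 'Globex': True, 'Initech': False}
```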