FAQ & Troubleshooting

Solutions for common issues, guidance on writing effective prompts, and how to reach support.

Common Issues

Scans aren't starting

Check your scan balance: Go to the Billing page and verify you have scans remaining. Each prompt run against each selected LLM consumes one scan.

Check for running scans: Only one scan can run at a time. If you see "Scan already in progress" or a queue position, wait for the current scan to finish.

Browser sessions: If you're using web scraping mode, LLMMonitor's browser sessions may need re-authentication. This happens automatically but can add 10-30 seconds to scan startup.

My brand isn't appearing in results

Check aliases: Go to Settings and make sure you've added all possible brand name variations as aliases. AI models might refer to "AcmeCorp" as "Acme Corporation" or "Acme" — if you only track "AcmeCorp", you'll miss mentions.
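Alias matching can be pictured as a case-insensitive lookup across all your variants, with longer aliases tried first so "Acme Corporation" isn't counted as a bare "Acme" hit. A minimal sketch for illustration only (not LLMMonitor's actual matcher; the alias list is a hypothetical example):

```python
import re

def find_brand_mentions(text, aliases):
    """Return (offset, matched text) for every alias occurrence, case-insensitive.

    Hypothetical helper. Aliases are sorted longest-first so multi-word
    variants win over their shorter substrings.
    """
    pattern = "|".join(re.escape(a) for a in sorted(aliases, key=len, reverse=True))
    return [(m.start(), m.group(0)) for m in re.finditer(pattern, text, re.IGNORECASE)]
```

With aliases `["AcmeCorp", "Acme", "Acme Corporation"]`, the text "Acme Corporation is popular" matches the full two-word variant; tracking only "AcmeCorp" would have missed it entirely.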

Check prompts: Prompts that are too generic won't trigger brand mentions. Try prompts that explicitly ask for comparisons or recommendations in your industry.

Check LLM selection: Not all LLMs are selected by default. Make sure the LLM you want to test is enabled in Settings under LLM Selection.

Dashboard shows no data

Run a scan first: The dashboard is empty until you've completed at least one scan. Go to Scan and run your first scan.

Check date filter: The dashboard defaults to "Last 7 days." If your last scan was older, change the filter to "All time."

Check tag filter: If you have a tag filter active, it might exclude all your prompts. Clear the tag filter or ensure your prompts have matching tags.

Sentiment seems wrong

Sentiment is calculated from the language used within ±200 characters of your brand mention. It's algorithmic, not AI-judged. If a response discusses your brand in a critical but factually accurate way (e.g., "Company X has reliability issues"), the sentiment score will be negative. This is working as intended — it reflects the language the AI model uses.
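The windowed approach described above can be sketched with a simple word lexicon. This is an illustration of the general technique, assuming a lexicon-based scorer; the product's actual algorithm and word lists are not documented here:

```python
# Toy lexicons for illustration; a real scorer would use a much larger list.
POSITIVE = {"best", "reliable", "excellent", "recommended", "great"}
NEGATIVE = {"issues", "problems", "unreliable", "poor", "worst"}

def window_sentiment(text, mention_start, mention_end, window=200):
    """Score words within +/-window characters of a brand mention.

    Returns positive-minus-negative word count: >0 positive, <0 negative.
    Purely lexical, so "critical but accurate" language scores negative.
    """
    lo = max(0, mention_start - window)
    hi = min(len(text), mention_end + window)
    words = text[lo:hi].lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return pos - neg
```

For the example above, "Company X has reliability issues." scores negative because "issues" sits inside the window, regardless of whether the claim is fair.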

Citations are missing or incomplete

Citation extraction varies by platform. ChatGPT in web mode provides the richest citation data (source pills, inline links, reference carousel). API mode may provide fewer citations. Claude provides the fewest citations. If you need comprehensive citation tracking, prioritize ChatGPT and Perplexity.
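Because one response can surface the same source as a pill, an inline link, and a carousel entry, exported citation lists often contain duplicates. A small hypothetical helper to dedupe by URL while keeping first-seen order (the normalization rule here is an assumption, not LLMMonitor's documented behavior):

```python
def dedupe_citations(urls):
    """Drop repeat URLs, treating trailing slashes and case as equivalent.

    Hypothetical post-processing step for exported citation data.
    """
    seen = set()
    out = []
    for url in urls:
        key = url.rstrip("/").lower()
        if key not in seen:
            seen.add(key)
            out.append(url)
    return out
```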

Prompt Best Practices

The quality of your prompts directly determines the quality of your data. Here's how to write prompts that generate useful brand visibility insights.

Use natural, conversational language

Write the way your customers speak. AI models respond to natural language, not keyword strings.

# Natural questions (effective):
What's the best project management tool for creative agencies with remote teams?
Which CRM would you recommend for a sales team of 10 people?
Compare HubSpot, Salesforce, and Pipedrive for small businesses.

# Keyword strings (ineffective):
project management tool
CRM sales team
CRM comparison

Include intent and context

Every effective prompt has two parts, an intent (what you want) and context (who it's for and under what constraints):

# Intent only (too vague):
What's the best CRM?

# Intent + context (effective):
What's the best CRM for a B2B sales team of 10-20 people in the SaaS industry?
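The two-part structure can be illustrated with a tiny hypothetical helper that refuses context-free prompts (names and wording here are illustrative, not part of any LLMMonitor API):

```python
def build_prompt(intent, context=None):
    """Combine an intent with required context into one natural question.

    Hypothetical sketch: rejecting empty context enforces the rule that
    intent alone ("What's the best CRM?") is too vague to be useful.
    """
    if not context:
        raise ValueError("Add context: team size, industry, use case, budget")
    return f"{intent.rstrip('?')} for {context}?"
```

For example, `build_prompt("What's the best CRM", "a B2B sales team of 10-20 people in the SaaS industry")` reproduces the effective prompt above, while the intent alone raises an error.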

Cover your product categories

Create prompts for every product category where you want visibility — not just your core product. If you sell CRM software but also offer marketing automation, create prompts for both categories.

Vary prompt phrasing

Don't worry about exact wording — semantically similar prompts produce similar results over time. But vary your phrasing to capture different angles:

What's the best CRM for early-stage startups?
Which CRM would you recommend for a startup sales team?
Compare the top CRMs for startups.

Ask for brand mentions explicitly when needed

Informational prompts ("How do I improve email open rates?") often won't mention brands unless you specifically ask. Add brand context: "How do I improve email open rates — what tools or platforms are best for this?"

Support

Getting Help

Before Contacting Support

To help us resolve your issue faster, please include:

  1. Your account email
  2. A description of what you expected vs. what happened
  3. The scan ID or date/time of the issue (if applicable)
  4. Your browser and operating system

Feedback Welcome

LLMMonitor is actively developed. Feature requests, bug reports, and usability feedback directly shape the roadmap.