How to tell if ChatGPT, Gemini, Claude or any other large language model is speaking negatively about you or your business
It doesn’t matter if ChatGPT speaks badly about you today; what matters is how many models agree, with what confidence, and which source originated the narrative.
Most people check what a model says once and either feel reassured or panic. Neither response works, because an isolated mention doesn’t represent the real damage.
The problem appears when several models agree, when the narrative consolidates, and when a specific source quietly feeds that perception.
What LLMs actually prioritize when they talk about you
LLMs prioritize user satisfaction over truth. When they don’t have reliable data about your brand, they generate coherent but false answers, a behavior known as hallucination.
Their goal is to deliver the most likely answer based on their training, not the most accurate one. That’s why what the model “thinks it knows” can be contaminated by a single old review or a poorly indexed forum.
Two main methods to know what LLMs say about you
There are two ways to audit what LLMs say about you: the manual method and the automated method. The manual one costs less money but more time, and the automated one flips that equation.
The recommended approach is to combine both: a manual audit to understand the landscape and an automated system to detect changes.
Manual verification methods
Manual methods consist of asking the models as if you were a potential customer. All you need is time and discipline.
Direct audit
The direct audit is the simplest way to start: ask the model from a fresh session, with no memory and no prior context. That neutrality replicates what any outside person would see.
Run “incognito” searches across models
“Incognito” searches are the prompts a buyer or researcher would use without filters: “What do you think of [your company]?”, “What are the most common complaints about [company]?”, “Is [brand] trustworthy?”, “Are there scams related to [company]?”, “Alternatives to [your company]?”.
The scam question usually triggers the most negative information in the corpus.
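A minimal sketch of this loop, assuming a Python environment with the official openai package and an OPENAI_API_KEY in the environment; the brand name and model are placeholders. API calls carry no memory, so each request behaves like the fresh session described above:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "Acme Corp"  # hypothetical brand; replace with yours
PROMPTS = [
    f"What do you think of {BRAND}?",
    f"What are the most common complaints about {BRAND}?",
    f"Is {BRAND} trustworthy?",
    f"Are there scams related to {BRAND}?",
    f"Alternatives to {BRAND}?",
]

for prompt in PROMPTS:
    # No conversation history is sent, so every call is a fresh session.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in whichever model you want to audit
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```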
Test with different tones and personas
The same prompt changes depending on the tone and persona you ask from. Try five profiles: neutral, critical, comparative, undecided buyer, and investigative journalist.
This reveals whether the model is repeating rumors, outdated data, or false information depending on the pressure of the question.
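A sketch of the persona sweep under the same assumptions as the previous snippet; the persona phrasings are illustrative:

```python
from openai import OpenAI

client = OpenAI()
BRAND = "Acme Corp"  # hypothetical brand

# Each framing changes who is asking; the underlying question never changes.
PERSONAS = {
    "neutral": "Quick question:",
    "critical": "I've heard bad things about this company and want the truth:",
    "comparative": "I'm comparing providers before choosing one:",
    "undecided buyer": "I'm about to buy and need a final verdict:",
    "investigative journalist": "I'm a journalist investigating complaints:",
}

for name, framing in PERSONAS.items():
    prompt = f"{framing} is {BRAND} trustworthy?"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {name}\n{response.choices[0].message.content}\n")
```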
Test in multiple languages
Multilingual testing is mandatory if your brand operates internationally.
Repeat the same prompts in each language: the training corpus varies between languages, so a brand can have a good reputation in English and be contaminated in Spanish by a single unverified outlet.
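The same loop extends naturally to languages; a sketch with hand-translated prompts (the translations and brand are illustrative):

```python
from openai import OpenAI

client = OpenAI()
BRAND = "Acme Corp"  # hypothetical brand

# The same question per market language; the corpus differs by language.
QUESTIONS = {
    "English": f"Is {BRAND} trustworthy?",
    "Spanish": f"¿Es confiable {BRAND}?",
    "French": f"Peut-on faire confiance à {BRAND} ?",
}

for language, question in QUESTIONS.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    print(f"=== {language}\n{response.choices[0].message.content}\n")
```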
Triangulate across models
Triangulation across models separates an isolated hallucination from a real problem. Gemini tends to be better for real-time data, Claude stands out at writing, and ChatGPT at creativity.
If all three agree on something negative, it’s very likely a real reputation problem; if only one says it, suspect a hallucination.
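A sketch of the triangulation step, assuming API keys for all three vendors (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY) and illustrative model names:

```python
# pip install openai anthropic google-generativeai
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

BRAND = "Acme Corp"  # hypothetical brand
prompt = f"What are the most common complaints about {BRAND}?"
answers = {}

# OpenAI (ChatGPT)
answers["ChatGPT"] = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Anthropic (Claude)
answers["Claude"] = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # assumption: any current Claude model
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Google (Gemini)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
answers["Gemini"] = genai.GenerativeModel("gemini-1.5-flash").generate_content(prompt).text

for model, answer in answers.items():
    print(f"=== {model}\n{answer}\n")
# A negative claim that all three repeat is likely a real problem;
# one that appears in a single model is a hallucination candidate.
```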
Test across all major platforms
Testing across all major platforms closes the audit. Check OpenAI (ChatGPT), Google (Gemini), Anthropic (Claude), Microsoft (Copilot), Perplexity AI, and the assistants embedded in apps, because each one uses different sources, policies, and memory.
Verify the cited sources
Verifying the cited sources is the step most people skip.
When Gemini, ChatGPT, or Perplexity cite pages, open every link and read the original passage: models often summarize paragraphs out of context and create a negative perception that the actual source doesn’t support.
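A crude sketch of that check: fetch each cited page and see whether the quoted claim literally appears in it. It only catches verbatim mismatches (paraphrases still need a human read), and the URL and claim here are hypothetical:

```python
# pip install requests
import requests


def claim_on_page(url: str, quoted_claim: str) -> bool:
    """Rough check: does the quoted text literally appear on the cited page?"""
    html = requests.get(url, timeout=10).text
    return quoted_claim.lower() in html.lower()


# Hypothetical citation: a URL the model cited and the claim it attributed to it.
url = "https://example.com/review"
claim = "repeated delivery failures"
print("claim found on cited page:", claim_on_page(url, claim))
```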
Automated methods: continuous monitoring
Automated monitoring makes up for the main weakness of the manual method: information in LLMs changes constantly and a single search becomes outdated within days.
There are three levels, ordered from lower to higher investment.
Google Alerts
Google Alerts is a free, basic tool. Set up alerts with your personal name, your brand name, and common misspelling variants at google.com/alerts.
If a new negative review appears, the LLM is likely to incorporate it soon when it queries real-time search.
Review aggregators
Aggregators track reviews across multiple platforms at once.
Reputology shows you changes on Google Reviews, Trustpilot, Yelp, and other sites in a single view. It matters because LLMs heavily consult reviews when evaluating a business.
Specialized LLM brand monitoring tools (AI visibility tracking)
Specialized AI visibility tools form a new category, known as AI Visibility Tracking or LLM Brand Monitoring.
Unlike general monitoring, they directly measure how you appear inside LLM responses. In 2026 there’s already an entire market dedicated to this.
What these tools actually do
These tools automatically run hundreds or thousands of industry-relevant prompts against several LLMs every day. They give you mention frequency, position, sentiment, share of voice against competitors, and, most importantly, which sources LLMs are citing when they talk about your brand.
That last data point is pure gold for reputation, because it tells you which pages are feeding the narrative.
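You can approximate the core loop of these tools yourself; a homemade sketch that appends each day’s answers to a CSV (run it from cron or any scheduler), under the same OpenAI assumptions as the earlier snippets:

```python
import csv
import datetime

from openai import OpenAI

client = OpenAI()
BRAND = "Acme Corp"  # hypothetical brand
PROMPTS = [f"Is {BRAND} trustworthy?", f"Best alternatives to {BRAND}?"]

today = datetime.date.today().isoformat()
with open("llm_brand_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    for prompt in PROMPTS:
        answer = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # One row per day and prompt, so you can diff the narrative over time.
        writer.writerow([today, prompt, answer])
```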
Main players in 2026
The main players in 2026 fall into three categories. Dedicated platforms include Profound, Peec AI, AthenaHQ, Otterly.AI, LLM Pulse, and Vryse. SEO suites have added AI tracking: Ahrefs Brand Radar measures mentions, share of voice, and impressions of your brand in LLMs compared to competitors, alongside Semrush Enterprise AIO, Frase, Finseo, Scrunch, and Frizerly. For corporate PR, Meltwater GenAI Lens is the reference.
Techniques to detect AI hallucinations
Knowing whether the model is making things up is just as important as knowing what it’s saying. There are two simple tricks that work the same in ChatGPT, Gemini, and Claude.
The “alert mode” trick
The “alert mode” trick consists of adding to the end of any prompt: “If there’s anything you don’t know for sure, flag that part with the word uncertain.” This forces the model to separate what it knows from what it’s guessing, and reduces invented claims.
Interrogate the model
Interrogating the model is the second trick. If it gives you a negative answer, copy that answer and send it back asking: “Where could you have gotten this information wrong? Cite your trusted sources.” When the model can’t cite a specific source, the claim is usually a hallucination.
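Both tricks combined in one scripted conversation, a sketch under the same assumptions as before:

```python
from openai import OpenAI

client = OpenAI()
BRAND = "Acme Corp"  # hypothetical brand

# Trick 1: append the "alert mode" suffix to the prompt.
question = (
    f"What are the most common complaints about {BRAND}? "
    "If there's anything you don't know for sure, flag that part with the word uncertain."
)
history = [{"role": "user", "content": question}]
answer = client.chat.completions.create(
    model="gpt-4o", messages=history
).choices[0].message.content
print(f"First answer:\n{answer}\n")

# Trick 2: send the answer back and interrogate it in the same conversation.
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Where could you have gotten this information wrong? Cite your trusted sources."},
]
followup = client.chat.completions.create(
    model="gpt-4o", messages=history
).choices[0].message.content
print(f"Interrogation:\n{followup}")
```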
The two sources behind what LLMs say about you
Before talking about solutions, you need to understand where what models say comes from. Their output comes from two distinct sources, and only one of them is directly actionable.
Training data
Training data is public content the models ingested months or years ago. That information is frozen inside the model and can’t be edited directly. The only thing you can do is influence what the model will see the next time it gets retrained.
Real-time web search
Real-time web search is what Google and Bing index today and what the LLM consults when answering. This one is actionable: if you improve what appears on the public web about you, the model will see a different version. Here, SEO and reputation management are the most important levers.
What to do when you detect negative AI content
When you detect negative content in an LLM there are three immediate actions that attack different sources of the problem.
Report directly to the platform
Reporting directly to the platform is the fastest. ChatGPT, Gemini, and Claude have a “dislike” or “report” button below each answer. Use it every time you see false or harmful information, because those reports feed reinforcement learning and each company’s safety team reviews them.
Generate better content (SEO)
Generating better content is the underlying lever. The model is trained on the web and queries it in real time, so if you publish more positive, well-ranked articles, the narrative changes over time.
To go deeper, you can check our guides on how to create positive content for AI Overviews and how to push down negative search results in AI Overviews.
Contact the AI company directly
Contacting the AI company directly applies to cases of serious defamation or exposure of personal data. OpenAI, Google, and Anthropic have forms to report content protected by privacy or to request removal. They are slow channels, but the only ones that escalate a real case.
How to resolve what AI models say about you
Resolving what AI models say about you requires deeper actions than the previous three.
Professional removal combines takedowns at the original source, deindexing in search engines, and active suppression so that LLMs retrain on a cleaner version of the web.
Final thoughts on auditing your reputation in LLMs
What matters isn’t an isolated answer, but the consensus between models and the source that originated the narrative. Detection rests on three pillars: manual audit, automated monitoring, and specialized AI visibility tools.
Anti-hallucination techniques and triangulation between models tell you whether you’re facing a real problem or an invention.
Actions range from direct reporting to generating SEO content and professional removal.
If your brand appears negatively on ChatGPT, Gemini, or Claude, at Media Removal we have a team of online reputation experts that audits, neutralizes, and rewrites what LLMs say about you. You can request a quote and share the models and prompts that are affecting your reputation.

