What Does ChatGPT Say About You? Here's How to Find Out
Someone is going to ask an AI about you. When they do, will the answer help you or hurt you? Here's how to check before it matters.
The Question You Haven't Asked Yet
Open ChatGPT right now and type: "What can you tell me about [your full name] in [your city, state]?" The answer might surprise you. For some people, it returns a clean professional summary. For others, it surfaces arrest records, court cases, and personal details scraped from data broker sites — all synthesized into a single authoritative-sounding paragraph.
Unlike Google, where you can see each source and judge its credibility, ChatGPT delivers one narrative. There are no ten blue links. No way for the reader to evaluate context. The AI's summary becomes the truth for whoever asked.
Where ChatGPT Gets Its Information
ChatGPT with web search enabled pulls from the same sources that rank in Google: news articles, court record aggregators, data broker profiles, social media, and professional directories. But instead of showing these as separate results, it blends them together.
That blending is where the danger lives. A dismissed charge from 2015, a data broker profile listing old addresses, and your current LinkedIn bio might all appear in the same paragraph. The AI doesn't distinguish between a ten-year-old arrest and yesterday's professional achievement. It treats all indexed content as equally relevant.
- Data brokers are the biggest source of problematic AI content. Sites like Spokeo, BeenVerified, and Whitepages aggregate public records and make them easily discoverable. When ChatGPT searches the web, these high-authority sites rank prominently.
- Court record aggregators like JudyRecords and UniCourt index case details that AI can reference directly.
- Mugshot sites create especially damaging AI outputs because they pair your name with crime-related context that AI weights heavily.
- News articles about arrests or charges persist online indefinitely, even if the case was later dismissed.
How to Check What ChatGPT Says About You
Don't ask one vague question and call it done. Use structured queries that mirror what an employer or landlord would actually ask:
- "What can you tell me about [full name] in [city, state]?"
- "Does [full name] have any criminal record or legal history?"
- "What should I know before hiring [full name]?"
- "Is [full name] in [city, state] trustworthy?"
- "Tell me about [full name]'s background and reputation."
Run each query separately. Copy the exact response text. Note which sources ChatGPT cites. This gives you a baseline you can compare against after cleanup efforts.
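If you want to make this baseline repeatable, the five queries above can be generated programmatically so you paste the exact same wording every time. This is an illustrative sketch, not a real tool; the function name and templates simply mirror the query list in this article:

```python
# Hypothetical sketch: build the five structured queries for a given
# name and location so each can be pasted into ChatGPT (or Gemini)
# verbatim, and the responses logged as a baseline.

QUERY_TEMPLATES = [
    "What can you tell me about {name} in {location}?",
    "Does {name} have any criminal record or legal history?",
    "What should I know before hiring {name}?",
    "Is {name} in {location} trustworthy?",
    "Tell me about {name}'s background and reputation.",
]

def build_queries(name: str, location: str) -> list[str]:
    """Fill the templates with a real full name and city, state."""
    return [t.format(name=name, location=location) for t in QUERY_TEMPLATES]

if __name__ == "__main__":
    # Example with placeholder values — substitute your own details.
    for query in build_queries("Jane Doe", "Austin, TX"):
        print(query)
```

Using fixed templates matters: if the wording drifts between checks, you can't tell whether a changed answer reflects new web content or just a differently phrased question.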
Then repeat the same process on Google's Gemini. Different AI models access different data and generate different summaries. A clean result on ChatGPT doesn't guarantee a clean result on Gemini, and vice versa.
What to Do When the Results Are Bad
If ChatGPT surfaces negative or inaccurate information about you, the fix isn't to contact OpenAI — it's to clean up the sources the AI is pulling from. AI outputs are downstream of web content. Change the sources, and the AI narrative follows.
Start with the highest-impact sources:
- Data broker opt-outs — Submit removal requests to every broker listing your information. This is the single most effective action for improving AI reputation.
- Mugshot removal — If booking photos appear online, removing them eliminates a major negative signal.
- Court record de-indexing — Use Google's "Results about you" tool to request removal of pages that expose your personal information, including court record aggregator listings, from search results.
- Build positive content — LinkedIn profiles, professional bios, and published work give AI positive material to reference.
Why Manual Checking Isn't Enough
AI responses change constantly. Models update, web indexes refresh, and source pages are added or removed. A clean ChatGPT response today can turn negative next month if a data broker re-lists your information or a news article gets re-indexed.
That's why one-time checks create a false sense of security. The only reliable approach is ongoing monitoring — running the same structured queries on a regular schedule and tracking changes over time. If something new surfaces, you want to catch it before an employer or landlord does.
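The tracking step can be as simple as fingerprinting each saved response and diffing against the baseline on the next run. A minimal sketch, assuming you store responses keyed by query (the function names here are illustrative, not a real monitoring product):

```python
# Hypothetical monitoring sketch: fingerprint each query's response,
# then flag any query whose latest answer differs from the baseline.
import hashlib

def fingerprint(text: str) -> str:
    """Stable hash of a response so changes are cheap to detect."""
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def check_for_changes(baseline: dict[str, str],
                      latest: dict[str, str]) -> list[str]:
    """Return queries whose latest response no longer matches the
    stored baseline fingerprint."""
    return [query for query, response in latest.items()
            if fingerprint(response) != baseline.get(query)]

# Usage: after the first audit, save {query: fingerprint(response)};
# on each scheduled re-run, diff the fresh responses against it and
# manually review any query this function flags.
```

A hash only tells you *that* something changed, not *what* — so treat a flagged query as a prompt to re-read the full response, since models also reword clean answers without any new negative source appearing.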
The Stakes Are Higher Than You Think
A 2025 study found that nearly half of hiring managers now use AI tools during the screening process. For housing applications, the number is harder to quantify because individual landlords rarely disclose their methods. But the pattern is clear: AI is becoming the first check people run, before formal background checks, before reference calls, before interviews.
The combination of employers Googling your name and asking AI about you means your online reputation is being evaluated from two angles simultaneously. Optimizing for one while ignoring the other leaves you exposed. For a comprehensive approach, see our guide on cleaning up your online reputation.
Take Control Before Someone Else Defines You
The AI narrative about you exists whether you've checked it or not. Every day you wait is another day that employers, landlords, and clients are making decisions based on information you haven't reviewed. The gap between what you think is out there and what AI actually says can be significant — and the consequences are real.
Find out what ChatGPT and Gemini say about you in under 5 minutes.
Free scan checks AI platforms, Google, 18 data brokers, court records, and mugshot sites.
Run Your Free Scan