How do I tell if an AI answer is made up?
Models sound confident even when they are wrong. A few habits catch most mistakes.
Try this first
1. Ask for sources or citations and actually open them. If there is no working link, or you cannot find the cited title anywhere, treat the answer as unverified.
2. Always double-check numbers, names, dates, and legal articles yourself. This is where hallucinations hit hardest.
3. Ask the same question in two separate sessions or in a different model. If the answers differ, at least one of them is shaky.
4. For legal, medical, or financial text: AI is a faster typist, not a substitute for the specialist who has to sign it off.
5. Give the model context (your own documents via Copilot or Claude Projects). Grounding it in real material reduces hallucinations significantly, but not entirely.
When to bring us in
If hallucinations regularly leak into your quotes or reports, it is time for a short process check. We can help with that.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- I want a one-page AI policy for my team. A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
- Copilot, Copilot Pro, M365 Copilot. What does each one do? Microsoft calls several products Copilot. Below is what each variant actually does, function-wise. For the price and licence question, see 'Is a Copilot licence worth it?' under SaaS and licences.
None of the above fits?
Describe your situation below. We pass your input, plus the steps you have already seen, to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, and issues resolved within working hours.