A colleague used AI output and something genuinely went wrong. Now what?
AI incidents look like other incidents: contain fast, limit damage, learn. The difference is the cause often sits in a prompt or hallucination, not in a log with a stack trace. Treat it as a production incident, not a personal mistake.
Try this first
1. Stop work in flight: the email not yet sent, the quote not yet out, the code not yet in production. Stop the spread first, then analyse.
2. Collect evidence: the prompt, the output, the tool, the model version, the timestamp. Screenshot or copy the chat log before it gets overwritten.
3. Assess impact: did customer data leak, was a wrong invoice sent, was a wrong diagnosis or piece of advice given? If personal data is breached, the 72-hour clock for notifying the AP (the Dutch data protection authority) starts running.
4. Communicate where needed: the customer or vendor gets a short correction; internally, the team gets a no-blame summary of what happened.
5. Capture the lesson in the AI policy or tools matrix. A repeat of the same pattern is a process failure, not a person failure.
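Steps 2 and 3 can be sketched as a small script. This is a hypothetical helper, not a tool we ship: the field names and the `record_incident` function are illustrative, and the 72-hour deadline follows the GDPR Article 33 rule that the supervisory authority must be notified within 72 hours of becoming aware of a personal data breach.

```python
import json
from datetime import datetime, timedelta, timezone

def record_incident(prompt, output, tool, model_version, personal_data_involved=False):
    """Capture AI-incident evidence in one timestamped record (illustrative sketch)."""
    now = datetime.now(timezone.utc)
    record = {
        "captured_at": now.isoformat(),      # when the evidence was secured
        "prompt": prompt,                    # exact prompt, copied before the chat is overwritten
        "output": output,                    # the AI output that caused the incident
        "tool": tool,                        # e.g. which chatbot or plugin was used
        "model_version": model_version,      # as reported by the tool, if visible
        "personal_data_involved": personal_data_involved,
    }
    if personal_data_involved:
        # GDPR art. 33: notify the supervisory authority (the AP in the Netherlands)
        # within 72 hours of becoming aware of the breach.
        record["notify_authority_by"] = (now + timedelta(hours=72)).isoformat()
    return json.dumps(record, indent=2)
```

Writing the record down immediately, even this crudely, matters more than the format: the chat log is the stack trace of an AI incident, and it is the first thing that disappears.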
When to bring us in
If you are in an incident right now and customer data is involved, do not hesitate to call us during office hours. Time is the main factor here.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- I want a one-page AI policy for my team. A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
- How do I tell if an AI answer is made up? Models sound confident even when they are wrong. A few habits catch most mistakes.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, resolved within working hours.