A colleague pastes screenshots into ChatGPT
A screenshot is no safer than text: whatever is visible in it (customer names, amounts, IDs) ships along with your prompt.
Try this first
1. Treat a screenshot like text. "Did I paste an email or an image?" is irrelevant for data leakage.
2. Mask sensitive parts first (Snipping Tool or a markup tool with blur). A black box drawn on top is not always enough: some tools can still read the metadata or the underlying pixels.
3. ID documents, passports, social security numbers, payslips: never into a private or consumer AI account. Not even "just this once".
4. When the AI analyses a UI screenshot containing customer data, the whole screen counts, not just the part you meant to ask about.
5. Train colleagues on one concrete rule: "Would I put this screenshot in a tweet? No? Then not in AI either."
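For teams that redact screenshots often, step 2 can be automated. A minimal sketch, assuming the Pillow imaging library is available (file names and coordinates are hypothetical): it paints a fully opaque box over the sensitive region and re-saves the pixels into a fresh image, so no annotation layer or EXIF metadata from the original survives.

```python
from PIL import Image, ImageDraw

def redact(src_path: str, dst_path: str, box: tuple[int, int, int, int]) -> None:
    """Opaquely cover `box` (left, top, right, bottom) and save a metadata-free copy."""
    img = Image.open(src_path).convert("RGB")          # flatten alpha/palette
    ImageDraw.Draw(img).rectangle(box, fill="black")   # opaque fill, not a reversible blur
    clean = Image.new("RGB", img.size)
    clean.putdata(list(img.getdata()))                 # copy pixel values only
    clean.save(dst_path)                               # fresh file: no EXIF carried over

# Example (paths and region are placeholders):
# redact("screenshot.png", "screenshot_redacted.png", (40, 120, 400, 180))
```

The point of the round-trip through a fresh `Image.new` is that the redaction is baked into the pixels, unlike a removable shape layer in a markup tool.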
When to bring us in
A screenshot with customer data ended up in a private account: stop, preserve logs, and scope the damage with us.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- I want a one-page AI policy for my team. A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
- How do I tell if an AI answer is made up? Models sound confident even when they are wrong. A few habits catch most mistakes.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, resolved within working hours.