Which AI is allowed to do what with our data?
Data residency and training policies differ by vendor and by tier. A free tier almost never has the same terms as the business tier of the same product.
Try this first
- Microsoft 365 Copilot: data stays in your tenant, the EU Data Boundary is available, and prompts are not used for training. The DPA comes via your M365 agreement.
- ChatGPT Team/Enterprise: prompts are excluded from training by policy; default 30-day retention for abuse monitoring (configurable on Enterprise).
- Claude (Anthropic): the business API and Claude for Work are not used for training. Hosting is US-based; EU residency depends on the product and timing, so verify the current state with Anthropic.
- Gemini in Google Workspace: prompts from business accounts are not used for training, and the Workspace DPA covers it. Gemini on personal accounts behaves differently.
- For each tool you use, add one line to your AI policy stating which data class is allowed. Vendor policies change yearly; set a reminder to re-check.
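The one-line-per-tool rule above can also be kept machine-readable, so the policy is easy to check and diff when vendors change terms. A minimal sketch in Python; the tool names and data classes here are illustrative assumptions, not vendor guidance:

```python
# Hypothetical AI-usage policy: one entry per tool, listing which data
# classes may be pasted into it. Names are examples only.
ALLOWED_DATA = {
    "m365-copilot":     {"public", "internal", "customer"},
    "chatgpt-team":     {"public", "internal"},
    "chatgpt-free":     {"public"},
    "claude-work":      {"public", "internal"},
    "gemini-workspace": {"public", "internal"},
}

def is_allowed(tool: str, data_class: str) -> bool:
    """True if this data class may be used with this tool.
    Unknown tools default to not allowed."""
    return data_class in ALLOWED_DATA.get(tool, set())

print(is_allowed("chatgpt-free", "customer"))   # False: no customer data in free ChatGPT
print(is_allowed("m365-copilot", "customer"))   # True under this example policy
```

Defaulting unknown tools to "not allowed" means a new tool must be explicitly added to the policy before anyone uses it with company data.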
When to bring us in
An audit, a customer question, or due diligence on what a specific tool does with your data calls for a fresh DPA and sub-processor check; we can help with that.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- I want a one-page AI policy for my team. A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
- How do I tell if an AI answer is made up? Models sound confident even when they are wrong. A few habits catch most mistakes.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not look these things up themselves: one point of contact, a fixed monthly price, and issues resolved within business hours.