I want a one-page AI policy for my team
A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
Try this first
- Header 1, approved tools: name explicitly which AIs are allowed for work (e.g. Copilot in M365, a ChatGPT Team account). Anything not listed is off-limits for work data.
- Header 2, what can go in: a bullet list of data classes. Public info and internal notes are fine, customer data only anonymised, social security/medical/payroll never.
- Header 3, review: 'AI output is a first draft, not a final product.' Whoever ships it checks it and owns it.
- Header 4, reporting: where to report a mistake, a data leak, or a suspicious answer. One mailbox or channel.
- Sign it with a date and revisit twice a year. AI moves; the policy has to move with it.
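If your team wants to enforce headers 1 and 2 in tooling rather than only on paper, the allow-list logic is simple enough to encode. A minimal sketch, assuming illustrative tool names and data classes (`copilot_m365`, `customer_data`, etc. are placeholders, not a recommendation):

```python
# Hypothetical sketch: the policy's first two headers as a machine-checkable config.
# All tool names and data classes below are illustrative examples.

APPROVED_TOOLS = {"copilot_m365", "chatgpt_team"}  # header 1: explicit allow-list

# Header 2: rule per data class; anything unknown defaults to "never".
DATA_RULES = {
    "public": "allowed",
    "internal_notes": "allowed",
    "customer_data": "anonymised_only",
    "social_security": "never",
    "medical": "never",
    "payroll": "never",
}

def may_use(tool: str, data_class: str) -> bool:
    """Anything not on the allow-list is off-limits for work data."""
    if tool not in APPROVED_TOOLS:
        return False
    return DATA_RULES.get(data_class, "never") in ("allowed", "anonymised_only")
```

Defaulting unknown data classes to "never" mirrors the policy's stance: anything not explicitly listed is off-limits.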
When to bring us in
Regulated sector (health, finance, law)? Have a lawyer read it before you sign; then you are covered.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- How do I tell if an AI answer is made up? Models sound confident even when they are wrong. A few habits catch most mistakes.
- Copilot, Copilot Pro, M365 Copilot. What does each one do? Microsoft calls several products Copilot. Below is what each variant actually does, function-wise. For the price and licence question, see 'Is a Copilot licence worth it?' under SaaS and licences.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, and issues resolved within working hours.