We want a first AI data policy on paper, but do not know where to start

You do not need forty pages. A workable first AI policy fits on one or two A4s and has five sections: purpose, allowed tools, data classification, output control, incident handling. Write it in plain language so colleagues actually read it.

Try this first

  1. Section 1, purpose: write in two sentences why AI is allowed and what the company wants from it. Also state explicitly what you do not want, for example customer data in public chats.
  2. Section 2, allowed tools: list the business-paid tools that are allowed, with version. Private accounts and free tiers without business terms go explicitly on the not-allowed list.
  3. Section 3, data classification: three levels are enough. Public can go anywhere, internal only into approved tools, confidential and personal data go nowhere without written approval.
  4. Section 4, output control: AI output is a first draft, not the final version. Whoever sends or publishes it owns the facts, numbers and sources.
  5. Section 5, incident handling: what to do if AI produced something wrong or customer data ended up in a tool. One contact person, one timeframe, no blame culture.
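If you later want to enforce the classification rule from step 3, for example in a chat proxy or an intake form, it can be sketched as a simple lookup. The level names and the example tool list below are illustrative assumptions, not terms from the policy itself:

```python
# Minimal sketch of the three-level classification from step 3.
# Tool names are hypothetical placeholders, not recommendations.
APPROVED_TOOLS = {"business-chatgpt", "business-copilot"}

def is_allowed(classification: str, destination: str) -> bool:
    """Return True if data of this classification may go to this destination."""
    if classification == "public":
        return True                           # public data can go anywhere
    if classification == "internal":
        return destination in APPROVED_TOOLS  # only approved, business-paid tools
    # Confidential and personal data need written approval, which a script
    # cannot verify, so the safe default is to refuse.
    return False
```

The deliberate design choice here is that anything unrecognized falls through to `False`: a new or misspelled classification is blocked rather than allowed.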

When to bring us in

Want a template that fits your sector and existing security policy? We can fill in the first version with you.

None of the above fits?

Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.

Who are you?

For the AI question we need your email address and company name, so we can follow up if the AI gets stuck and to prevent abuse.

Limited to 2 questions per hour and 5 per day, kept lean so the AI stays useful. Need more? Contacting us directly works better for you and for us.

Or skip the DIY entirely

Our Managed IT clients do not need to look these things up. One point of contact, a fixed monthly price, issues resolved within working hours.