
How do I tell which EU AI Act category our AI use case falls in?

The AI Act defines four categories: prohibited practices, high-risk systems, limited-risk (transparency obligations) and minimal-risk. Obligations phase in on different effective dates, and the category determines which ones apply to you.

Try this first

  1. Start with the exclusions. Prohibited practices, such as government social scoring or certain biometric identification, have been banned in the EU since February 2025.
  2. Check the high-risk annexes. Hiring, access to education, credit scoring, critical infrastructure and some law-enforcement tools typically land here.
  3. Limited risk. Chatbots, generated content (deepfakes) and emotion recognition mostly require user-facing transparency.
  4. GPAI models. Providers of general-purpose AI models have specific documentation and transparency duties under the AI Act since August 2025.
  5. Document the classification and the rationale. When in doubt, go stricter until the regulator clarifies your case.
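The triage order above can be sketched as a tiny decision helper. The category sets below are illustrative keywords of our own, not the Act's actual wording, and real classification depends on the annexes and your deployment context; checking the strictest tier first mirrors the "when in doubt, go stricter" advice:

```python
# Minimal sketch of the triage order described above. The example use cases
# are illustrative simplifications, not legal advice: actual EU AI Act
# classification depends on the Act's annexes and the deployment context.

PROHIBITED = {"government social scoring"}
HIGH_RISK = {"hiring", "credit scoring", "education access", "critical infrastructure"}
LIMITED_RISK = {"chatbot", "deepfake generation", "emotion recognition"}

def classify(use_case: str) -> str:
    """Return a risk tier, checking the strictest tier first."""
    if use_case in PROHIBITED:
        return "prohibited"      # banned since February 2025
    if use_case in HIGH_RISK:
        return "high-risk"       # annex obligations apply
    if use_case in LIMITED_RISK:
        return "limited-risk"    # user-facing transparency duties
    return "minimal-risk"        # default tier; still document the rationale

print(classify("hiring"))   # high-risk
print(classify("chatbot"))  # limited-risk
```

The point of the ordering is step 5: whichever tier the helper returns, record the classification and why, so you can defend it later.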

When to bring us in

Building something that tilts towards high-risk and directly affects people? An AI Act lawyer plus a DPIA is not optional.

See also

None of the above fits?

Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.

Who are you?

For the AI question we need your email and company, so we can follow up if the AI gets stuck and to prevent abuse.

Limited to 2 questions per hour and 5 per day; we keep it lean so the AI stays useful. For anything more, contacting us directly works better for both of us.

Or skip the DIY entirely

Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, and issues resolved within working hours.