
How do I tell who on my team really knows AI and who does not?

A formal exam is overkill, but without any measurement, claims stay anecdotal. A short practical test each quarter gives you a signal: not to grade people, but to see where training hours land and where they do not.

Try this first

  1. Define three standard tasks that any participant can do regardless of role. For example: summarise a long mail thread, ask the AI to build a table from pasted text, debug a given error message.
  2. Hand out the tasks in a shared document with a column for the prompt, the output, and 'what did you still fix manually'.
  3. Score not on output quality alone but on prompt iteration and output checking. Blind copy-pasters score zero; those who verify the data score points.
  4. Discuss the results in a team meeting: which patterns work, which pitfalls recur. Not a leaderboard, just group learning.
  5. Repeat three months later with the same tasks. The delta shows whether the training in between stuck (a simple way to tally it is sketched after this list).
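Steps 3 and 5 boil down to simple bookkeeping: points per participant, summed per round, and the difference between rounds. Below is a minimal sketch in Python, assuming each round is exported to a CSV with hypothetical columns participant, task, prompt_iteration and output_checking (0-2 points each); the file names and column names are placeholders for whatever your shared document actually exports.

```python
# Minimal sketch, not a finished tool. Assumed CSV layout per test round:
# participant, task, prompt_iteration (0-2), output_checking (0-2)
import csv
from collections import defaultdict


def total_scores(path: str) -> dict[str, int]:
    """Sum prompt-iteration and output-checking points per participant."""
    totals: dict[str, int] = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["participant"]] += int(row["prompt_iteration"])
            totals[row["participant"]] += int(row["output_checking"])
    return dict(totals)


def score_delta(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    """Per-participant difference between two rounds; positive means progress."""
    names = before.keys() | after.keys()
    return {name: after.get(name, 0) - before.get(name, 0) for name in names}


if __name__ == "__main__":
    # "ai_test_q1.csv" and "ai_test_q2.csv" are placeholder file names.
    q1 = total_scores("ai_test_q1.csv")
    q2 = total_scores("ai_test_q2.csv")
    for name, delta in sorted(score_delta(q1, q2).items()):
        print(f"{name}: {delta:+d}")
```

The output is just a per-person plus or minus, which is enough for the team discussion in step 4 without turning the exercise into a leaderboard.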

When to bring us in

If you want a test set that fits your roles and data classification, we can draft the three tasks together.

None of the above fits?

Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.

Who are you?

For the AI question we need your email address and company name, so we can follow up if the AI gets stuck and to prevent abuse.

Limited to 2 questions per hour and 5 per day, kept lean so the AI stays useful. If you need more, contacting us directly works better for both you and us.

Or skip the DIY entirely

Our Managed IT clients do not have to look these things up themselves: one point of contact, a fixed monthly price, and issues resolved within working hours.