How do I tell who on my team really knows AI and who does not?
A formal exam is overkill, but without any measurement, claims about who can actually work with AI stay anecdotal. A short practical test once a quarter gives you a signal: not to grade people, but to see where training hours pay off and where they don't.
Try this first
1. Define three standard tasks any participant can do regardless of role. For example: summarise a long email thread, ask the AI to build a table from pasted text, debug a given error message.
2. Hand out the tasks in a shared document with columns for the prompt, the output, and "what did you still fix manually?".
3. Score not on output quality alone but on prompt iteration and output checking. Blind copy-pasters score zero; those who verify the data score points.
4. Discuss the results in a team meeting: which patterns work, which pitfalls recur. Not a leaderboard, just group learning.
5. Repeat three months later with the same tasks. The delta shows whether the training in between stuck; the sketch after this list shows one way to tabulate it.
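To keep the quarterly comparison concrete, here is a minimal sketch of how the scores could be tabulated, assuming you export the shared document to a CSV. The file name `scores.csv`, the columns `person`, `round`, `iterated`, and `verified`, and the two-point rubric are all illustrative assumptions, not a fixed standard.

```python
import csv
from collections import defaultdict

# Illustrative rubric (an assumption, not a standard): one point for
# iterating on the prompt, one for verifying the output. Blind
# copy-pasting scores zero on both counts.
def row_score(row):
    return int(row["iterated"] == "yes") + int(row["verified"] == "yes")

# Accumulate points per person per quarterly round.
totals = defaultdict(lambda: defaultdict(int))  # person -> round -> points
with open("scores.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        totals[row["person"]][row["round"]] += row_score(row)

# The delta between round 1 and round 2 is what step 5 looks at.
for person, rounds in sorted(totals.items()):
    r1, r2 = rounds.get("1", 0), rounds.get("2", 0)
    print(f"{person}: round 1 = {r1}, round 2 = {r2}, delta = {r2 - r1:+d}")
```

With three tasks and two points each, six points per round is the ceiling; the deltas, not the absolute numbers, are what feed the discussion in step 4.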
When to bring us in
If you want a test set that fits your roles and data classification, we can draft the three tasks together.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- I want a one-page AI policy for my team. A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
- How do I tell if an AI answer is made up? Models sound confident even when they are wrong. A few habits catch most mistakes.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients never have to look these things up. One point of contact, a fixed monthly price, and issues resolved within working hours.