Copilot or Cursor suggests code that compiles but is wrong
The most dangerous AI mistake is not a red squiggle in your editor, but plausible code that does just the wrong thing. Reviewing on 'sounds reasonable' is not enough.
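A minimal sketch of what that looks like in practice (hypothetical Python example, not from any specific tool): a date-validation regex an assistant might plausibly suggest. It runs, it matches ordinary input, and it is still wrong.

```python
import re
from datetime import datetime

# Plausible assistant suggestion: validate YYYY-MM-DD with a regex.
# It compiles and matches ordinary dates, so it "sounds reasonable".
DATE_RE = re.compile(r"\d{4}-\d{2}-\d{2}")

def looks_like_date(s: str) -> bool:
    return bool(DATE_RE.search(s))

# But it is quietly wrong in two ways:
print(looks_like_date("2024-02-30"))      # True: an impossible date
print(looks_like_date("x 9999-99-99 y"))  # True: unanchored, no range check

# A boring, correct alternative: let the date parser do the checking.
def is_valid_date(s: str) -> bool:
    try:
        datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(is_valid_date("2024-02-30"))  # False
```

Neither version produces an error in the editor; only the second one is correct.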
Try this first
1. Set the rule that every AI suggestion gets at least one test that deliberately exercises the suggested path. No test, no merge.
2. Be extra sceptical of suggestions touching external APIs, regexes, or date handling. Those are the areas where models most often produce plausible nonsense.
3. Look up the exact API name in the official docs before accepting a suggestion. A made-up method name sometimes compiles when it happens to resemble a real one.
4. Read the PR as if the AI did not exist. 'Claude says it works' is not a review, and the author (you) remains responsible.
5. On a team: log which suggestions caused bugs. Patterns help identify use cases where the tool costs more than it delivers.
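Steps 1 and 2 can be sketched together (hypothetical Python example; the function names are made up for illustration): a date helper an assistant might plausibly suggest, plus the kind of test that deliberately hits the path where the shortcut breaks.

```python
import calendar
from datetime import date, timedelta

# Plausible assistant suggestion: "add one month" as 30 days.
# It compiles, works for many inputs, and is wrong at month ends.
def add_one_month_suggested(d: date) -> date:
    return d + timedelta(days=30)

# A corrected version: move to the next month and clamp the day
# to that month's actual length.
def add_one_month(d: date) -> date:
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# The test that earns the merge: an input chosen to break the
# shortcut (month end in a leap year), not just a happy-path date.
assert add_one_month(date(2024, 1, 31)) == date(2024, 2, 29)

# The suggested version fails exactly that case:
print(add_one_month_suggested(date(2024, 1, 31)))  # 2024-03-01, not Feb 29
```

A test on a mid-month date would have passed both versions; only the test aimed at the edge case separates them.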
When to bring us in
If AI-generated bugs keep landing in production, the real gap is usually test infrastructure or review discipline, not the tool. We are happy to look at that with you.
See also
- Can I paste a customer file or email into ChatGPT? Depends on the account and settings. Free ChatGPT and a Team tenant behave very differently from what most people assume.
- I want a one-page AI policy for my team. A real one-pager beats a thick document nobody reads. Four headers and concrete examples.
- How do I tell if an AI answer is made up? Models sound confident even when they are wrong. A few habits catch most mistakes.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients do not have to look these things up. One point of contact, a fixed monthly price, and issues resolved within working hours.