
My LLM step sometimes returns prose, sometimes JSON, and downstream processing breaks

By default, LLM output is free text and not reliably parseable. For automation you need structured output: JSON mode or tool calling. JSON mode guarantees syntactically valid JSON; schema-constrained modes (structured outputs, tool calling with a schema) additionally guarantee the output matches your schema.

Try this first

  1. OpenAI: set response_format to {"type": "json_object"} or, better, "json_schema" with a schema definition. Anthropic: use tool use to enforce a schema.
  2. Define a minimal schema: only the fields you need, with types and required markers. The tighter the schema, the more reliable the output.
  3. Close the prompt with an explicit instruction: "reply with JSON only, no commentary". That dampens overshoot on older models.
  4. Test edge cases: input with no relevant info, input with conflicting info, very long input. What does the model do in each case?
  5. Catch JSON parse errors with a fallback: a second LLM call to "fix this JSON", or a hardcoded default record.
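Steps 1 and 2 can be sketched as a request payload. This is a minimal sketch, assuming an OpenAI-style structured-outputs API: the key names follow OpenAI's documented response_format shape but should be checked against your SDK version, and the "ticket" fields are a hypothetical example.

```python
# Minimal sketch of steps 1-2: a tight JSON schema plus the matching
# OpenAI-style response_format payload. The "ticket" fields are
# hypothetical; verify key names against your SDK version.
TICKET_SCHEMA = {
    "type": "object",
    "properties": {
        "category": {"type": "string"},
        "priority": {"type": "integer"},
    },
    "required": ["category", "priority"],  # mark every field you rely on
    "additionalProperties": False,         # tighter schema, more reliable output
}

RESPONSE_FORMAT = {
    "type": "json_schema",
    "json_schema": {
        "name": "ticket",
        "strict": True,
        "schema": TICKET_SCHEMA,
    },
}
```

Pass RESPONSE_FORMAT as the response_format argument of your chat-completion call; the model is then constrained to emit JSON matching TICKET_SCHEMA.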
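Step 5, parse-with-fallback, might look like this minimal sketch; the field names and the default record are hypothetical placeholders, not part of any API.

```python
import json

# Hypothetical default record, returned when model output is unusable.
DEFAULT_RECORD = {"category": "unknown", "priority": 3}

def safe_parse(raw: str) -> dict:
    """Parse model output; fall back to the default record on failure."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        # Alternative fallback: a second LLM call with "fix this JSON".
        return dict(DEFAULT_RECORD)
    # Guard against syntactically valid JSON that misses required fields.
    if not isinstance(parsed, dict) or "category" not in parsed:
        return dict(DEFAULT_RECORD)
    return parsed
```

The second guard matters: with plain JSON mode the model can return valid JSON that is still not the object you expected.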

When to bring us in

If downstream processing depends on critical fields in LLM output, a validated schema plus assertions on the parsed result helps. We can set it up for you.
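"Validated schema plus assertion" can be as small as this stdlib-only sketch (field names and types are hypothetical; in production a library such as jsonschema or pydantic does this more thoroughly):

```python
# Minimal sketch: assert critical fields before downstream use.
# Field names and types are hypothetical examples.
REQUIRED_FIELDS = {"category": str, "priority": int}

def validate(record: dict) -> dict:
    """Assert every critical field is present with the expected type."""
    for field, expected_type in REQUIRED_FIELDS.items():
        assert field in record, f"missing field: {field}"
        assert isinstance(record[field], expected_type), f"bad type for {field}"
    return record
```

Failing loudly at this boundary is the point: a raised AssertionError here is far cheaper than a corrupted record three steps downstream.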


None of the above fits?

Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.

Who are you?

For the AI question we need your email and company, so we can follow up if the AI gets stuck, and to prevent abuse.

Limited to 2 questions per hour and 5 per day; we keep it lean so the AI stays useful. For anything more, contacting us directly works better for both of us.

Or skip the DIY entirely

Our Managed IT clients do not look these things up. One point of contact, a fixed monthly price, issues resolved within working hours.