My LLM step sometimes returns prose, sometimes JSON, and downstream steps break
By default, LLM output is free text and not reliably parseable. For automation you need structured output: JSON mode or tool calling. JSON mode guarantees syntactically valid JSON; a JSON schema or a forced tool call additionally guarantees the fields you defined.
Try this first
1. OpenAI: set response_format to { "type": "json_object" }, or better, to "json_schema" with a schema definition. Anthropic: use tool use to enforce a schema. (Sketches of both follow after this list.)
2. Define a minimal schema: only the fields you actually need, with types and required markers. The tighter the schema, the more reliable the output.
3. Close the prompt with an explicit instruction: "Reply with JSON only, no commentary." That dampens overshoot on older models.
4. Test edge cases: input with no relevant information, input with conflicting information, and very long input. What does the model do in each case?
5. Catch JSON.parse errors with a fallback: a second LLM call to "fix this JSON", or a hardcoded default record (see the last sketch below).
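For step 1, a minimal sketch of the OpenAI route, assuming the official openai npm package; the model name, invoice fields, and the extractInvoice helper are illustrative, not prescriptive:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Extract a few invoice fields as schema-validated JSON (illustrative schema).
export async function extractInvoice(emailBody: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumption: any model that supports structured outputs
    messages: [
      { role: "system", content: "Extract the invoice fields. Reply with JSON only." },
      { role: "user", content: emailBody },
    ],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "invoice",
        strict: true, // reject anything that does not match the schema exactly
        schema: {
          type: "object",
          properties: {
            invoice_number: { type: "string" },
            amount: { type: "number" },
            currency: { type: "string" },
          },
          required: ["invoice_number", "amount", "currency"],
          additionalProperties: false,
        },
      },
    },
  });

  // With json_schema the content is guaranteed to parse and match the schema.
  return JSON.parse(response.choices[0].message.content ?? "{}");
}
```

Keeping the schema this small is exactly step 2: fewer fields means fewer ways for the model to drift.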
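The Anthropic equivalent, also a sketch: force a single tool so the reply is always a schema-shaped object rather than prose. It assumes the @anthropic-ai/sdk package; the tool name and model string are placeholders.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function extractInvoiceClaude(emailBody: string) {
  const message = await anthropic.messages.create({
    model: "claude-sonnet-4-5", // placeholder: use whichever model you already run
    max_tokens: 1024,
    tools: [
      {
        name: "record_invoice",
        description: "Record the extracted invoice fields.",
        input_schema: {
          type: "object",
          properties: {
            invoice_number: { type: "string" },
            amount: { type: "number" },
            currency: { type: "string" },
          },
          required: ["invoice_number", "amount", "currency"],
        },
      },
    ],
    // Forcing the tool means the model cannot answer in free text.
    tool_choice: { type: "tool", name: "record_invoice" },
    messages: [{ role: "user", content: emailBody }],
  });

  // The structured result arrives as a tool_use block; its input follows the schema.
  for (const block of message.content) {
    if (block.type === "tool_use") return block.input;
  }
  return null;
}
```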
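And for step 5, a parse-with-fallback wrapper as a sketch; the repair prompt and the default record are illustrative and should match whatever your downstream step expects.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Parse model output; on failure, ask the model to repair it; last resort is a default record.
async function parseOrRepair(raw: string): Promise<Record<string, unknown>> {
  try {
    return JSON.parse(raw);
  } catch {
    try {
      // Second LLM call: "fix this JSON" in json_object mode, so at least the syntax is valid.
      const repaired = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages: [
          { role: "system", content: "Fix this so it is valid JSON. Reply with JSON only." },
          { role: "user", content: raw },
        ],
        response_format: { type: "json_object" },
      });
      return JSON.parse(repaired.choices[0].message.content ?? "{}");
    } catch {
      // Hardcoded default record so downstream steps always receive the expected shape.
      return { invoice_number: null, amount: null, currency: null };
    }
  }
}
```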
When to bring us in
If you process LLM output downstream and specific fields are critical, a validated schema plus assertions on the parsed result helps. We can set that up for you.
See also
- n8n: self-host or cloud? Self-hosted is cheaper at volume and keeps data local. Cloud removes the ops burden.
- Zapier or Make: which fits better? Zapier is straight-line; Make handles complex flows with routers and iterators for less money.
- Power Automate Cloud or Desktop: which to use? Cloud for SaaS integrations and triggers. Desktop for RPA against legacy Windows apps without APIs.
None of the above fits?
Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.
Or skip the DIY entirely
Our Managed IT clients don't look these things up themselves: one point of contact, a fixed monthly price, and issues resolved within business hours.