I don't know which n8n workflows actually run and which sit idle

n8n's built-in Executions view shows runs per workflow, but without your own dashboards you miss trends and silent failures. A custom monitoring table, combined with an appropriate N8N_LOG_LEVEL and webhook status checks, makes it manageable.

Try this first

  1. Enable execution-data persistence for all workflows (Workflow settings > Save executions); otherwise runs are pruned once the retention window passes.
  2. Build a meta-workflow that queries the Postgres execution table hourly: runs per workflow, error rate, last-run time.
  3. Push the output to a table or a Slack report. Track weekly trends: workflows that have stopped running and workflows with a growing error rate.
  4. Self-hosting? Also monitor the container itself (CPU, RAM, restart count) with an external uptime monitor; n8n's own events say nothing about crashes.
  5. Document each workflow's SLA: how often it should run and what failure rate is acceptable. Without that, the monitor can't tell when something is off.
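The hourly meta-workflow boils down to one aggregation. A minimal sketch in Python, assuming n8n's Postgres schema stores runs in an `execution_entity` table with `workflowId`, `status`, and `stoppedAt` columns (names vary across n8n versions, so verify against your own database):

```python
from collections import defaultdict

# SQL for the hourly meta-workflow. Table and column names assume a
# recent n8n Postgres schema ("execution_entity"); older versions differ.
EXECUTION_STATS_SQL = """
SELECT "workflowId", status, "stoppedAt"
FROM execution_entity
WHERE "stoppedAt" > now() - interval '7 days';
"""

def summarize(rows):
    """Aggregate raw execution rows (workflow_id, status, stopped_at)
    into per-workflow stats: run count, error rate, last-run time."""
    stats = defaultdict(lambda: {"runs": 0, "errors": 0, "last_run": None})
    for workflow_id, status, stopped_at in rows:
        s = stats[workflow_id]
        s["runs"] += 1
        if status == "error":
            s["errors"] += 1
        if s["last_run"] is None or stopped_at > s["last_run"]:
            s["last_run"] = stopped_at
    for s in stats.values():
        s["error_rate"] = s["errors"] / s["runs"]
    return dict(stats)
```

In the meta-workflow, run the query with a Postgres node (or `psycopg2`), feed the rows into `summarize`, and post the result to your table or Slack report.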
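For the external uptime check on a self-hosted install, a simple probe of the health endpoint is a start. A sketch, assuming recent n8n versions expose `/healthz` on the default port 5678 (the injectable `fetch` parameter is only there to make the logic testable); this covers HTTP reachability, and CPU/RAM/restart counts still need a separate source such as your container runtime's metrics:

```python
from urllib import request, error

def check_n8n(url="http://localhost:5678/healthz", fetch=None):
    """Probe the n8n health endpoint. Returns (healthy, detail).
    `fetch` can be injected for testing; by default it does a real HTTP GET."""
    if fetch is None:
        def fetch(u):
            with request.urlopen(u, timeout=5) as resp:
                return resp.status, resp.read().decode()
    try:
        status, body = fetch(url)
    except (error.URLError, OSError) as exc:
        return False, f"unreachable: {exc}"
    if status != 200:
        return False, f"HTTP {status}"
    return True, body
```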
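The documented SLAs only pay off if something checks them. A sketch, assuming each workflow's SLA is a maximum allowed gap between runs plus a maximum acceptable error rate, and that per-workflow stats carry a `last_run` timestamp and an `error_rate` (the workflow names and thresholds below are purely illustrative):

```python
from datetime import datetime, timedelta

# Illustrative SLA registry: max gap between runs, max acceptable error rate.
SLAS = {
    "invoice-sync": {"max_gap": timedelta(hours=1), "max_error_rate": 0.05},
    "weekly-report": {"max_gap": timedelta(days=7), "max_error_rate": 0.0},
}

def find_violations(stats, slas, now):
    """Compare per-workflow stats against SLAs.
    Returns a list of human-readable violation strings."""
    violations = []
    for name, sla in slas.items():
        s = stats.get(name)
        if s is None:
            violations.append(f"{name}: no executions recorded")
            continue
        if now - s["last_run"] > sla["max_gap"]:
            violations.append(f"{name}: last run {s['last_run']} exceeds max gap")
        if s["error_rate"] > sla["max_error_rate"]:
            violations.append(f"{name}: error rate {s['error_rate']:.0%} above SLA")
    return violations
```

An empty result means every documented workflow is within its SLA; anything else goes straight into the Slack report.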

When to bring us in

If you have a growing n8n install without run visibility, a small monitoring layer pays off quickly. We can set up the queries and the dashboard.

None of the above fits?

Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.

Who are you?

For the AI question we need your email and company, so that we can follow up if the AI gets stuck, and to prevent abuse.

Questions are limited to 2 per hour and 5 per day; we keep it lean so the AI stays useful. If you need more, contacting us directly works better for both sides.

Or skip the DIY entirely

Our Managed IT clients don't look these things up themselves: one point of contact, a fixed monthly price, and issues resolved within working hours.