n8n connects apps with intelligence. Self-hosted, no per-execution pricing, and every step can use AI to analyze, decide, and adapt. Build it right.

Most automation tools connect A to B. A trigger fires, an action runs, done. A form submission creates a CRM contact. A new Stripe payment logs to a spreadsheet. Useful. Mechanical. Dumb.
n8n does something fundamentally different. It lets you wire an AI model into the middle of any workflow so each step can analyze context, make decisions, and adapt its behavior. The workflow stops being a pipeline and starts being a decision engine.
I have been building production n8n workflows for eighteen months. Some of them route tens of thousands of operations per day. Here is everything I learned, including the parts the documentation does not tell you.
Why n8n instead of Zapier or Make? The honest answer is pricing and control.
Zapier charges per task execution. Make charges per operation. Both models work fine when your workflows run occasionally. They become brutal when you start processing high volumes with AI steps.
Consider a workflow that classifies incoming support emails with AI, extracts key information, routes them to the right team, and generates a draft response. On Zapier, that is at minimum four operations per email. Scale to five hundred emails per day and you are looking at serious monthly costs before you have written a single line of custom logic.
On n8n, self-hosted on a $20/month VPS, that is your infrastructure doing its job. Fixed cost. Unlimited executions.
Self-hosting n8n is not optional for serious production workflows. The moment your execution count starts climbing, the economics flip decisively in favor of owning your infrastructure.
Beyond pricing, control matters. When a workflow breaks at 2 AM, I want to SSH into my server and read the actual logs. Not submit a support ticket. Not wait for a platform status page to update. n8n gives you that.
The trade-off is setup complexity. Zapier works in ten minutes. n8n requires server configuration, SSL certificates, and some DevOps knowledge. That cost is real. Pay it once, and you will not regret it.
The fastest production-ready setup uses Docker Compose:
# docker-compose.yml — run on any VPS with 2GB+ RAM
# Put an nginx reverse proxy in front for SSL termination
# Pair with a Postgres service (or managed Postgres) for persistence
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=your-user
      - N8N_BASIC_AUTH_PASSWORD=strong-password
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres   # hostname of your Postgres service
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/

Use PostgreSQL, not the default SQLite. SQLite works for development. It will corrupt your execution history under production load. I learned this the hard way.
Enable basic auth. Your n8n instance will be internet-accessible (it needs to be for webhooks). Without authentication, anyone who finds the URL owns your workflows.
Standard n8n workflows are powerful but predictable. The moment you add an AI node between data steps, the workflow becomes adaptive.
Here is a real workflow I run: incoming customer emails arrive via webhook. An AI node reads the full email text, classifies it into categories (billing, technical, feature request, complaint, other), extracts a customer sentiment score, identifies the urgency level, and outputs structured JSON. Downstream nodes route based on that JSON without any manual rules.
// Prompt for the AI classification node (requests structured JSON output)
const classificationPrompt = `
Analyze this customer email and return JSON:
{
"category": "billing|technical|feature|complaint|other",
"urgency": "low|medium|high|critical",
"sentiment": -1.0 to 1.0,
"key_issue": "one sentence summary",
"suggested_team": "billing|engineering|product|support"
}
Email:
{{$json.email_body}}
`;
// The AI node processes this and returns parseable JSON
// Downstream IF nodes route on category and urgency fields

The key insight is structured output. When you ask an AI to return JSON with specific fields, you can wire its output directly into routing logic without fragile string parsing. Every AI node in my production workflows returns structured JSON. No exceptions.
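Even with a structured prompt, models occasionally wrap their JSON in markdown fences or surround it with stray prose. A small defensive parser in a Function node keeps that from breaking downstream routing. This is a sketch, not built-in n8n behavior, and the helper name is mine:

```javascript
// Defensive JSON extraction for AI responses — a sketch, not part of n8n itself.
// Pulls out the first {...} span, strips markdown code fences, and parses it,
// throwing on failure so a downstream error branch can catch it.
function extractJson(raw) {
  const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim(); // strip code fences
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in AI output");
  }
  return JSON.parse(cleaned.slice(start, end + 1));
}

// Example: a typical messy model response
// (fence string built up so this block stays readable)
const fence = "`".repeat(3);
const messy = fence + 'json\n{"category": "billing", "urgency": "high"}\n' + fence;
const parsed = extractJson(messy);
```

In an n8n Function node you would feed the model's raw text in and `return [{ json: extractJson(rawText) }]` so validation runs on clean data.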
The pattern I use for every AI-powered routing workflow:
1. A webhook receives the raw payload.
2. An AI node classifies it and returns structured JSON.
3. A validation node checks the JSON for required fields and valid enum values.
4. IF/Switch nodes route on the validated fields.
5. A fallback branch catches anything that fails validation.
Step 3 is critical and most tutorials skip it. AI models occasionally produce malformed JSON or miss required fields. A validation node catches this and routes to an error handler instead of crashing the workflow.
// Validation function node (JavaScript)
const aiOutput = $input.first().json;
const required = ['category', 'urgency', 'sentiment', 'key_issue'];
const missing = required.filter(field => !(field in aiOutput));
if (missing.length > 0) {
throw new Error(`AI output missing fields: ${missing.join(', ')}`);
}
// Validate enum values
const validCategories = ['billing', 'technical', 'feature', 'complaint', 'other'];
if (!validCategories.includes(aiOutput.category)) {
throw new Error(`Invalid category: ${aiOutput.category}`);
}
return [{ json: aiOutput }];

Instead of creating separate webhooks for every integration, build a single intelligent gateway. One URL receives everything. The AI node determines what it is and routes accordingly.
I use this for a client whose team sends requests via email, Slack, and a web form. Three different formats, one workflow. The AI node normalizes everything into a standard structure before any downstream processing happens.
This eliminates the "webhook sprawl" problem where you end up with forty different entry points that all need maintenance.
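The normalization step in such a gateway might look like this in a Function node. The field names (`email_body`, `channel`, `form_message`) are illustrative assumptions about what each source sends, not a fixed schema:

```javascript
// Sketch of the single-gateway normalization step.
// Detects the source by payload shape and maps everything to one structure.
function normalizeRequest(payload) {
  if (payload.email_body !== undefined) {
    return { source: "email", text: payload.email_body, from: payload.from_address || null };
  }
  if (payload.channel !== undefined && payload.text !== undefined) {
    return { source: "slack", text: payload.text, from: payload.user || null };
  }
  if (payload.form_message !== undefined) {
    return { source: "form", text: payload.form_message, from: payload.email || null };
  }
  // Unknown shape — keep the raw payload so the error branch can inspect it
  return { source: "unknown", text: JSON.stringify(payload), from: null };
}

// In an n8n Function node this would be:
// return [{ json: normalizeRequest($input.first().json) }];
const normalized = normalizeRequest({ channel: "#support", text: "App is down", user: "jane" });
```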
n8n handles batch operations well but the error handling requires thought. When processing a thousand items, you do not want one failure to kill the entire batch.
// Error handling in loop nodes
// Set "Continue on Fail" = true for each item in the batch
// Collect errors in a separate branch
// Send error report after batch completes
// Never let one bad item stop a thousand good ones
const results = {
  processed: 0,
  failed: 0,
  errors: []
};
// Tally outcomes after the batch loop: failed items carry an error field
for (const item of $input.all()) {
  if (item.json.error) {
    results.failed++;
    results.errors.push(String(item.json.error));
  } else {
    results.processed++;
  }
}
// Notify with this summary once the batch completes
return [{ json: results }];

Some decisions should not be fully automated. n8n handles this elegantly with wait nodes.
The workflow processes the request, prepares a recommendation, sends it to a human for approval via email or Slack with approve/reject links, then waits. When the human clicks, the workflow resumes from where it paused.
I use this for high-value decisions: anything over a certain dollar threshold, anything touching customer data, anything that would be embarrassing if wrong.
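As a sketch of the approval message itself: n8n's Wait node in "On Webhook Call" mode pauses the execution and exposes a resume URL (via the `$execution.resumeUrl` expression in recent versions — verify against your version's docs). The helper below just formats approve/reject links around that URL; the field names are illustrative:

```javascript
// Build the human-facing approval message for a paused workflow.
// The resume URL comes from n8n's Wait node; calling it resumes the execution.
function buildApprovalMessage(request, resumeUrl) {
  return [
    `Approval needed: ${request.summary}`,
    `Amount: $${request.amount}`,
    `Approve: ${resumeUrl}?decision=approve`,
    `Reject:  ${resumeUrl}?decision=reject`,
  ].join("\n");
}

const msg = buildApprovalMessage(
  { summary: "Refund for order #1042", amount: 350 },
  "https://n8n.yourdomain.com/webhook-waiting/abc123"
);
```

A downstream IF node then reads the `decision` query parameter from the resuming call and branches accordingly.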
Different AI models have different strengths. A workflow can use a fast cheap model for initial classification, then a more capable model only for complex cases that need it.
// First pass: fast classification
// Model: claude-haiku or gpt-4o-mini
// Cost: fraction of a cent per request
// Handles 80% of cases confidently
// Second pass: only for low-confidence or complex cases
// Model: claude-opus or gpt-4o
// Cost: 20x more expensive
// Handles the hard 20%
// Result: same quality at roughly 25% of the cost
// of running everything through the expensive model

This tiered approach cuts AI API costs dramatically without sacrificing quality on hard cases.
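The escalation decision can be a one-function check. This assumes the first-pass model is prompted to include a `confidence` field (0–1) in its structured output — that field is my convention here, not something any API returns by default:

```javascript
// Decide whether a cheap first-pass classification needs a second pass
// through the expensive model. Threshold and rules are illustrative.
function needsEscalation(firstPass, threshold = 0.8) {
  if (typeof firstPass.confidence !== "number") return true; // missing → escalate
  if (firstPass.confidence < threshold) return true;          // unsure → escalate
  if (firstPass.category === "other") return true;            // catch-all → escalate
  return false;
}

// Most traffic should fall through the cheap path and skip the expensive model
const easy = { category: "billing", confidence: 0.95 };
const hard = { category: "other", confidence: 0.95 };
```

In the workflow, an IF node runs this check and routes only the escalated cases to the second AI node.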
Combine n8n's cron trigger with AI summarization to build automated reporting that actually gets read.
Every morning at 7 AM: fetch all Slack messages from the previous day, all open support tickets, all GitHub issues, and all key metrics. Feed to an AI that generates a crisp daily briefing highlighting the three most important things, flagging blockers, and noting wins. Send to the team.
People read it because it is actually useful, not because it dumps raw data at them.
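The prompt-assembly step for a briefing like this is mostly string plumbing. A sketch, with illustrative input shapes (each source reduced to an array of strings upstream):

```javascript
// Merge the morning's data sources into one summarization prompt.
function buildBriefingPrompt(slackMessages, tickets, issues) {
  return [
    "You are writing a morning briefing for the team.",
    "Highlight the 3 most important items, flag blockers, note wins.",
    "",
    `Slack messages (${slackMessages.length}):`,
    ...slackMessages.map((m) => `- ${m}`),
    "",
    `Open tickets (${tickets.length}):`,
    ...tickets.map((t) => `- ${t}`),
    "",
    `GitHub issues (${issues.length}):`,
    ...issues.map((i) => `- ${i}`),
  ].join("\n");
}

const prompt = buildBriefingPrompt(["Deploy finished"], ["#88 login bug"], ["CI flaky on main"]);
```

The AI node receives this prompt, and its output goes straight to the team's Slack channel.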
Understanding n8n's execution model helps you build more reliable workflows.
Each workflow execution is isolated. Variables do not persist between runs unless you explicitly store them in a database or use n8n's built-in data storage. This is a feature, not a bug. Stateless execution means failures do not corrupt future runs.
Queue mode separates the main process from worker processes. For production workflows handling high volume, this is essential. Without queue mode, a burst of simultaneous triggers can overwhelm the main process.
| Mode | Best For | Limitation |
|---|---|---|
| Default | Development, low volume | Synchronous, can bottleneck |
| Queue Mode | Production, high volume | Requires Redis, more setup |
| Scaling | Enterprise workloads | Multiple worker processes |
For most production deployments, queue mode with a Redis instance is the right choice. Set it up before you need it, not after a traffic spike exposes the bottleneck.
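For reference, a queue-mode configuration sketch — the variable names below are from n8n's environment-variable documentation at the time of writing and can change between versions, so verify before deploying:

```shell
# Queue mode: main process plus separate workers, coordinated via Redis
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis   # hostname of your Redis instance
QUEUE_BULL_REDIS_PORT=6379
# Then start one or more workers alongside the main process:
# n8n worker
```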
Production workflows fail. APIs go down. Rate limits hit. Data arrives malformed. How you handle failure determines whether you wake up to a disaster or a routine incident.
Every production workflow I build has three error handling layers:
Node-level: Continue on fail is enabled for nodes that can have acceptable failures (like optional data enrichment steps). Error outputs route to logging nodes that capture the failure context.
Workflow-level: An error trigger workflow catches any uncaught execution failure, logs the full execution data, and sends an alert to Telegram or Slack with enough context to diagnose the issue quickly.
Monitoring: n8n exposes execution metrics. Hook them into your monitoring stack. Alert on elevated failure rates before they become crises.
// Error trigger workflow structure:
// 1. Error Trigger node (fires on any workflow failure)
// 2. Extract execution ID, workflow name, error message
// 3. Format human-readable alert
// 4. Send to Slack/Telegram
// 5. Log to database for analysis
const errorAlert = {
workflow: $workflow.name,
executionId: $execution.id,
error: $json.error.message,
timestamp: new Date().toISOString(),
url: `https://n8n.yourdomain.com/execution/${$execution.id}`
};

The URL to the failed execution is the most valuable part. When I get an alert at 3 AM, I click the link, see exactly what happened, and can fix it in minutes instead of hunting through logs.
n8n does not replace your code. It orchestrates it.
My production stack: n8n handles workflow orchestration and scheduling. Custom API services handle heavy computation. Specialized AI agents handle domain-specific tasks. n8n calls all of them and coordinates the results.
The HTTP Request node is your bridge to everything. Any API, any service, any custom endpoint. When a built-in integration does not exist (or is limited), HTTP Request plus a little JSON handles it.
For AI operations specifically, I primarily use the HTTP Request node to call AI APIs directly rather than the built-in AI nodes. This gives me more control over models, parameters, and error handling than the abstraction layer provides.
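What that HTTP Request node sends, in sketch form. The endpoint URL and model name below are placeholders, not a specific provider's API, but most chat-style APIs follow this general shape:

```javascript
// Build the request an HTTP Request node would send to a chat-completions-style
// AI API. Endpoint, model name, and auth header are assumptions — substitute
// your provider's real values.
function buildAiRequest(apiKey, prompt) {
  return {
    url: "https://api.example-ai-provider.com/v1/chat/completions", // placeholder
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "your-model-name", // set explicitly — full control over the model
      temperature: 0,           // deterministic output for classification tasks
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const req = buildAiRequest("sk-test", "Classify this email: ...");
```

Calling the API directly like this is what lets you pin models, set temperature per step, and handle provider errors yourself instead of relying on the built-in node's defaults.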
Combine n8n with Make for specific integration patterns where Make's visual data transformer gives you an edge, and you have a complete automation infrastructure that handles anything.
Honesty matters here.
n8n is not the right tool for real-time, sub-second operations. The execution overhead is measurable. If you need to respond to a webhook in under 100ms, n8n will not get you there.
It is also not ideal for extremely complex business logic that would be cleaner as code. Sometimes a Python script or TypeScript function is more maintainable than a forty-node visual workflow. Know when to reach for code.
And the self-hosting requirement is a genuine operational burden. You own the infrastructure. You handle updates. You manage backups. For teams without DevOps capacity, the managed n8n Cloud option exists at higher cost, or Make or Zapier might be more appropriate despite the execution costs.
n8n is the best workflow automation platform for builders who want control and are willing to operate their own infrastructure. The self-hosting model, combined with AI integration, creates a competitive advantage that SaaS alternatives cannot match on price or flexibility.
Start simple. One webhook, one AI node, one action. Get comfortable with the execution model and error handling patterns. Then scale up.
The teams running n8n in production are building automation capabilities their competitors think require dedicated engineering teams. They are not wrong.
Q: What is n8n and how does it work?
n8n is an open-source workflow automation platform that connects different services through visual flows. You create workflows by connecting nodes that represent actions (API calls, data transformations, AI model calls, database operations). n8n runs on your infrastructure, giving full control over data and execution.
Q: How does n8n compare to Zapier and Make?
n8n is self-hosted (better for data privacy and cost at scale), supports complex logic (code nodes, branching, error handling), and has a more developer-friendly interface. Zapier is easiest for simple automations. Make has the best visual editor. n8n wins for complex AI workflows and organizations that need data control.
Q: What can you automate with n8n and AI?
Common AI automations include email classification and routing, content generation pipelines, data extraction from documents, lead scoring and routing, customer support ticket triage, report generation, and multi-step AI workflows that combine multiple models and tools.
Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise. Gareth built Agentik {OS} to prove that one person with the right AI system can outperform an entire traditional development team. He has personally architected and shipped 7+ production applications using AI-first workflows.
