Most N8N workflows move data: pull a record from one place, transform it, push it somewhere else. That works well for deterministic tasks. But when you need a workflow to understand text, classify intent, or generate a first draft of an email, deterministic logic falls short. That is where OpenAI and Anthropic Claude come in.

This tutorial walks through exactly how to connect N8N to both providers, design prompts that return consistent structured output, and build six AI-powered workflow patterns that solve real business problems. No coding required for the core integration. A bit of JSON for the advanced patterns.

Already comfortable with N8N basics? Good. If you are starting from scratch, read how to build your first N8N workflow first, then come back here.

Two Ways to Connect N8N to AI Models

N8N gives you two integration paths depending on your version and use case.

Option 1: The Native OpenAI Node (Easiest)

N8N ships a built-in OpenAI node available in the node library under "AI." It handles authentication and request formatting for you. Supported operations include chat completions, text completion, image generation, and embeddings.

To set it up: add the OpenAI node to your workflow, create a new credential (type: OpenAI API), paste your API key from platform.openai.com, and you are ready. The node exposes a simple prompt field and returns the model's response as a string you can reference in downstream nodes with {{ $json.message.content }}.

Option 2: The HTTP Request Node (Claude + Any Model)

For Anthropic Claude, or for any model not yet in N8N's native library, use the HTTP Request node. This node can call any REST API, including the Anthropic Messages API. The same pattern works for Mistral, Cohere, Groq, or a self-hosted Ollama instance.

N8N also added an Anthropic node inside its AI Agent component starting with version 1.30. If you are running a recent version, check the AI sub-nodes under "Chat Models" before resorting to raw HTTP requests.

Which should you use? Use the native OpenAI node when you want the fastest setup and GPT-4o is a good fit. Use the HTTP Request node or the AI Agent component when you need Claude's structured output, longer context window, or want to keep model choice flexible without reconfiguring credentials later.

Step-by-Step: Connecting N8N to OpenAI

Step 1: Create an OpenAI API Key

Log in to platform.openai.com, navigate to API Keys, and create a new secret key. Copy it immediately. OpenAI only shows it once.

Step 2: Add the OpenAI Credential in N8N

In N8N, go to Settings > Credentials > Add Credential. Search for "OpenAI API." Paste your key and save. N8N encrypts it at rest.

Step 3: Add the OpenAI Node to a Workflow

In any workflow, click the plus button, search for "OpenAI," and add it. Set the Operation to "Message a Model" (for chat completion). Choose your model (GPT-4o for most tasks, GPT-3.5 Turbo if cost is a priority). Write your prompt in the "User Message" field.

Step 4: Parse the Output

The node returns the model's reply as a text string. Reference it downstream with {{ $json.message.content }}. If you ask the model to return JSON, parse it in a Code node. Note that a Code node must return an array of items, each wrapping its data in a json key: return [{ json: JSON.parse($input.first().json.message.content) }].
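In practice, models occasionally wrap their JSON in markdown fences or return something unparseable, and a bare JSON.parse will crash the workflow run. Here is a minimal, hedged sketch of a parsing helper you could drop into a Code node; the fence-stripping regex and the ok/error shape are my own conventions, not an N8N or OpenAI standard:

```javascript
// Helper for an N8N Code node: parse the model's JSON reply without
// crashing the workflow when the output is malformed.
// In N8N you would call it as:
//   return [{ json: parseModelJson($input.first().json.message.content) }];
function parseModelJson(raw) {
  try {
    // Models sometimes wrap JSON in markdown fences despite instructions; strip them.
    const cleaned = raw.replace(/^```(?:json)?\s*/, '').replace(/\s*```$/, '').trim();
    return { ok: true, data: JSON.parse(cleaned) };
  } catch (e) {
    // Surface the failure as data so a downstream IF node can route it.
    return { ok: false, error: e.message, raw };
  }
}

// Example with a fenced reply:
const result = parseModelJson('```json\n{"category":"billing","priority":"high"}\n```');
console.log(result.ok, result.data.category); // true billing
```

The ok flag gives the next IF node a single boolean to branch on, which is simpler than re-checking the parse downstream.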

Step-by-Step: Connecting N8N to Anthropic Claude

Step 1: Get Your Anthropic API Key

Log in to console.anthropic.com, navigate to API Keys, and generate a key. Note the model IDs you plan to use: claude-3-5-sonnet-20241022 for Claude 3.5 Sonnet or claude-3-haiku-20240307 for the fastest/cheapest option.

Step 2: Add an HTTP Request Node

In N8N, add an HTTP Request node to your workflow. Configure it as follows:

Method: POST
URL: https://api.anthropic.com/v1/messages
Header: x-api-key, set to your Anthropic API key

Add a second header: anthropic-version with the value 2023-06-01. This is required by Anthropic's API.
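To make it concrete, here is a sketch of the equivalent request the HTTP Request node sends under the hood, written as a standalone Node.js (18+) helper. This is illustrative only; the helper name and the idea of reading the key from an environment variable are my assumptions, not part of N8N:

```javascript
// Hypothetical standalone sketch of what the HTTP Request node sends.
// The API key would normally come from an env var, e.g. process.env.ANTHROPIC_API_KEY.
function buildAnthropicRequest(apiKey, prompt) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    options: {
      method: 'POST',
      headers: {
        'x-api-key': apiKey,
        'anthropic-version': '2023-06-01', // required by the Messages API
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        model: 'claude-3-5-sonnet-20241022',
        max_tokens: 1024,
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// const { url, options } = buildAnthropicRequest(process.env.ANTHROPIC_API_KEY, 'Hello');
// const res = await fetch(url, options);   // uncomment to actually call the API
// const data = await res.json();           // generated text: data.content[0].text
```

Seeing the raw shape makes it easier to debug the N8N node: if a request fails, compare each header and body field against this layout.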

Step 3: Build the Request Body

Set the Body Type to JSON and use an expression to build the payload:

{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "{{ $json.userPrompt }}"
    }
  ]
}

Replace {{ $json.userPrompt }} with whatever upstream data you want to pass to the model. This can be a customer email, a form submission, a scraped webpage, or any text your workflow has collected.
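One caveat: if the upstream text contains double quotes or line breaks, interpolating it raw into the JSON body produces invalid JSON and the request fails. Since N8N expressions evaluate JavaScript, one way to guard against this (a common workaround, not an official Anthropic requirement) is to let the expression do the escaping:

```
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": {{ JSON.stringify($json.userPrompt) }} }
  ]
}
```

JSON.stringify wraps the value in quotes and escapes anything inside it, so note there are no quotation marks around the expression itself.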

Step 4: Access the Response

Claude returns a JSON object. The generated text lives at {{ $json.content[0].text }}. Pass this to your next node, whether that is a Google Sheets write, a Slack message, an email send, or another processing step.
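For reference, an abridged Messages API response looks roughly like this (field values here are illustrative):

```json
{
  "id": "msg_01...",
  "type": "message",
  "role": "assistant",
  "content": [
    { "type": "text", "text": "The generated reply lives here." }
  ],
  "model": "claude-3-5-sonnet-20241022",
  "stop_reason": "end_turn",
  "usage": { "input_tokens": 12, "output_tokens": 9 }
}
```

This is why the expression indexes content[0]: Claude returns an array of content blocks, and for plain text requests the first block holds the generated text.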

GPT-4o vs Claude 3.5 Sonnet for N8N Workflows

Both models are capable. The differences matter at the margins, and which one you choose depends on your specific task.

| Factor | GPT-4o (OpenAI) | Claude 3.5 Sonnet (Anthropic) |
| --- | --- | --- |
| N8N native node | Yes, built in | Via AI Agent or HTTP Request |
| Context window | 128K tokens | 200K tokens |
| JSON output reliability | Good (use response_format) | Excellent (follows instructions tightly) |
| Speed (short prompts) | Faster | Slightly slower |
| Input cost | $5 / 1M tokens | $3 / 1M tokens |
| Output cost | $15 / 1M tokens | $15 / 1M tokens |
| Best for | Quick classification, routing, short drafts | Long documents, complex drafts, reliable JSON |

Most production N8N setups use GPT-4o for high-frequency, short-prompt tasks (classifying a lead takes 200 tokens) and Claude for lower-frequency but higher-stakes tasks (drafting a sales proposal from 10 pages of notes).

Prompt Design for Reliable Automation

A workflow prompt is different from a chat prompt. In chat, you iterate. In a workflow, the output feeds the next node automatically. If the format breaks, the workflow breaks. These four principles keep your AI outputs consistent:

1. Always Specify the Output Format

Tell the model exactly what format to return. Do not say "summarize this email." Say "Summarize this email in exactly three bullet points. Return only the bullet points, no preamble." For structured data, say "Return a valid JSON object with keys: category, priority, and sentiment. No markdown, no explanation."

2. Include a System Prompt

For OpenAI, add a System Message field in the node. For Claude via HTTP, add a "system" key to the request body alongside "messages". The system prompt locks the model's role and output rules so they do not drift across different user inputs.
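For Claude, the request body with a system prompt might look like the following. This is a hedged example: the emailBody field and the triage rules are hypothetical, so adapt both to your own workflow.

```json
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 1024,
  "system": "You are a support triage assistant. Always return a valid JSON object with keys: category, priority. No markdown, no explanation.",
  "messages": [
    { "role": "user", "content": "{{ $json.emailBody }}" }
  ]
}
```

Keeping the format rules in the system prompt rather than the user message means they apply identically no matter what text the workflow injects at runtime.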

3. Pass Dynamic Context from Upstream Nodes

Use N8N expressions to inject data into the prompt. If the workflow received a form submission, your prompt might read: "Classify the following customer message into one of these categories: billing, technical support, feature request, or other. Message: {{ $json.formMessage }}". The expression pulls the actual form value at runtime.

4. Validate the Output Before Passing It On

Add an IF node after the AI call to check that the output matches your expected shape. If the model returns something unexpected (it happens), route it to an error handler or a human review queue rather than letting corrupted data flow downstream. The N8N error handling guide covers this pattern in detail.
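A shape check like this can run in a Code node just before the IF node. The key names and allowed values below are hypothetical; match them to whatever your prompt asks the model to return:

```javascript
// Sketch of a validation step for an AI classification reply.
// Category and priority values are examples, not a fixed schema.
function validateClassification(obj) {
  const validCategory = ['billing', 'technical', 'account', 'other'].includes(obj?.category);
  const validPriority = ['high', 'medium', 'low'].includes(obj?.priority);
  return validCategory && validPriority;
}

// In an N8N Code node you might write:
//   const payload = $input.first().json;
//   return [{ json: { ...payload, valid: validateClassification(payload) } }];
console.log(validateClassification({ category: 'billing', priority: 'high' })); // true
console.log(validateClassification({ category: 'spam' })); // false
```

The IF node then branches on the single valid flag, sending failures to your error handler or review queue.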

Six AI Workflow Patterns for Small Businesses

1. Lead Scoring from Inbound Form Submissions

Trigger: Webhook from your contact form. AI node: Pass the prospect's message to GPT-4o with a prompt like "Score this sales inquiry from 1 to 10 based on budget signals, urgency, and specificity. Return JSON with keys: score, rationale." Downstream: IF score > 7, add to high-priority CRM stage and notify the sales team via Slack. IF score <= 7, add to nurture sequence.
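Between the AI node and the IF node, a small post-processing step keeps the routing robust even when the model returns a string instead of a number or a score outside the range. A hypothetical sketch, using the same score threshold as above:

```javascript
// Hypothetical post-processing for the lead-scoring reply: coerce and clamp
// the score, then derive the route the IF node would take.
function routeLead(reply) {
  const score = Math.min(10, Math.max(1, Number(reply.score) || 1));
  return { score, route: score > 7 ? 'high-priority' : 'nurture' };
}

console.log(routeLead({ score: 9, rationale: 'clear budget and timeline' }));
// { score: 9, route: 'high-priority' }
console.log(routeLead({ score: '4' })); // { score: 4, route: 'nurture' }
```

Clamping to 1-10 means a hallucinated score of 0 or 47 degrades gracefully instead of silently misrouting a lead.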

This is the kind of workflow that turns a five-second form fill into an actionable sales signal without anyone looking at it manually.

2. Customer Email Drafting with CRM Context

Trigger: New deal created in your CRM (via webhook or polling). AI node: Pull the contact's name, company, industry, and any notes from the CRM record. Pass them to Claude with a prompt: "Draft a personalized follow-up email for a sales rep to send after their first call with this prospect. Tone: professional but warm. Length: 150 words max." Downstream: Create a draft in Gmail. The rep reviews and sends with one click.

3. Support Ticket Classification and Routing

Trigger: New email to support@yourcompany.com (via Gmail trigger or email parsing service). AI node: Classify the email into a category (billing, technical, account, other) and extract urgency (high/medium/low). Return JSON. Downstream: Route to the correct Zendesk queue, tag in your help desk, and if urgency is high, ping the on-call engineer via Slack. This workflow alone saves a tier-1 support agent two to three hours per day on triage.

4. Content Summarization from URLs or Documents

Trigger: Google Sheets row added with a URL column. Upstream: HTTP Request node to fetch the page content. AI node: Summarize the content in three sentences. Extract key topics as a JSON array. Downstream: Write summary and topics back to the Google Sheet row. Useful for research teams, content strategists, and PR teams monitoring competitor updates. Connect this to the N8N Google Sheets integration for a complete pipeline.

5. Sentiment Analysis on Reviews and Feedback

Trigger: New row in Airtable from a review aggregation tool. AI node: Analyze sentiment (positive/negative/neutral) and extract the top complaint or compliment. Return JSON. Downstream: Update Airtable with sentiment data. If negative, trigger an alert to the customer success team. Route positive reviews to a "testimonial candidates" table. This workflow caught a pattern of complaints about slow shipping for one of our clients within 48 hours of the issue starting.

6. Invoice and Document Data Extraction

Trigger: New file in a Google Drive folder (invoice PDFs from vendors). Upstream: Use the Google Drive node to get the file, pass it through a PDF parser. AI node: Extract vendor name, invoice number, amount, due date, and line items. Return structured JSON. Downstream: Write to Google Sheets, create a bill in QuickBooks, and notify the finance team. This replaces manual data entry entirely.

What This Looks Like in Practice

One of our clients, an outdoor kitchen appliance brand, was manually reviewing 200 to 300 customer inquiry emails per week to triage them between sales, support, and wholesale teams. We built a two-node N8N workflow: Gmail trigger + GPT-4o classification. The workflow now routes every email automatically, with the right team notified via Slack within 30 seconds of arrival.

That is not a dramatic AI story. It is a $50/month OpenAI bill replacing three hours of human sorting per day. The same pattern we used in the Le Marquier implementation, where AI came to handle 98% of inquiries at an 80% cost reduction, applies at any scale.

If you want to calculate what this kind of workflow would be worth for your business, the ROI calculator gives you a concrete number in under two minutes.

Production Checklist Before Going Live

Before you activate an AI workflow for live traffic, run through this list:

- The prompt pins down the output format exactly (keys, length, no markdown, no preamble)
- An IF or Code node validates the AI output before anything writes to a live system
- Unexpected outputs route to an error handler or a human review queue, not downstream
- Expected token costs are estimated against your execution volume
- The workflow has been tested against real historical data, not just happy-path samples

Not sure if your business is ready for AI workflow automation? The AI readiness assessment takes five minutes and tells you where you stand across data, processes, and team readiness.

When to Build vs When to Hire

These workflow patterns are buildable by a technical founder or operations manager with a few days of N8N experience. The documentation is good, the node library covers most integrations, and the community forum is active.

That said, there is a gap between a working prototype and a production workflow that handles errors gracefully, scales without breaking, and integrates cleanly with your existing systems. If you need the latter without the learning curve, our N8N automation service handles scoping, build, testing, and handoff in two to three weeks.

Frequently Asked Questions

Does N8N have a built-in OpenAI node?

Yes. N8N ships a native OpenAI node that supports chat completions, text completion, image generation, and embeddings. You authenticate with your OpenAI API key, then drag the node into any workflow without writing any code.

How do I connect N8N to Claude (Anthropic)?

Use the N8N HTTP Request node pointed at https://api.anthropic.com/v1/messages. Set the method to POST, add an x-api-key header with your Anthropic API key, add an anthropic-version header (e.g., 2023-06-01), and pass your model and messages in the JSON body. N8N also offers an Anthropic node in its AI Agent component starting from version 1.30.

Which is better for N8N workflows: OpenAI GPT-4o or Claude?

For most N8N automation tasks, both models perform well. GPT-4o is slightly faster for short classification tasks and has native function-calling support. Claude 3.5 Sonnet tends to produce more consistent structured output (JSON) and handles longer documents well. Many teams use GPT-4o for quick routing decisions and Claude for detailed summarization or drafting tasks within the same workflow.

How much does it cost to run AI nodes in N8N?

N8N itself does not charge for AI node executions beyond normal execution credits. You pay OpenAI or Anthropic directly based on tokens used. GPT-4o costs roughly $5 per million input tokens and $15 per million output tokens. Claude 3.5 Sonnet is priced at $3 per million input tokens and $15 per million output tokens. For most small business workflows running a few hundred executions per day with modest prompts, monthly AI API costs typically run $10 to $50.

Can I use N8N AI workflows without cloud hosting?

Yes. N8N can be self-hosted on your own server using Docker. Your N8N instance makes outbound API calls to OpenAI or Anthropic, so your data is sent to those providers but not to any N8N cloud. If data privacy is a concern, you can also route requests through a local LLM (like Ollama running LLaMA 3) using the same HTTP Request node pattern.

What are the most common N8N + AI workflow use cases?

The six most common patterns are: (1) lead scoring and qualification from form submissions, (2) customer email drafting with personalized context, (3) support ticket classification and routing, (4) content summarization from documents or URLs, (5) sentiment analysis on reviews or feedback, and (6) structured data extraction from unstructured text like invoices or emails.

Ready to Get Started?

Book a free 30-minute discovery call. We'll identify your biggest opportunities and show you exactly what AI automation can do for your business.

Book a Free Discovery Call

Suyash Raj
Founder of rajsuyash.com, an AI automation agency helping SMBs save time and scale with AI agents, N8N workflows, and voice automation.