How to Use CrawlForge with n8n: Workflow Automation Guide
CrawlForge Team
Engineering Team
April 5, 2026
7 min read


n8n is one of the most popular workflow automation platforms, with over 400 integrations and a visual builder that makes complex pipelines feel simple. CrawlForge MCP adds 18 specialized web scraping tools to your n8n toolkit -- giving your automated workflows the ability to fetch pages, extract structured data, monitor changes, and run deep research without writing a single line of scraping code.

This guide walks you through connecting CrawlForge to n8n, building your first scraping workflow, and scaling to production-grade pipelines.

Table of Contents

  • Prerequisites
  • How CrawlForge Works with n8n
  • Step 1: Configure the HTTP Request Node
  • Step 2: Build a Price Monitoring Workflow
  • Step 3: Add Scheduling and Notifications
  • Advanced: Multi-Page Crawl Pipeline
  • Credit Cost Breakdown
  • Common Errors and Fixes
  • Next Steps

Prerequisites

Before you start, you will need:

  • n8n installed locally or running in the cloud (n8n.io)
  • A CrawlForge API key -- sign up free to get 1,000 credits
  • Basic familiarity with n8n's visual workflow editor

How CrawlForge Works with n8n

CrawlForge exposes a REST API at https://crawlforge.dev/api/v1/tools/. Each of the 18 tools has its own endpoint. You call these endpoints from n8n's HTTP Request node, passing your API key in the Authorization header and the tool parameters in the JSON body.

The flow looks like this:

Trigger (Schedule/Webhook) -> HTTP Request (CrawlForge) -> Transform Data -> Output (Slack/Email/DB)

No custom n8n nodes to install. No npm packages. Just standard HTTP requests.
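To make the call pattern concrete, here is a minimal sketch of the same request outside n8n, in TypeScript. The `extract_content` endpoint path follows the article; the `url` body field and `buildToolRequest` helper are assumptions for illustration.

```typescript
// Build the request that n8n's HTTP Request node sends for one tool call.
const API_KEY = "cf_live_your_api_key_here";

function buildToolRequest(tool: string, params: Record<string, unknown>) {
  return {
    url: `https://crawlforge.dev/api/v1/tools/${tool}`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(params),
    },
  };
}

// Usage: call one tool endpoint with its JSON parameters.
const req = buildToolRequest("extract_content", { url: "https://example.com" });
// const res = await fetch(req.url, req.options);
// const data = await res.json();
```

The same shape works for every tool: only the endpoint suffix and the JSON parameters change.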

Step 1: Configure the HTTP Request Node

Open n8n and create a new workflow. Add an HTTP Request node and configure it:

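As a sketch, the key settings are Method: POST and URL: https://crawlforge.dev/api/v1/tools/extract_content, with a JSON body along these lines (the `url` field name is an assumption; check the API reference for the exact schema):

```json
{
  "url": "https://example.com/blog/some-article"
}
```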

Set the Authentication to "Header Auth" with:

  • Name: Authorization
  • Value: Bearer cf_live_your_api_key_here

Click Execute Node to test. You should see clean, extracted content from the target page in the output panel. This call costs 2 credits.

Reusable Credential Setup

To avoid repeating your API key across nodes, create a Header Auth credential in n8n:

  1. Go to Settings > Credentials > Add Credential
  2. Select Header Auth
  3. Set Name to Authorization, Value to Bearer cf_live_xxxxx
  4. Save as "CrawlForge API"

Now every HTTP Request node can reference this credential.

Step 2: Build a Price Monitoring Workflow

Here is a practical workflow that monitors competitor pricing pages daily and sends a Slack alert when prices change.

Workflow Architecture

Schedule Trigger (Daily 9am)
  -> HTTP Request: CrawlForge scrape_structured
  -> IF Node: Compare with yesterday's data
  -> Slack Node: Send alert if changed
  -> Google Sheets Node: Log all prices

The Scraping Node

Configure the HTTP Request node to use CrawlForge's scrape_structured tool:

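As a sketch, point the node at https://crawlforge.dev/api/v1/tools/scrape_structured with a JSON body along these lines (the `selectors` parameter shape is an assumption; adjust the CSS selectors to the actual markup of the pricing page):

```json
{
  "url": "https://competitor.com/pricing",
  "selectors": {
    "plan_name": ".pricing-card h3",
    "price": ".pricing-card .price"
  }
}
```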

This call costs 2 credits. Running daily for 30 days = 60 credits per month for one competitor. Monitor 10 competitors for just 600 credits/month -- well within the Hobby plan.

The Comparison Logic

Use n8n's IF node to compare today's prices with the previous run. The Code node can store and diff values:

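A minimal sketch of that diff logic, assuming each run produces a map of plan names to price strings. In an n8n Code node, `$getWorkflowStaticData('global')` can persist the previous run's values between executions; the `diffPrices` helper and its field names are illustrative.

```typescript
// Map of plan name -> price string, e.g. { Pro: "$49" }.
type PriceMap = Record<string, string>;

// Compare this run's prices with the previous run and list the changes.
function diffPrices(previous: PriceMap, current: PriceMap) {
  const changes = Object.entries(current)
    .filter(([plan, price]) => previous[plan] !== price)
    .map(([plan, price]) => ({ plan, from: previous[plan] ?? null, to: price }));
  return { changed: changes.length > 0, changes };
}

// Example: the Pro plan moved from $49 to $59, and Team is a new plan.
const result = diffPrices({ Pro: "$49" }, { Pro: "$59", Team: "$99" });
```

Route the `changed` flag into the IF node, and pass `changes` through to the Slack message on the true branch.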

Step 3: Add Scheduling and Notifications

Schedule Trigger

Add a Schedule Trigger node at the start of your workflow:

  • Trigger Interval: Every day
  • Hour: 9 (runs at 9:00 AM)
  • Timezone: Your local timezone

Slack Notification

Add a Slack node after the IF node (true branch):

Channel: #competitive-intel

Message:

Price change detected on competitor.com:
Previous: {{ $json.previousPrices }}
Current: {{ $json.currentPrices }}
Changed at: {{ $json.timestamp }}

Advanced: Multi-Page Crawl Pipeline

For larger scraping jobs, use CrawlForge's batch_scrape tool to process multiple URLs in a single API call.

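As a sketch, the request body for https://crawlforge.dev/api/v1/tools/batch_scrape might look like this (the `urls` parameter name is an assumption; the endpoint path follows the article's pattern):

```json
{
  "urls": [
    "https://competitor.com/pricing",
    "https://competitor.com/features",
    "https://competitor.com/changelog",
    "https://competitor.com/blog",
    "https://competitor.com/docs"
  ]
}
```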

This processes all 5 URLs in parallel for 5 credits total -- compared to 5 separate extract_content calls at 2 credits each (10 credits). Batching saves 50% on multi-URL jobs.

Use n8n's Split In Batches node to process the results one at a time if needed for downstream nodes.

Credit Cost Breakdown

| Workflow                   | Tools Used        | Credits per Run | Monthly (Daily) |
|----------------------------|-------------------|-----------------|-----------------|
| Single page extract        | extract_content   | 2               | 60              |
| Price monitoring (1 site)  | scrape_structured | 2               | 60              |
| Batch scrape (5 URLs)      | batch_scrape      | 5               | 150             |
| Full site crawl            | crawl_deep        | 5               | 150             |
| Research pipeline          | deep_research     | 10              | 300             |

The Free tier (1,000 credits/month) covers ~16 daily single-page scrapes. The Hobby plan ($19/month, 10,000 credits) handles most production workflows.

Common Errors and Fixes

401 Unauthorized: Your API key is missing or invalid. Check the Authorization header format: Bearer cf_live_xxxxx.

429 Rate Limited: You are sending too many requests per second. Add a Wait node between HTTP Request nodes with a 1-second delay, or use batch_scrape to combine requests.
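If you are making the calls from a Code node rather than HTTP Request nodes, a small retry-with-backoff helper is an alternative to the Wait node. This is a sketch; the retry count and delays are illustrative, not CrawlForge-documented limits.

```typescript
// Retry a fetch call on 429 responses with exponential backoff (1s, 2s, 4s...).
async function fetchWithRetry(
  url: string,
  options: RequestInit,
  maxRetries = 3,
  baseDelayMs = 1000,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Wait before retrying, doubling the delay on each attempt.
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}
```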

Empty response body: The target site may require JavaScript rendering. Switch from extract_content to scrape_with_actions (5 credits) for dynamic pages.

Next Steps

You now have a working CrawlForge + n8n pipeline. From here, you can:

  • Add error handling with n8n's Error Trigger node
  • Store results in PostgreSQL, Airtable, or Google Sheets
  • Chain tools -- use search_web to find URLs, then extract_content to process them
  • Explore all 18 tools in the CrawlForge documentation

For more integration patterns, check out:

  • CrawlForge Quick Start Guide
  • 18 Web Scraping Tools in One MCP Server
  • Building an AI Research Assistant

Ready to automate your web scraping workflows? Start free with 1,000 credits -- no credit card required.

Tags

n8n, workflow-automation, integration, web-scraping, no-code, tutorial, mcp

About the Author


CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.




© 2025-2026 CrawlForge. All rights reserved.