n8n is one of the most popular workflow automation platforms, with over 400 integrations and a visual builder that makes complex pipelines feel simple. CrawlForge MCP adds 18 specialized web scraping tools to your n8n toolkit -- giving your automated workflows the ability to fetch pages, extract structured data, monitor changes, and run deep research without writing a single line of scraping code.
This guide walks you through connecting CrawlForge to n8n, building your first scraping workflow, and scaling to production-grade pipelines.
Table of Contents
- Prerequisites
- How CrawlForge Works with n8n
- Step 1: Configure the HTTP Request Node
- Step 2: Build a Price Monitoring Workflow
- Step 3: Add Scheduling and Notifications
- Advanced: Multi-Page Crawl Pipeline
- Credit Cost Breakdown
- Common Errors and Fixes
- Next Steps
Prerequisites
Before you start, you will need:
- n8n installed locally or running in the cloud (n8n.io)
- A CrawlForge API key -- sign up free to get 1,000 credits
- Basic familiarity with n8n's visual workflow editor
How CrawlForge Works with n8n
CrawlForge exposes a REST API at https://crawlforge.dev/api/v1/tools/. Each of the 18 tools has its own endpoint. You call these endpoints from n8n's HTTP Request node, passing your API key in the Authorization header and the tool parameters in the JSON body.
The flow looks like this:
Trigger (Schedule/Webhook) -> HTTP Request (CrawlForge) -> Transform Data -> Output (Slack/Email/DB)
No custom n8n nodes to install. No npm packages. Just standard HTTP requests.
Step 1: Configure the HTTP Request Node
Open n8n and create a new workflow. Add an HTTP Request node and configure it:
Set the Authentication to "Header Auth" with:
- Name: Authorization
- Value: Bearer cf_live_your_api_key_here
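Then point the node at one of the tool endpoints. A minimal sketch of the node settings, using the extract_content endpoint from the base URL above (the url body field and the example target are illustrative assumptions -- check the tool reference for exact parameter names):

```
Method: POST
URL: https://crawlforge.dev/api/v1/tools/extract_content
Body Content Type: JSON
Body:
{
  "url": "https://example.com"
}
```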
Click Execute Node to test. You should see clean, extracted content from the target page in the output panel. This call costs 2 credits.
Reusable Credential Setup
To avoid repeating your API key across nodes, create a Header Auth credential in n8n:
- Go to Settings > Credentials > Add Credential
- Select Header Auth
- Set Name to Authorization, Value to Bearer cf_live_xxxxx
- Save as "CrawlForge API"
Now every HTTP Request node can reference this credential.
Step 2: Build a Price Monitoring Workflow
Here is a practical workflow that monitors competitor pricing pages daily and sends a Slack alert when prices change.
Workflow Architecture
Schedule Trigger (Daily 9am)
-> HTTP Request: CrawlForge scrape_structured
-> IF Node: Compare with yesterday's data
-> Slack Node: Send alert if changed
-> Google Sheets Node: Log all prices
The Scraping Node
Configure the HTTP Request node to use CrawlForge's scrape_structured tool:
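As an illustration, the JSON body might look like the following (the selectors parameter and its shape are assumptions about the tool's schema -- adjust field names to match the CrawlForge documentation, and the CSS selectors to match the target page):

```json
{
  "url": "https://competitor.com/pricing",
  "selectors": {
    "planName": ".pricing-card h3",
    "price": ".pricing-card .price"
  }
}
```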
This call costs 2 credits. Running daily for 30 days = 60 credits per month for one competitor. Monitor 10 competitors for just 600 credits/month -- well within the Hobby plan.
The Comparison Logic
Use n8n's IF node to compare today's prices with the previous run. The Code node can store and diff values:
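A minimal sketch of the diff logic. The helper itself is plain JavaScript; the n8n-specific parts (reading the previous run's prices from workflow static data via $getWorkflowStaticData) are shown only in comments, since they run inside the Code node context:

```javascript
// Compare two { planName: price } objects and collect the plans whose
// price changed between runs.
function diffPrices(previous, current) {
  const changes = [];
  for (const [plan, price] of Object.entries(current)) {
    if (previous[plan] !== undefined && previous[plan] !== price) {
      changes.push({ plan, from: previous[plan], to: price });
    }
  }
  return changes;
}

// Inside an n8n Code node you would typically load yesterday's prices from
// workflow static data and save today's back for the next run, e.g.:
//   const staticData = $getWorkflowStaticData('global');
//   const previous = staticData.prices || {};
//   ...
//   staticData.prices = current;

const previous = { Basic: 9, Pro: 29 };
const current = { Basic: 9, Pro: 35 };
console.log(diffPrices(previous, current)); // one change: Pro went from 29 to 35
```

Pass the returned array to the IF node: a non-empty result means a price changed and the Slack branch should fire.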
Step 3: Add Scheduling and Notifications
Schedule Trigger
Add a Schedule Trigger node at the start of your workflow:
- Trigger Interval: Every day
- Hour: 9 (runs at 9:00 AM)
- Timezone: Your local timezone
Slack Notification
Add a Slack node after the IF node (true branch):
Channel: #competitive-intel
Message: "Price change detected on competitor.com:
Previous: {{ $json.previousPrices }}
Current: {{ $json.currentPrices }}
Changed at: {{ $json.timestamp }}"
Advanced: Multi-Page Crawl Pipeline
For larger scraping jobs, use CrawlForge's batch_scrape tool to process multiple URLs in a single API call.
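A sketch of the request body for five pricing-related pages (the urls array field is an assumption about the tool's parameters, and the URLs are placeholders):

```json
{
  "urls": [
    "https://competitor.com/pricing",
    "https://competitor.com/features",
    "https://competitor.com/blog",
    "https://competitor.com/changelog",
    "https://competitor.com/about"
  ]
}
```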
This processes all 5 URLs in parallel for 5 credits total -- compared to 5 separate extract_content calls at 2 credits each (10 credits). Batching saves 50% on multi-URL jobs.
Use n8n's Split In Batches node to process the results one at a time if needed for downstream nodes.
Credit Cost Breakdown
| Workflow | Tools Used | Credits per Run | Monthly (Daily) |
|---|---|---|---|
| Single page extract | extract_content | 2 | 60 |
| Price monitoring (1 site) | scrape_structured | 2 | 60 |
| Batch scrape (5 URLs) | batch_scrape | 5 | 150 |
| Full site crawl | crawl_deep | 5 | 150 |
| Research pipeline | deep_research | 10 | 300 |
The Free tier (1,000 credits/month) covers about 16 daily single-page scrape workflows (60 credits each per month). The Hobby plan ($19/month, 10,000 credits) handles most production workflows.
Common Errors and Fixes
401 Unauthorized: Your API key is missing or invalid. Check the Authorization header format: Bearer cf_live_xxxxx.
429 Rate Limited: You are sending too many requests per second. Add a Wait node between HTTP Request nodes with a 1-second delay, or use batch_scrape to combine requests.
Empty response body: The target site may require JavaScript rendering. Switch from extract_content to scrape_with_actions (5 credits) for dynamic pages.
Next Steps
You now have a working CrawlForge + n8n pipeline. From here, you can:
- Add error handling with n8n's Error Trigger node
- Store results in PostgreSQL, Airtable, or Google Sheets
- Chain tools -- use search_web to find URLs, then extract_content to process them
- Explore all 18 tools in the CrawlForge documentation
For more integration patterns, check out:
- CrawlForge Quick Start Guide
- 18 Web Scraping Tools in One MCP Server
- Building an AI Research Assistant
Ready to automate your web scraping workflows? Start free with 1,000 credits -- no credit card required.