CrawlForge MCP API Reference
Complete reference for all 19 CrawlForge MCP tools. Build powerful web scraping applications with our developer-first API.
API Overview
Base URL
https://crawlforge.dev/api/v1
Authentication
All API requests require an API key passed in the `X-API-Key` header:
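For example, with curl (replace `YOUR_API_KEY` with the key from your dashboard; the `/fetch_url` endpoint path and `url` parameter are illustrative, see each tool's page for its actual route and parameters):

```bash
# Pass the API key in the X-API-Key header on every request
curl -X POST https://crawlforge.dev/api/v1/fetch_url \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```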
Get your API key from the dashboard.
Request Format
All tool endpoints accept JSON in the request body:
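For example, a request body for `fetch_url` might look like this (the `url` parameter name is illustrative; see each tool's page for its actual parameters):

```json
{
  "url": "https://example.com"
}
```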
Response Format
All responses follow a standard format:
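A successful response might look like the following (the field names here are illustrative, since the exact shape of `data` varies by tool):

```json
{
  "success": true,
  "data": { "title": "Example Domain" },
  "credits_used": 1
}
```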
HTTP Status Codes
| Code | Description |
|---|---|
| 200 | OK - Request successful |
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API key |
| 402 | Payment Required - Insufficient credits |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error |
Rate Limits
Rate limits apply per API key and vary by plan. Exceeding the limit returns a `429 Too Many Requests` response.

- Free Plan: 2 requests per second
- Hobby Plan: 5 requests per second
- Professional Plan: 20 requests per second
- Business Plan: 50 requests per second
All Tools
Basic Tools (1-2 credits)

- `fetch_url` - Fetch and parse web pages with automatic handling
- `extract_text` - Extract clean text from HTML with intelligent parsing
- `extract_links` - Discover and extract all links from a webpage
- `extract_metadata` - Extract page metadata, OpenGraph, and Twitter Card tags
- `custom_headers` - Make requests with custom HTTP headers and cookies
- `scrape_structured` - Extract structured data using CSS selectors
- `extract_content`
- `map_site`
- `screenshot` - Capture screenshots of web pages (PNG, JPEG, WebP)
- `pdf_extract` - Extract text, tables, and images from PDFs
- `process_document`
- `data_validation` - Validate extracted data against schemas
- `localization`
Advanced Tools (3-6 credits)

- `structured_extract` - AI-assisted structured data extraction
- `form_submit` - Fill and submit web forms automatically
- `proxy_request` - Route requests through proxies and custom configurations
- `monitor_changes` - Monitor web pages for content changes
- `content_analysis` - Analyze content for sentiment, SEO, and quality
- `analyze_content`
- `summarize_content`
- `crawl_deep`
- `rate_limit_bypass` - Intelligent rate limit handling and retry logic
- `stealth_mode` - Bypass anti-bot detection with advanced stealth techniques
- `scrape_with_actions`
- `batch_scrape` - Scrape multiple URLs concurrently with rate limiting
- `search_web`
- `api_discovery` - Discover and document APIs from web applications
Error Handling
All errors follow a standard format with `error.code` and `error.message` fields.
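For example (the specific error code shown here is illustrative; see the Error Reference for actual codes):

```json
{
  "error": {
    "code": "insufficient_credits",
    "message": "Your account does not have enough credits for this request."
  }
}
```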
See the Error Reference for a complete list of error codes and solutions.
Ready to start building?
Pick a tool and explore its documentation