Frequently Asked Questions
Get answers to common questions about CrawlForge MCP API, credits, authentication, and troubleshooting.
Getting Started
What is CrawlForge MCP?
CrawlForge MCP is a comprehensive web scraping platform that provides 18 specialized tools for extracting data from websites. It's designed for AI applications and supports the Model Context Protocol (MCP), making it perfect for use with Claude, Cursor, and other AI tools.
Key features include:
- 18 powerful scraping tools (fetch_url, deep_research, stealth_mode, etc.)
- Credit-based pricing with predictable costs
- RESTful API and MCP protocol support
- Free tier with 1,000 credits/month
- Enterprise-grade security and reliability
How do I get started with CrawlForge MCP?
Getting started is simple and takes less than 5 minutes:
- Sign up: Create a free account at crawlforge.dev/signup
- Get your API key: Navigate to Dashboard → Settings and generate your API key
- Make your first request: Use the API key to call any of our 18 tools
You'll start with 1,000 free credits - no credit card required!
See our Getting Started Guide for detailed instructions.
What's included in the free tier?
The free tier includes:
- 1,000 credits per month (resets on the 1st of each month)
- Access to all 18 tools (same features as paid plans)
- Rate limit: 2 requests per second
- Data retention: 30 days of usage logs
- Support: Community support via Discord and documentation
Perfect for testing, small projects, and prototyping. No credit card required to sign up.
What is the MCP protocol?
The Model Context Protocol (MCP) is an open standard created by Anthropic for enabling seamless communication between AI applications and external data sources. It allows AI models like Claude to directly access web scraping tools without manual API integration.
Benefits of MCP:
- Use CrawlForge tools directly in Claude Desktop, Cursor, and other MCP-compatible apps
- No code required - just natural language instructions
- Automatic tool selection based on your needs
- Standardized interface across all MCP servers
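As a sketch of what the setup can look like in Claude Desktop's claude_desktop_config.json (the server package name below is hypothetical; see the MCP Protocol Guide for the actual install command):

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["-y", "crawlforge-mcp-server"],
      "env": { "CRAWLFORGE_API_KEY": "cf_live_your_key_here" }
    }
  }
}
```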
Learn more in our MCP Protocol Guide.
How do I make my first API call?
Here's a simple example using the fetch_url tool (1 credit):
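The endpoint path and request body below are assumptions based on the tool name; check the API Reference for the exact URL and parameters.

```bash
# NOTE: endpoint path and body shape are illustrative assumptions;
# see the API Reference for the exact contract.
curl -X POST "https://api.crawlforge.dev/v1/tools/fetch_url" \
  -H "X-API-Key: cf_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```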
Or using TypeScript:
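(Same assumed endpoint as the cURL sketch above.)

```typescript
// NOTE: endpoint path and response shape are illustrative assumptions;
// see the API Reference for the exact contract.
const response = await fetch("https://api.crawlforge.dev/v1/tools/fetch_url", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.CRAWLFORGE_API_KEY!, // never hardcode keys
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ url: "https://example.com" }),
});

if (!response.ok) {
  throw new Error(`CrawlForge request failed: ${response.status}`);
}

const data = await response.json();
console.log(data);
```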
See our Getting Started Guide for more examples.
API & Authentication
How do I get an API key?
To generate an API key:
- Sign in to your account
- Navigate to Dashboard → Settings
- Scroll to the "API Keys" section
- Click "Generate New API Key"
- Give your key a descriptive name (e.g., "Production", "Development")
- Copy the key immediately - it won't be shown again!
What authentication methods are supported?
CrawlForge MCP supports API key authentication via the X-API-Key header.
API key formats:
- cf_test_... - Test environment (development)
- cf_live_... - Production environment
All requests must be made over HTTPS. HTTP requests will be rejected.
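A common pattern is to keep both keys in environment variables and pick one per environment. A minimal sketch (the variable names below are your own choice, not required by CrawlForge):

```typescript
// Pick the test key in development and the live key in production.
// Environment variable names here are illustrative.
const apiKey =
  process.env.NODE_ENV === "production"
    ? process.env.CRAWLFORGE_LIVE_KEY! // cf_live_...
    : process.env.CRAWLFORGE_TEST_KEY!; // cf_test_...

const headers = { "X-API-Key": apiKey, "Content-Type": "application/json" };
```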
What are the rate limits?
Rate limits vary by plan:
| Plan | Rate Limit | Burst Limit |
|---|---|---|
| Free | 2 req/sec | 10 req/min |
| Hobby | 5 req/sec | 100 req/min |
| Professional | 20 req/sec | 500 req/min |
| Business | 50 req/sec | 1000 req/min |
When you hit the rate limit, you'll receive a 429 Too Many Requests response.
How do I handle API errors?
CrawlForge MCP uses standard HTTP status codes:
- 200 OK: Request succeeded
- 400 Bad Request: Invalid parameters or missing required fields
- 401 Unauthorized: Missing or invalid API key
- 402 Payment Required: Insufficient credits
- 429 Too Many Requests: Rate limit exceeded
- 500 Internal Server Error: Server-side error (we'll investigate)
Example error response:
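(The field names below are illustrative; a 402 Payment Required body plausibly looks like this, but the actual envelope may differ.)

```json
{
  "error": {
    "code": "insufficient_credits",
    "message": "Insufficient credits to complete this request.",
    "status": 402
  }
}
```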
Implement retry logic with exponential backoff for 429 and 500 errors. See our Error Handling Guide.
Can I use CrawlForge from serverless functions?
Yes! CrawlForge MCP works perfectly with serverless functions on Vercel, AWS Lambda, Cloudflare Workers, and more.
Tips for serverless:
- Set appropriate timeouts (most tools respond in 200-500ms)
- Use environment variables for API keys
- Implement connection pooling for high-volume applications
- Consider using batch_scrape for multiple URLs
Example for Vercel Edge Functions:
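A minimal sketch for a Next.js App Router edge route; the CrawlForge endpoint path is an assumption, so confirm it against the API Reference:

```typescript
// app/api/scrape/route.ts
export const runtime = "edge";

export async function POST(request: Request): Promise<Response> {
  const { url } = await request.json();

  // Assumed endpoint path; see the API Reference for the exact URL.
  const res = await fetch("https://api.crawlforge.dev/v1/tools/fetch_url", {
    method: "POST",
    headers: {
      "X-API-Key": process.env.CRAWLFORGE_API_KEY!, // set in Vercel env vars
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url }),
  });

  // Pass the upstream response straight through to the client.
  return new Response(await res.text(), { status: res.status });
}
```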
Credits & Billing
How do credits work?
Credits are the unit of usage in CrawlForge MCP. Each tool costs a specific number of credits per request:
- 1 credit: Basic tools (fetch_url, extract_text, extract_links, extract_metadata)
- 2 credits: Structured extraction (scrape_structured, extract_content, map_site)
- 3-4 credits: Advanced processing (analyze_content, monitor_changes, summarize_content)
- 5 credits: Browser automation (scrape_with_actions, search_web, stealth_mode, batch_scrape)
- 10 credits: Deep research (multi-source aggregation)
Credits are deducted only on successful requests. Failed requests don't consume credits.
See the full breakdown in our Credit Optimization Guide.
What are the credit costs for each tool?
| Credits | Tools |
|---|---|
| 1 | fetch_url, extract_text, extract_links, extract_metadata |
| 2 | scrape_structured, extract_content, map_site, localization, process_document |
| 3 | analyze_content, monitor_changes |
| 4 | summarize_content, crawl_deep |
| 5 | scrape_with_actions, search_web, stealth_mode, batch_scrape |
| 10 | deep_research |
When do credits refill?
Credit refills depend on your plan:
- Free Plan: 1,000 credits refill on the 1st of each month
- Paid Plans: Credits refill on your billing date (the day you subscribed or upgraded)
Example: If you upgraded to Hobby on January 15th, you'll receive 5,000 credits on the 15th of every month.
Great news: Unused credits roll over to the next month, so you never lose credits you've paid for!
You can check your credit balance and next refill date in your Dashboard.
What happens to unused credits?
Unused credits roll over to the next month! Your remaining balance carries forward when you receive your monthly allocation.
Example:
- You have the Hobby plan (5,000 credits/month)
- You used 3,000 credits this month, leaving 2,000 unused
- On your refill date, you'll have 7,000 credits (2,000 + 5,000)
How do Stripe payments work?
CrawlForge MCP uses Stripe for secure payment processing:
- Subscribe: Click "Upgrade" on the Pricing page
- Enter payment details: Stripe handles all payment information securely
- Automatic billing: You'll be charged monthly on your subscription date
- Instant activation: Credits are added immediately after successful payment
We accept:
- Credit cards (Visa, Mastercard, American Express)
- Debit cards
- Apple Pay & Google Pay
- Bank transfers (Business plan only)
You can cancel or change your plan anytime from your Dashboard.
Tools & Features
What are the most popular tools?
Based on usage data, the top 5 most popular tools are:
- fetch_url (1 credit) - Basic page fetching, fastest and cheapest
- extract_text (1 credit) - Clean text extraction without HTML
- scrape_structured (2 credits) - Extract specific data using CSS selectors
- deep_research (10 credits) - Multi-source research and aggregation
- stealth_mode (5 credits) - Bypass anti-bot detection
View all 18 tools in the API Reference.
When should I use batch_scrape vs individual requests?
Use batch_scrape when:
- You need to scrape 3+ URLs at once
- You want to parallelize requests for better performance
- You're willing to trade credits for speed (50% faster on average)
Use individual requests when:
- You only need 1-2 URLs
- You need to process results sequentially
- You want more granular error handling
Cost comparison:
- Individual: 10 URLs × 1 credit = 10 credits, ~5 seconds (sequential)
- Batch: 10 URLs × 1 credit = 10 credits, ~1 second (parallel)
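As a sketch, assuming the same endpoint convention as the examples above and a urls array parameter (both unverified; the Batch Processing Guide has the real request shape):

```typescript
// Assumed endpoint and body shape, for illustration only.
const res = await fetch("https://api.crawlforge.dev/v1/tools/batch_scrape", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.CRAWLFORGE_API_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    urls: [
      "https://example.com/a",
      "https://example.com/b",
      "https://example.com/c",
    ],
  }),
});
const results = await res.json();
```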
See our Batch Processing Guide for examples.
When should I use browser automation (scrape_with_actions)?
Use scrape_with_actions when:
- Content loads via JavaScript (SPAs, React, Vue, Angular apps)
- You need to interact with the page (click buttons, fill forms, scroll)
- Content requires authentication (login flows)
- Pages use infinite scroll or lazy loading
Don't use it when:
- The page serves static HTML (use fetch_url for 1 credit instead of 5)
- An API endpoint is available (use fetch_url)
- You only need basic text extraction
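For a feel of what such a request can look like, here is a purely illustrative sketch (the actions schema and endpoint path below are hypothetical; the API Reference has the real parameters):

```typescript
// Hypothetical action schema, for illustration only.
const res = await fetch(
  "https://api.crawlforge.dev/v1/tools/scrape_with_actions",
  {
    method: "POST",
    headers: {
      "X-API-Key": process.env.CRAWLFORGE_API_KEY!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      url: "https://example.com/app",
      actions: [
        { type: "click", selector: "#load-more" },
        { type: "scroll", direction: "down" },
      ],
    }),
  },
);
```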
Learn more in our Advanced Scraping Guide.
Troubleshooting
Why am I getting 429 "Too Many Requests" errors?
You're hitting your plan's rate limit. This happens when you send too many requests too quickly.
Solutions:
- Implement retry logic: Wait 1-2 seconds and retry with exponential backoff
- Use batch_scrape: Batch multiple URLs into a single request
- Add delays: Space out your requests (e.g., 500ms between calls on Free plan)
- Upgrade your plan: Higher plans have higher rate limits
Example retry logic:
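A minimal TypeScript sketch:

```typescript
// Retry with exponential backoff on 429 (rate limit) and 500 (server error).
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    // Return immediately on success or non-retryable statuses.
    if (res.status !== 429 && res.status !== 500) return res;
    if (attempt >= maxRetries) return res;
    // Back off: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
}
```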
Why can't I connect to the API?
Common connection issues and fixes:
- Invalid API key (401 error):
  - Verify your API key is correct (check for typos)
  - Ensure you're using the X-API-Key header (not Authorization)
  - Regenerate your API key if needed
- CORS errors (browser):
  - API calls from browsers are not supported (security risk)
  - Make API calls from your backend/serverless functions instead
  - Never expose API keys in client-side code
- SSL/TLS errors:
  - Ensure you're using https:// not http://
  - Update your SSL certificates if using an old environment
- Network timeouts:
  - Check your firewall/proxy settings
  - Increase request timeout (most tools respond in <500ms)
Still having issues? Check our status page or contact support.