Frequently Asked Questions

Get answers to common questions about CrawlForge MCP API, credits, authentication, and troubleshooting.

Getting Started

What is CrawlForge MCP?

CrawlForge MCP is a comprehensive web scraping platform that provides 18 specialized tools for extracting data from websites. It's designed for AI applications and supports the Model Context Protocol (MCP), making it perfect for use with Claude, Cursor, and other AI tools.

Key features include:

  • 18 powerful scraping tools (fetch_url, deep_research, stealth_mode, etc.)
  • Credit-based pricing with predictable costs
  • RESTful API and MCP protocol support
  • Free tier with 1,000 credits/month
  • Enterprise-grade security and reliability

How do I get started with CrawlForge MCP?

Getting started is simple and takes less than 5 minutes:

  1. Sign up: Create a free account at crawlforge.dev/signup
  2. Get your API key: Navigate to Dashboard → Settings and generate your API key
  3. Make your first request: Use the API key to call any of our 18 tools

You'll start with 1,000 free credits - no credit card required!

See our Getting Started Guide for detailed instructions.

What's included in the free tier?

The free tier includes:

  • 1,000 credits per month (refilled on the 1st of each month; unused credits roll over)
  • Access to all 18 tools (same features as paid plans)
  • Rate limit: 2 requests per second
  • Data retention: 30 days of usage logs
  • Support: Community support via Discord and documentation

Perfect for testing, small projects, and prototyping. No credit card required to sign up.

View all plans →

What is the MCP protocol?

The Model Context Protocol (MCP) is an open standard created by Anthropic that enables seamless communication between AI applications and external data sources. It allows AI models like Claude to access web scraping tools directly, without manual API integration.

Benefits of MCP:

  • Use CrawlForge tools directly in Claude Desktop, Cursor, and other MCP-compatible apps
  • No code required - just natural language instructions
  • Automatic tool selection based on your needs
  • Standardized interface across all MCP servers
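
For example, MCP-compatible apps like Claude Desktop register servers through a JSON config entry. The sketch below is illustrative (the crawlforge-mcp package name is an assumption; the guide linked below has the exact command):

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["-y", "crawlforge-mcp"],
      "env": { "CRAWLFORGE_API_KEY": "cf_live_..." }
    }
  }
}
```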

Learn more in our MCP Protocol Guide.

How do I make my first API call?

Here's a simple example using the fetch_url tool (1 credit). The sketch below assumes the default API base URL; check the API Reference for your exact endpoint:

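```bash
# terminal -- endpoint path is an assumption; see the API Reference.
curl -X POST "https://api.crawlforge.dev/v1/tools/fetch_url" \
  -H "X-API-Key: $CRAWLFORGE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```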

Or the equivalent in TypeScript (same assumed endpoint):

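```typescript
// example.ts -- endpoint path and response shape are assumptions;
// confirm both in the API Reference.
const res = await fetch("https://api.crawlforge.dev/v1/tools/fetch_url", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.CRAWLFORGE_API_KEY!, // never hard-code keys
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ url: "https://example.com" }),
});

if (!res.ok) throw new Error(`Request failed: ${res.status}`);
console.log(await res.json());
```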

See our Getting Started Guide for more examples.

API & Authentication

How do I get an API key?

To generate an API key:

  1. Sign in to your account
  2. Navigate to Dashboard → Settings
  3. Scroll to the "API Keys" section
  4. Click "Generate New API Key"
  5. Give your key a descriptive name (e.g., "Production", "Development")
  6. Copy the key immediately - it won't be shown again!
Security tip: Never commit API keys to version control. Use environment variables instead.
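
For example, keep the key in a git-ignored .env file or your shell profile and read it from the environment:

```bash
# .env (git-ignored) or shell profile -- the real key never enters your repo.
export CRAWLFORGE_API_KEY="cf_live_..."
```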

What authentication methods are supported?

CrawlForge MCP supports API key authentication via the X-API-Key header:

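For example (the endpoint path here is illustrative; the header is what matters):

```bash
curl "https://api.crawlforge.dev/v1/tools/extract_text" \
  -H "X-API-Key: $CRAWLFORGE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```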

API key format:

  • cf_test_... - Test environment (development)
  • cf_live_... - Production environment

All requests must be made over HTTPS. HTTP requests will be rejected.

What are the rate limits?

Rate limits vary by plan:

Plan           Rate Limit   Burst Limit
Free           2 req/sec    10 req/min
Hobby          5 req/sec    100 req/min
Professional   20 req/sec   500 req/min
Business       50 req/sec   1000 req/min

When you hit the rate limit, you'll receive a 429 Too Many Requests response.
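
A simple way to stay under the limit is to space out calls client-side. This sketch uses 500ms between requests, matching the Free plan's 2 req/sec:

```typescript
// Spaces sequential calls at least minIntervalMs apart (500 ms = 2 req/sec).
function createThrottle(minIntervalMs = 500) {
  let last = 0;
  return async <T>(fn: () => Promise<T>): Promise<T> => {
    const wait = last + minIntervalMs - Date.now();
    if (wait > 0) await new Promise((r) => setTimeout(r, wait));
    last = Date.now();
    return fn();
  };
}

// Usage: const throttle = createThrottle(); await throttle(() => fetch(...));
```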

How do I handle API errors?

CrawlForge MCP uses standard HTTP status codes:

  • 200 OK: Request succeeded
  • 400 Bad Request: Invalid parameters or missing required fields
  • 401 Unauthorized: Missing or invalid API key
  • 402 Payment Required: Insufficient credits
  • 429 Too Many Requests: Rate limit exceeded
  • 500 Internal Server Error: Server-side error (we'll investigate)

Example error response:

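The exact field names can vary by tool, but a typical error body looks like this:

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please retry after 2 seconds.",
    "status": 429
  }
}
```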

Implement retry logic with exponential backoff for 429 and 500 errors. See our Error Handling Guide.

Can I use CrawlForge from serverless functions?

Yes! CrawlForge MCP works perfectly with serverless functions on Vercel, AWS Lambda, Cloudflare Workers, and more.

Tips for serverless:

  • Set appropriate timeouts (most tools respond in 200-500ms)
  • Use environment variables for API keys
  • Implement connection pooling for high-volume applications
  • Consider using batch_scrape for multiple URLs

Example for Vercel Edge Functions:

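A minimal sketch (the CrawlForge endpoint path is an assumption; set CRAWLFORGE_API_KEY in your Vercel project settings):

```typescript
// api/scrape.ts -- proxy the scrape server-side so the API key never
// reaches the browser.
export const config = { runtime: "edge" };

export default async function handler(req: Request): Promise<Response> {
  const { url } = await req.json();

  // Endpoint path is an assumption; confirm it in the API Reference.
  const res = await fetch("https://api.crawlforge.dev/v1/tools/fetch_url", {
    method: "POST",
    headers: {
      "X-API-Key": process.env.CRAWLFORGE_API_KEY!, // Vercel env var
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url }),
  });

  return new Response(await res.text(), {
    status: res.status,
    headers: { "Content-Type": "application/json" },
  });
}
```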

Credits & Billing

How do credits work?

Credits are the unit of usage in CrawlForge MCP. Each tool costs a specific number of credits per request:

  • 1 credit: Basic tools (fetch_url, extract_text, extract_links, extract_metadata)
  • 2 credits: Structured extraction (scrape_structured, extract_content, map_site, localization, process_document)
  • 3-4 credits: Advanced processing (analyze_content, monitor_changes, summarize_content, crawl_deep)
  • 5 credits: Browser automation (scrape_with_actions, search_web, stealth_mode, batch_scrape)
  • 10 credits: Deep research (multi-source aggregation)

Credits are deducted only on successful requests. Failed requests don't consume credits.

See the full breakdown in our Credit Optimization Guide.

What are the credit costs for each tool?

Credits   Tools
1         fetch_url, extract_text, extract_links, extract_metadata
2         scrape_structured, extract_content, map_site, localization, process_document
3         analyze_content, monitor_changes
4         summarize_content, crawl_deep
5         scrape_with_actions, search_web, stealth_mode, batch_scrape
10        deep_research
Cost optimization tip: Always start with fetch_url (1 credit) when you know the URL. Only use search_web (5 credits) when you need to discover URLs.

When do credits refill?

Credit refills depend on your plan:

  • Free Plan: 1,000 credits refill on the 1st of each month
  • Paid Plans: Credits refill on your billing date (the day you subscribed or upgraded)

Example: If you upgraded to Hobby on January 15th, you'll receive 5,000 credits on the 15th of every month.

Great news: Unused credits roll over to the next month, so you never lose credits you've paid for!

You can check your credit balance and next refill date in your Dashboard.

What happens to unused credits?

Unused credits roll over to the next month! Your remaining balance carries forward when you receive your monthly allocation.

Example:

  • You have the Hobby plan (5,000 credits/month)
  • You used 3,000 credits this month, leaving 2,000 unused
  • On your refill date, you'll have 7,000 credits (2,000 + 5,000)
Pro tip: Credits never expire, so you can build up a balance for larger projects!

How do Stripe payments work?

CrawlForge MCP uses Stripe for secure payment processing:

  1. Subscribe: Click "Upgrade" on the Pricing page
  2. Enter payment details: Stripe handles all payment information securely
  3. Automatic billing: You'll be charged monthly on your subscription date
  4. Instant activation: Credits are added immediately after successful payment

We accept:

  • Credit cards (Visa, Mastercard, American Express)
  • Debit cards
  • Apple Pay & Google Pay
  • Bank transfers (Business plan only)

You can cancel or change your plan anytime from your Dashboard.

Tools & Features

What are the most popular tools?

Based on usage data, the top 5 most popular tools are:

  1. fetch_url (1 credit) - Basic page fetching, fastest and cheapest
  2. extract_text (1 credit) - Clean text extraction without HTML
  3. scrape_structured (2 credits) - Extract specific data using CSS selectors
  4. deep_research (10 credits) - Multi-source research and aggregation
  5. stealth_mode (5 credits) - Bypass anti-bot detection

View all 18 tools in the API Reference.

When should I use batch_scrape vs individual requests?

Use batch_scrape when:

  • You need to scrape 3+ URLs at once
  • You want to parallelize requests for better performance
  • You want faster end-to-end runs (about 50% faster on average, at the same per-URL credit cost)

Use individual requests when:

  • You only need 1-2 URLs
  • You need to process results sequentially
  • You want more granular error handling

Cost comparison:

  • Individual: 10 URLs × 1 credit = 10 credits, ~5 seconds (sequential)
  • Batch: 10 URLs × 1 credit = 10 credits, ~1 second (parallel)
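
As a sketch (the request shape is an assumption; the guide below has the real schema), a batch call looks like this:

```typescript
// One batch_scrape request replaces a loop of individual fetch_url calls.
const res = await fetch("https://api.crawlforge.dev/v1/tools/batch_scrape", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.CRAWLFORGE_API_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    urls: [
      "https://example.com/page-1",
      "https://example.com/page-2",
      "https://example.com/page-3",
    ],
  }),
});
const results = await res.json(); // one entry per URL, fetched in parallel
```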

See our Batch Processing Guide for examples.

When should I use browser automation (scrape_with_actions)?

Use scrape_with_actions when:

  • Content loads via JavaScript (SPAs, React, Vue, Angular apps)
  • You need to interact with the page (click buttons, fill forms, scroll)
  • Content requires authentication (login flows)
  • Pages use infinite scroll or lazy loading

Don't use it when:

  • The page serves static HTML (use fetch_url for 1 credit instead of 5)
  • An API endpoint is available (use fetch_url)
  • You only need basic text extraction
Cost tip: Browser automation costs 5 credits vs 1 credit for static scraping. Always test with fetch_url first!
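
As an illustration only (this action schema is hypothetical; the guide below documents the supported actions), a request might describe page interactions like this:

```typescript
// Hypothetical scrape_with_actions payload: load a SPA, click, scroll, wait.
const payload = {
  url: "https://example.com/app",
  actions: [
    { type: "click", selector: "#load-more" },
    { type: "scroll", direction: "down", times: 3 },
    { type: "wait", ms: 1000 },
  ],
};
```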

Learn more in our Advanced Scraping Guide.

Troubleshooting

Why am I getting 429 "Too Many Requests" errors?

You're hitting your plan's rate limit. This happens when you send too many requests too quickly.

Solutions:

  • Implement retry logic: Wait 1-2 seconds and retry with exponential backoff
  • Use batch_scrape: Batch multiple URLs into a single request
  • Add delays: Space out your requests (e.g., 500ms between calls on Free plan)
  • Upgrade your plan: Higher plans have higher rate limits

Example retry logic:

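```typescript
// retry.ts -- retry 429s and 5xx errors with exponential backoff.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 5,
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, init);
    // Only rate limits and server errors are worth retrying.
    if (res.status !== 429 && res.status < 500) return res;
    if (attempt === maxRetries) return res;
    // Back off: 1 s, 2 s, 4 s, ... capped at 30 s.
    const delay = Math.min(1000 * 2 ** attempt, 30_000);
    await new Promise((r) => setTimeout(r, delay));
  }
  throw new Error("unreachable");
}
```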

Why can't I connect to the API?

Common connection issues and fixes:

  1. Invalid API key (401 error):
    • Verify your API key is correct (check for typos)
    • Ensure you're using X-API-Key header (not Authorization)
    • Regenerate your API key if needed
  2. CORS errors (browser):
    • API calls from browsers are not supported (security risk)
    • Make API calls from your backend/serverless functions instead
    • Never expose API keys in client-side code
  3. SSL/TLS errors:
    • Ensure you're using https:// not http://
    • Update your CA certificate bundle if you're running in an outdated environment
  4. Network timeouts:
    • Check your firewall/proxy settings
    • Increase request timeout (most tools respond in <500ms)

Still having issues? Check our status page or contact support.

Still have questions?

We're here to help! Get support through multiple channels:

  • Documentation: Comprehensive guides and API reference
  • Discord Community: Chat with other developers
  • Email Support: support@crawlforge.dev

New to CrawlForge? Sign up for free and get 1,000 credits to start building.