Web scraping has never been more critical for AI applications. Whether you're building RAG systems, training models, or creating intelligent agents, you need reliable access to web data. CrawlForge MCP delivers 18 specialized tools in a single package, designed specifically for developers building with Claude and other LLMs.
Why One MCP Server Changes Everything
Traditional scraping solutions force you to cobble together multiple tools:
- A basic HTTP client for simple fetches
- A browser automation framework for JavaScript-heavy sites
- A separate service for search
- Another tool for content extraction
- Yet another for monitoring changes
With CrawlForge, you get one unified API with consistent authentication, pricing, and response formats. Claude can intelligently choose the right tool for each task.
The Complete Tool Reference
Basic Tools (1 Credit Each)
These foundational tools handle the most common scraping tasks efficiently:
fetch_url
The simplest tool - fetches raw HTML from any URL with automatic redirect handling.
Best for: Initial page loads, API endpoints, static content
extract_text
Strips HTML and returns clean, readable text content.
Best for: Content analysis, LLM context, text processing
extract_links
Parses all anchor tags and returns structured link data.
Best for: Site mapping, crawler seeds, SEO analysis
extract_metadata
Pulls SEO metadata, Open Graph tags, Twitter cards, and Schema.org data.
Best for: Content previews, SEO audits, social sharing analysis
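To make the shapes concrete, here's a minimal sketch of what Claude might pass to each basic tool. The argument names are illustrative assumptions, not CrawlForge's documented schema:

```typescript
// Illustrative only: argument names are assumptions, not the documented schema.
// Each basic tool needs little more than a target URL.
const fetchUrlArgs = { url: "https://example.com/blog" };        // fetch_url: raw HTML
const extractTextArgs = { url: "https://example.com/blog" };     // extract_text: clean text
const extractLinksArgs = { url: "https://example.com/blog" };    // extract_links: anchor data
const extractMetadataArgs = { url: "https://example.com/blog" }; // extract_metadata: SEO/OG/Schema.org tags
```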
Structured Extraction Tools (2 Credits Each)
When you need more than raw content:
scrape_structured
Use CSS selectors to extract specific elements into structured JSON.
Best for: E-commerce data, listings, structured pages
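A rough sketch of what a scrape_structured call might look like; the selector keys and parameter names are assumptions, not the documented schema:

```typescript
// Illustrative sketch: parameter names and selector keys are assumptions.
const scrapeStructuredArgs = {
  url: "https://example.com/products",
  selectors: {
    title: ".product-card h2",      // each key becomes a field in the JSON result
    price: ".product-card .price",
    rating: ".product-card .stars",
  },
};
```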
extract_content
Intelligent main content extraction - removes navigation, ads, and boilerplate.
Best for: Articles, blog posts, documentation pages
map_site
Discovers and maps website structure, finding all accessible URLs.
Best for: Pre-crawl planning, documentation indexing, sitemap generation
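A hypothetical map_site call might bound the discovery like this (parameter names are assumptions):

```typescript
// Illustrative sketch: parameter names (maxDepth, limit) are assumptions.
const mapSiteArgs = {
  url: "https://docs.example.com",
  maxDepth: 3, // how far to follow internal links
  limit: 500,  // cap on discovered URLs
};
```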
process_document
Extracts text from PDFs and other document formats via URL.
Best for: PDF scraping, document processing, academic papers
localization
Geo-targeted scraping with 26+ country proxies, timezone spoofing, and locale headers.
Best for: Price comparison, localized content, geo-restricted sites
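A sketch of a geo-targeted request, with parameter names that are assumptions rather than the documented schema:

```typescript
// Illustrative sketch: parameter names are assumptions.
const localizationArgs = {
  url: "https://example.com/pricing",
  country: "DE",             // route through a German proxy
  locale: "de-DE",           // Accept-Language header
  timezone: "Europe/Berlin", // spoofed browser timezone
};
```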
Advanced Tools (3-5 Credits)
For complex scraping scenarios:
analyze_content (3 Credits)
AI-powered content analysis including sentiment, language detection, and topic extraction.
Best for: Sentiment analysis, content classification, language detection
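Hypothetically, an analyze_content request might look like this (parameter names are assumptions):

```typescript
// Illustrative sketch: parameter names are assumptions.
const analyzeContentArgs = {
  url: "https://example.com/reviews",
  include: ["sentiment", "language", "topics"],
};
```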
stealth_mode (3 Credits)
Anti-detection browsing with fingerprint randomization and human behavior simulation.
Best for: Sites with bot detection, Cloudflare-protected pages
summarize_content (4 Credits)
AI-generated summaries with configurable length and focus.
Best for: Content digests, research summaries, quick overviews
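A sketch of a summarize_content call, again with assumed parameter names:

```typescript
// Illustrative sketch: parameter names are assumptions.
const summarizeContentArgs = {
  url: "https://example.com/whitepaper",
  length: "short",        // configurable summary length
  focus: "pricing model", // optional focus hint
};
```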
crawl_deep (Variable: 1 Credit/Page)
Multi-page crawling with depth control, pattern matching, and content extraction.
Best for: Blog archives, documentation sites, full-site indexing
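A hypothetical crawl_deep configuration could look like the following; the pattern-matching parameter names are assumptions:

```typescript
// Illustrative sketch: parameter names are assumptions. Cost scales with pages crawled.
const crawlDeepArgs = {
  url: "https://blog.example.com",
  maxDepth: 2,
  includePatterns: ["/posts/*"], // only follow matching paths
  excludePatterns: ["/tag/*"],
  extractContent: true,          // run main-content extraction on each page
};
```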
scrape_with_actions (5 Credits)
Browser automation with click, type, scroll, and screenshot capabilities.
Best for: Login-gated content, interactive forms, SPA navigation
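The action vocabulary below is an illustrative assumption, but it shows the kind of sequence Claude can drive:

```typescript
// Illustrative sketch: the action names and fields are assumptions.
const scrapeWithActionsArgs = {
  url: "https://app.example.com/login",
  actions: [
    { type: "type", selector: "#email", text: "user@example.com" },
    { type: "type", selector: "#password", text: "********" },
    { type: "click", selector: "button[type=submit]" },
    { type: "wait", ms: 2000 },
    { type: "screenshot" },
  ],
};
```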
batch_scrape (Variable: 1 Credit/URL)
Process multiple URLs in parallel with a unified response format.
Best for: Bulk data collection, comparison scraping, efficiency
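A minimal batch_scrape sketch, assuming the tool accepts an array of URLs plus a format hint:

```typescript
// Illustrative sketch: parameter names are assumptions. Cost is 1 credit per URL.
const batchScrapeArgs = {
  urls: [
    "https://example.com/item/1",
    "https://example.com/item/2",
    "https://example.com/item/3",
  ],
  format: "text", // one unified response format for all results
};
```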
search_web (5 Credits)
Google Custom Search integration for discovering relevant URLs.
Best for: Research starting points, topic discovery, competitive analysis
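A hypothetical search_web call (parameter names are assumptions):

```typescript
// Illustrative sketch: parameter names are assumptions.
const searchWebArgs = {
  query: "site:docs.example.com rate limiting",
  numResults: 10,
};
```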
track_changes (Variable: 2-5 Credits)
Monitor websites for content changes with configurable sensitivity.
Best for: Competitor monitoring, price tracking, news alerts
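A sketch of a track_changes watch, with assumed parameter names:

```typescript
// Illustrative sketch: parameter names are assumptions.
const trackChangesArgs = {
  url: "https://competitor.example.com/pricing",
  sensitivity: "medium",      // how small a diff counts as a change
  selector: ".pricing-table", // optionally watch only part of the page
};
```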
AI Research Tool (10 Credits)
deep_research
The most powerful tool - multi-stage research with source verification and synthesis.
Returns:
- Synthesized summary
- Key findings with confidence scores
- Verified sources with relevance ranking
- Conflict detection between sources
Best for: Competitive intelligence, market research, technical research, fact-checking
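To picture the output, here is a hypothetical call and a result shape inferred from the description above; both the parameter names and the interface are assumptions:

```typescript
// Illustrative sketch: parameter names and result shape are assumptions
// based on the description above.
const deepResearchArgs = {
  query: "current state of WebGPU adoption in production browsers",
  maxSources: 15,
};

interface DeepResearchResult {
  summary: string;                                // synthesized summary
  findings: { claim: string; confidence: number }[];
  sources: { url: string; relevance: number }[];  // verified, relevance-ranked
  conflicts: string[];                            // disagreements between sources
}
```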
Credit Optimization Tips
- Start cheap: Use fetch_url (1 credit) before trying expensive tools
- Batch when possible: batch_scrape is more efficient than individual calls
- Know your URLs: Don't use search_web (5 credits) when you already have the URL
- Cache results: Same URL = same content; don't re-scrape unnecessarily
- Use the right tool: extract_content (2 credits) beats manual parsing
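As a worked example, here's a back-of-the-envelope budget check for a hypothetical daily workload:

```typescript
// Hypothetical workload assumptions, not a benchmark.
const dailyFetches = 200;     // fetch_url at 1 credit each
const dailyExtractions = 100; // extract_content at 2 credits each
const dailySearches = 5;      // search_web at 5 credits each

const creditsPerDay = dailyFetches * 1 + dailyExtractions * 2 + dailySearches * 5; // 425
const creditsPerMonth = creditsPerDay * 30;                                        // 12,750

console.log(creditsPerMonth); // fits comfortably in the 50,000-credit Professional plan
```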
Pricing Comparison
| Plan | Credits/Month | Price | Cost per Credit |
|---|---|---|---|
| Free | 1,000 | $0 | - |
| Hobby | 5,000 | $19 | $0.0038 |
| Professional | 50,000 | $99 | $0.00198 |
| Business | 250,000 | $399 | $0.00160 |
Get Started
- Sign up free at crawlforge.dev/signup
- Get 1,000 credits instantly (no credit card)
- Add to Claude Desktop in 5 minutes (guide)
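For reference, a Claude Desktop entry in claude_desktop_config.json generally follows the shape below; the package name and environment variable here are placeholders, so use the exact values from the linked guide:

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["-y", "<crawlforge-mcp-package>"],
      "env": { "CRAWLFORGE_API_KEY": "<your-api-key>" }
    }
  }
}
```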
Ready to start? Create your free account at crawlforge.dev and unlock all 18 tools today.