
How to Add Web Scraping to Claude Desktop in 5 Minutes

CrawlForge Team
Engineering Team
December 20, 2025
8 min read

If you've ever wished Claude could fetch real-time data from the web, search for information, or extract content from websites, you're in the right place. With the Model Context Protocol (MCP), you can give Claude Desktop native web scraping capabilities in just a few minutes.

Why Claude Needs Web Access

Claude is powerful for analysis, writing, and reasoning, but it's trained on data with a knowledge cutoff. Without web access, Claude can't:

  • Research current events or pricing
  • Fetch documentation from external websites
  • Extract data from competitor sites
  • Verify information in real-time
  • Aggregate content from multiple sources

That's where MCP comes in.

What is MCP?

The Model Context Protocol (MCP) is Anthropic's open standard for connecting AI assistants like Claude to external tools and data sources. Think of it as a plugin system for Claude Desktop.

Instead of Claude being limited to its training data, MCP servers can:

  • Fetch live data from APIs and websites
  • Execute actions like web scraping, database queries, or file operations
  • Provide tools that Claude can call intelligently based on your prompts

CrawlForge MCP is a specialized MCP server that gives Claude 18 powerful web scraping tools, from basic URL fetching to AI-powered research.

Prerequisites

Before we begin, make sure you have:

  • Claude Desktop installed (download here)
  • Node.js 18+ installed (nodejs.org — version check below)
  • A free CrawlForge account with 1,000 credits (signup here)

That's it. No coding required.
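
Not sure which Node.js version you have? Check from a terminal:

node --version

Any output of v18.0.0 or higher is fine.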

Step 1: Get Your API Key

First, we need an API key to authenticate requests to CrawlForge:

  1. Go to crawlforge.dev and sign up for a free account
  2. You'll get 1,000 free credits to start (no credit card required)
  3. Navigate to Dashboard → API Keys
  4. Click "Create API Key"
  5. Give it a name (e.g., "Claude Desktop")
  6. Copy the API key (it starts with cf_live_)

⚠️ Important: Save this key somewhere safe. You'll only see it once.

Step 2: Configure Claude Desktop

Now we'll add CrawlForge to Claude's MCP configuration file.

Find Your Config File

The location depends on your operating system:

macOS:

~/Library/Application Support/Claude/claude_desktop_config.json

Windows:

%APPDATA%\Claude\claude_desktop_config.json

Linux:

~/.config/Claude/claude_desktop_config.json

If the file doesn't exist yet, create it.

Add the CrawlForge MCP Server

Open the file in your text editor and add this configuration (the package name crawlforge-mcp-server and the CRAWLFORGE_API_KEY variable follow the usual npx-based MCP server pattern — double-check both against the CrawlForge docs):

{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["-y", "crawlforge-mcp-server"],
      "env": {
        "CRAWLFORGE_API_KEY": "cf_live_YOUR_API_KEY_HERE"
      }
    }
  }
}

Replace cf_live_YOUR_API_KEY_HERE with the API key you copied in Step 1.

If you already have other MCP servers configured, just add the "crawlforge" entry to the existing "mcpServers" object.
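
For instance, if you already run the official filesystem server, the merged file would look something like this (the filesystem entry is purely illustrative):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    },
    "crawlforge": {
      "command": "npx",
      "args": ["-y", "crawlforge-mcp-server"],
      "env": {
        "CRAWLFORGE_API_KEY": "cf_live_YOUR_API_KEY_HERE"
      }
    }
  }
}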

Step 3: Restart and Test

  1. Quit Claude Desktop completely (right-click the icon and select "Quit")
  2. Reopen Claude Desktop
  3. You should see a small tools icon (🔧) in the input box, indicating MCP tools are loaded

To test, try this prompt:

Fetch the homepage of example.com and extract its text content

Claude will automatically use the fetch_url tool (1 credit) to grab the page, then extract_text (1 credit) to parse the content. You should see the full text from example.com in the response.

5 Practical Examples

Now that CrawlForge is connected, here's what you can do:

1. Fetch a Web Page

Get me the HTML from https://news.ycombinator.com

Claude uses fetch_url (1 credit) to retrieve the raw HTML.

2. Extract Article Content

Extract the main content from this article: https://example.com/blog/post

Claude uses extract_content (2 credits) to identify and extract just the article text, removing ads and navigation.

3. Get All Links

Find all external links on https://crawlforge.dev

Claude uses extract_links (1 credit) to parse all <a> tags and return the URLs.

4. Analyze Page Metadata

What's the SEO metadata for https://github.com/trending?

Claude uses extract_metadata (1 credit) to pull title tags, meta descriptions, Open Graph data, and more.

5. Research a Topic

Research "Next.js 16 new features" and summarize the top 5 findings with sources

Claude uses deep_research (10 credits) to:

  • Search multiple sources
  • Extract relevant content
  • Verify information
  • Synthesize a summary with citations

This is the most powerful tool for comprehensive research tasks.

Available Tools Overview

CrawlForge gives Claude access to 18 specialized tools organized by credit cost:

Basic Tools (1 credit each)

  • fetch_url - Fetch raw HTML from any URL
  • extract_text - Clean text extraction
  • extract_links - Get all links on a page
  • extract_metadata - SEO and social media tags

Structured Extraction (2 credits)

  • scrape_structured - CSS selector-based extraction (example prompt after this list)
  • extract_content - Main content extraction (articles, blog posts)
  • map_site - Website structure mapping
  • process_document - Extract text from PDFs and documents
  • localization - Geo-targeted scraping (26 countries)
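
For example, scrape_structured lets you spell out exactly which CSS selectors to pull (the URL and selectors here are illustrative):

Scrape https://example.com/products with scrape_structured and return each .product-card's .name and .price as JSON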

Advanced Tools (3-5 credits)

  • monitor_changes (3 credits) - Track website changes over time
  • analyze_content (3 credits) - Sentiment analysis, language detection
  • summarize_content (4 credits) - AI-powered summarization
  • crawl_deep (4 credits) - Multi-page crawling with depth control
  • stealth_mode (5 credits) - Anti-detection browsing
  • scrape_with_actions (5 credits) - Browser automation (clicks, forms)
  • batch_scrape (5 credits) - Process multiple URLs in parallel
  • search_web (5 credits) - Google Custom Search integration

AI Research (10 credits)

  • deep_research - Multi-stage research with source verification and synthesis

Credit Usage

Every tool call deducts credits from your account:

  • Free tier: 1,000 credits (between 100 and 1,000 operations, depending on the tools used)
  • Hobby: 5,000 credits/month for $19
  • Professional: 50,000 credits/month for $99
  • Business: 250,000 credits/month for $399

You can monitor usage in the dashboard.

Tips for Efficient Usage

  1. Start cheap: Use fetch_url (1 credit) instead of search_web (5 credits) when you know the URL
  2. Batch requests: Use batch_scrape for multiple URLs instead of separate calls (example after this list)
  3. Cache results: If you need the same data multiple times, save it in your conversation
  4. Use the right tool: Don't use deep_research (10 credits) for simple lookups
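
To put tip 2 into practice, hand Claude every URL in a single prompt (placeholder URLs):

Use batch_scrape to fetch these three pages and list each one's title: https://example.com/a, https://example.com/b, https://example.com/c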

Troubleshooting

"No tools found" error:

  • Make sure you quit Claude Desktop completely (not just close the window)
  • Check that your API key is valid (test it at crawlforge.dev/dashboard/keys)
  • Verify the JSON syntax in your config file
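
If you suspect a syntax problem, the Node.js you installed earlier can validate the file (macOS path shown; substitute your platform's path from Step 2):

node -e "JSON.parse(require('fs').readFileSync(process.argv[1], 'utf8')); console.log('Config is valid JSON')" "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

If the file is malformed, you'll get a SyntaxError describing what's wrong instead of the success message.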

"Insufficient credits" error:

  • Check your balance at crawlforge.dev/dashboard
  • Upgrade your plan or purchase additional credits

Tool calls failing:

  • Some websites block scraping - try stealth_mode (5 credits) for better success rates
  • Check the website's robots.txt for restrictions (quick check below)
  • Verify the URL is correct and accessible
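
Checking robots.txt takes one command (example.com as a stand-in):

curl -s https://example.com/robots.txt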

What's Next?

Now that you have web scraping enabled in Claude Desktop, you can:

  • Build research workflows that aggregate data from multiple sources
  • Monitor competitor websites for changes
  • Extract structured data for analysis
  • Automate content collection for AI training datasets

For more advanced usage, check out:

  • API Documentation - Use CrawlForge programmatically
  • Tool Guides - Detailed documentation for each tool
  • Integration Examples - LangChain, LlamaIndex, and more

Ready to upgrade? View pricing plans or contact support for custom enterprise solutions.


Try it now: Sign up for free at crawlforge.dev/signup and get 1,000 credits to start.

Tags

Claude Desktop, MCP, Web Scraping, Getting Started

About the Author

CrawlForge Team

Engineering Team

