How to Install CrawlForge MCP and Use It in Claude Code: A Beginner's Guide

Tutorials

CrawlForge Team
Engineering Team
January 10, 2026
10 min read
Updated April 14, 2026


If you're new to Claude Code (Anthropic's terminal-based AI assistant) and want to give it the power to scrape websites, search the web, and extract content, you're in the right place. This beginner-friendly guide will walk you through everything from installation to your first successful web scrape.

What You'll Learn

By the end of this guide, you'll be able to:

  • Install CrawlForge MCP on your computer
  • Set up your free API key (1,000 credits included)
  • Configure Claude Code to use CrawlForge
  • Run your first web scraping commands

Time required: About 5-10 minutes

What is CrawlForge MCP?

Before we dive in, let's understand what we're installing:

  • CrawlForge is a professional web scraping service with 20 different tools
  • MCP stands for Model Context Protocol - it's how AI assistants like Claude connect to external tools
  • Claude Code is Anthropic's command-line AI assistant that runs in your terminal

When you connect CrawlForge to Claude Code, you can simply ask Claude to "fetch this webpage" or "search for information about X" and it will use CrawlForge's tools automatically.

Prerequisites

Before starting, make sure you have:

  1. Node.js 18 or higher installed

    • Check by running: node --version
    • Download from nodejs.org if needed
  2. Claude Code installed

    • If you don't have it yet, install with: npm install -g @anthropic-ai/claude-code
  3. A terminal/command prompt open

That's it! Let's get started.

Step 1: Install CrawlForge MCP Server

Open your terminal and run this command:

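The install command (it also appears in the FAQ at the end of this guide) is:

```shell
# Install the CrawlForge MCP server globally so it's on your PATH
npm install -g crawlforge-mcp-server
```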

What this does:

  • Downloads the CrawlForge MCP server package
  • Installs it globally on your computer (the -g flag)
  • Makes it available from any directory

You should see output like:

added 1 package in 2s

Step 2: Set Up Your API Key

Now we need to get your free API key and configure it. Run:

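The setup command is:

```shell
# Launch the interactive setup wizard to configure your API key
npx crawlforge-setup
```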

This interactive setup wizard will:

  1. Ask if you have an API key

    • If yes: Enter it when prompted
    • If no: It will open your browser to create a free account
  2. Guide you to get your free API key

    • Go to crawlforge.dev/signup
    • Create your account (no credit card required)
    • You'll receive 1,000 free credits instantly
    • Copy your API key (it starts with cf_live_)
  3. Configure your credentials securely

    • The setup stores your API key at ~/.crawlforge/config.json
    • This keeps it safe and separate from your projects
  4. Verify everything is working

    • The setup tests your connection to confirm it's ready

Alternative: Manual Configuration

If you prefer to set up manually, you can:

Option A: Environment Variable

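The variable name is `CRAWLFORGE_API_KEY`; the key value below is a placeholder for your own key:

```shell
# Make your API key available to the current shell session
# (add this line to ~/.bashrc or ~/.zshrc to make it permanent)
export CRAWLFORGE_API_KEY="cf_live_your_key_here"
```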

Option B: Config File

Create ~/.crawlforge/config.json:

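A minimal config along these lines should work; note the `apiKey` field name is an assumption, and running `npx crawlforge-setup` will generate the canonical file for you:

```json
{
  "apiKey": "cf_live_your_key_here"
}
```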

Step 3: Configure Claude Code

Now we need to tell Claude Code about CrawlForge. There are two ways to do this:

Method 1: Add to Claude Code Settings (Recommended)

From your terminal, register the server with the `claude mcp add` command (once inside Claude Code, you can run `/mcp` to confirm it's listed):

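Claude Code registers MCP servers with `claude mcp add <name> -- <command>`; a plausible invocation here is (check `claude mcp add --help` for your version's exact syntax):

```shell
# Register CrawlForge as an MCP server named "crawlforge";
# everything after -- is the command Claude Code will launch
claude mcp add crawlforge -- npx crawlforge-mcp-server
```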

This adds CrawlForge to your Claude Code configuration automatically.

Method 2: Edit Configuration File

You can also manually edit your Claude Code MCP settings. The configuration file location depends on your operating system:

macOS:

~/.config/claude/mcp_servers.json

Windows:

%APPDATA%\claude\mcp_servers.json

Linux:

~/.config/claude/mcp_servers.json

Add this configuration:

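A typical entry follows the standard `mcpServers` schema; the exact shape may differ slightly between Claude Code versions, so treat this as a sketch:

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["crawlforge-mcp-server"],
      "env": {
        "CRAWLFORGE_API_KEY": "cf_live_your_key_here"
      }
    }
  }
}
```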

Step 4: Restart Claude Code

For the changes to take effect, restart Claude Code:

  1. If Claude Code is running, exit it (type exit or press Ctrl+C)
  2. Start it again:
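The `claude` command starts a fresh session:

```shell
# Start a new Claude Code session
claude
```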

You should see CrawlForge listed in the available MCP tools.

Step 5: Your First Web Scrape!

Now for the fun part - let's test it! Start Claude Code and try these commands:

Example 1: Fetch a Simple Webpage

Fetch the content from https://example.com

Claude will use the fetch_url tool (1 credit) and show you the HTML content.

Example 2: Extract Clean Text

Extract the main text content from https://news.ycombinator.com

Claude will use extract_text (1 credit) to get clean, readable text without HTML tags.

Example 3: Get All Links from a Page

List all the links on https://crawlforge.dev

Claude will use extract_links (1 credit) to find and list every link.

Example 4: Search the Web

Search the web for "best practices for web scraping in 2026"

Claude will use search_web (5 credits) to find relevant results.

Example 5: Deep Research

Research the latest developments in AI and summarize your findings

Claude will use deep_research (10 credits) to search multiple sources, verify information, and synthesize a comprehensive answer.

Understanding Credits

CrawlForge uses a credit-based system. Each tool costs a certain number of credits:

| Tool Type  | Credits | Examples                                                                     |
|------------|---------|------------------------------------------------------------------------------|
| Basic      | 1       | fetch_url, extract_text, extract_links, extract_metadata                     |
| Structured | 2       | scrape_structured, extract_content, map_site, process_document, localization |
| Analysis   | 3       | track_changes, analyze_content                                               |
| Advanced   | 4       | summarize_content, crawl_deep                                                |
| Premium    | 5-10    | search_web, batch_scrape, stealth_mode, deep_research                        |

Your free account includes 1,000 credits - that's enough for:

  • 1,000 basic page fetches, OR
  • 100 deep research queries, OR
  • A mix of different operations

Pro tip: Start with basic tools like fetch_url and extract_text to conserve credits while learning!

All 20 Available Tools

Here's what you can do with CrawlForge:

Basic Tools (1 credit each)

  • fetch_url - Get raw HTML from any URL
  • extract_text - Extract clean text without HTML
  • extract_links - Get all links from a page
  • extract_metadata - Get SEO metadata, Open Graph tags, etc.

Content Tools (2-3 credits)

  • scrape_structured - Extract data using CSS selectors
  • extract_content - Smart article/main content extraction
  • summarize_content - AI-powered summarization
  • analyze_content - Sentiment, language, and topic analysis

Site Tools (2-5 credits)

  • map_site - Discover all pages on a website
  • crawl_deep - Crawl multiple pages with depth control
  • batch_scrape - Process many URLs at once

Research Tools (5-10 credits)

  • search_web - Web search via Google/DuckDuckGo
  • deep_research - Multi-stage research with verification

Advanced Tools (3-10 credits)

  • stealth_mode - Anti-detection browsing
  • scrape_with_actions - Browser automation (clicks, forms)
  • process_document - Extract text from PDFs
  • localization - Geo-targeted scraping
  • track_changes - Detect content changes on monitored pages

Troubleshooting

"Command not found: npx"

Make sure Node.js is installed:

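Check that Node.js (which bundles npx) is on your PATH:

```shell
# Both commands should print a version number
node --version
npx --version
```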

If not, download from nodejs.org.

"API key not found"

Run the setup again:

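The setup wizard can be re-run at any time:

```shell
# Re-run the interactive setup to re-enter your API key
npx crawlforge-setup
```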

Or manually check your config file at ~/.crawlforge/config.json.

"Insufficient credits"

Check your balance at crawlforge.dev/dashboard. You can:

  • Upgrade to a paid plan for more credits
  • Purchase additional credit packs

Claude Code doesn't see CrawlForge

  1. Make sure you restarted Claude Code after configuration
  2. Check that the MCP server is properly configured
  3. Try running npx crawlforge-mcp-server manually to see if there are errors

What's Next?

Now that you have CrawlForge set up, here are some ideas:

  1. Build a research workflow - Ask Claude to research topics and compile reports
  2. Monitor competitors - Track changes on competitor websites
  3. Collect data - Extract product information, prices, or reviews
  4. Create content - Gather information for blog posts or documentation

Pricing Plans

When you need more credits:

| Plan         | Credits/Month | Price   | Best For           |
|--------------|---------------|---------|--------------------|
| Free         | 1,000         | $0      | Testing & learning |
| Hobby        | 5,000         | $19/mo  | Personal projects  |
| Professional | 50,000        | $99/mo  | Production use     |
| Business     | 250,000       | $399/mo | Heavy usage        |

Credits never expire and roll over month-to-month on paid plans!

Need Help?

  • Documentation: crawlforge.dev/docs
  • GitHub Issues: github.com/mysleekdesigns/crawlforge-mcp/issues
  • Email: support@crawlforge.dev
  • Discord: Join our community

Ready to start? You've got 1,000 free credits waiting for you at crawlforge.dev/signup. Happy scraping!

Tags

Claude Code, MCP, Installation, Beginners, Tutorial, npm

About the Author


CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.


Frequently Asked Questions

How do I install CrawlForge MCP for Claude Code?

Run `npm install -g crawlforge-mcp-server` to install the package globally, then run `npx crawlforge-setup` to configure your API key. The whole installation takes about 5-10 minutes and requires Node.js 18 or higher.

Do I need a credit card to try CrawlForge with Claude Code?

No. Sign up at crawlforge.dev/signup and you receive 1,000 free credits instantly with no credit card required. That is enough to run roughly 1,000 basic fetches or 100 deep_research calls before you decide to upgrade.

Where does CrawlForge store my API key on my machine?

The setup wizard saves your credentials to `~/.crawlforge/config.json`, separate from your project files. You can also set the `CRAWLFORGE_API_KEY` environment variable in your shell profile if you prefer manual configuration.

What do I do if Claude Code does not see CrawlForge after install?

Fully restart Claude Code (do not just reload the window), then run `/mcp` to confirm crawlforge is listed as connected. If it is missing, re-run the `claude mcp add` command from Step 3 and verify your config file syntax is valid JSON.

How many credits does each scrape cost?

Basic tools like fetch_url and extract_text cost 1 credit per call, structured extraction tools cost 2 credits, search_web and stealth_mode cost 5 credits, and deep_research costs 10 credits. The free tier covers about 1,000 simple scrapes or 100 deep research queries.

Related Articles

How to Scrape Websites with Claude Code (2026 Guide) (Tutorials)
Scrape any website from your terminal with Claude Code and CrawlForge MCP. Fetch pages, extract data, bypass anti-bot -- in under 2 minutes.
CrawlForge Team | Apr 14 | 10 min read

How to Scrape Websites in Cursor IDE with CrawlForge MCP (Tutorials)
Turn Cursor IDE into a web scraping workstation. Connect CrawlForge MCP and extract structured data from any site without leaving your editor.
CrawlForge Team | Apr 14 | 9 min read

How to Scrape Websites in Zed AI with CrawlForge MCP (Tutorials)
Add web scraping to Zed AI in 3 minutes. Configure CrawlForge MCP in Zed so your editor can fetch, extract, and research live web data on demand.
CrawlForge Team | Apr 14 | 9 min read


© 2025-2026 CrawlForge. All rights reserved.