If you're new to Claude Code (Anthropic's terminal-based AI assistant) and want to give it the power to scrape websites, search the web, and extract content, you're in the right place. This beginner-friendly guide will walk you through everything from installation to your first successful web scrape.
What You'll Learn
By the end of this guide, you'll be able to:
- Install CrawlForge MCP on your computer
- Set up your free API key (1,000 credits included)
- Configure Claude Code to use CrawlForge
- Run your first web scraping commands
Time required: About 5-10 minutes
What is CrawlForge MCP?
Before we dive in, let's understand what we're installing:
- CrawlForge is a professional web scraping service with 18 different tools
- MCP stands for Model Context Protocol - it's how AI assistants like Claude connect to external tools
- Claude Code is Anthropic's command-line AI assistant that runs in your terminal
When you connect CrawlForge to Claude Code, you can simply ask Claude to "fetch this webpage" or "search for information about X" and it will use CrawlForge's tools automatically.
Prerequisites
Before starting, make sure you have:
- **Node.js 18 or higher** installed
  - Check by running: `node --version`
  - Download from nodejs.org if needed
- **Claude Code** installed
  - If you don't have it yet, install with: `npm install -g @anthropic-ai/claude-code`
- A terminal/command prompt open
That's it! Let's get started.
Step 1: Install CrawlForge MCP Server
Open your terminal and run this command:
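Based on the package name referenced later in this guide (`crawlforge-mcp-server`), the global install looks like this (check the CrawlForge docs if the package name differs):

```bash
npm install -g crawlforge-mcp-server
```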
What this does:
- Downloads the CrawlForge MCP server package
- Installs it globally on your computer (the `-g` flag)
- Makes it available from any directory
You should see output like:
added 1 package in 2s
Step 2: Set Up Your API Key
Now we need to get your free API key and configure it. Run:
This interactive setup wizard will:
- **Ask if you have an API key**
  - If yes: Enter it when prompted
  - If no: It will open your browser to create a free account
- **Guide you to get your free API key**
  - Go to crawlforge.dev/signup
  - Create your account (no credit card required)
  - You'll receive 1,000 free credits instantly
  - Copy your API key (it starts with `cf_live_`)
- **Configure your credentials securely**
  - The setup stores your API key at `~/.crawlforge/config.json`
  - This keeps it safe and separate from your projects
- **Verify everything is working**
  - The setup tests your connection to confirm it's ready
Alternative: Manual Configuration
If you prefer to set up manually, you can:
Option A: Environment Variable
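A typical pattern looks like the following. Note that the variable name `CRAWLFORGE_API_KEY` is an assumption here; check the CrawlForge docs for the exact name:

```bash
# Add to your shell profile (~/.bashrc, ~/.zshrc, etc.)
export CRAWLFORGE_API_KEY="cf_live_your_key_here"
```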
Option B: Config File
Create ~/.crawlforge/config.json:
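A minimal sketch is shown below. The `apiKey` field name is an assumption; running the setup wizard writes the canonical format for you:

```json
{
  "apiKey": "cf_live_your_key_here"
}
```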
Step 3: Configure Claude Code
Now we need to tell Claude Code about CrawlForge. There are two ways to do this:
Method 1: Add to Claude Code Settings (Recommended)
Run Claude Code and use the /mcp command to add the server:
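In recent Claude Code versions, MCP servers are typically registered from the shell with `claude mcp add`; a sketch using the package name from this guide (run `claude mcp --help` to confirm the syntax for your version):

```bash
claude mcp add crawlforge -- npx crawlforge-mcp-server
```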
This adds CrawlForge to your Claude Code configuration automatically.
Method 2: Edit Configuration File
You can also manually edit your Claude Code MCP settings. The configuration file location depends on your operating system:
- macOS: `~/.config/claude/mcp_servers.json`
- Windows: `%APPDATA%\claude\mcp_servers.json`
- Linux: `~/.config/claude/mcp_servers.json`
Add this configuration:
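A minimal entry following the common MCP servers schema, assuming the server is launched via `npx` with the package name used elsewhere in this guide:

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["crawlforge-mcp-server"]
    }
  }
}
```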
Step 4: Restart Claude Code
For the changes to take effect, restart Claude Code:
- If Claude Code is running, exit it (type `exit` or press Ctrl+C)
- Start it again:
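Claude Code is started with its CLI entry point:

```bash
claude
```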
You should see CrawlForge listed in the available MCP tools.
Step 5: Your First Web Scrape!
Now for the fun part - let's test it! Start Claude Code and try these commands:
Example 1: Fetch a Simple Webpage
Fetch the content from https://example.com
Claude will use the fetch_url tool (1 credit) and show you the HTML content.
Example 2: Extract Clean Text
Extract the main text content from https://news.ycombinator.com
Claude will use extract_text (1 credit) to get clean, readable text without HTML tags.
Example 3: Get All Links from a Page
List all the links on https://crawlforge.dev
Claude will use extract_links (1 credit) to find and list every link.
Example 4: Search the Web
Search the web for "best practices for web scraping in 2025"
Claude will use search_web (5 credits) to find relevant results.
Example 5: Deep Research
Research the latest developments in AI and summarize your findings
Claude will use deep_research (10 credits) to search multiple sources, verify information, and synthesize a comprehensive answer.
Understanding Credits
CrawlForge uses a credit-based system. Each tool costs a certain number of credits:
| Tool Type | Credits | Examples |
|---|---|---|
| Basic | 1 | fetch_url, extract_text, extract_links, extract_metadata |
| Advanced | 2-3 | scrape_structured, summarize_content, analyze_content |
| Premium | 5-10 | search_web, crawl_deep, batch_scrape, deep_research |
Your free account includes 1,000 credits - that's enough for:
- 1,000 basic page fetches, OR
- 100 deep research queries, OR
- A mix of different operations
Pro tip: Start with basic tools like fetch_url and extract_text to conserve credits while learning!
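To plan how far your 1,000 free credits will stretch, the arithmetic above can be sketched in a few lines of Python. This is purely illustrative budgeting on your side; the costs are copied from the table above and the actual metering happens on CrawlForge's servers:

```python
# Per-call credit costs, copied from the pricing table in this guide.
CREDIT_COSTS = {
    "fetch_url": 1,
    "extract_text": 1,
    "extract_links": 1,
    "extract_metadata": 1,
    "search_web": 5,
    "deep_research": 10,
}

def credits_needed(plan: dict) -> int:
    """Total credits for a planned mix of tool calls."""
    return sum(CREDIT_COSTS[tool] * count for tool, count in plan.items())

def calls_affordable(tool: str, balance: int = 1000) -> int:
    """How many calls of a single tool a credit balance covers."""
    return balance // CREDIT_COSTS[tool]

# The free tier's 1,000 credits cover:
print(calls_affordable("fetch_url"))       # 1000 basic fetches
print(calls_affordable("deep_research"))   # 100 deep research queries

# Or a mixed workload that lands exactly on budget:
mix = {"fetch_url": 200, "search_web": 40, "deep_research": 60}
print(credits_needed(mix))                 # 1000
```

Sketching a workload this way before you start makes it obvious when a single `deep_research`-heavy session will burn through the free tier.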
All 18 Available Tools
Here's what you can do with CrawlForge:
Basic Tools (1 credit each)
- fetch_url - Get raw HTML from any URL
- extract_text - Extract clean text without HTML
- extract_links - Get all links from a page
- extract_metadata - Get SEO metadata, Open Graph tags, etc.
Content Tools (2-3 credits)
- scrape_structured - Extract data using CSS selectors
- extract_content - Smart article/main content extraction
- summarize_content - AI-powered summarization
- analyze_content - Sentiment, language, and topic analysis
Site Tools (2-5 credits)
- map_site - Discover all pages on a website
- crawl_deep - Crawl multiple pages with depth control
- batch_scrape - Process many URLs at once
Research Tools (5-10 credits)
- search_web - Web search via Google/DuckDuckGo
- deep_research - Multi-stage research with verification
Advanced Tools (3-10 credits)
- stealth_mode - Anti-detection browsing
- scrape_with_actions - Browser automation (clicks, forms)
- process_document - Extract text from PDFs
- localization - Geo-targeted scraping
- generate_llms_txt - Create AI interaction guidelines
Troubleshooting
"Command not found: npx"
Make sure Node.js is installed:
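For example:

```bash
node --version   # should print v18.x or higher
npx --version
```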
If not, download from nodejs.org.
"API key not found"
Run the setup again:
Or manually check your config file at ~/.crawlforge/config.json.
"Insufficient credits"
Check your balance at crawlforge.dev/dashboard. You can:
- Upgrade to a paid plan for more credits
- Purchase additional credit packs
Claude Code doesn't see CrawlForge
- Make sure you restarted Claude Code after configuration
- Check that the MCP server is properly configured
- Try running `npx crawlforge-mcp-server` manually to see if there are errors
What's Next?
Now that you have CrawlForge set up, here are some ideas:
- Build a research workflow - Ask Claude to research topics and compile reports
- Monitor competitors - Track changes on competitor websites
- Collect data - Extract product information, prices, or reviews
- Create content - Gather information for blog posts or documentation
Pricing Plans
When you need more credits:
| Plan | Credits/Month | Price | Best For |
|---|---|---|---|
| Free | 1,000 | $0 | Testing & learning |
| Hobby | 5,000 | $19/mo | Personal projects |
| Professional | 50,000 | $99/mo | Production use |
| Business | 250,000 | $399/mo | Heavy usage |
Credits never expire and roll over month-to-month on paid plans!
Need Help?
- Documentation: crawlforge.dev/docs
- GitHub Issues: github.com/mysleekdesigns/crawlforge-mcp/issues
- Email: support@crawlforge.dev
- Discord: Join our community
Ready to start? You've got 1,000 free credits waiting for you at crawlforge.dev/signup. Happy scraping!