Anthropic's Claude API supports native tool use -- you define tools with JSON schemas, and Claude decides when to invoke them during a conversation. CrawlForge's 18 web scraping tools are a natural fit: they give Claude the ability to search the web, extract content, scrape structured data, and conduct deep research, all through the standard tool_use API.
This guide walks you through defining CrawlForge tools for the Claude API, handling tool use responses, and building a production-grade research assistant.
Table of Contents
- Prerequisites
- How Claude Tool Use Works with CrawlForge
- Step 1: Define CrawlForge Tool Schemas
- Step 2: Handle the Tool Use Loop
- Step 3: Build a Research Assistant
- Advanced: Streaming with Tool Use
- Credit Cost Breakdown
- Best Practices
- Frequently Asked Questions
- Next Steps
Prerequisites
Get your CrawlForge API key at crawlforge.dev/signup -- 1,000 free credits included. For Claude API access, visit console.anthropic.com.
How Claude Tool Use Works with CrawlForge
Claude's tool use follows a request-response loop:
- You send a message with tool definitions and a user prompt
- Claude responds with either text or a tool_use content block
- You execute the tool (call CrawlForge API) and return the result
- Claude incorporates the result and continues its response
You -> Claude: "What's on the Hacker News front page?"
Claude -> You: tool_use { name: "extract_content", input: { url: "https://news.ycombinator.com" } }
You -> CrawlForge: POST /api/v1/tools/extract_content { url: "..." }
CrawlForge -> You: { content: "..." }
You -> Claude: tool_result { content: "..." }
Claude -> You: "Here are the top stories on Hacker News right now: ..."
Step 1: Define CrawlForge Tool Schemas
Define the tools Claude can use. Each tool needs a name, description, and input_schema (JSON Schema format):
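A sketch of two such definitions in Python. The tool names and credit costs come from this guide's credit table; the exact property names inside each input_schema are illustrative assumptions, not CrawlForge's published schemas.

```python
# Tool definitions for the Claude Messages API. Names and credit costs come
# from the credit table below; the input_schema fields are illustrative.
CRAWLFORGE_TOOLS = [
    {
        "name": "extract_content",
        "description": (
            "Extract the main readable content from a web page. "
            "Use when you already know the URL. Costs 2 credits."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "The page URL to extract"},
            },
            "required": ["url"],
        },
    },
    {
        "name": "search_web",
        "description": (
            "Search the web and return ranked results. Use when you do not "
            "yet know which URL holds the answer. Costs 5 credits."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query"},
                "limit": {"type": "integer", "description": "Max results to return"},
            },
            "required": ["query"],
        },
    },
]
```

Note how each description states what the tool does, when to use it, and its credit cost -- the Best Practices section below explains why that matters.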
Step 2: Handle the Tool Use Loop
The core pattern: send messages to Claude, check if it wants to use a tool, execute the tool via CrawlForge, and return the result.
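A stdlib-only sketch of that pattern, calling both APIs over raw HTTPS. The Claude request shape (x-api-key, anthropic-version headers, tools, stop_reason) is the documented Messages API that the official anthropic Python SDK wraps; the CrawlForge base URL and Bearer auth are assumptions, extrapolated from the POST /api/v1/tools/... endpoint shown in the diagram above.

```python
import json
import os
import urllib.request

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"
CRAWLFORGE_URL = "https://api.crawlforge.dev/api/v1/tools/"  # assumed base URL

def post_json(url, headers, payload):
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def execute_tool(name, tool_input):
    """Call one CrawlForge tool; Bearer auth is an assumption."""
    result = post_json(
        CRAWLFORGE_URL + name,
        {"Content-Type": "application/json",
         "Authorization": "Bearer " + os.environ["CRAWLFORGE_API_KEY"]},
        tool_input,
    )
    return json.dumps(result)

def build_tool_results(content_blocks, executor):
    """Turn every tool_use block in a Claude reply into a tool_result block,
    so turns with multiple tool calls are answered in one user message."""
    return [
        {"type": "tool_result",
         "tool_use_id": block["id"],
         "content": executor(block["name"], block["input"])}
        for block in content_blocks
        if block["type"] == "tool_use"
    ]

def run_agent(prompt, tools, model="claude-sonnet-4-20250514"):
    """Loop until Claude stops requesting tools, then return its final text."""
    headers = {
        "Content-Type": "application/json",
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
    }
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = post_json(ANTHROPIC_URL, headers, {
            "model": model, "max_tokens": 4096,
            "tools": tools, "messages": messages,
        })
        messages.append({"role": "assistant", "content": reply["content"]})
        if reply["stop_reason"] != "tool_use":
            return "".join(b["text"] for b in reply["content"]
                           if b["type"] == "text")
        messages.append({"role": "user",
                         "content": build_tool_results(reply["content"],
                                                       execute_tool)})
```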
This loop handles multi-step tool use automatically. Claude might search, then extract content from a result, then search again -- the loop continues until it produces a final text response.
Step 3: Build a Research Assistant
Wrap the agent in a more structured application:
Advanced: Streaming with Tool Use
For a better user experience, use streaming to show Claude's thinking in real time:
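A minimal stdlib sketch: setting "stream": true makes the Messages API return Server-Sent Events, and text arrives in content_block_delta events carrying text_delta payloads. The SSE parsing here is hand-rolled for illustration; the official anthropic SDK's messages.stream helper does the same work for you, including accumulating any streamed tool_use blocks.

```python
import json
import os
import urllib.request

def iter_sse_data(lines):
    """Parse Server-Sent Event lines, yielding each decoded `data:` payload."""
    for raw in lines:
        line = raw.decode() if isinstance(raw, bytes) else raw
        line = line.strip()
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

def stream_text(payload):
    """Yield text deltas from a streaming Messages API call as they arrive."""
    req = urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps({**payload, "stream": True}).encode(),
        headers={"Content-Type": "application/json",
                 "x-api-key": os.environ["ANTHROPIC_API_KEY"],
                 "anthropic-version": "2023-06-01"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        for event in iter_sse_data(resp):  # HTTPResponse iterates by line
            if (event.get("type") == "content_block_delta"
                    and event["delta"].get("type") == "text_delta"):
                yield event["delta"]["text"]
```

Print each yielded chunk as it arrives for a live typing effect; when the stream ends with a tool_use stop reason, fall back to the non-streaming loop from Step 2 to execute the tool.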
Credit Cost Breakdown
| Workflow | Tools Used | Credits |
|---|---|---|
| Quick answer (1 page) | extract_content | 2 |
| Search + read top result | search_web + extract_content | 7 |
| Thorough research (3 sources) | search_web + 3x extract_content | 11 |
| Structured data extraction | scrape_structured | 2 |
| Page metadata check | extract_metadata | 1 |
| Raw HTML fetch | fetch_url | 1 |
| Deep multi-source report | deep_research | 10 |
The Free tier (1,000 credits/month) supports approximately 500 single-page extractions, 140 search-and-read workflows, or 90 thorough three-source research runs. The Hobby plan ($19/month, 10,000 credits) is ideal for development and light production use.
Best Practices
Write descriptive tool descriptions. Claude uses the description field to decide which tool to call. Include what the tool does, when to use it, and its credit cost. "Extract the main readable content from a web page" is better than "Get content".
Include credit costs in descriptions. When Claude knows that fetch_url costs 1 credit and deep_research costs 10, it naturally chooses the cheaper option for simple tasks.
Handle errors gracefully. Return error messages as tool results rather than throwing exceptions. Claude can adapt its strategy when a tool fails -- for example, trying a different URL or rephrasing a search.
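In practice that means catching the exception and returning a tool_result block with is_error set, a field the Messages API supports. A small sketch (the helper name is this example's own):

```python
def safe_tool_result(block, executor):
    """Run one tool_use block; on failure, hand Claude the error as a
    tool_result flagged with is_error so it can change strategy."""
    try:
        content, is_error = executor(block["name"], block["input"]), False
    except Exception as exc:  # e.g. network failure, 4xx from CrawlForge
        content, is_error = f"Tool {block['name']} failed: {exc}", True
    return {"type": "tool_result", "tool_use_id": block["id"],
            "content": content, "is_error": is_error}
```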
Set max_tokens appropriately. Web content can be long. Set max_tokens to at least 4096 to give Claude room to incorporate tool results into comprehensive responses.
Use system prompts to guide tool use. Tell Claude when to search vs. when to directly access a known URL. This prevents unnecessary search_web calls (5 credits) when a direct extract_content (2 credits) would suffice.
Frequently Asked Questions
Can I use CrawlForge with Claude 3.5 Haiku for lower costs?
Yes. All Claude models that support tool use work with CrawlForge tools. Haiku is cheaper per token but may need more explicit instructions to select the right tool. Claude Sonnet provides the best balance of cost and tool-use accuracy.
How do I handle rate limits?
CrawlForge's API includes rate limiting headers (X-RateLimit-Remaining). If you hit a 429 response, add a retry with exponential backoff. For high-volume use, the Professional plan includes higher rate limits.
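A minimal retry wrapper along those lines, with the sleep function injectable so the backoff schedule is testable (the helper name and default schedule are this sketch's choices):

```python
import time
import urllib.error

def call_with_backoff(fn, retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry fn() on HTTP 429, doubling the wait each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == retries - 1:
                raise  # not a rate limit, or out of retries
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```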
Can Claude call multiple CrawlForge tools in one turn?
Yes. Claude can request multiple tool uses in a single response. The tool use loop in Step 2 handles this -- it iterates over all tool_use blocks and returns all results at once.
What happens when CrawlForge credits run out?
The API returns a 402 Payment Required error. Return this as a tool result so Claude can inform the user. You can check remaining credits via the dashboard or the credits API endpoint.
Next Steps
You now have a Claude-powered application with live web access. Explore further:
- CrawlForge Quick Start for native MCP integration with Claude Code
- All 18 tools explained with credit costs and usage examples
- Building an AI Research Assistant with Claude and CrawlForge
- CrawlForge vs Firecrawl comparison for choosing the right tool
Give Claude access to the live web. Start free with 1,000 credits -- no credit card required.