CrawlForge
Tutorials

How to Use CrawlForge with Anthropic Claude API

CrawlForge Team
Engineering Team
April 15, 2026
9 min read

Anthropic's Claude API supports native tool use -- you define tools with JSON schemas, and Claude decides when to invoke them during a conversation. CrawlForge's 18 web scraping tools are a natural fit: they give Claude the ability to search the web, extract content, scrape structured data, and conduct deep research, all through the standard tool_use API.

This guide walks you through defining CrawlForge tools for the Claude API, handling tool use responses, and building a production-grade research assistant.

Table of Contents

  • Prerequisites
  • How Claude Tool Use Works with CrawlForge
  • Step 1: Define CrawlForge Tool Schemas
  • Step 2: Handle the Tool Use Loop
  • Step 3: Build a Research Assistant
  • Advanced: Streaming with Tool Use
  • Credit Cost Breakdown
  • Best Practices
  • Frequently Asked Questions
  • Next Steps

Prerequisites


Get your CrawlForge API key at crawlforge.dev/signup -- 1,000 free credits included. For Claude API access, visit console.anthropic.com.
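A minimal setup sketch, assuming a Node.js (18+) project and the official @anthropic-ai/sdk package; the environment variable names are illustrative:

```shell
# Install the Anthropic SDK used by the TypeScript steps in this guide
npm install @anthropic-ai/sdk

# Provide both API keys via the environment (names are illustrative)
export ANTHROPIC_API_KEY="sk-ant-..."
export CRAWLFORGE_API_KEY="cf_..."
```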

How Claude Tool Use Works with CrawlForge

Claude's tool use follows a request-response loop:

  1. You send a message with tool definitions and a user prompt
  2. Claude responds with either text or a tool_use content block
  3. You execute the tool (call CrawlForge API) and return the result
  4. Claude incorporates the result and continues its response

You -> Claude:      "What's on the Hacker News front page?"
Claude -> You:      tool_use { name: "extract_content", input: { url: "https://news.ycombinator.com" } }
You -> CrawlForge:  POST /api/v1/tools/extract_content { url: "..." }
CrawlForge -> You:  { content: "..." }
You -> Claude:      tool_result { content: "..." }
Claude -> You:      "Here are the top stories on Hacker News right now: ..."

Step 1: Define CrawlForge Tool Schemas

Define the tools Claude can use. Each tool needs a name, description, and input_schema (JSON Schema format):

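A sketch of three of the 18 tool definitions; tool names and credit costs follow the cost table later in this post, while the exact input fields are illustrative:

```typescript
// Shape of an Anthropic tool definition: a name, a description Claude uses
// to decide when to call it, and a JSON Schema describing the input.
interface ToolDefinition {
  name: string;
  description: string;
  input_schema: {
    type: "object";
    properties: Record<string, unknown>;
    required: string[];
  };
}

// Descriptions mention credit costs so Claude can prefer cheaper tools
// (see Best Practices below).
export const crawlforgeTools: ToolDefinition[] = [
  {
    name: "search_web",
    description:
      "Search the web and return titles, URLs, and snippets for the top results. Costs 5 credits. Use only when you do not already know which page to read.",
    input_schema: {
      type: "object",
      properties: {
        query: { type: "string", description: "The search query" },
      },
      required: ["query"],
    },
  },
  {
    name: "extract_content",
    description:
      "Extract the main readable content from a web page. Costs 2 credits. Use when you already have the URL.",
    input_schema: {
      type: "object",
      properties: {
        url: { type: "string", description: "The page URL" },
      },
      required: ["url"],
    },
  },
  {
    name: "deep_research",
    description:
      "Conduct multi-source research on a topic and return a synthesized report. Costs 10 credits. Reserve for questions that need several sources.",
    input_schema: {
      type: "object",
      properties: {
        topic: { type: "string", description: "The research topic" },
      },
      required: ["topic"],
    },
  },
];
```

Pass this array as the `tools` parameter of `client.messages.create(...)` in the Anthropic SDK.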

Step 2: Handle the Tool Use Loop

The core pattern: send messages to Claude, check if it wants to use a tool, execute the tool via CrawlForge, and return the result.

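A sketch of that loop, assuming Node 18+ (for the global `fetch`) and the Anthropic SDK's `messages.create` response shape; the CrawlForge base URL and auth header are assumptions, and both the client and the tool executor are injected so the loop can be exercised without network access:

```typescript
// Call one CrawlForge tool over HTTP. The endpoint path follows the diagram
// earlier in this post; the base URL and Bearer auth are assumptions.
async function runCrawlforgeTool(
  name: string,
  input: unknown,
  apiKey: string,
): Promise<string> {
  const res = await fetch(`https://api.crawlforge.dev/api/v1/tools/${name}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(input),
  });
  // Surface failures as text so Claude can adapt instead of the loop crashing.
  return res.ok ? await res.text() : `Error: CrawlForge returned ${res.status}`;
}

type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };

// Minimal shape of the Anthropic SDK client that the loop depends on.
interface ClaudeLike {
  messages: {
    create(params: object): Promise<{
      stop_reason: string | null;
      content: ContentBlock[];
    }>;
  };
}

export async function runAgent(
  client: ClaudeLike, // an Anthropic SDK client fits this shape
  tools: object[],
  userPrompt: string,
  executeTool: (name: string, input: unknown) => Promise<string>,
): Promise<string> {
  const messages: object[] = [{ role: "user", content: userPrompt }];

  for (let turn = 0; turn < 10; turn++) { // hard cap on tool round-trips
    const response = await client.messages.create({
      model: "claude-sonnet-4-5", // any tool-use-capable Claude model
      max_tokens: 4096,
      tools,
      messages,
    });

    if (response.stop_reason !== "tool_use") {
      // Final answer: concatenate the text blocks.
      return response.content
        .map((b) => (b.type === "text" ? b.text : ""))
        .join("");
    }

    // Claude may request several tools in one turn: execute every tool_use
    // block and return all results in a single user message.
    messages.push({ role: "assistant", content: response.content });
    const results: object[] = [];
    for (const block of response.content) {
      if (block.type === "tool_use") {
        results.push({
          type: "tool_result",
          tool_use_id: block.id,
          content: await executeTool(block.name, block.input),
        });
      }
    }
    messages.push({ role: "user", content: results });
  }
  throw new Error("Exceeded maximum tool-use turns");
}
```

With the real SDK this wires up as `runAgent(new Anthropic(), tools, "What's on the Hacker News front page?", (n, i) => runCrawlforgeTool(n, i, process.env.CRAWLFORGE_API_KEY!))`.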

This loop handles multi-step tool use automatically. Claude might search, then extract content from a result, then search again -- the loop continues until it produces a final text response.

Step 3: Build a Research Assistant

Wrap the agent in a more structured application:

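One way to sketch such a wrapper; the system prompt text and class shape are illustrative, and the agent function is injected (the Step 2 loop fits there):

```typescript
// System prompt steers tool choice: direct URLs skip search, saving credits.
const SYSTEM_PROMPT = `You are a research assistant with live web access.
- If the user gives a URL, call extract_content directly (2 credits).
- Otherwise call search_web first (5 credits), then read the best results.
- Cite the URLs you used.`;

// Any function that takes a question plus a system prompt and returns an
// answer; the Step 2 agent loop can be adapted to this signature.
type AgentFn = (question: string, system: string) => Promise<string>;

export class ResearchAssistant {
  private history: Array<{ question: string; answer: string }> = [];

  constructor(private agent: AgentFn) {}

  async ask(question: string): Promise<string> {
    const answer = await this.agent(question, SYSTEM_PROMPT);
    this.history.push({ question, answer }); // keep a session transcript
    return answer;
  }

  transcript(): string {
    return this.history
      .map((h) => `Q: ${h.question}\nA: ${h.answer}`)
      .join("\n\n");
  }
}
```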

Advanced: Streaming with Tool Use

For a better user experience, use streaming to show Claude's thinking in real time:

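The Anthropic SDK's `client.messages.stream()` helper emits a `"text"` event for each delta and resolves the complete message via `finalMessage()`. A sketch of one streamed turn, typed against that minimal shape so the stream source can be injected:

```typescript
// Minimal interface matching the SDK's streaming helper: "text" events for
// incremental output, finalMessage() for the completed turn.
interface TextStream {
  on(event: "text", listener: (delta: string) => void): unknown;
  finalMessage(): Promise<{ stop_reason: string | null; content: unknown[] }>;
}

export async function streamTurn(
  startStream: () => TextStream,
  onText: (delta: string) => void,
) {
  const stream = startStream();
  stream.on("text", onText); // show partial text as Claude generates it
  // finalMessage() resolves after the turn ends, including any tool_use
  // blocks, so the Step 2 loop can still execute tools afterwards.
  return stream.finalMessage();
}
```

With the real SDK: `streamTurn(() => client.messages.stream({ model, max_tokens: 4096, tools, messages }), (d) => process.stdout.write(d))`.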

Credit Cost Breakdown

Workflow                       | Tools Used                       | Credits
Quick answer (1 page)          | extract_content                  | 2
Search + read top result       | search_web + extract_content     | 7
Thorough research (3 sources)  | search_web + 3x extract_content  | 11
Structured data extraction     | scrape_structured                | 2
Page metadata check            | extract_metadata                 | 1
Raw HTML fetch                 | fetch_url                        | 1
Deep multi-source report       | deep_research                    | 10

The Free tier (1,000 credits/month) supports approximately 500 single-page extractions or 140 search-and-read workflows. The Hobby plan ($19/month, 10,000 credits) is ideal for development and light production use.
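Using the costs from the table above, a small helper can estimate what a monthly allowance covers (the per-tool numbers are copied straight from the table):

```typescript
// Per-call credit costs, taken from this post's cost table.
const CREDIT_COST: Record<string, number> = {
  fetch_url: 1,
  extract_metadata: 1,
  extract_content: 2,
  scrape_structured: 2,
  search_web: 5,
  deep_research: 10,
};

// Total credits for one workflow, given the tools it calls (repeats allowed).
function workflowCost(tools: string[]): number {
  return tools.reduce((sum, t) => sum + (CREDIT_COST[t] ?? 0), 0);
}

// How many times a workflow fits in a monthly credit allowance.
function workflowsPerMonth(credits: number, tools: string[]): number {
  return Math.floor(credits / workflowCost(tools));
}
```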

Best Practices

Write descriptive tool descriptions. Claude uses the description field to decide which tool to call. Include what the tool does, when to use it, and its credit cost. "Extract the main readable content from a web page" is better than "Get content".

Include credit costs in descriptions. When Claude knows that fetch_url costs 1 credit and deep_research costs 10, it naturally chooses the cheaper option for simple tasks.

Handle errors gracefully. Return error messages as tool results rather than throwing exceptions. Claude can adapt its strategy when a tool fails -- for example, trying a different URL or rephrasing a search.
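A minimal sketch of that pattern: `tool_result` content blocks accept an `is_error` flag, and the wrapper below returns one instead of propagating the exception.

```typescript
// Run a tool call and convert any thrown error into a tool_result payload,
// so Claude sees the failure and can change strategy.
async function safeExecute(
  run: () => Promise<string>,
): Promise<{ content: string; is_error?: boolean }> {
  try {
    return { content: await run() };
  } catch (err) {
    return {
      content: `Tool failed: ${err instanceof Error ? err.message : String(err)}`,
      is_error: true,
    };
  }
}
```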

Set max_tokens appropriately. Web content can be long. Set max_tokens to at least 4096 to give Claude room to incorporate tool results into comprehensive responses.

Use system prompts to guide tool use. Tell Claude when to search vs. when to directly access a known URL. This prevents unnecessary search_web calls (5 credits) when a direct extract_content (2 credits) would suffice.

Frequently Asked Questions

Can I use CrawlForge with Claude 3.5 Haiku for lower costs?

Yes. All Claude models that support tool use work with CrawlForge tools. Haiku is cheaper per token but may need more explicit instructions to select the right tool. Claude Sonnet provides the best balance of cost and tool-use accuracy.

How do I handle rate limits?

CrawlForge's API includes rate limiting headers (X-RateLimit-Remaining). If you hit a 429 response, add a retry with exponential backoff. For high-volume use, the Professional plan includes higher rate limits.
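A sketch of such a retry, with the HTTP call injected so the backoff logic works with any client and can be tested without a network (the delay values are illustrative):

```typescript
// Retry 429 responses with exponential backoff: delay doubles each attempt.
// Returns the last response if retries are exhausted.
async function withBackoff<T extends { status: number }>(
  doFetch: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    const res = await doFetch();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```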

Can Claude call multiple CrawlForge tools in one turn?

Yes. Claude can request multiple tool uses in a single response. The tool use loop in Step 2 handles this -- it iterates over all tool_use blocks and returns all results at once.

What happens when CrawlForge credits run out?

The API returns a 402 Payment Required error. Return this as a tool result so Claude can inform the user. You can check remaining credits via the dashboard or the credits API endpoint.

Next Steps

You now have a Claude-powered application with live web access. Explore further:

  • CrawlForge Quick Start for native MCP integration with Claude Code
  • All 18 tools explained with credit costs and usage examples
  • Building an AI Research Assistant with Claude and CrawlForge
  • CrawlForge vs Firecrawl comparison for choosing the right tool

Give Claude access to the live web. Start free with 1,000 credits -- no credit card required.

Tags

anthropic, claude-api, tool-use, integration, web-scraping, tutorial, typescript

About the Author


CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.


© 2025-2026 CrawlForge. All rights reserved.