Tutorials

How to Use CrawlForge with OpenAI Agents SDK

CrawlForge Team
Engineering Team
April 13, 2026
8 min read

OpenAI's Agents SDK provides a production-ready framework for building autonomous AI agents with tool use, handoffs, and guardrails. CrawlForge adds the missing piece: live web access. By connecting CrawlForge's 18 scraping tools to your OpenAI agents, you enable them to search the web, extract structured data, read documentation, and conduct multi-source research -- all within the Agents SDK's orchestration framework.

This guide shows you how to define CrawlForge tools as OpenAI agent functions and build agents that act on real-time web data.

Table of Contents

  • Prerequisites
  • Architecture: CrawlForge + OpenAI Agents
  • Step 1: Create the CrawlForge Tool Functions
  • Step 2: Build a Web Research Agent
  • Step 3: Add Structured Data Extraction
  • Advanced: Multi-Agent Web Pipeline
  • Credit Cost Breakdown
  • Best Practices
  • Next Steps

Prerequisites


Get your CrawlForge API key at crawlforge.dev/signup -- 1,000 free credits included.
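A minimal setup sketch, assuming Node.js 18+ (for built-in fetch) and the TypeScript Agents SDK. The package names `@openai/agents` and `zod` reflect the JS/TS SDK at the time of writing; verify them against the current OpenAI docs.

```shell
# Install the Agents SDK and zod (used for tool parameter schemas).
npm install @openai/agents zod

# API keys -- replace the placeholders with your own values.
export OPENAI_API_KEY="sk-your-openai-key"
export CRAWLFORGE_API_KEY="cf-your-crawlforge-key"
```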

Architecture: CrawlForge + OpenAI Agents

The OpenAI Agents SDK uses a tool pattern similar to the function calling API but with richer orchestration. You define tools as functions with JSON Schema parameters, and the agent decides when and how to call them.

User Query -> OpenAI Agent -> Tool Selection -> CrawlForge API -> Results -> Agent Response

CrawlForge's REST API at https://crawlforge.dev/api/v1/tools/ maps cleanly to the Agents SDK's tool definition format. Each tool becomes a function the agent can invoke.

Step 1: Create the CrawlForge Tool Functions

First, create a reusable CrawlForge client and tool definitions:

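A sketch of the client and tool definitions. The endpoint shape (tool name appended to `/api/v1/tools/`) and the `Authorization: Bearer` header are assumptions based on the base URL above; check the CrawlForge API reference for the exact contract.

```typescript
// Minimal CrawlForge client -- the URL path and auth header are assumptions.
const CRAWLFORGE_BASE = "https://crawlforge.dev/api/v1/tools";

export async function callCrawlForge(
  tool: string,
  params: Record<string, unknown>,
  apiKey: string = process.env.CRAWLFORGE_API_KEY ?? ""
): Promise<unknown> {
  const res = await fetch(`${CRAWLFORGE_BASE}/${tool}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(params),
  });
  if (!res.ok) {
    throw new Error(`CrawlForge ${tool} failed: ${res.status}`);
  }
  return res.json();
}

// JSON Schema descriptions of the three tools used in this guide.
export const crawlforgeToolSpecs = [
  {
    name: "search_web",
    description: "Search the web and return ranked results (5 credits).",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search query" },
      },
      required: ["query"],
    },
  },
  {
    name: "extract_content",
    description: "Extract clean article text from a URL (2 credits).",
    parameters: {
      type: "object",
      properties: {
        url: { type: "string", description: "Page URL to extract" },
      },
      required: ["url"],
    },
  },
  {
    name: "scrape_structured",
    description: "Extract named fields from a page via CSS selectors (2 credits).",
    parameters: {
      type: "object",
      properties: {
        url: { type: "string" },
        selectors: {
          type: "object",
          description: "Map of field name to CSS selector",
        },
      },
      required: ["url", "selectors"],
    },
  },
];
```

Keeping the specs as plain JSON Schema makes them reusable: the Agents SDK's `tool()` helper (shown in the next step) wraps the same shapes with a zod schema and an `execute` function.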

Step 2: Build a Web Research Agent

Create an agent that uses CrawlForge tools to research topics:


The agent will autonomously:

  1. Call search_web to find relevant articles (5 credits)
  2. Call extract_content on the top results (2 credits each)
  3. Synthesize a cited summary

Step 3: Add Structured Data Extraction

Build a data extraction agent that pulls specific fields from web pages:


Advanced: Multi-Agent Web Pipeline

The Agents SDK supports handoffs between specialized agents. Build a pipeline where a researcher finds sources and hands off to an analyst:


This pipeline separates concerns: the collector gathers data (using CrawlForge credits), and the analyst processes it (no credits needed). Total cost depends on sources fetched -- typically 15-25 credits for a 3-source comparison.

Credit Cost Breakdown

| Agent Workflow | Tools Used | Estimated Credits |
| --- | --- | --- |
| Single search + summary | search_web + extract_content | 7 |
| 3-source research | search_web + 3x extract_content | 11 |
| Structured extraction (1 page) | scrape_structured | 2 |
| Multi-agent comparison (3 sources) | search_web + 3x extract_content + scrape_structured | 15 |
| Deep research report | deep_research | 10 |

The CrawlForge Free tier (1,000 credits) supports roughly 90 search-and-extract workflows per month. The Professional plan ($99/month, 50,000 credits) handles production agent workloads.

Best Practices

Choose the cheapest tool first. The agent's instructions should guide it toward fetch_url (1 credit) when full HTML is acceptable, and extract_content (2 credits) only when clean text is needed. Reserve deep_research (10 credits) for complex multi-source queries.

Limit agent steps. Set a maximum number of tool invocations to control costs. Most research tasks complete in 3-5 tool calls.

Use handoffs for complex pipelines. Rather than one agent with many tools, split responsibilities. The collector agent handles web access (credits), while the analyst agent processes data (no credits).

Cache tool outputs. If your agent repeatedly accesses the same URL, implement response caching to avoid duplicate credit charges.
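One lightweight approach to caching is memoizing the tool's `execute` function in memory. A sketch; production code would add TTL-based expiry and a size limit, and note that `JSON.stringify` keys are sensitive to property order.

```typescript
// In-memory cache keyed by tool name + serialized params.
type ToolFn = (params: Record<string, unknown>) => Promise<string>;

export function withCache(toolName: string, fn: ToolFn): ToolFn {
  const cache = new Map<string, string>();
  return async (params) => {
    const key = `${toolName}:${JSON.stringify(params)}`;
    const hit = cache.get(key);
    if (hit !== undefined) return hit; // no credit charged on a hit
    const result = await fn(params);
    cache.set(key, result);
    return result;
  };
}

// Usage: wrap the execute function before registering the tool.
// const cachedExtract = withCache("extract_content", rawExtract);
```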

Monitor usage. Check your credit consumption in the CrawlForge dashboard and set alerts for unexpected spikes.

Next Steps

You now have OpenAI agents that can access live web data. Continue building:

  • 18 CrawlForge tools overview -- register more tools for your agents
  • Stealth mode scraping -- access sites with anti-bot protection
  • Deep research automation -- use the 10-credit deep_research tool for comprehensive reports
  • CrawlForge Quick Start -- full MCP setup guide

Give your OpenAI agents eyes on the web. Start free with 1,000 credits -- no credit card required.

Tags

openai, agents-sdk, integration, web-scraping, ai-agents, tutorial, function-calling

About the Author

CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.


Related Articles

  • How to Use CrawlForge with LangGraph Agents -- Build stateful web scraping agents with LangGraph and CrawlForge. TypeScript guide covering graph nodes, state management, and conditional scraping flows. (Apr 24, 8 min read)
  • How to Use CrawlForge with Mastra AI Agents -- Build AI agents with web scraping capabilities using Mastra and CrawlForge. TypeScript setup guide with tool integration, workflows, and agent examples. (Apr 21, 7 min read)
  • How to Use CrawlForge with Make and Zapier -- Connect CrawlForge to Make (Integromat) and Zapier for automated web scraping. No-code setup with HTTP modules, webhooks, and workflow examples. (Apr 23, 8 min read)
