How to Use CrawlForge with LangGraph Agents
Tutorials

CrawlForge Team
Engineering Team
April 24, 2026
8 min read

LangGraph is LangChain's framework for building stateful, graph-based AI agents. By integrating CrawlForge tools as graph nodes, you can build agents that make intelligent decisions about what to scrape, when to dig deeper, and how to synthesize web data across multiple steps.

This guide shows you how to build a complete scraping agent with LangGraph and CrawlForge in TypeScript.

Table of Contents

  • What Is LangGraph?
  • Prerequisites
  • Step 1: Project Setup
  • Step 2: Define CrawlForge Tools for LangGraph
  • Step 3: Design the Agent State
  • Step 4: Build Graph Nodes
  • Step 5: Wire the Graph Together
  • Step 6: Run the Agent
  • Credit Cost Reference
  • LangGraph vs Direct LangChain for Scraping
  • Next Steps

What Is LangGraph?

LangGraph is a low-level orchestration framework for building reliable AI agents. Unlike simple chain-based architectures, LangGraph models agent logic as a directed graph where:

  • Nodes represent actions (tool calls, LLM invocations, data processing)
  • Edges define transitions between nodes, including conditional routing
  • State persists across the entire graph execution

This architecture is ideal for scraping agents because web scraping inherently involves decisions: Should I scrape deeper? Is this page blocked? Do I need to switch to stealth mode? LangGraph lets you model these decisions as conditional edges in a graph.

Prerequisites

  • Node.js 18+ and TypeScript 5+
  • A CrawlForge account with an API key (1,000 free credits)
  • Familiarity with LangChain basics

Step 1: Project Setup

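Create a new project and install the dependencies. The package list below is a reasonable baseline for a LangGraph + OpenAI TypeScript agent; adjust it to whatever the current CrawlForge integration requires.

```bash
# Create the project directory and initialize npm.
mkdir crawlforge-langgraph-agent && cd crawlforge-langgraph-agent
npm init -y

# Runtime dependencies: LangGraph, LangChain core, an LLM provider,
# zod for tool schemas, and dotenv for API keys.
npm install @langchain/langgraph @langchain/core @langchain/openai zod dotenv

# Dev dependencies: TypeScript plus tsx for running .ts files directly.
npm install -D typescript tsx @types/node
```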

Create tsconfig.json:

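One plausible configuration for a modern Node ESM project (tweak to taste):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```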

Add your API keys to .env:

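Something like the following, assuming `CRAWLFORGE_API_KEY` and `OPENAI_API_KEY` are the variable names your code reads:

```bash
# .env -- keep this file out of version control
CRAWLFORGE_API_KEY=your_crawlforge_api_key
OPENAI_API_KEY=your_openai_api_key
```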

Step 2: Define CrawlForge Tools for LangGraph

Create typed tool wrappers that LangGraph can invoke:

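A sketch of two wrappers using `tool()` from `@langchain/core/tools`. The REST endpoint and request shape here are illustrative assumptions; consult the CrawlForge API reference for the real ones. It needs the packages from Step 1 installed.

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical base URL -- replace with the real CrawlForge endpoint.
const CRAWLFORGE_API = "https://api.crawlforge.example/v1/tools";

// Generic helper: POST tool arguments to the CrawlForge API.
async function callCrawlForge(name: string, args: Record<string, unknown>) {
  const res = await fetch(`${CRAWLFORGE_API}/${name}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.CRAWLFORGE_API_KEY}`,
    },
    body: JSON.stringify(args),
  });
  if (!res.ok) throw new Error(`CrawlForge ${name} failed: ${res.status}`);
  return res.json();
}

// search_web (5 credits): find candidate pages for a query.
export const searchWeb = tool(
  async ({ query, limit }) =>
    JSON.stringify(await callCrawlForge("search_web", { query, limit })),
  {
    name: "search_web",
    description: "Search the web and return result URLs with snippets.",
    schema: z.object({
      query: z.string().describe("Search query"),
      limit: z.number().default(5).describe("Maximum number of results"),
    }),
  }
);

// extract_content (2 credits): pull clean article content from a URL.
export const extractContent = tool(
  async ({ url }) =>
    JSON.stringify(await callCrawlForge("extract_content", { url })),
  {
    name: "extract_content",
    description: "Extract the main content of a web page as clean text.",
    schema: z.object({ url: z.string().describe("Page URL to extract") }),
  }
);
```

Returning JSON strings keeps the tool outputs directly usable as LLM tool-call results.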

Step 3: Design the Agent State

LangGraph agents maintain state across graph execution. Define a state shape that tracks scraping progress:

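For example, a state shape for a research agent. With `@langchain/langgraph` you would declare this via `Annotation.Root` with reducers; the plain-TypeScript version below shows the same shape and merge semantics explicitly, which makes the reducer logic easy to test.

```typescript
export interface ScrapedPage {
  url: string;
  content: string;
}

export interface AgentState {
  query: string;            // the research question driving the run
  searchResults: string[];  // URLs returned by search_web
  pages: ScrapedPage[];     // content extracted so far
  creditsUsed: number;      // running CrawlForge credit total
  answer: string;           // final synthesized output
}

export const initialState = (query: string): AgentState => ({
  query,
  searchResults: [],
  pages: [],
  creditsUsed: 0,
  answer: "",
});

// Nodes return partial updates. List fields are concatenated and credit
// deltas are summed, mirroring LangGraph reducer behavior.
export function applyUpdate(
  state: AgentState,
  update: Partial<AgentState>
): AgentState {
  return {
    ...state,
    ...update,
    searchResults: update.searchResults
      ? state.searchResults.concat(update.searchResults)
      : state.searchResults,
    pages: update.pages ? state.pages.concat(update.pages) : state.pages,
    creditsUsed: state.creditsUsed + (update.creditsUsed ?? 0),
  };
}
```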

Step 4: Build Graph Nodes

Each node in the graph performs a specific action and updates state:

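A sketch of the search and extract nodes plus a routing predicate for a conditional edge. The tool-calling functions are injected as parameters so the nodes can be tested without network access, and the credit deltas match the cost table later in this post (5 for search_web, 2 per extract_content call).

```typescript
// Minimal state shape (matches the state designed in Step 3).
interface AgentState {
  query: string;
  searchResults: string[];
  pages: { url: string; content: string }[];
  creditsUsed: number;
  answer: string;
}

// Tool-calling helpers are injected so nodes stay testable in isolation.
type SearchFn = (query: string) => Promise<string[]>;
type ExtractFn = (url: string) => Promise<string>;

// Search node: run search_web and record result URLs.
// creditsUsed is a delta; the graph's reducer sums deltas.
export const makeSearchNode =
  (search: SearchFn) =>
  async (state: AgentState): Promise<Partial<AgentState>> => {
    const urls = await search(state.query);
    return { searchResults: urls, creditsUsed: 5 };
  };

// Extract node: pull content from the top results (2 credits each).
export const makeExtractNode =
  (extract: ExtractFn) =>
  async (state: AgentState): Promise<Partial<AgentState>> => {
    const targets = state.searchResults.slice(0, 3);
    const pages = await Promise.all(
      targets.map(async (url) => ({ url, content: await extract(url) }))
    );
    return { pages, creditsUsed: 2 * pages.length };
  };

// Routing predicate for a conditional edge: synthesize once we have
// enough pages or the credit budget is spent.
export function routeAfterExtract(
  state: AgentState,
  budget = 25
): "synthesize" | "extract" {
  return state.pages.length >= 3 || state.creditsUsed >= budget
    ? "synthesize"
    : "extract";
}
```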

Step 5: Wire the Graph Together

Connect nodes with edges and conditional routing:

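A sketch of the wiring using the `StateGraph` and `Annotation` APIs from `@langchain/langgraph` (so it needs the packages from Step 1 installed). It assumes `searchNode`, `extractNode`, and a `synthesizeNode` (the LLM-calling node, left to you) shaped like the node functions from Step 4.

```typescript
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { searchNode, extractNode, synthesizeNode } from "./nodes.js"; // hypothetical module

// Channels with reducers: lists concatenate, credit deltas sum.
const StateAnnotation = Annotation.Root({
  query: Annotation<string>,
  searchResults: Annotation<string[]>({
    reducer: (a, b) => a.concat(b),
    default: () => [],
  }),
  pages: Annotation<{ url: string; content: string }[]>({
    reducer: (a, b) => a.concat(b),
    default: () => [],
  }),
  creditsUsed: Annotation<number>({
    reducer: (a, b) => a + b,
    default: () => 0,
  }),
  answer: Annotation<string>,
});

const graph = new StateGraph(StateAnnotation)
  .addNode("search", searchNode)
  .addNode("extract", extractNode)
  .addNode("synthesize", synthesizeNode)
  .addEdge(START, "search")
  .addEdge("search", "extract")
  // Conditional routing: keep extracting until we have enough pages
  // or the credit budget (25 here) is spent.
  .addConditionalEdges("extract", (state) =>
    state.pages.length >= 3 || state.creditsUsed >= 25
      ? "synthesize"
      : "extract"
  )
  .addEdge("synthesize", END);

export const app = graph.compile();
```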

Step 6: Run the Agent

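A minimal entry point, assuming `app` is the compiled graph from Step 5:

```typescript
import "dotenv/config";
import { app } from "./graph.js"; // hypothetical module exporting the compiled graph

// Invoke the graph with an initial state; LangGraph threads the state
// through every node and returns the final merged state.
const result = await app.invoke({
  query: "Compare the top 3 TypeScript web scraping libraries",
});

console.log(result.answer);
console.log(`Credits used: ${result.creditsUsed}`);
```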

Run it:

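Assuming the entry point lives at `src/agent.ts`:

```bash
npx tsx src/agent.ts
```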

The agent will search the web, discover relevant pages, extract content from the most promising results, and synthesize a comparison -- all while tracking credit usage in the graph state.

Credit Cost Reference

| Credits | Tools | LangGraph Node Role |
| --- | --- | --- |
| 1 | fetch_url, extract_text, extract_links, extract_metadata | Lightweight data-gathering nodes |
| 2 | scrape_structured, extract_content, summarize_content, generate_llms_txt | Extraction and analysis nodes |
| 3 | map_site, process_document, analyze_content, localization | Discovery and processing nodes |
| 5 | search_web, crawl_deep, batch_scrape, scrape_with_actions, stealth_mode | Research and bulk-operation nodes |
| 10 | deep_research | Comprehensive analysis (use as a single-node subgraph) |

Typical LangGraph agent run: 5 (search) + 6 (3 extractions) + 0 (LLM analysis) = 11 credits.

LangGraph vs Direct LangChain for Scraping

| Aspect | LangGraph | Direct LangChain |
| --- | --- | --- |
| State Management | Built-in, typed, persistent | Manual, requires custom code |
| Conditional Logic | First-class conditional edges | If/else in chain functions |
| Credit Tracking | Track in graph state automatically | Manual counter |
| Error Recovery | Route errors to fallback nodes | Try/catch in chain |
| Complexity | Higher initial setup | Simpler for linear workflows |
| Best For | Multi-step research with branching logic | Simple fetch-and-process pipelines |

Use LangGraph when your scraping agent needs to make decisions based on intermediate results. Use direct LangChain (see our LangChain integration guide) when the workflow is linear.

Next Steps

  • LangGraph Documentation -- official LangGraph guides
  • 5 Ways to Use CrawlForge with LangChain -- simpler LangChain patterns
  • Build a Research Assistant -- related agent architecture
  • CrawlForge API Reference -- full tool endpoint documentation

Build intelligent scraping agents today. Sign up for CrawlForge with 1,000 free credits, wire the tools into your LangGraph graph, and let your agent decide what to scrape next.

Tags

langgraph, langchain, ai-agents, mcp, integration, tutorial, typescript, web-scraping

About the Author

CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.

