Tutorials

How to Use CrawlForge with Mastra AI Agents

CrawlForge Team
Engineering Team
April 21, 2026
7 min read

Mastra is a TypeScript-first AI agent framework designed for building production-ready AI applications. CrawlForge gives those agents the ability to fetch, extract, and analyze live web data. Together, they let you build agents that can research topics, monitor competitors, and extract structured data from any website.

This guide shows you how to wire CrawlForge tools into Mastra agents with working TypeScript examples.

Table of Contents

  • What Is Mastra?
  • Prerequisites
  • Step 1: Set Up Your Mastra Project
  • Step 2: Create CrawlForge Tool Definitions
  • Step 3: Build a Web Research Agent
  • Step 4: Build a Data Extraction Workflow
  • Step 5: Add Error Handling and Retries
  • Credit Cost Reference
  • Architecture Overview
  • Next Steps

What Is Mastra?

Mastra is the modern TypeScript framework for AI-powered applications and agents. It provides primitives for agent creation, tool integration, workflows, and memory -- all with full type safety. Think of it as the Express.js of AI agents: minimal, composable, and production-oriented.

Mastra agents can use external tools through a standardized tool interface. CrawlForge tools map directly to this interface, giving your agents 18 web scraping capabilities without writing HTTP client code.

Prerequisites

  • Node.js 18+ and TypeScript 5+
  • A CrawlForge account with an API key (1,000 free credits)
  • Basic familiarity with TypeScript and async/await

Step 1: Set Up Your Mastra Project

Create a new Mastra project and install dependencies:

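The original commands did not survive extraction. A typical setup, assuming the create-mastra scaffolder and npm (substitute pnpm or yarn as you prefer):

```shell
# Scaffold a new Mastra project (interactive prompts for components and an LLM provider)
npm create mastra@latest crawlforge-agent
cd crawlforge-agent

# zod is used for the tool input schemas in the steps below
npm install zod
```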

Add your CrawlForge API key to .env:

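The snippet is missing from this page; the examples below assume the key is read from a `CRAWLFORGE_API_KEY` variable (the name is this guide's convention):

```shell
# .env -- keep this file out of version control
CRAWLFORGE_API_KEY=your_api_key_here
```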

Step 2: Create CrawlForge Tool Definitions

Create a tools file that wraps CrawlForge's API as Mastra-compatible tools:


Step 3: Build a Web Research Agent

Create an agent that can search the web and extract content for research tasks:


Run the agent:


Step 4: Build a Data Extraction Workflow

Mastra workflows let you chain tools into deterministic pipelines. Here is a competitive pricing monitor:


Step 5: Add Error Handling and Retries

Production agents need resilient error handling. Here is a pattern for CrawlForge tool calls:

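The original snippet is missing; one common pattern is a generic retry helper with exponential backoff wrapped around each tool's `execute` body (this helper is plain TypeScript, not CrawlForge-specific):

```typescript
// Retry a transient-failure-prone async call with exponential backoff.
// Gives up and rethrows the last error after maxAttempts tries.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts) break;
      // Backoff doubles each attempt: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage inside a tool definition (callCrawlForge as sketched in Step 2):
// execute: async ({ context }) =>
//   withRetry(() => callCrawlForge("fetch_url", { url: context.url })),
```

For production use, consider retrying only transient statuses (429, 5xx, network errors) so that credit-exhausted or invalid-input errors fail fast instead of burning attempts.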

Credit Cost Reference

Credits | Tools | Mastra Use Case
1 | fetch_url, extract_text, extract_links, extract_metadata | Quick data fetching in agent tools
2 | scrape_structured, extract_content, summarize_content, generate_llms_txt | Workflow extraction steps
3 | map_site, process_document, analyze_content, localization | Site audits, document processing
5 | search_web, crawl_deep, batch_scrape, scrape_with_actions, stealth_mode | Research agents, bulk operations
10 | deep_research | Comprehensive analysis agents
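The costs above can be encoded as a simple lookup so a workflow can estimate its credit budget before running (the costs are copied from the table; the helper itself is our sketch):

```typescript
// Credit cost per CrawlForge tool, as listed in the reference table.
const TOOL_CREDITS: Record<string, number> = {
  fetch_url: 1, extract_text: 1, extract_links: 1, extract_metadata: 1,
  scrape_structured: 2, extract_content: 2, summarize_content: 2, generate_llms_txt: 2,
  map_site: 3, process_document: 3, analyze_content: 3, localization: 3,
  search_web: 5, crawl_deep: 5, batch_scrape: 5, scrape_with_actions: 5, stealth_mode: 5,
  deep_research: 10,
};

// Estimate the total credit cost of a planned sequence of tool calls.
// Unknown tool names count as 0 here; you may prefer to throw instead.
function estimateCredits(toolCalls: string[]): number {
  return toolCalls.reduce((sum, tool) => sum + (TOOL_CREDITS[tool] ?? 0), 0);
}

// e.g. one search plus three structured scrapes:
estimateCredits(["search_web", "scrape_structured", "scrape_structured", "scrape_structured"]);
// → 11
```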

Architecture Overview

Component | Role
Mastra Agent | Orchestrates tool calls, maintains conversation context
Mastra Tools | Typed wrappers around CrawlForge API endpoints
Mastra Workflow | Deterministic multi-step pipelines for batch operations
CrawlForge API | Executes web scraping, returns structured data
Credit System | Tracks usage per API key, enforces limits

The Mastra agent decides which CrawlForge tool to call based on the task. The tool wrapper handles HTTP communication, and CrawlForge executes the actual scraping. Credits are deducted atomically on each successful tool call.

Next Steps

  • Mastra Quickstart Guide -- official Mastra documentation
  • CrawlForge API Reference -- full endpoint documentation
  • Build a Research Assistant -- similar pattern using Claude directly
  • Deep Research Automation -- advanced research workflows

Build your first web-aware AI agent today. Sign up for CrawlForge (1,000 free credits), scaffold a Mastra project, and give your agents the power to scrape the entire web.

Tags

mastra, ai-agents, mcp, integration, tutorial, typescript, web-scraping

About the Author

CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.

Related Articles

  • How to Use CrawlForge with LangGraph Agents (Tutorials, Apr 24, 8 min read) -- Build stateful web scraping agents with LangGraph and CrawlForge. TypeScript guide covering graph nodes, state management, and conditional scraping flows.
  • How to Use CrawlForge with Dify Workflows (Tutorials, Apr 22, 7 min read) -- Add CrawlForge as a custom tool in Dify for web scraping in your LLM app workflows. No-code and API integration guide with workflow examples.
  • How to Use CrawlForge with Cursor Rules (Tutorials, Apr 20, 7 min read) -- Create .cursorrules files that teach Cursor AI to use CrawlForge tools effectively. Includes ready-to-use rules for web research and data extraction.
