How to Use CrawlForge with Cursor Rules
Tutorials

CrawlForge Team
Engineering Team
April 20, 2026
7 min read


Cursor becomes dramatically more useful when you teach it exactly how to use your tools. A .cursorrules file tells Cursor which CrawlForge tools to pick for which tasks, how to optimize credit usage, and what patterns to follow when scraping.

This guide gives you production-ready Cursor rules for CrawlForge, plus the reasoning behind each rule so you can adapt them to your workflows.

Table of Contents

  • What Are Cursor Rules?
  • Prerequisites
  • Step 1: Configure CrawlForge as an MCP Server
  • Step 2: Create Your .cursorrules File
  • Step 3: Web Research Rules
  • Step 4: Data Extraction Rules
  • Step 5: Credit Optimization Rules
  • Step 6: Advanced Workflow Rules
  • Complete .cursorrules Template
  • Credit Cost Reference
  • Next Steps

What Are Cursor Rules?

Cursor rules are project-scoped instructions that tell the Cursor AI assistant how to behave. They live in a .cursorrules file at your project root (or in .cursor/rules/ as individual files). When Cursor processes any request, it reads these rules as system-level context.

Without rules, Cursor will use CrawlForge tools but make suboptimal choices -- like using deep_research (10 credits) when fetch_url (1 credit) would suffice. Rules fix this by encoding your tool selection logic directly.

Prerequisites

  • Cursor installed (v0.45+)
  • CrawlForge MCP server installed: npm install -g crawlforge-mcp-server
  • A CrawlForge API key (free tier: 1,000 credits)

Step 1: Configure CrawlForge as an MCP Server

Add CrawlForge to your Cursor MCP settings. Open Cursor Settings > MCP Servers and register a new server entry.
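A minimal entry looks like the following sketch. The exact JSON keys depend on your Cursor version, and the command assumes the globally installed crawlforge-mcp-server binary from the prerequisites; the environment variable name here is illustrative, so check your CrawlForge dashboard for the one your install expects:

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "crawlforge-mcp-server",
      "args": [],
      "env": {
        "CRAWLFORGE_API_KEY": "your-api-key"
      }
    }
  }
}
```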

Restart Cursor. You should see CrawlForge listed under available MCP tools with all 18 tools accessible.

Step 2: Create Your .cursorrules File

Create a .cursorrules file at your project root. Start with a skeleton that names each rule category, then fill the sections in over the next four steps.
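A minimal skeleton might look like this (the section names are suggestions that match the steps below; .cursorrules files are plain text, so markdown-style headings work well):

```markdown
# CrawlForge Rules for This Project

You have access to CrawlForge MCP tools for web research and scraping.
Always follow the rules below when choosing a tool.

## Web Research
(rules added in Step 3)

## Data Extraction
(rules added in Step 4)

## Credit Optimization
(rules added in Step 5)

## Advanced Workflows
(rules added in Step 6)
```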

Now let's build out each rule category.

Step 3: Web Research Rules

These rules teach Cursor when to search the web versus when to fetch a known URL directly.
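A research section might look like the following sketch. The tool names and credit costs come from the Credit Cost Reference table in this guide; the thresholds (top 2-3 results) are starting points, not fixed prescriptions:

```markdown
## Web Research

- If the user gives a specific URL, use fetch_url (1 credit). Never run
  search_web to find a page whose URL is already known.
- Use search_web (5 credits) only for open questions with no known source.
- After search_web, fetch at most the top 2-3 results with fetch_url
  instead of fetching every hit.
- Reserve deep_research (10 credits) for explicit requests such as
  "research this thoroughly" or "compare multiple sources".
```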

Step 4: Data Extraction Rules

Rules for selecting the right extraction tool based on what the user needs.
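An extraction section might look like this sketch, again using the tool names and costs from the Credit Cost Reference:

```markdown
## Data Extraction

- Plain page text: extract_text (1 credit). Links: extract_links (1 credit).
  Titles and meta tags: extract_metadata (1 credit).
- Specific fields (prices, names, dates): scrape_structured (2 credits)
  with a field schema, instead of fetching raw HTML and parsing it yourself.
- Article-style pages where navigation and ads should be stripped:
  extract_content (2 credits).
- Only call summarize_content (2 credits) when the user explicitly asks
  for a summary, never as a default post-processing step.
```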

Step 5: Credit Optimization Rules

These rules prevent Cursor from burning through credits unnecessarily.
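A cost-control section might look like this sketch; the break-even of 5 URLs follows from the table's pricing (batch_scrape costs 5 credits versus 1 credit per fetch_url), and the other limits are suggestions to tune:

```markdown
## Credit Optimization

- Always choose the cheapest tool that can satisfy the request; escalate
  only if the result is insufficient.
- For more than 5 known URLs, use one batch_scrape (5 credits) instead of
  many individual fetch_url calls.
- Never re-fetch a URL already retrieved in this conversation; reuse the
  earlier result.
- Before crawl_deep (5 credits), confirm how many pages the user actually
  needs.
- Never chain deep_research calls; a single call already aggregates
  multiple sources.
```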

Step 6: Advanced Workflow Rules

Rules for complex, multi-step scraping workflows.
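A workflow section might look like this sketch, composing the multi-page tools from the Credit Cost Reference:

```markdown
## Advanced Workflows

- For site-wide tasks, run map_site (3 credits) first to discover URLs,
  then batch_scrape (5 credits) only the relevant subset; do not run
  crawl_deep blindly.
- Use scrape_with_actions (5 credits) only when content appears after
  interaction (clicks, scrolling, form input).
- Use stealth_mode (5 credits) only after a normal fetch is blocked, and
  tell the user you fell back to it.
- For PDFs and office documents, use process_document (3 credits) rather
  than fetching raw bytes.
```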

Complete .cursorrules Template

Here is a complete template combining the rules above, ready to drop into .cursorrules.
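A condensed version might look like the following. Tool names and credit costs follow the Credit Cost Reference table; the specific thresholds and wording are illustrative, so adapt them to your project:

```markdown
# CrawlForge Rules

You have access to 18 CrawlForge MCP tools. Choose tools by cost and fit.

## Web Research
- Known URL: fetch_url (1 credit). Never search for a URL the user gave you.
- Open question: search_web (5 credits), then fetch_url on the top 2-3 results.
- deep_research (10 credits) only on explicit request for multi-source research.

## Data Extraction
- Plain text: extract_text (1). Links: extract_links (1). Meta: extract_metadata (1).
- Specific fields: scrape_structured (2) with a schema.
- Article content without boilerplate: extract_content (2).
- summarize_content (2) only when a summary is requested.

## Credit Optimization
- Always use the cheapest sufficient tool; escalate only if results fall short.
- More than 5 known URLs: one batch_scrape (5) instead of many fetch_url calls.
- Never re-fetch a URL already retrieved this session.
- Confirm page counts before crawl_deep (5); never chain deep_research calls.

## Advanced Workflows
- Site-wide tasks: map_site (3) first, then batch_scrape (5) the relevant subset.
- scrape_with_actions (5) only for pages needing interaction.
- stealth_mode (5) only after a normal fetch is blocked; report the fallback.
- PDFs and documents: process_document (3).
```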

Credit Cost Reference

| Credits | Tools | Typical Use Case |
|---|---|---|
| 1 | fetch_url, extract_text, extract_links, extract_metadata | Quick page fetching, link discovery |
| 2 | scrape_structured, extract_content, summarize_content, generate_llms_txt | Targeted data extraction, content analysis |
| 3 | map_site, process_document, analyze_content, localization | Site mapping, document processing |
| 5 | search_web, crawl_deep, batch_scrape, scrape_with_actions, stealth_mode | Web search, multi-page operations |
| 10 | deep_research | Comprehensive multi-source analysis |

Next Steps

  • CrawlForge Quick Start -- install CrawlForge in 60 seconds
  • Build a Research Assistant -- full project tutorial with Claude
  • 18 Tools Reference -- complete tool documentation
  • awesome-cursorrules on GitHub -- community Cursor rules collection

Start scraping smarter. Sign up free for 1,000 credits, install CrawlForge, and drop these rules into your .cursorrules file. Your Cursor AI will pick the right tool every time.

Tags

cursor, cursor-rules, mcp, integration, tutorial, ai-coding, web-scraping

About the Author

CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.


Related Articles

How to Use CrawlForge with Cline (VS Code)
Tutorials

How to Use CrawlForge with Cline (VS Code)

Add web scraping to Cline in VS Code. Configure CrawlForge MCP, fetch live data, and let your AI coding assistant access the entire web.

C
CrawlForge Team
|
Apr 11
|
7m
How to Use CrawlForge with LangGraph Agents
Tutorials

How to Use CrawlForge with LangGraph Agents

Build stateful web scraping agents with LangGraph and CrawlForge. TypeScript guide covering graph nodes, state management, and conditional scraping flows.

C
CrawlForge Team
|
Apr 24
|
8m
How to Use CrawlForge with Dify Workflows
Tutorials

How to Use CrawlForge with Dify Workflows

Add CrawlForge as a custom tool in Dify for web scraping in your LLM app workflows. No-code and API integration guide with workflow examples.

C
CrawlForge Team
|
Apr 22
|
7m

Built with Next.js and MCP protocol

© 2025-2026 CrawlForge. All rights reserved.