Cursor becomes dramatically more useful when you teach it exactly how to use your tools. A .cursorrules file tells Cursor which CrawlForge tools to pick for which tasks, how to optimize credit usage, and what patterns to follow when scraping.
This guide gives you production-ready Cursor rules for CrawlForge, plus the reasoning behind each rule so you can adapt them to your workflows.
Table of Contents
- What Are Cursor Rules?
- Prerequisites
- Step 1: Configure CrawlForge as an MCP Server
- Step 2: Create Your .cursorrules File
- Step 3: Web Research Rules
- Step 4: Data Extraction Rules
- Step 5: Credit Optimization Rules
- Step 6: Advanced Workflow Rules
- Complete .cursorrules Template
- Credit Cost Reference
- Next Steps
What Are Cursor Rules?
Cursor rules are project-scoped instructions that tell the Cursor AI assistant how to behave. They live in a .cursorrules file at your project root (or in .cursor/rules/ as individual files). When Cursor processes any request, it reads these rules as system-level context.
Without rules, Cursor will still use CrawlForge tools but often makes suboptimal choices -- like reaching for deep_research (10 credits) when fetch_url (1 credit) would suffice. Rules fix this by encoding your tool selection logic directly.
Prerequisites
- Cursor installed (v0.45+)
- CrawlForge MCP server installed: `npm install -g crawlforge-mcp-server`
- A CrawlForge API key (free tier: 1,000 credits)
Step 1: Configure CrawlForge as an MCP Server
Add CrawlForge to your Cursor MCP settings. Open Cursor Settings > MCP Servers and add:
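A typical MCP server entry looks like the sketch below. The server name (`crawlforge`) and the environment variable name (`CRAWLFORGE_API_KEY`) are assumptions for illustration; check the CrawlForge documentation for the exact key names your version expects:

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["-y", "crawlforge-mcp-server"],
      "env": {
        "CRAWLFORGE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```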
Restart Cursor. You should see CrawlForge listed under available MCP tools with all 18 tools accessible.
Step 2: Create Your .cursorrules File
Create .cursorrules at your project root:
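A minimal skeleton might look like the following; the section headings are illustrative and map to the rule categories covered in the next steps:

```markdown
# CrawlForge Tool Rules

## Web Research
<!-- when to search vs. fetch a known URL -->

## Data Extraction
<!-- which extraction tool fits which request -->

## Credit Optimization
<!-- keeping credit spend low -->

## Advanced Workflows
<!-- multi-step scraping patterns -->
```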
Now let us build out each rule category.
Step 3: Web Research Rules
These rules teach Cursor when to search the web versus when to fetch a known URL directly:
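A sketch of this category, using the tool names and credit costs from the reference table below (adapt the wording to your own workflows):

```markdown
## Web Research

- If the user provides a specific URL, use fetch_url (1 credit). Never use
  search_web (5 credits) to locate a page whose URL is already known.
- Use search_web only when the target page is unknown or when multiple
  sources need to be compared.
- For "summarize this page" requests, chain fetch_url with
  summarize_content instead of invoking deep_research.
- Reserve deep_research (10 credits) for explicit multi-source research
  requests, never for single-page questions.
```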
Step 4: Data Extraction Rules
Rules for selecting the right extraction tool based on what the user needs:
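One possible set of extraction rules, matching each request type to the cheapest suitable tool from the credit table:

```markdown
## Data Extraction

- Plain page text: extract_text (1 credit).
- All links on a page: extract_links (1 credit).
- Title, description, and OG tags: extract_metadata (1 credit).
- Specific fields (prices, names, dates): scrape_structured (2 credits).
- Main article body without navigation or ads: extract_content (2 credits).
- PDFs and other documents: process_document (3 credits).
```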
Step 5: Credit Optimization Rules
These rules prevent Cursor from burning through credits unnecessarily:
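A sketch of credit-guardrail rules. The batch_scrape comparison assumes a flat 5 credits per batch (per the table below); verify how batching is billed on your plan before relying on it:

```markdown
## Credit Optimization

- Always use the cheapest tool that satisfies the request; escalate only
  when the cheaper tool's output proves insufficient.
- Before crawl_deep (5 credits), run map_site (3 credits) to gauge page
  count and confirm the crawl scope with the user.
- Never re-fetch a URL already fetched in the current conversation; reuse
  the earlier result.
- For many known URLs, prefer batch_scrape (5 credits) over repeated
  fetch_url calls (1 credit each) when batching is cheaper.
- Warn the user before any single operation expected to cost 10+ credits.
```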
Step 6: Advanced Workflow Rules
Rules for complex, multi-step scraping workflows:
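An illustrative set of workflow rules built from the tools in the credit table; the map-filter-scrape ordering is a suggested pattern, not a CrawlForge requirement:

```markdown
## Advanced Workflows

- Multi-page scraping: run map_site first, filter the resulting URL list,
  then batch_scrape only the filtered subset.
- Pages requiring clicks, scrolling, or form input: scrape_with_actions
  (5 credits).
- Sites that block standard requests: retry with stealth_mode (5 credits)
  only after a normal fetch fails.
- Generating LLM-ready site documentation: generate_llms_txt (2 credits).
- Report the total credits spent at the end of any multi-step workflow.
```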
Complete .cursorrules Template
Here is the full, copy-paste-ready template combining all rules above:
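An assembled template along these lines is sketched below. Tool names and costs come from the credit reference table; the thresholds and phrasing are suggestions to adapt:

```markdown
# CrawlForge Tool Rules for Cursor

## Web Research
- Known URL: fetch_url (1 credit). Never search_web for a known page.
- Unknown target or source comparison: search_web (5 credits).
- deep_research (10 credits) only for explicit multi-source research.

## Data Extraction
- Plain text: extract_text (1). Links: extract_links (1).
  Metadata: extract_metadata (1).
- Structured fields: scrape_structured (2). Article body: extract_content (2).
- Documents and PDFs: process_document (3).

## Credit Optimization
- Use the cheapest sufficient tool; escalate only on failure.
- map_site (3) before crawl_deep (5) to confirm scope.
- Never re-fetch a URL already fetched this session.
- Warn before any operation expected to cost 10+ credits.

## Advanced Workflows
- Multi-page: map_site, filter the URL list, then batch_scrape.
- Interactive pages: scrape_with_actions (5).
- Blocked requests: retry with stealth_mode (5) after a normal fetch fails.
- Report total credits spent after multi-step workflows.
```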
Credit Cost Reference
| Credits | Tools | Typical Use Case |
|---|---|---|
| 1 | fetch_url, extract_text, extract_links, extract_metadata | Quick page fetching, link discovery |
| 2 | scrape_structured, extract_content, summarize_content, generate_llms_txt | Targeted data extraction, content analysis |
| 3 | map_site, process_document, analyze_content, localization | Site mapping, document processing |
| 5 | search_web, crawl_deep, batch_scrape, scrape_with_actions, stealth_mode | Web search, multi-page operations |
| 10 | deep_research | Comprehensive multi-source analysis |
Next Steps
- CrawlForge Quick Start -- install CrawlForge in 60 seconds
- Build a Research Assistant -- full project tutorial with Claude
- 18 Tools Reference -- complete tool documentation
- awesome-cursorrules on GitHub -- community Cursor rules collection
Start scraping smarter. Sign up free for 1,000 credits, install CrawlForge, and drop these rules into your .cursorrules file. Your Cursor AI will reach for the right tool far more consistently.