Today we're launching deep_research - the most powerful tool in the CrawlForge suite. It transforms how AI applications gather and synthesize information from the web.
The Research Problem
Manual research is slow and fragmented:
- Search for sources (5-10 minutes)
- Open and read each result (20-30 minutes)
- Take notes and extract key facts (15-20 minutes)
- Cross-reference and verify (10-15 minutes)
- Synthesize into a coherent summary (15-20 minutes)
Total: 65-95 minutes for a single research topic.
Existing tools help with pieces:
- Search APIs find sources
- Scraping tools extract content
- LLMs can summarize text
But nothing combines them into a unified research workflow. Until now.
Announcing deep_research
deep_research does what a human researcher does, but compressed into seconds: a single call returns a cited, synthesized summary, typically in 15-30 seconds.
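A minimal sketch of what a call and its response might look like. The function, parameter names, and response fields here are illustrative assumptions, not the documented CrawlForge API:

```python
# Hypothetical stand-in for the deep_research tool, returning a
# response-shaped dict (field names are assumptions).
def deep_research(topic: str, depth: str = "moderate") -> dict:
    return {
        "topic": topic,
        "summary": "Synthesized summary with inline citations [1][2].",
        "findings": [
            {
                "claim": "Key fact about the topic.",
                "confidence": "high",
                "sources": [1, 2],
            },
        ],
        "sources": [
            {"id": 1, "url": "https://example.com/a", "relevance": 0.94},
            {"id": 2, "url": "https://example.com/b", "relevance": 0.88},
        ],
    }

result = deep_research("Next.js 15 App Router performance", depth="moderate")
print(result["summary"])
```

The key point is the shape: one call in, one summary out, with every finding tied to ranked, cited sources.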
How It Works Under the Hood
deep_research runs a multi-stage pipeline:
Stage 1: Query Expansion
Your topic is expanded into multiple search queries:
Input: "Next.js 15 App Router performance"
Expanded:
- "Next.js 15 performance improvements"
- "Next.js App Router optimization"
- "Next.js 15 vs 14 benchmark"
- "Partial Prerendering Next.js"
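A toy sketch of this stage, using fixed templates purely for illustration; the real pipeline would use something smarter (likely an LLM) to produce varied queries like the ones above:

```python
# Hypothetical query expansion: fan one topic out into several
# search queries. The templates are assumptions for illustration.
def expand_queries(topic: str) -> list[str]:
    templates = [
        "{} improvements",
        "{} optimization",
        "{} benchmark",
        "{} best practices",
    ]
    return [t.format(topic) for t in templates]

queries = expand_queries("Next.js 15 App Router performance")
```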
Stage 2: Source Discovery
Multiple web searches find relevant sources:
- Google Custom Search API integration
- Filters for recency and relevance
- Automatic deduplication
- Domain reputation scoring
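The deduplication and reputation steps can be sketched roughly like this; the reputation table and scores below are made-up placeholders, not CrawlForge's actual scoring:

```python
# Hypothetical Stage 2: merge search results, drop duplicate URLs,
# attach a domain-reputation score, and rank by it.
from urllib.parse import urlparse

REPUTATION = {"nextjs.org": 1.0, "vercel.com": 0.9}  # assumed scores

def discover(results: list[dict]) -> list[dict]:
    seen, out = set(), []
    for r in results:
        url = r["url"]
        if url in seen:          # automatic deduplication
            continue
        seen.add(url)
        domain = urlparse(url).netloc
        r["reputation"] = REPUTATION.get(domain, 0.5)  # unknown -> neutral
        out.append(r)
    return sorted(out, key=lambda r: r["reputation"], reverse=True)

sources = discover([
    {"url": "https://nextjs.org/blog/next-15"},
    {"url": "https://nextjs.org/blog/next-15"},       # duplicate, dropped
    {"url": "https://someblog.dev/next-15-review"},
])
```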
Stage 3: Content Extraction
Each source is scraped and processed:
- Main content extraction (removes ads, navigation)
- Metadata capture (author, date, domain)
- Key quote identification
- Readability scoring
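As a rough illustration of main-content extraction, here is a toy heuristic that strips navigation-style elements and keeps paragraph text; real extraction uses far more robust readability heuristics than this regex sketch:

```python
# Hypothetical Stage 3: drop boilerplate tags, keep paragraph text.
import re

def extract(html: str) -> dict:
    # Remove nav/aside/script blocks (toy heuristic, not production-safe).
    body = re.sub(r"<(nav|aside|script)[^>]*>.*?</\1>", "", html, flags=re.S)
    text = " ".join(re.findall(r"<p>(.*?)</p>", body, flags=re.S))
    return {"text": text, "length": len(text)}

doc = extract("<nav>menu</nav><p>Main content here.</p><aside>ad</aside>")
```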
Stage 4: Verification
Facts are cross-referenced across sources:
- Claim extraction using NLP
- Source agreement scoring
- Conflict detection
- Confidence assignment (high/medium/low)
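Confidence assignment from source agreement can be sketched as a simple ratio; the thresholds below are assumptions, not the pipeline's actual values:

```python
# Hypothetical Stage 4: map "how many sources agree" to a
# high/medium/low confidence label. Thresholds are assumed.
def confidence(agreeing: int, total: int) -> str:
    ratio = agreeing / total
    if ratio >= 0.75:
        return "high"
    if ratio >= 0.5:
        return "medium"
    return "low"

level = confidence(agreeing=3, total=4)  # 3 of 4 sources agree
```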
Stage 5: Synthesis
AI synthesizes findings into a coherent summary:
- Key findings with citations
- Conflicting viewpoints highlighted
- Source ranking by relevance
- Actionable recommendations
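The synthesis stage can be pictured as turning verified findings into a cited report; this sketch only shows the citation wiring, with the actual prose written by the model:

```python
# Hypothetical Stage 5: format verified findings as numbered,
# cited lines. Field names are assumptions.
def synthesize(findings: list[dict]) -> str:
    lines = []
    for i, f in enumerate(findings, 1):
        cites = "".join(f"[{s}]" for s in f["sources"])
        lines.append(f"{i}. {f['claim']} {cites} (confidence: {f['confidence']})")
    return "\n".join(lines)

report = synthesize([
    {"claim": "PPR reduces TTFB.", "sources": [1, 3], "confidence": "high"},
])
```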
Key Features
Source Verification
Every claim includes confidence scoring:
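Here is the rough shape of a verified claim in the response; the field names are assumptions for illustration:

```python
# Hypothetical verified-claim record (field names assumed).
claim = {
    "text": "Next.js 15 improves cold-start times with Partial Prerendering.",
    "confidence": "high",
    "supporting_sources": 4,
    "total_sources": 5,
}
```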
Conflict Detection
When sources disagree, we tell you:
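A conflict report might look roughly like this; again, the structure and field names are illustrative assumptions:

```python
# Hypothetical conflict record: which sources agree, which disagree,
# and a note on how to read the disagreement.
conflict = {
    "claim": "App Router is faster than Pages Router for all workloads.",
    "agree": ["https://example.com/benchmark-a"],
    "disagree": ["https://example.com/benchmark-b"],
    "note": "Benchmarks diverge on data-heavy pages; treat as contested.",
}
```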
Configurable Depth
Choose how deep to research:
| Depth | Sources | Queries | Time | Best For |
|---|---|---|---|---|
| shallow | 3-5 | 2 | 5-10s | Quick facts |
| moderate | 8-12 | 4 | 15-25s | General research |
| deep | 15-25 | 8 | 45-90s | Comprehensive analysis |
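The table above can be mirrored as a small lookup when picking a depth; the parameter name `depth` and these exact values are taken from the table, but this helper itself is a hypothetical sketch:

```python
# Hypothetical depth planner mirroring the table above.
DEPTHS = {
    "shallow":  {"sources": (3, 5),   "queries": 2},
    "moderate": {"sources": (8, 12),  "queries": 4},
    "deep":     {"sources": (15, 25), "queries": 8},
}

def plan(depth: str) -> dict:
    if depth not in DEPTHS:
        raise ValueError(f"unknown depth: {depth}")
    return DEPTHS[depth]
```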
Real-World Use Cases
Competitor Analysis
Returns feature comparison tables, pricing breakdowns, and user sentiment analysis.
Market Research
Aggregates market data from multiple analyst reports with citations.
Technical Documentation
Synthesizes best practices from official docs, Stack Overflow, and tutorials.
News Aggregation
Latest news with source diversity and credibility scoring.
Pricing and Credits
deep_research costs 10 credits per query.
Compared to doing it manually:
- search_web (5 credits) × 4 queries = 20 credits
- extract_content (2 credits) × 12 sources = 24 credits
- Manual total: 44 credits
deep_research saves 77% while providing better results.
Plan Capacity
| Plan | Credits/Month | Research Queries |
|---|---|---|
| Free | 1,000 | 100 |
| Hobby | 5,000 | 500 |
| Professional | 50,000 | 5,000 |
| Business | 250,000 | 25,000 |
Getting Started
1. Basic Research
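A minimal request might look like this; the payload shape and field names are assumptions, so check the API reference for the real schema:

```python
# Hypothetical basic deep_research request payload (field names assumed).
request = {
    "tool": "deep_research",
    "arguments": {
        "topic": "WebAssembly for machine learning",
        "depth": "moderate",
    },
}
```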
2. With Source Filtering
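Restricting sources might look like the sketch below; the `include_domains` and `max_age_days` parameters are assumptions used for illustration, not confirmed parameter names:

```python
# Hypothetical request with source filtering (parameter names assumed).
request = {
    "tool": "deep_research",
    "arguments": {
        "topic": "WebAssembly for machine learning",
        "depth": "deep",
        "include_domains": ["webassembly.org", "github.com"],
        "max_age_days": 90,
    },
}
```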
3. In Claude Desktop
Just ask naturally:
Research the latest developments in WebAssembly support
for machine learning and summarize the key findings
Claude will automatically use deep_research and present synthesized results.
What's Next
We're actively improving deep_research:
- Real-time sources - Include live news and social media
- Custom source lists - Research only from your approved domains
- Export formats - PDF reports, Markdown, structured JSON
- Scheduled research - Run recurring research jobs
Start Researching
Sign up at crawlforge.dev and get 1,000 free credits - enough for 100 research queries.
Have feedback? We'd love to hear it. Reach out on GitHub or Twitter.
API Reference: /docs/api-reference/tools/deep-research