Product Updates

Introducing deep_research: AI-Powered Multi-Source Analysis

CrawlForge Team · Engineering Team
January 3, 2026 · 9 min read · Updated April 14, 2026


Quick Answer

deep_research is CrawlForge's multi-source research tool that replaces a 65-95 minute manual workflow (search, read, extract, verify, synthesize) with a single API call. It combines web search, content extraction, and AI synthesis, returning a cited summary in seconds for 10 credits per query.

Today we're launching deep_research - the most powerful tool in the CrawlForge suite. It transforms how AI applications gather and synthesize information from the web.

The Research Problem

Manual research is slow and fragmented:

  1. Search for sources (5-10 minutes)
  2. Open and read each result (20-30 minutes)
  3. Take notes and extract key facts (15-20 minutes)
  4. Cross-reference and verify (10-15 minutes)
  5. Synthesize into a coherent summary (15-20 minutes)

Total: 65-95 minutes for a single research topic.

Existing tools help with pieces:

  • Search APIs find sources
  • Scraping tools extract content
  • LLMs can summarize text

But nothing combines them into a unified research workflow. Until now.

Announcing deep_research

deep_research does what a human researcher does, but in seconds: you make a single API call with your research topic, and a synthesized, cited response arrives 15-30 seconds later.
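As a rough sketch of what a call and its response could look like, here is a minimal TypeScript example. The endpoint URL and every field name below are assumptions for illustration, not the documented API schema:

```typescript
// Hypothetical request/response shapes for deep_research.
// Endpoint URL and all field names are illustrative assumptions.
interface DeepResearchRequest {
  topic: string;
  depth?: "shallow" | "moderate" | "deep";
}

interface Finding {
  claim: string;
  confidence: "high" | "medium" | "low";
  sources: string[]; // URLs that support the claim
}

interface DeepResearchResponse {
  summary: string;
  findings: Finding[];
  creditsUsed: number;
}

// Build the JSON body for a research call; depth defaults to "moderate".
function buildRequest(
  topic: string,
  depth: DeepResearchRequest["depth"] = "moderate"
): DeepResearchRequest {
  return { topic, depth };
}

// Example call against the (assumed) REST endpoint:
async function runResearch(apiKey: string, topic: string): Promise<DeepResearchResponse> {
  const res = await fetch("https://api.crawlforge.dev/v1/deep-research", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildRequest(topic, "deep")),
  });
  if (!res.ok) throw new Error(`deep_research failed: ${res.status}`);
  return (await res.json()) as DeepResearchResponse;
}
```

See the API reference linked at the end of this post for the authoritative request schema.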

How It Works Under the Hood

deep_research runs a multi-stage pipeline:

Stage 1: Query Expansion

Your topic is expanded into multiple search queries:

Input: "Next.js 15 App Router performance"

Expanded:

  • "Next.js 15 performance improvements"
  • "Next.js App Router optimization"
  • "Next.js 15 vs 14 benchmark"
  • "Partial Prerendering Next.js"

Stage 2: Source Discovery

Multiple web searches find relevant sources:

  • Google Custom Search API integration
  • Filters for recency and relevance
  • Automatic deduplication
  • Domain reputation scoring

Stage 3: Content Extraction

Each source is scraped and processed:

  • Main content extraction (removes ads, navigation)
  • Metadata capture (author, date, domain)
  • Key quote identification
  • Readability scoring

Stage 4: Verification

Facts are cross-referenced across sources:

  • Claim extraction using NLP
  • Source agreement scoring
  • Conflict detection
  • Confidence assignment (high/medium/low)
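As a toy illustration of the agreement-scoring step, the fraction of sources supporting a claim could map to a confidence label like this. The thresholds are invented for the sketch; the real pipeline uses NLP-based claim extraction and its own scoring:

```typescript
// Toy sketch of source-agreement scoring: the fraction of sources that
// support a claim determines its confidence label. Thresholds are invented.
type Confidence = "high" | "medium" | "low";

function scoreAgreement(supporting: number, total: number): Confidence {
  if (total === 0) return "low";
  const ratio = supporting / total;
  if (ratio >= 0.75) return "high";
  if (ratio >= 0.4) return "medium";
  return "low";
}
```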

Stage 5: Synthesis

AI synthesizes findings into a coherent summary:

  • Key findings with citations
  • Conflicting viewpoints highlighted
  • Source ranking by relevance
  • Actionable recommendations

Key Features

Source Verification

Every claim includes confidence scoring:

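A confidence-scored claim in the response might be shaped like this. The field names and sample values are illustrative assumptions, not the documented schema:

```typescript
// Illustrative shape of a confidence-scored claim; not the documented schema.
interface VerifiedClaim {
  claim: string;
  confidence: "high" | "medium" | "low";
  supportingSources: number;
  totalSources: number;
}

const exampleClaim: VerifiedClaim = {
  claim: "Partial Prerendering improves initial load in Next.js 15", // sample text
  confidence: "high",
  supportingSources: 7,
  totalSources: 8,
};
```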

Conflict Detection

When sources disagree, we tell you:

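A reported conflict might look like the following sketch, with each position tied to the sources that back it. Again, the shape is an assumption for illustration:

```typescript
// Illustrative shape of a detected conflict between sources.
interface ConflictReport {
  topic: string;
  positions: Array<{ claim: string; sources: string[] }>;
}

const exampleConflict: ConflictReport = {
  topic: "App Router vs Pages Router benchmarks",
  positions: [
    { claim: "App Router is faster in Next.js 15", sources: ["https://example.com/a"] },
    { claim: "Pages Router still wins on cold starts", sources: ["https://example.com/b"] },
  ],
};
```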

Configurable Depth

Choose how deep to research:

Depth    | Sources | Queries | Time   | Best For
shallow  | 3-5     | 2       | 5-10s  | Quick facts
moderate | 8-12    | 4       | 15-25s | General research
deep     | 15-25   | 8       | 45-90s | Comprehensive analysis

Real-World Use Cases

Competitor Analysis


Returns feature comparison tables, pricing breakdowns, and user sentiment analysis.
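A competitor-analysis request might look like this sketch; the parameter names beyond topic and depth are assumptions, not documented options:

```typescript
// Hypothetical payload for a competitor-analysis query.
const competitorQuery = {
  topic: "CrawlForge alternatives: features, pricing, user sentiment",
  depth: "deep" as const, // comprehensive analysis tier
  focus: ["features", "pricing", "sentiment"], // assumed parameter
};
```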

Market Research


Aggregates market data from multiple analyst reports with citations.

Technical Documentation


Synthesizes best practices from official docs, Stack Overflow, and tutorials.

News Aggregation


Latest news with source diversity and credibility scoring.

Pricing and Credits

deep_research costs 10 credits per query.

Compared to doing it manually:

  • search_web (5 credits) × 4 queries = 20 credits
  • extract_content (2 credits) × 12 sources = 24 credits
  • Manual total: 44 credits

deep_research saves 77% while providing better results.
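The arithmetic behind that comparison:

```typescript
// Credit cost of the manual workflow vs one deep_research call.
const searchCredits = 5 * 4;   // search_web: 5 credits x 4 queries = 20
const extractCredits = 2 * 12; // extract_content: 2 credits x 12 sources = 24
const manualTotal = searchCredits + extractCredits; // 44 credits
const deepResearchCredits = 10;
const savingsPct = Math.round((1 - deepResearchCredits / manualTotal) * 100);
// savingsPct is 77
```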

Plan Capacity

Plan         | Credits/Month | Research Queries
Free         | 1,000         | 100
Hobby        | 5,000         | 500
Professional | 50,000        | 5,000
Business     | 250,000       | 25,000

Getting Started

1. Basic Research

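A minimal sketch of a basic research call over HTTP. The endpoint URL and field names are assumptions for illustration; consult the API reference for the official interface:

```typescript
// Minimal basic-research call; endpoint URL and fields are assumptions.
async function basicResearch(apiKey: string, topic: string): Promise<unknown> {
  const res = await fetch("https://api.crawlforge.dev/v1/deep-research", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ topic }), // omitting depth assumes the default tier
  });
  if (!res.ok) throw new Error(`deep_research failed: ${res.status}`);
  return res.json();
}
```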

2. With Source Filtering

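Source filters could be passed alongside the topic, as in this sketch; these parameter names are assumptions, not documented options:

```typescript
// Hypothetical source-filtering options for deep_research.
const filteredQuery = {
  topic: "WebAssembly support for machine learning",
  depth: "moderate" as const,
  includeDomains: ["developer.mozilla.org", "github.com"],
  excludeDomains: ["pinterest.com"],
  maxAgeDays: 365, // only consider sources from the last year
};
```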

3. In Claude Desktop

Just ask naturally:

"Research the latest developments in WebAssembly support for machine learning and summarize the key findings."

Claude will automatically use deep_research and present synthesized results.

What's Next

We're actively improving deep_research:

  • Real-time sources - Include live news and social media
  • Custom source lists - Research only from your approved domains
  • Export formats - PDF reports, Markdown, structured JSON
  • Scheduled research - Run recurring research jobs

Start Researching

Sign up at crawlforge.dev and get 1,000 free credits - enough for 100 research queries. See the pricing page for plan details once you scale beyond the free tier.

Have feedback? We'd love to hear it. Reach out on GitHub or Twitter.


API Reference: /docs/api-reference/tools/deep-research

Tags

deep-research · New Feature · AI · Research Automation

About the Author


CrawlForge Team

Engineering Team

Building the most comprehensive web scraping MCP server. We create tools that help developers extract, analyze, and transform web data for AI applications.


Frequently Asked Questions

What is deep_research in CrawlForge?

deep_research is CrawlForge's multi-source research tool that replaces a 65-95 minute manual workflow (search, read, extract, verify, synthesize) with a single API call. It combines web search, content extraction, and AI synthesis, returning a cited summary in seconds for 10 credits per query.

How does deep_research work under the hood?

deep_research runs a five-stage pipeline: query expansion (topic into multiple search queries), source discovery (web searches with deduplication and reputation scoring), content extraction (scraping and metadata capture), verification (claim extraction and cross-referencing), and synthesis (AI summary with citations and conflict detection).

How much does a deep_research query cost compared to doing it manually?

A single deep_research call costs 10 credits. Doing the equivalent work manually with CrawlForge tools (search_web at 5 credits × 4 queries, plus extract_content at 2 credits × 12 sources) totals 44 credits. deep_research saves 77% while providing better results through cross-source verification.

How many deep_research queries can I run on each plan?

The Free plan includes 1,000 credits (100 research queries), Hobby at $19/mo provides 5,000 credits (500 queries), Professional at $99/mo gives 50,000 credits (5,000 queries), and Business at $399/mo includes 250,000 credits (25,000 queries).

