CrawlForge
API Reference

fetch_url

Fetch and parse web pages with automatic redirect handling, timeout control, and custom headers. Perfect for retrieving HTML content from any publicly accessible URL.

Use Cases

Page Content Retrieval

Fetch HTML content from any webpage for further processing or analysis

API Data Fetching

Make GET requests to REST APIs and retrieve JSON responses

Health Checks

Monitor website availability and response times

Simple Downloads

Download static assets, documents, or page content

Endpoint

POST /api/v1/tools/fetch_url

Auth required · 1 credit per request · 2 req/s rate limit on the Free plan

Parameters

url (string, required)
The URL to fetch; must include the protocol (http:// or https://).
Example: https://example.com

headers (object, optional)
Custom HTTP headers to include in the request.
Example: {"Accept": "text/html", "User-Agent": "MyBot/1.0"}

timeout (number, optional, default: 10000)
Request timeout in milliseconds (1000-30000).
Example: 15000

follow_redirects (boolean, optional, default: true)
Whether to follow HTTP redirects automatically.
Example: true

user_agent (string, optional)
Custom User-Agent header (overrides the default).
Example: Mozilla/5.0 (compatible; CrawlBot/1.0)

Request Examples

cURL

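A minimal request from the command line. The base URL (api.crawlforge.com) and the Bearer-token auth header are assumptions, not confirmed by this page; substitute the endpoint and key shown in your dashboard.

```shell
# Hypothetical base URL and auth scheme; replace YOUR_API_KEY with your key.
curl -X POST "https://api.crawlforge.com/api/v1/tools/fetch_url" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "timeout": 15000,
    "follow_redirects": true
  }'
```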

TypeScript

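A minimal sketch for Node 18+ or browsers using the global fetch API. The base URL and Bearer-token auth are assumptions; `buildBody` applies the documented defaults (timeout 10000, follow_redirects true) before serializing.

```typescript
// Hypothetical base URL and auth scheme; adjust to your account settings.
const BASE_URL = "https://api.crawlforge.com"; // assumed
const API_KEY = "YOUR_API_KEY";                // placeholder

interface FetchUrlParams {
  url: string;
  timeout?: number;
  follow_redirects?: boolean;
  headers?: Record<string, string>;
  user_agent?: string;
}

function buildBody(params: FetchUrlParams): string {
  // Apply the documented defaults, letting explicit params override them.
  return JSON.stringify({ timeout: 10000, follow_redirects: true, ...params });
}

async function fetchUrl(params: FetchUrlParams): Promise<any> {
  const res = await fetch(`${BASE_URL}/api/v1/tools/fetch_url`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: buildBody(params),
  });
  if (!res.ok) throw new Error(`fetch_url failed: HTTP ${res.status}`);
  return res.json();
}
```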

Python

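A minimal sketch using only the Python standard library. The base URL and Bearer-token auth are assumptions; substitute the values from your dashboard.

```python
# Hypothetical base URL and auth scheme; replace the placeholders below.
import json
import urllib.request

BASE_URL = "https://api.crawlforge.com"  # assumed; check your account
API_KEY = "YOUR_API_KEY"                 # placeholder

def build_payload(url, timeout=10000, follow_redirects=True, headers=None):
    """Assemble the JSON body for a fetch_url request, applying the documented defaults."""
    payload = {"url": url, "timeout": timeout, "follow_redirects": follow_redirects}
    if headers:
        payload["headers"] = headers
    return payload

def fetch_url(url, **kwargs):
    """POST to /api/v1/tools/fetch_url and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/tools/fetch_url",
        data=json.dumps(build_payload(url, **kwargs)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = fetch_url("https://example.com", timeout=15000)
# print(result["data"]["status"], result["credits_remaining"])
```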

Response Example

200 OK · 245 ms
{
  "success": true,
  "data": {
    "url": "https://example.com",
    "status": 200,
    "status_text": "OK",
    "headers": {
      "content-type": "text/html; charset=UTF-8",
      "content-length": "1256",
      "server": "nginx"
    },
    "content": "Example Domain...",
    "content_length": 1256,
    "content_type": "text/html; charset=UTF-8",
    "redirected": false,
    "final_url": "https://example.com"
  },
  "credits_used": 1,
  "credits_remaining": 999,
  "processing_time": 245
}
Field Descriptions
data.url: The original URL that was requested
data.status: HTTP status code of the response
data.content: The full HTML content of the page
data.content_length: Size of the content in bytes
data.final_url: Final URL after following redirects
credits_used: Credits deducted for this request (1 per fetch)
credits_remaining: Your remaining credit balance

Error Handling

Invalid URL (400 Bad Request)

The URL format is invalid. Ensure it includes the protocol (http:// or https://).

Timeout Error (500 Internal Server Error)

The request took longer than the specified timeout. Try increasing the timeout parameter.

Insufficient Credits (402 Payment Required)

Your account doesn't have enough credits. Purchase more credits or upgrade your plan.

Rate Limit Exceeded (429 Too Many Requests)

You've exceeded your plan's rate limit. Wait a moment or upgrade your plan for higher limits.

Pro Tip: Always implement retry logic with exponential backoff for production applications. See our Error Handling Guide for best practices.
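The retry-with-backoff pattern above can be sketched as follows. The retryable status set (429, 500, 503), delay schedule, and attempt count are illustrative assumptions, not CrawlForge recommendations; see the Error Handling Guide for the recommended values.

```python
# Illustrative exponential-backoff retry loop; all constants are assumptions.
import time

def backoff_delays(retries=4, base=0.5, factor=2.0, cap=8.0):
    """Yield the wait in seconds before each retry: 0.5, 1.0, 2.0, 4.0, ..."""
    delay = base
    for _ in range(retries):
        yield min(delay, cap)
        delay *= factor

def call_with_retry(call, retryable=(429, 500, 503), retries=4, base=0.5):
    """Invoke call() -> (status, body); retry transient statuses with backoff."""
    status, body = call()
    for wait in backoff_delays(retries, base=base):
        if status not in retryable:
            break
        time.sleep(wait)
        status, body = call()
    return status, body
```

In production you would also honor a Retry-After header on 429 responses when the server sends one, rather than relying on the fixed schedule alone.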

Credit Cost

1 credit per request
Each successful fetch_url request costs 1 credit, regardless of the page size or response time.

Free Plan: 1,000 credits/month = 1,000 requests

Hobby Plan: 5,000 credits/month = 5,000 requests ($19/mo)

Professional Plan: 50,000 credits/month = 50,000 requests ($99/mo)

Business Plan: 250,000 credits/month = 250,000 requests ($399/mo)

Related Tools

extract_text
Extract clean text from the fetched HTML content (1 credit)
extract_links
Discover all links from the fetched page (1 credit)
extract_metadata
Extract OpenGraph, Twitter Card, and meta tags (1 credit)
scrape_structured
Extract structured data using CSS selectors (2 credits)
Ready to try fetch_url? Sign up for free and get 1,000 credits to start building.