
generate_llms_txt

Crawl a site, analyze its structure, and emit a spec-compliant llms.txt file (and an optional llms-full.txt) that defines how AI models should interact with your content. Three compliance levels, from basic to strict.

Use Cases

Ship AI-Ready Documentation

Publish llms.txt alongside your docs so Claude, ChatGPT, and other crawlers read clean guidelines.

AI Compliance Publishing

Use strict compliance to set training-data, caching, and attribution rules in one place.

Bot Policy Generation

Add custom guidelines and restrictions for specific AI user agents on your domain.

Endpoint

POST /api/v1/tools/generate_llms_txt
Auth Required
2 req/s on Free plan
5 credits

Parameters

url (string, required)
The website URL to generate llms.txt for.
Example: https://example.com

format (string, optional; default: "both")
Output format: "both" | "llms-txt" | "llms-full-txt"
Example: both

complianceLevel (string, optional; default: "standard")
Compliance level for generated guidelines: "basic" | "standard" | "strict"
Example: standard

analysisOptions (object, optional)
Website analysis options: maxDepth (1-5), maxPages (10-500), respectRobots, detectAPIs, analyzeContent, checkSecurity.
Example: {"maxDepth": 3, "maxPages": 100, "detectAPIs": true}

outputOptions (object, optional)
Output customization: organizationName, contactEmail, customGuidelines, customRestrictions, includeDetailed, includeAnalysis.
Example: {"organizationName": "Example Inc.", "contactEmail": "ai@example.com"}
Heavy operation: This tool may crawl up to 500 pages. It uses the reservation system so credits are held for the duration of the job.

Request Examples

cURL — both formats, standard compliance

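A minimal sketch of the request. The API host (api.crawlforge.dev) and the Bearer-token auth header are assumptions, check your dashboard for the real values; only the path and body fields come from this page.

```shell
# Assumed: base URL and Bearer-token auth header; the path and
# body fields are taken from the Parameters section above.
PAYLOAD='{
  "url": "https://example.com",
  "format": "both",
  "complianceLevel": "standard"
}'

# Skip the call when no API key is set (e.g. in dry runs).
if [ -n "${CRAWLFORGE_API_KEY:-}" ]; then
  curl -X POST "https://api.crawlforge.dev/api/v1/tools/generate_llms_txt" \
    -H "Authorization: Bearer $CRAWLFORGE_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```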

TypeScript — strict with custom guidelines

generateLlmsTxt.ts
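A sketch of a strict-compliance request with custom guidelines. The base URL and auth header are assumptions, and the string-array shape of customGuidelines is a guess; the path, parameter names, and allowed values come from the Parameters section.

```typescript
// Assumed: host and auth header; customGuidelines as string[] is a guess.
const BASE_URL = "https://api.crawlforge.dev";

interface GenerateLlmsTxtRequest {
  url: string;
  format?: "both" | "llms-txt" | "llms-full-txt";
  complianceLevel?: "basic" | "standard" | "strict";
  outputOptions?: {
    organizationName?: string;
    contactEmail?: string;
    customGuidelines?: string[];
    customRestrictions?: string[];
  };
}

const body: GenerateLlmsTxtRequest = {
  url: "https://example.com",
  format: "both",
  complianceLevel: "strict",
  outputOptions: {
    organizationName: "Example Inc.",
    contactEmail: "ai@example.com",
    customGuidelines: ["Attribute quoted content to example.com"],
  },
};

async function generateLlmsTxt(req: GenerateLlmsTxtRequest) {
  const res = await fetch(`${BASE_URL}/api/v1/tools/generate_llms_txt`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CRAWLFORGE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```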

Python

generate_llms_txt.py
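A stdlib-only sketch passing analysisOptions. The base URL and auth header are assumptions (check your dashboard); the path and parameter names come from the Parameters section.

```python
# Assumed: host and Bearer-token auth header; path and parameters
# are taken from the Parameters section of this page.
import json
import os
import urllib.request

BASE_URL = "https://api.crawlforge.dev"  # assumed host


def build_payload(url, compliance_level="standard", analysis_options=None):
    """Assemble the JSON body documented under Parameters."""
    payload = {"url": url, "complianceLevel": compliance_level}
    if analysis_options is not None:
        payload["analysisOptions"] = analysis_options
    return payload


def generate_llms_txt(url, **kwargs):
    """POST to the tool endpoint and return the parsed JSON response."""
    body = json.dumps(build_payload(url, **kwargs)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/tools/generate_llms_txt",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['CRAWLFORGE_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    result = generate_llms_txt(
        "https://example.com",
        analysis_options={"maxDepth": 3, "maxPages": 100, "detectAPIs": True},
    )
    print(result["data"]["files"]["llms.txt"])
```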

Response Example

200 OK · 4.1s
{
  "success": true,
  "data": {
    "url": "https://example.com",
    "hostname": "example.com",
    "compliance_level": "standard",
    "files": {
      "llms.txt": "# llms.txt for Example Inc.\n# Generated by CrawlForge — compliance: standard\n\nUser-Agent: *\nAllow: /\n\nContact: ai@example.com",
      "llms-full.txt": "# llms.txt for Example Inc.\n..."
    }
  },
  "credits_used": 5,
  "credits_remaining": 995,
  "processing_time": 4100
}
Field Descriptions

data.files: Ready-to-publish text content for each file
data.compliance_level: Echoes the level you requested
credits_used: Flat 5 credits per call, regardless of pages crawled
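Since data.files maps filenames to ready-to-publish text, writing the result to your web root is a one-liner per file. A minimal sketch (the publish_files helper is ours, not part of the API):

```python
import pathlib


def publish_files(files, out_dir="."):
    """Write each entry of data.files (llms.txt, llms-full.txt) to out_dir."""
    written = []
    for name, content in files.items():
        path = pathlib.Path(out_dir) / name
        path.write_text(content, encoding="utf-8")
        written.append(str(path))
    return written
```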

Credit Cost

5 credits per request, flat, no matter how many pages the crawler visits.

Tip: Pair with map_site (2 credits) when you just need the URL inventory before generating guidelines.

Related Tools

map_site
Discover URLs before generating llms.txt (2 credits)
crawl_deep
Deep BFS crawl with content extraction (4 credits)
Ready to publish AI interaction guidelines? Sign up for free and get 1,000 credits.