Generated robots.txt
# robots.txt generated by WebsiteAudit.com
User-agent: *
Disallow: /admin/
Robots.txt Pro Tips & Validation
- Test first: Run the file through a validator before deploying it
- Be specific: Target exact paths so you don't block more than intended
- Add sitemap: Include your sitemap URL at the bottom (see the example after this list)
- Check syntax: A stray character or missing colon can silently break a rule
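Putting these tips together, a fuller file might look like this (example.com, the /search/ and /admin/public/ paths, and the sitemap URL are placeholders, not output from the generator):

User-agent: *
Disallow: /admin/
Disallow: /search/
Allow: /admin/public/

Sitemap: https://example.com/sitemap.xml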
How to Upload a Robots.txt File
Location: Place the file at the site root, e.g. example.com/robots.txt; crawlers only look for it there
Format: Plain text file, UTF-8 encoded
Testing: Use the validator tab above
Updates: Changes apply on a crawler's next fetch; Google, for example, may cache robots.txt for up to 24 hours
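One way to sanity-check the rules before uploading is Python's built-in urllib.robotparser. A minimal sketch, assuming the sample file above (example.com and the test paths are placeholders):

from urllib.robotparser import RobotFileParser

# Parse the rules directly from a string; no network needed.
# To check a live file instead, use parser.set_url(...) then parser.read().
rules = """\
User-agent: *
Disallow: /admin/
"""
parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "https://example.com/admin/page"))  # False: blocked
print(parser.can_fetch("*", "https://example.com/about"))       # True: allowed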
Why Robots.txt Is Important for SEO & Crawling
Robots.txt tells search engines which URLs they may crawl, helping you manage crawl budget and keep bots focused on the pages that matter. Keep in mind that it controls crawling, not indexing: a disallowed URL can still appear in search results if other pages link to it.
Robots.txt SEO Benefits
- Crawl budget: Focus bots on your important pages
- Block admin areas: Keep bots out of sensitive paths (pair with noindex or authentication, since Disallow alone does not remove a URL from the index)
- Avoid duplicates: Block filter and internal-search pages (see the pattern example after this list)
- Save resources: Reduce server load from aggressive bots
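To illustrate the duplicate-content point, major crawlers such as Googlebot and Bingbot support the * wildcard in paths, so filter and internal-search URLs can be blocked by pattern. The /search/, ?filter=, and &sort= patterns below are placeholders for your own URL structure:

# Block internal search results and faceted-filter URLs (illustrative patterns)
User-agent: *
Disallow: /search/
Disallow: /*?filter=
Disallow: /*&sort=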
Robots.txt Directives & Syntax Guide
Key directives for robots.txt files; a combined example follows the list.
- User-agent: Specify which bot a group of rules applies to (* matches all bots; Googlebot targets Google)
- Disallow: Block matching paths from being crawled
- Allow: Override a Disallow rule for specific paths
- Sitemap: Point crawlers to your XML sitemap location
- Crawl-delay: Seconds to wait between requests (honored by some bots, such as Bingbot; Googlebot ignores it)
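A sketch combining the five directives (every path and the sitemap URL below are illustrative, not recommendations):

# Illustrative values throughout
User-agent: *
Disallow: /private/
Allow: /private/press-kit/
Crawl-delay: 10  # honored by some bots, e.g. Bingbot; Googlebot ignores it

Sitemap: https://example.com/sitemap.xml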
Related Audit Categories
See how we check these areas in a full website audit.