robots.txt Generator — Build Crawler Rules in Seconds

Generate a valid robots.txt file with user-agent groups, Allow, Disallow, Crawl-delay, and Sitemap entries. Copy or download instantly.

About robots.txt Generator

Build a standards-compliant robots.txt without memorizing the syntax. Add multiple user-agent groups, list Allow and Disallow paths, set a Crawl-delay, include one or more Sitemap URLs, and preview the output live. Preset templates cover the most common scenarios: allow all, block all, block admin paths, and Next.js defaults.

robots.txt is the first file a search-engine crawler requests when it visits your site. The format is simple but fussy: each rule group starts with a User-agent line, followed by one or more Allow or Disallow directives. Blank lines separate groups, and Sitemap declarations are global. A missing colon, a typo in a path, or a directive placed in the wrong group can quietly break crawling for weeks before you notice the traffic drop.
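As a minimal sketch of that structure, a file with a single wildcard group and a global Sitemap line might look like this (the paths and domain are placeholders):

    # Rules for every crawler
    User-agent: *
    Disallow: /tmp/
    Disallow: /search

    # Sitemap declarations sit outside the groups
    Sitemap: https://example.com/sitemap.xml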

This generator builds the file piece by piece so you can focus on policy instead of syntax. Add a group per user agent (Googlebot, Bingbot, a specific bot you want to throttle, or the wildcard * for everyone), type the paths to block or allow one per line, and optionally set a Crawl-delay in seconds. Sitemap URLs go at the bottom and apply globally. The preview pane shows the exact bytes that will be written, and the download button produces a plain-text file you can drop into your site root.
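A typical generated file follows the same pattern, with one group per user agent and the Sitemap lines at the end. The sketch below is illustrative only; the bot name, paths, and URL are placeholders:

    # Rules for every crawler
    User-agent: *
    Disallow: /admin/

    # Ask one specific bot to wait 10 seconds between requests
    User-agent: ExampleBot
    Disallow:
    Crawl-delay: 10

    # Applies globally, regardless of group
    Sitemap: https://example.com/sitemap.xml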

How to use the robots.txt Generator
  1. Pick a template or start blank

    Click a preset (Allow all, Block all, Allow all except admin, Next.js defaults) or build rules from scratch with the default wildcard group.

  2. Add Allow and Disallow paths

    Type paths in the textarea, one per line. Add extra rule groups for specific user agents like Googlebot or Bingbot with their own rules.

  3. Copy or download

    The preview updates live. Copy to clipboard for a quick paste, or download robots.txt to upload to your site root.

Common use cases

Blocking staging environments

Use the Block all template to keep crawlers out of preview domains so duplicate content doesn't hurt your production rankings.
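The Block all template amounts to a two-line file like this:

    # Keep every crawler out of the whole site
    User-agent: *
    Disallow: /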

Hiding admin and API paths

Disallow /admin/, /api/, and /wp-admin/ so search engines don't index sensitive routes or waste crawl budget on them.
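In robots.txt terms, that rule group looks like this:

    User-agent: *
    Disallow: /admin/
    Disallow: /api/
    Disallow: /wp-admin/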

Throttling aggressive bots

Give specific user agents a Crawl-delay to reduce server load without fully blocking them.
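For example, a dedicated group for one bot would look like the sketch below; the bot name and delay are placeholders, and not every crawler honors the directive:

    # Ask this bot to wait 30 seconds between requests
    User-agent: ExampleBot
    Crawl-delay: 30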

Next.js site launch

Apply the Next.js defaults preset to block /api/ and /_next/ internal routes while pointing crawlers at your sitemap.
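The preset's exact contents come from the tool, but a file in that spirit looks like this (the sitemap URL is a placeholder):

    User-agent: *
    Disallow: /api/
    Disallow: /_next/

    Sitemap: https://example.com/sitemap.xml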

Frequently asked questions
Where do I put the robots.txt file?

In the root of your domain, so it's reachable at https://example.com/robots.txt. Crawlers only check that exact location; a robots.txt placed in a subdirectory is ignored and does not apply site-wide.

Does robots.txt enforce security?

No. It's a polite request, not access control. Well-behaved crawlers follow it, but anyone can read the file and see which paths you consider private. Use authentication or IP rules for real access control.

What's the difference between Allow and Disallow?

Disallow blocks a path; Allow re-permits a subpath inside a blocked directory. For example, Disallow: /private/ followed by Allow: /private/public/ lets crawlers into that one subfolder.
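Written out as a rule group, that looks like:

    User-agent: *
    Disallow: /private/
    Allow: /private/public/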

Should I include a Sitemap directive?

Yes, if you have a sitemap. Adding Sitemap: https://example.com/sitemap.xml helps search engines find all indexable URLs even if some pages aren't linked from your homepage.
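Sitemap lines sit outside any user-agent group, and you can list more than one (the second URL below is a placeholder):

    Sitemap: https://example.com/sitemap.xml
    Sitemap: https://example.com/sitemap-news.xml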

Does Google respect Crawl-delay?

Googlebot ignores Crawl-delay and uses Search Console's crawl rate setting instead. Bingbot, Yandex, and many other crawlers honor it.
